Monitor Disk I/O

[/share/MD0_DATA] # dstat -tdD total,md0 30
----system---- -dsk/total----dsk/md0--
  date/time   | read  writ: read  writ
25-02 15:06:03|  18M 2937k:  17M 2367k
25-02 15:06:33|  11M   11M:9976k 9446k
...

-t for timestamps
-d for disk statistics
-D to specify the exact devices to report
30 to average over 30 seconds. The display updates every second, but a new line is only started once every 30 seconds.
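
If the dstat build on the box includes the plugin modules (not a given on NAS firmware), the most I/O-hungry process can be shown alongside:

[/share/MD0_DATA] # dstat -tdD total,md0 --top-io 30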

[/share/MD0_DATA] # iostat -x 1
                             extended device statistics                       
device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b 
sdx        0     0    0.0    0.0     1.6     0.0   74.9   0.0   13.9   7.6   0 
sda       25   124   35.5    4.9  3132.5   512.9   90.3   1.0   23.6   3.6  15 
sdb       25   123   33.6    4.5  3115.8   516.4   95.4   0.4   11.7   3.6  14 
sdc       25   123   35.3    5.0  3121.1   512.9   90.1   0.9   22.8   3.5  14 
sdd       25   123   33.4    4.5  3109.8   510.2   95.5   0.5   12.3   3.6  14 
sde       25   123   35.4    5.0  3122.5   513.0   90.1   0.9   21.6   3.0  12 
sdf       25   123   33.9    4.5  3111.8   510.2   94.3   0.4   10.3   3.0  11 
md9        0     0    0.1    0.0     1.5     0.1   17.9   0.0    0.0   0.0   0 
md13       0     0    2.1    1.8   123.9     7.2   34.0   0.0    0.0   0.0   0 
md6        0     0    0.0    0.2     0.2     0.8    4.0   0.0    0.0   0.0   0 
md0        0     0  334.2   52.8 18498.3  2479.4   54.2   0.0    0.0   0.0   0 
                             extended device statistics       
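
To see which processes are actually generating the I/O, iotop is handy if it happens to be installed (again, not a given on NAS firmware); -o limits the display to processes currently doing I/O:

[/share/MD0_DATA] # iotop -o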

Logstash Patterns / Grok

So I was working on logstash and didn’t like the huge / worthless messages.

Reference on what patterns already exist:
Grok Patterns Reference

An amazing tool for figuring out your pattern:
http://grokdebug.herokuapp.com/

I modified

root logstash:/etc/logstash/conf.d# vi 10-syslog.conf 

to look like

filter {
   if [type] == "syslog" 
   {
      if [host] == "10.0.2.3"  
      {
         grok 
         {
            remove_tag => "_grokparsefailure"
            add_tag => "networkadmin"
         }
      }

      else if [host] == "10.0.2.1"  
      {
         grok 
         {
            match => { "message" => "%{IPTABLES}"}
            patterns_dir => ["/var/lib/logstash/etc/grok"]
            remove_tag => ["_grokparsefailure"]
            add_tag => ["ddwrt"]
         }
         if [src_ip]  
         {
            geoip 
            {
               source => "src_ip"
               target => "geoip"
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][coordinates]", "float" ]
            }
         }   
   
         if [dst_ip]  
         {
            geoip 
            {
               source => "dst_ip"
               target => "geoip"
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][coordinates]", "float" ]
            }
         }  
         # http://www.networkassassin.com/elk-for-network-operations/
         #Geolocate logs that have SourceAddress and if that SourceAddress is a non-RFC1918 address or APIPA address
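         #Note: this regex only excludes 127.0.0.1, 10/8 and 192.168/16; add 172.16/12 and 169.254/16 (APIPA) if those show up in your logs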
         if [src_ip] and [src_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^192\.168\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "src_ip"
               target => "SourceGeo"
            }
            #Delete 0,0 in SourceGeo.location if equal to 0,0
            if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
               mutate {
                  replace => [ "SourceGeo.location", "" ]
               }
            }
         }
         
         #Geolocate logs that have DestinationAddress and if that DestinationAddress is a non-RFC1918 address or APIPA address
         if [dst_ip] and [dst_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^192\.168\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "dst_ip"
               target => "DestinationGeo"
            }
            #Delete 0,0 in DestinationGeo.location if equal to 0,0
            if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") 
            {
               mutate 
               {
                  replace => [ "dst_ip.location", "" ]
               }
            }
         }
      }
      else
      {
         grok 
         {
            remove_tag => "_grokparsefailure"
            add_tag => "syslog from what IP???????"
         }
      }
   }
   else {
      grok {
         match => ["message", "%{GREEDYDATA:syslog_message}"]
         overwrite => ["message"]
         add_tag => "not syslog"
         #add_field => [ "received_at", "%{timestamp}" ]
         #add_field => [ "received_from", "%{host}" ]
      }
  }
}
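
The %{IPTABLES} pattern is a custom one that lives under the patterns_dir referenced above. A minimal sketch of what such a pattern file can look like is below (the field names just need to line up with the src_ip / dst_ip references in the filter; the exact DD-WRT log layout varies):

# /var/lib/logstash/etc/grok/iptables
IPTABLES .*?IN=%{DATA:in_interface} OUT=%{DATA:out_interface}.*?SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*?PROTO=%{WORD:proto}(?: SPT=%{INT:src_port} DPT=%{INT:dst_port})?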

Then bounce the service

root logstash:/etc/logstash/conf.d# service logstash restart; tail -f /var/log/logstash/logstash.log
logstash stop/waiting
logstash start/running, process 5248
{:timestamp=>"2015-02-17T18:15:17.043000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.174000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.973000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.604000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.732000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:04.527000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}

Now I can filter my traffic & map it in Kibana

elasticsearch won’t start and leaves no logs

I was installing Elasticsearch as part of a Logstash and Kibana setup. However, when I went to the dashboard URL, I got "Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above." at the top of the screen.

I go to the box where I'm installing it and find the process is not running.

root logstash:~# ps -ef | grep lastic
root      2201  1588  0 18:44 pts/0    00:00:00 grep --color=auto lastic
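
Another quick check, assuming the default HTTP port: when Elasticsearch is up, this returns a small JSON document that includes the version number; here it would simply fail to connect, since nothing is listening.

root logstash:~# curl http://localhost:9200/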

So I look in the init script and find the log dir is /var/log/elasticsearch.

root logstash:~# grep LOG_DIR /etc/init.d/elasticsearch 
LOG_DIR=/var/log/$NAME
...

I look in the log dir and there is nothing.

root logstash:~# ls -latr /var/log/elasticsearch/
total 24
drwxrwxr-x 12 root          syslog         4096 Feb 13 18:09 ..
drwxr-xr-x  2 elasticsearch elasticsearch  4096 Feb 13 18:39 .

WTF!? How do I debug this?!?!

Then I found this.

So I edit my init script to display my startup command.

root logstash:~# vi /etc/init.d/elasticsearch 

I add the log_daemon_msg as below:

# Start Daemon
log_daemon_msg "sudo -u $ES_USER $DAEMON $DAEMON_OPTS"
start-stop-daemon --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
log_end_msg $?

Now when I start elasticsearch I see the exact command being run to kick it off. I will use this to run elasticsearch EXACTLY as the init script does so I can figure out what is wrong.

root logstash:~# service elasticsearch start
 * Starting Elasticsearch Server                                                                                                                       * sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch                                                              [ OK ] 

OK, great. Now that I have the command, I can run it in the foreground and see what it complains about.

root logstash:~# sudo -u elasticsearch  /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.foreground=yes -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.1.1.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch
log4j:WARN No appenders could be found for logger (node).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
{1.1.2}: Initialization Failed ...
- ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/usr/share/elasticsearch/data/elasticsearch]]
	IOException[failed to obtain lock on /usr/share/elasticsearch/data/elasticsearch/nodes/49]
		IOException[Cannot create directory: /usr/share/elasticsearch/data/elasticsearch/nodes/49] 

AHHA!!! I can’t create a dir under nodes. The process is running as the elasticsearch user. Who owns the parent dir?

root logstash:~# ls -latr /usr/share/elasticsearch
total 36
-rw-r--r--   1 root root 8093 May 22  2014 README.textile
-rw-r--r--   1 root root  150 May 22  2014 NOTICE.txt
-rw-r--r--   1 root root 2141 May 22  2014 core-signatures.txt
drwxr-xr-x 114 root root 4096 Feb 13 16:50 ..
drwxr-xr-x   3 root root 4096 Feb 13 17:28 data
drwxr-xr-x   3 root root 4096 Feb 13 17:47 lib
drwxr-xr-x   2 root root 4096 Feb 13 17:47 bin
drwxr-xr-x   5 root root 4096 Feb 13 17:47 .

root owns it, but the process runs as the elasticsearch user. Therefore, let's chown the data dir so the elasticsearch user can write to it:

root logstash:~# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
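
As an aside, the manual foreground run fell back to ES_HOME/data because it was started without the -Des.default.path.* options the init script passes; another way to keep data out of /usr/share entirely is to set the path explicitly in the config (a sketch; the init script already defaults this to /var/lib/elasticsearch):

# /etc/elasticsearch/elasticsearch.yml
path.data: /var/lib/elasticsearch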

And start it again:

root logstash:~# service elasticsearch start
 * Starting Elasticsearch Server                                                                                                                       * sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch                                                              [ OK ] 
root logstash:~# sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch
root logstash:~# {1.1.2}: Setup Failed ...
- SettingsException[Failed to load settings from [file:/etc/elasticsearch/elasticsearch.yml]]
	ScannerException[while scanning a simple key; could not found expected ':';  in 'reader', line 380, column 1:

A simple problem to fix: I'm missing a space between the colon and the value.

root logstash:~# vi /etc/elasticsearch/elasticsearch.yml 

replace:

script.disable_dynamic:true

with:

script.disable_dynamic: true

And it’s now running!

root logstash:~# service elasticsearch start
 * Starting Elasticsearch Server                                                                                                                       * sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch                                                              [ OK ] 
root logstash:~# ps -ef | grep elasticsearch
elastic+  7125     1 96 19:02 ?        00:00:09 /usr/lib/jvm/java-7-oracle/bin/java -Xms2g -Xmx2g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.1.2.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.Elasticsearch
root      7141  1446  0 19:02 pts/0    00:00:00 grep --color=auto elasticsearch
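
A quick way to confirm it's actually serving requests and not just sitting in the process table (assuming the default HTTP port of 9200):

root logstash:~# curl 'http://localhost:9200/_cluster/health?pretty'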

CentOS – Install Puppet Agent

  1. Enable the base repo for yum and add EPEL
    [root xenserver ~]# vi /etc/yum.repos.d/CentOS-Base.repo
    ...
    [base]
    ...
    enabled=1
    
    [root xenserver ~]# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
    
  2. Install Puppet
    [root xenserver ~]#  yum install -y puppet
    
  3. Enable Service
    [root xenserver ~]# chkconfig puppet on
    
  4. If your master is not called puppet, set it in your conf file
    [root xenserver ~]# vi /etc/puppet/puppet.conf 
    [agent]
    server = puppet-master.h8n.lan
    ...
    
  5. Start your puppet service.
    [root xenserver ~]# /etc/init.d/puppet start
    Starting puppet agent:                                     [  OK  ]
    
  6. Now go to your puppet master so you can accept the new client's cert.
    root puppet-master:~# puppet cert list
      "xenserver" (SHA256) C6:68:BD:94:6D:1A:19:AB:38:3E:AD:EC:33:3D:B4:E0:5D:02:B6:C9:76:16:BE:C3:81:A3:9F:6D:A0:51:BD:DC
    
    root puppet-master:~# puppet cert sign xenserver
    Notice: Signed certificate request for xenserver
    Notice: Removing file Puppet::SSL::CertificateRequest xenserver at '/var/lib/puppet/ssl/ca/requests/xenserver.pem'
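
    Optionally, back on the agent, force a run to confirm the signed cert is picked up and a catalog gets applied:
    [root xenserver ~]# puppet agent --test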
    

Ubuntu – Install Puppet Agent

  1. Download the Puppet Labs package
    user puppet-agent:# cd ~; wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    
  2. Install the Puppet Labs package
    user puppet-agent:# sudo dpkg -i puppetlabs-release-trusty.deb
    
  3. Update apt’s available packages
    user puppet-agent:# sudo apt-get update
    
  4. Install the Puppet Agent package
    user puppet-agent:# sudo apt-get install puppet
    
  5. Modify puppet default file.
    user puppet-agent:# sudo vi /etc/default/puppet
    
    1. Enable the Puppet Agent by changing START from “no” to “yes”
      START=yes
      
  6. Modify puppet.conf
    user puppet-agent:# sudo vi /etc/puppet/puppet.conf
    
    1. Delete the templatedir line and the [master] section from puppet.conf
      
    2. Tell your agent where its master is. This step is not required if your master is called puppet, but that will only happen with micro networks.
      [agent]
      server = puppet-master.my.lan
      
  7. Start the puppet agent
    user puppet-agent:# sudo service puppet start
    
  8. Now go to your puppet master so you can accept the new client's cert.
    root puppet-master:# puppet cert list
    "puppet-agent" (SHA256) XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX
    
    root puppet-master:# puppet cert sign puppet-agent
    Notice: Signed certificate request for puppet-agent
    Notice: Removing file Puppet::SSL::CertificateRequest puppet-agent at '/var/lib/puppet/ssl/ca/requests/puppet-agent.pem'
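
    Optionally, a dry run from the agent shows what the first catalog would change without applying anything:
    user puppet-agent:# sudo puppet agent --test --noop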
    

Source

Ubuntu – Install Zabbix Client

  1. Update your apt-get sources to include recent zabbix
    sudo vi /etc/apt/sources.list
    
    # Zabbix Application PPA
    deb http://ppa.launchpad.net/tbfr/zabbix/ubuntu precise main
    deb-src http://ppa.launchpad.net/tbfr/zabbix/ubuntu precise main
    
  2. Add the PPA’s key so that apt-get trusts the source:
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C407E17D5F76A32B
    
  3. Update your repo & install the agent.
    sudo apt-get update
    sudo apt-get install zabbix-agent
    
  4. Configure the agent to point to your Zabbix server and set the hostname of the client.
    vi /etc/zabbix/zabbix_agentd.conf
    ...
    Server=10.0.0.9
    ...
    Hostname=docker.foo.lan
    ...
    
  5. Now bounce the agent.
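    With the PPA package above, the init job is typically zabbix-agent:
    sudo service zabbix-agent restart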

Source