Ansible when + with_items – Ignore One Group

I’m setting up Icinga & using https://github.com/Icinga/icinga2-ansible

However, it’s putting a host config on the Icinga server to monitor itself, which causes this error:

information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/pmlgra-03.domain.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/satellite.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/services.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/slave-6.domain.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/templates.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/timeperiods.conf
information/ConfigCompiler: Compiling config file: /etc/icinga2/conf.d/users.conf
critical/config: Error: Object 'dmlici-02.domain' of type 'Host' re-defined: in /etc/icinga2/conf.d/hosts.conf: 18:1-18:20; previous definition: in /etc/icinga2/conf.d/dmlici-02.domain.conf: 2:1-2:35
Location: in /etc/icinga2/conf.d/hosts.conf: 18:1-18:20
/etc/icinga2/conf.d/hosts.conf(16):  */
/etc/icinga2/conf.d/hosts.conf(17): 

This with_items loop plus a when condition fixed it. Note: my Icinga server is in a group called icinga.

- name: Copy Host Definitions
  template:
    src: hosts_template.j2
    dest: "{{ icinga2_hosts_dir }}/{{ hostvars[item]['inventory_hostname'] }}.conf"
    owner: root
    group: root
    mode: 0644
  with_items: "{{ groups['all'] }}"
  when: "'icinga' not in hostvars[item]['group_names']"
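
Alternatively (a sketch – assuming your Ansible version ships the set-theory difference filter), you can subtract the icinga group in the loop itself and skip the when test entirely:

- name: Copy Host Definitions
  template:
    src: hosts_template.j2
    dest: "{{ icinga2_hosts_dir }}/{{ hostvars[item]['inventory_hostname'] }}.conf"
    owner: root
    group: root
    mode: 0644
  with_items: "{{ groups['all'] | difference(groups['icinga']) }}"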

Monitor Disk I/O

[/share/MD0_DATA] # dstat -tdD total,md0 30
----system---- -dsk/total----dsk/md0--
  date/time   | read  writ: read  writ
25-02 15:06:03|  18M 2937k:  17M 2367k
25-02 15:06:33|  11M   11M:9976k 9446k
...

-t for timestamps
-d for disk statistics
-D to specify the exact devices to report on
30 to average over 30 seconds. The display still updates every second, but a new line is only started once every 30 seconds.
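
If you want to graph the numbers later, dstat can also log to CSV while it draws on screen (a sketch – assuming your dstat build supports the --output option):

[/share/MD0_DATA] # dstat -tdD total,md0 --output /tmp/disk-io.csv 30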

iostat gives a similar per-device view; -x prints extended statistics and 1 refreshes the numbers every second:

[/share/MD0_DATA] # iostat -x 1
                             extended device statistics                       
device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b 
sdx        0     0    0.0    0.0     1.6     0.0   74.9   0.0   13.9   7.6   0 
sda       25   124   35.5    4.9  3132.5   512.9   90.3   1.0   23.6   3.6  15 
sdb       25   123   33.6    4.5  3115.8   516.4   95.4   0.4   11.7   3.6  14 
sdc       25   123   35.3    5.0  3121.1   512.9   90.1   0.9   22.8   3.5  14 
sdd       25   123   33.4    4.5  3109.8   510.2   95.5   0.5   12.3   3.6  14 
sde       25   123   35.4    5.0  3122.5   513.0   90.1   0.9   21.6   3.0  12 
sdf       25   123   33.9    4.5  3111.8   510.2   94.3   0.4   10.3   3.0  11 
md9        0     0    0.1    0.0     1.5     0.1   17.9   0.0    0.0   0.0   0 
md13       0     0    2.1    1.8   123.9     7.2   34.0   0.0    0.0   0.0   0 
md6        0     0    0.0    0.2     0.2     0.8    4.0   0.0    0.0   0.0   0 
md0        0     0  334.2   52.8 18498.3  2479.4   54.2   0.0    0.0   0.0   0 
...
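
To cut the noise down to just the devices you care about, iostat usually accepts a device list (a sketch – assuming a sysstat-style iostat; the stripped-down variants on some NAS firmware may not take device arguments):

[/share/MD0_DATA] # iostat -x md0 sda 5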

Logstash Patterns / Grok

So I was working on Logstash and didn’t like the huge, worthless unparsed messages.

A reference on what patterns already exist:
Grok Patterns Reference

An amazing tool for figuring out your pattern:
http://grokdebug.herokuapp.com/
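
For example, pasting in a raw line plus a candidate pattern shows exactly which fields get extracted. The pattern below is illustrative, built only from stock grok patterns:

Sample line: Feb 25 15:06:03 router kernel: DROP IN=vlan2 OUT= SRC=1.2.3.4 DST=5.6.7.8
Pattern:     %{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{GREEDYDATA:syslog_message}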

I modified

root logstash:/etc/logstash/conf.d# vi 10-syslog.conf 

to look like

filter {
   if [type] == "syslog" 
   {
      if [host] == "10.0.2.3"  
      {
         # grok only applies add_tag/remove_tag on a successful match,
         # so use mutate to tag unconditionally
         mutate 
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "networkadmin" ]
         }
      }

      else if [host] == "10.0.2.1"  
      {
         grok 
         {
            match => { "message" => "%{IPTABLES}"}
            patterns_dir => ["/var/lib/logstash/etc/grok"]
            remove_tag => ["_grokparsefailure"]
            add_tag => ["ddwrt"]
         }
         if [src_ip]  
         {
            geoip 
            {
               source => "src_ip"
               target => "geoip"
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               # convert the field populated above, not [geoip][coordinates]
               convert => [ "[geoip][src][coordinates]", "float" ]
            }
         }   
   
         if [dst_ip]  
         {
            geoip 
            {
               source => "dst_ip"
               target => "geoip"
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               # convert the field populated above, not [geoip][coordinates]
               convert => [ "[geoip][dst][coordinates]", "float" ]
            }
         }  
         # http://www.networkassassin.com/elk-for-network-operations/
         # Geolocate src_ip, but only if it is a public address
         # (not loopback, RFC1918, or APIPA)
         if [src_ip] and [src_ip] !~ "(^127\.)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "src_ip"
               target => "SourceGeo"
            }
            #Delete SourceGeo.location if it is 0,0 (GeoIP found no coordinates)
            if ([SourceGeo][location] and [SourceGeo][location] =~ "0,0") {
               mutate {
                  replace => [ "[SourceGeo][location]", "" ]
               }
            }
         }
         
         # Geolocate dst_ip, but only if it is a public address
         # (not loopback, RFC1918, or APIPA)
         if [dst_ip] and [dst_ip] !~ "(^127\.)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "dst_ip"
               target => "DestinationGeo"
            }
            #Delete DestinationGeo.location if it is 0,0 (GeoIP found no coordinates)
            if ([DestinationGeo][location] and [DestinationGeo][location] =~ "0,0") 
            {
               mutate 
               {
                  replace => [ "[DestinationGeo][location]", "" ]
               }
            }
         }
      }
      else
      {
         # mutate, not grok: grok only tags when a match succeeds
         mutate 
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "syslog from what IP???????" ]
         }
      }
   }
   else {
      grok {
         match => ["message", "%{GREEDYDATA:syslog_message}"]
         overwrite => ["message"]
         add_tag => "not syslog"
         #add_field => [ "received_at", "%{timestamp}" ]
         #add_field => [ "received_from", "%{host}" ]
      }
   }
}
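
The %{IPTABLES} pattern is not one of the stock grok patterns; it has to live in a file under the patterns_dir referenced above. As a sketch of what that file might contain, based on the widely shared netfilter/iptables grok pattern (the file name is arbitrary and your exact pattern may differ):

# /var/lib/logstash/etc/grok/iptables
NETFILTERMAC %{COMMONMAC:dst_mac}:%{COMMONMAC:src_mac}:%{ETHTYPE:ethtype}
ETHTYPE (?:(?:[A-Fa-f0-9]{2}):(?:[A-Fa-f0-9]{2}))
IPTABLES1 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=%{NETFILTERMAC} SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*PROTO=%{WORD:proto}.*SPT=%{INT:src_port}.*DPT=%{INT:dst_port}.*)
IPTABLES2 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=%{NETFILTERMAC} SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*PROTO=%{INT:proto}.*)
IPTABLES (?:%{IPTABLES1}|%{IPTABLES2})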

Then bounce the service

root logstash:/etc/logstash/conf.d# service logstash restart; tail -f /var/log/logstash/logstash.log
logstash stop/waiting
logstash start/running, process 5248
{:timestamp=>"2015-02-17T18:15:17.043000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.174000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.973000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.604000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.732000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:04.527000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
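
If the restart loops or the filter silently does nothing, it is worth syntax-checking the config before bouncing the service (a sketch – assuming the stock 1.4-era package layout):

root logstash:/etc/logstash/conf.d# /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/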

Now I can filter my traffic & map it in Kibana.