OS X – display nameservers via cmd line

[user@macmini ~]#  scutil --dns | grep 'nameserver\[[0-9]*\]'
  nameserver[0] : 209.222.18.222
  nameserver[1] : 209.222.18.218
  nameserver[0] : 209.222.18.222
  nameserver[1] : 209.222.18.218
  nameserver[0] : 209.222.18.222
  nameserver[1] : 209.222.18.218
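
networksetup can pull the same list per network service; a quick sketch (this assumes the service is named "Wi-Fi"; list yours with networksetup -listallnetworkservices):

[user@macmini ~]# networksetup -getdnsservers Wi-Fi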

Self-signed root cert with multidomain cert & SHA-256

Prep by creating dirs

mkdir -p /Users/user/Documents/multidomain/root_cert/private/
mkdir -p /Users/user/Documents/multidomain/star_devwest_foobar_com/

Root Certs
Create Root Key

user@greenscar root_cert $ openssl genrsa \
-out /Users/user/Documents/multidomain/root_cert/private/root_ca.key \
2048
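
Optionally sanity-check the new key:

openssl rsa -check -noout \
-in /Users/user/Documents/multidomain/root_cert/private/root_ca.key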

Create Self Signed Root Cert

openssl req \
-x509 \
-sha256 \
-new \
-nodes \
-days 3650 \
-key /Users/user/Documents/multidomain/root_cert/private/root_ca.key \
-subj "/C=US/ST=California/L=San\ Jose/O=Cloud\ Cruiser\ Inc./CN=*.foobar.com" \
-out /Users/user/Documents/multidomain/root_cert/root_ca.crt      
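
Worth a quick look at what we just made:

openssl x509 -text -noout \
-in /Users/user/Documents/multidomain/root_cert/root_ca.crt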

————————————————
Per environment certs
CD to cert dir

user@greenscar star_devwest_foobar_com $ cd /Users/user/Documents/multidomain/star_devwest_foobar_com

Create Private Key

openssl genrsa \
-out /Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.key \
2048

Generate CSR

openssl req -new \
-config /Users/user/Documents/multidomain/foobar.com.cnf \
-key /Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.key \
-sha256 \
-out /Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.csr \
-subj "/C=US/ST=California/L=San\ Jose/O=FooBar\ Inc./CN=devwest.foobar.com" 

Create a file listing all the domains you want supported

echo "subjectAltName=DNS:devwest.foobar.com,DNS:*.devwest.foobar.com">cert_extensions

Check out our new CSR
openssl req -text -noout -in star_devwest_foobar_com.csr

Sign the cert with the self-signed root cert

openssl x509 -req \
-in /Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.csr \
-CA /Users/user/Documents/multidomain/root_cert/root_ca.crt \
-CAkey /Users/user/Documents/multidomain/root_cert/private/root_ca.key \
-CAcreateserial \
-sha256 \
-extfile cert_extensions \
-out /Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.crt \
-days 3650
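
Before uploading, confirm the cert chains to the root and actually carries the SANs:

openssl verify -CAfile /Users/user/Documents/multidomain/root_cert/root_ca.crt \
/Users/user/Documents/multidomain/star_devwest_foobar_com/star_devwest_foobar_com.crt
openssl x509 -text -noout -in star_devwest_foobar_com.crt | grep -A1 'Subject Alternative Name'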

Upload Cert (deleting any old cert of the same name first)

user@greenscar star_devwest_foobar_com $ aws iam delete-server-certificate --server-certificate-name star_devwest_foobar_com
user@greenscar star_devwest_foobar_com $  aws iam upload-server-certificate --server-certificate-name star_devwest_foobar_com  --certificate-body file://star_devwest_foobar_com.crt --private-key file://star_devwest_foobar_com.key
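
To confirm it landed:

user@greenscar star_devwest_foobar_com $ aws iam list-server-certificates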

Ubuntu – edit iptables

Coming from CentOS, Ubuntu was driving me crazy: no /etc/sysconfig/iptables, just an odd workaround where you create multiple files to load on boot in order to save your iptables rules. Then I found iptables-persistent.

  1. Install iptables-persistent
    root monitoring:~# apt-get install iptables-persistent
    
  2. Now configure your iptables (for IPv4; if using IPv6, edit /etc/iptables/rules.v6 instead)
    root monitoring:~# vi /etc/iptables/rules.v4
    

Now when I reboot, the appropriate rules are in place.
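
If you've already built up rules live with the iptables command, you can dump the running ruleset into the persistent file instead of writing it by hand:

root monitoring:~# iptables-save > /etc/iptables/rules.v4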

Monitor Disk I/O

[/share/MD0_DATA] # dstat -tdD total,md0 30
----system---- -dsk/total----dsk/md0--
  date/time   | read  writ: read  writ
25-02 15:06:03|  18M 2937k:  17M 2367k
25-02 15:06:33|  11M   11M:9976k 9446k
...

-t for timestamps
-d for disk statistics
-D to specify the exact devices to report
30 to average over 30 seconds. The display updates every second, but a new line is only started once per 30 seconds.

[/share/MD0_DATA] # iostat -x 1
                             extended device statistics                       
device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b 
sdx        0     0    0.0    0.0     1.6     0.0   74.9   0.0   13.9   7.6   0 
sda       25   124   35.5    4.9  3132.5   512.9   90.3   1.0   23.6   3.6  15 
sdb       25   123   33.6    4.5  3115.8   516.4   95.4   0.4   11.7   3.6  14 
sdc       25   123   35.3    5.0  3121.1   512.9   90.1   0.9   22.8   3.5  14 
sdd       25   123   33.4    4.5  3109.8   510.2   95.5   0.5   12.3   3.6  14 
sde       25   123   35.4    5.0  3122.5   513.0   90.1   0.9   21.6   3.0  12 
sdf       25   123   33.9    4.5  3111.8   510.2   94.3   0.4   10.3   3.0  11 
md9        0     0    0.1    0.0     1.5     0.1   17.9   0.0    0.0   0.0   0 
md13       0     0    2.1    1.8   123.9     7.2   34.0   0.0    0.0   0.0   0 
md6        0     0    0.0    0.2     0.2     0.8    4.0   0.0    0.0   0.0   0 
md0        0     0  334.2   52.8 18498.3  2479.4   54.2   0.0    0.0   0.0   0 
...

Logstash Patterns / Grok

So I was working on Logstash and didn’t like the huge, worthless messages.

Reference on what patterns already exist:
Grok Patterns Reference

An amazing tool for figuring out your pattern:
http://grokdebug.herokuapp.com/

I modified

root logstash:/etc/logstash/conf.d# vi 10-syslog.conf 

to look like

filter {
   if [type] == "syslog" 
   {
      if [host] == "10.0.2.3"  
      {
         # grok requires a match to succeed; mutate is the right
         # filter for pure tag housekeeping
         mutate
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "networkadmin" ]
         }
      }

      else if [host] == "10.0.2.1"  
      {
         grok 
         {
            match => { "message" => "%{IPTABLES}"}
            patterns_dir => ["/var/lib/logstash/etc/grok"]
            remove_tag => ["_grokparsefailure"]
            add_tag => ["ddwrt"]
         }
         if [src_ip]  
         {
            geoip 
            {
               source => "src_ip"
               target => "geoip"
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][coordinates]", "float" ]
            }
         }   
   
         if [dst_ip]  
         {
            geoip 
            {
               source => "dst_ip"
               target => "geoip"
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][coordinates]", "float" ]
            }
         }  
         # http://www.networkassassin.com/elk-for-network-operations/
         #Geolocate logs that have SourceAddress and if that SourceAddress is a non-RFC1918 address or APIPA address
         if [src_ip] and [src_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)"
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "src_ip"
               target => "SourceGeo"
            }
            #Delete 0,0 in SourceGeo.location if equal to 0,0
            if ([SourceGeo][location] and [SourceGeo][location] =~ "0,0") {
               mutate {
                  replace => [ "[SourceGeo][location]", "" ]
               }
            }
         }
         
         #Geolocate logs that have DestinationAddress and if that DestinationAddress is a non-RFC1918 address or APIPA address
         if [dst_ip] and [dst_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)"
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "dst_ip"
               target => "DestinationGeo"
            }
            #Delete 0,0 in DestinationGeo.location if equal to 0,0
            if ([DestinationGeo][location] and [DestinationGeo][location] =~ "0,0")
            {
               mutate 
               {
                  replace => [ "dst_ip.location", "" ]
               }
            }
         }
      }
      else
      {
         # same trick: mutate, not grok, for tag-only changes
         mutate
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "syslog from what IP???????" ]
         }
      }
   }
   else {
      grok {
         match => ["message", "%{GREEDYDATA:syslog_message}"]
         overwrite => ["message"]
         add_tag => "not syslog"
         #add_field => [ "received_at", "%{timestamp}" ]
         #add_field => [ "received_from", "%{host}" ]
      }
  }
}
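
The %{IPTABLES} pattern above isn't one of the stock patterns; it's a custom pattern loaded from patterns_dir. A heavily trimmed sketch of what such a pattern file could contain (the field names here are my assumptions; the community iptables pattern captures many more kernel log fields):

# /var/lib/logstash/etc/grok/iptables (assumed filename)
IPTABLES .*IN=%{DATA:interface_in} OUT=%{DATA:interface_out}.*SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*PROTO=%{WORD:protocol}(?: SPT=%{INT:src_port} DPT=%{INT:dst_port})?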

Then bounce the service

root logstash:/etc/logstash/conf.d# service logstash restart; tail -f /var/log/logstash/logstash.log
logstash stop/waiting
logstash start/running, process 5248
{:timestamp=>"2015-02-17T18:15:17.043000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.174000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.973000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.604000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.732000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:04.527000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}

Now I can filter my traffic & map it in Kibana.

Linux Port Querying

[root ip-10-249-66-147 bin]# lsof -i tcp:8009
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java    1368 root   40u  IPv6 185235      0t0  TCP *:8009 (LISTEN)
[root ip-10-249-66-147 bin]# netstat -talnp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1132/sshd           
...
tcp        0      0 :::8009                     :::*                        LISTEN      1368/java           
tcp        0      0 :::8080                     :::*                        LISTEN      1368/java           
tcp        0      0 ::ffff:127.0.0.1:5201       :::*                        LISTEN      25725/java
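
ss is the faster, modern alternative if it's on the box:

[root ip-10-249-66-147 bin]# ss -tlnp | grep 8009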