python commit and push to git in Jenkins

I’ve looked all over the internet and couldn’t find anything about pushing to a remote repo via Python… particularly in Jenkins, where your upstream branch is not set.

The one helpful link I finally found said: don’t use GitPython.

But after trying multiple approaches, I got this working anyway. Note: it relies on a few environment variables from Jenkins, and the point of this script is nothing more than to modify a file and commit it. With that working, I can now do the real work.

import os
import re
import sys
import time

import yaml
from git import Repo

print(os.environ["GIT_BRANCH"])
workspace = os.environ["WORKSPACE"]

# Jenkins' Git plugin reports the branch as "origin/<branch>"; strip the remote prefix
m = re.search("origin/(.*)", os.environ["GIT_BRANCH"])
if m:
    git_branch = m.group(1)
else:
    sys.exit("COULD NOT LOAD SOURCE BRANCH")

# UPDATE VERSION FILE
with open(workspace + '/deploy_automation/config/int/versions.yaml', 'r') as f:
    versions_yaml = yaml.safe_load(f)
versions_yaml["a.component"] = time.time()
with open(workspace + '/deploy_automation/config/int/versions.yaml', 'w') as f:
    yaml.dump(versions_yaml, f, default_flow_style=False)

# COMMIT AND PUSH
git_repo = Repo(workspace + "/deploy_automation")
print(git_repo.git.status())
git_repo.git.add(workspace + '/deploy_automation/config/int/versions.yaml')
# a fresh Jenkins workspace has no committer identity, so set one
git_repo.git.config('--global', "user.name", "user name")
git_repo.git.config('--global', "user.email", "user@domain.com")
print(git_repo.git.status())
git_repo.git.commit(m='DEPLOY SCRIPT Updating versions.yaml for ENV jamestest2 and Service test')
# --set-upstream because the Jenkins checkout has no upstream configured
git_repo.git.push('--set-upstream', 'origin', git_branch)
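I call it from an Execute Shell build step, since Jenkins sets WORKSPACE and GIT_BRANCH in the environment (the script name here is hypothetical):

python update_versions.py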


Multiline find / replace

I need to reconfigure Jenkins jobs that are currently set to run anywhere so they run on a specific label. Sure, I could do it via the GUI, but in an SOA I don’t want to do that manually in 100+ jobs.
The files currently contain:

   </scm>
   <canRoam>true</canRoam>
   <disabled>false</disabled>

I want it to be:

   </scm>
   <assignedNode>build</assignedNode>
   <canRoam>false</canRoam>
   <disabled>false</disabled>

I run:

server:~/jenkins/jobs> perl -pi -e 'BEGIN{undef $/;} s/<\/scm>.+?true<\/canRoam>/<\/scm>\n<assignedNode>build<\/assignedNode>\n<canRoam>false<\/canRoam>/smg' app*/config.xml
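And since this post started in Python: the same edit as a minimal Python sketch, under the same assumptions (app*/config.xml layout, a label named build):

import glob
import re

# match from </scm> through <canRoam>true</canRoam>, across newlines
pattern = re.compile(r"</scm>.+?true</canRoam>", re.DOTALL)
replacement = "</scm>\n<assignedNode>build</assignedNode>\n<canRoam>false</canRoam>"

for path in glob.glob("app*/config.xml"):
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(pattern.sub(replacement, text))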

Many thanks to aks and StackOverflow for the help

rename cmd on osx

One thing I missed when moving to osx was the rename cmd. Sure, you can use mv, but when I’m dealing with thousands of files, rename makes it much easier.

To get it on a Mac, all you need is:

  1. Install Homebrew
  2. Install rename
    brew install rename 
    

That simple!!!
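For example, Homebrew’s rename is the Perl-based one, so it takes a substitution expression. To turn thousands of .jpeg files into .jpg:

  rename 's/\.jpeg$/.jpg/' *.jpeg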

AWS / Ansible Dynamic Inventory

I’ve been working on managing dynamic inventory in AWS for Ansible deploys… then I came across this Stack Overflow link and ches’ answer.

Ansible looks for executables and flat files in a directory and merges their results.

=> tree inventory/staging
inventory/staging
├── base
├── ec2.ini
├── ec2.py
└── group_vars -> ../group_vars

The base file looks like:

=>  more inventory/staging/base
[localhost]
# I need to tell Ansible which Python on my system has boto for AWS
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python

# The EC2 plugin will populate these groups, but we need to add empty entries
# here to make aliases for them below.
[tag_Stage_staging]
[tag_Role_webserver]

[staging:children]
tag_Stage_staging

[webservers:children]
tag_Role_webserver

You then just point Ansible at the directory as your inventory:

$ ansible -i inventory/staging webservers -m ec2_facts
# OR
$ export ANSIBLE_HOSTS=inventory/staging
$ ansible webservers -m ec2_facts
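To sanity-check which hosts a group resolves to before running anything, --list-hosts is handy:

$ ansible -i inventory/staging webservers --list-hosts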

AWS via Ansible – use private key

With AWS, ssh requires a private key. While working on a new script, I didn’t want my private account to have a “build box” sitting on the VPC, so I was using my own box and giving the destination a public IP. I know… totally insecure, but since I was killing the VM every few minutes, I didn’t care.

So to call ansible-playbook & provide a private key:

ansible-playbook -i envs/localhost elasticsearch.yml -vvvv --private-key=~/.ssh/mykeyname.pem 
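If you don’t want to pass the key on every run, the same setting can live in ansible.cfg (key path matches the example above):

[defaults]
private_key_file = ~/.ssh/mykeyname.pem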

aws cmd line tool – use profiles

I’ve got a home account & a work account. I need to easily swap between the 2.
Add 2 sets of creds
~/.aws/credentials

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[work]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

Add 2 sets of region / outputs
~/.aws/config

[default]
region=us-west-2
output=json

[profile work]
region=us-east-1
output=text

Then to use a profile:

export AWS_PROFILE=work

OR

aws ec2 describe-instances --profile work
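The same profiles work from Python; a minimal boto3 sketch using the work profile from above:

import boto3

# boto3 reads ~/.aws/credentials and ~/.aws/config for the named profile
session = boto3.Session(profile_name="work")
ec2 = session.client("ec2")
print(ec2.describe_instances())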


Ubuntu – edit iptables

As a CentOS user, Ubuntu was driving me crazy: there’s no /etc/sysconfig/iptables, just an odd workaround where you create multiple files that load on boot in order to save your iptables. Then I found iptables-persistent.

  1. Install iptables-persistent
    root monitoring:~# apt-get install iptables-persistent
    
  2. Now configure your iptables (for IPv4; if using IPv6, edit rules.v6 instead)
    root monitoring:~# vi /etc/iptables/rules.v4
    

Now when I reboot, the appropriate rules are in place.
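And if you don’t want to wait for a reboot, iptables-restore will load the same file by hand:

root monitoring:~# iptables-restore < /etc/iptables/rules.v4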

Monitor Disk I/O

[/share/MD0_DATA] # dstat -tdD total,md0 30
----system---- -dsk/total----dsk/md0--
  date/time   | read  writ: read  writ
25-02 15:06:03|  18M 2937k:  17M 2367k
25-02 15:06:33|  11M   11M:9976k 9446k
...

-t for timestamps
-d for disk statistics
-D to specify the exact devices to report
30 to average over 30 seconds; the display updates every second, but a new line is only started every 30 seconds.

[/share/MD0_DATA] # iostat -x 1
                             extended device statistics                       
device mgr/s mgw/s    r/s    w/s    kr/s    kw/s   size queue   wait svc_t  %b 
sdx        0     0    0.0    0.0     1.6     0.0   74.9   0.0   13.9   7.6   0 
sda       25   124   35.5    4.9  3132.5   512.9   90.3   1.0   23.6   3.6  15 
sdb       25   123   33.6    4.5  3115.8   516.4   95.4   0.4   11.7   3.6  14 
sdc       25   123   35.3    5.0  3121.1   512.9   90.1   0.9   22.8   3.5  14 
sdd       25   123   33.4    4.5  3109.8   510.2   95.5   0.5   12.3   3.6  14 
sde       25   123   35.4    5.0  3122.5   513.0   90.1   0.9   21.6   3.0  12 
sdf       25   123   33.9    4.5  3111.8   510.2   94.3   0.4   10.3   3.0  11 
md9        0     0    0.1    0.0     1.5     0.1   17.9   0.0    0.0   0.0   0 
md13       0     0    2.1    1.8   123.9     7.2   34.0   0.0    0.0   0.0   0 
md6        0     0    0.0    0.2     0.2     0.8    4.0   0.0    0.0   0.0   0 
md0        0     0  334.2   52.8 18498.3  2479.4   54.2   0.0    0.0   0.0   0 
                             extended device statistics       

Logstash Patterns / Grok

So I was working on logstash and didn’t like the huge / worthless messages.

Reference on what patterns already exist:
Grok Patterns Reference

An amazing tool for figuring out your pattern:
http://grokdebug.herokuapp.com/

I modified

root logstash:/etc/logstash/conf.d# vi 10-syslog.conf 

to look like

filter {
   if [type] == "syslog" 
   {
      if [host] == "10.0.2.3"  
      {
         # nothing to parse for this host; mutate (not grok) just swaps the tags
         mutate 
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "networkadmin" ]
         }
      }

      else if [host] == "10.0.2.1"  
      {
         grok 
         {
            match => { "message" => "%{IPTABLES}"}
            patterns_dir => ["/var/lib/logstash/etc/grok"]
            remove_tag => ["_grokparsefailure"]
            add_tag => ["ddwrt"]
         }
         if [src_ip]  
         {
            geoip 
            {
               source => "src_ip"
               target => "geoip"
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][src][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][src][coordinates]", "float" ]
            }
         }   
   
         if [dst_ip]  
         {
            geoip 
            {
               source => "dst_ip"
               target => "geoip"
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][longitude]}" ]
               add_field => [ "[geoip][dst][coordinates]", "%{[geoip][latitude]}"  ]
            }
            mutate 
            {
               convert => [ "[geoip][dst][coordinates]", "float" ]
            }
         }  
         # http://www.networkassassin.com/elk-for-network-operations/
         #Geolocate src_ip when it is a public (non-RFC1918, non-loopback) address
         if [src_ip] and [src_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^192\.168\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "src_ip"
               target => "SourceGeo"
            }
            #Blank out SourceGeo location if it came back as 0,0
            if ([SourceGeo][location] and [SourceGeo][location] =~ "0,0") {
               mutate {
                  replace => [ "[SourceGeo][location]", "" ]
               }
            }
         }
         
         #Geolocate dst_ip when it is a public (non-RFC1918, non-loopback) address
         if [dst_ip] and [dst_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^192\.168\.)" 
         {
            geoip 
            {
               database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
               source => "dst_ip"
               target => "DestinationGeo"
            }
            #Blank out DestinationGeo location if it came back as 0,0
            if ([DestinationGeo][location] and [DestinationGeo][location] =~ "0,0") 
            {
               mutate 
               {
                  replace => [ "[DestinationGeo][location]", "" ]
               }
            }
         }
      }
      else
      {
         # unknown syslog host; mutate (not grok) just swaps the tags
         mutate 
         {
            remove_tag => [ "_grokparsefailure" ]
            add_tag => [ "syslog from what IP???????" ]
         }
      }
   }
   else {
      grok {
         match => ["message", "%{GREEDYDATA:syslog_message}"]
         overwrite => ["message"]
         add_tag => "not syslog"
         #add_field => [ "received_at", "%{timestamp}" ]
         #add_field => [ "received_from", "%{host}" ]
      }
  }
}
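Note: %{IPTABLES} is a custom pattern, which is why patterns_dir is set above. Grok pattern files are just NAME + regex, one per line; a minimal sketch of what that file could contain (the real iptables pattern is much longer than this):

# /var/lib/logstash/etc/grok/iptables
IPTABLES (.*)IN=%{NOTSPACE:in_iface}(.*)SRC=%{IP:src_ip}(.*)DST=%{IP:dst_ip}(.*)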

Then bounce the service:

root logstash:/etc/logstash/conf.d# service logstash restart; tail -f /var/log/logstash/logstash.log
logstash stop/waiting
logstash start/running, process 5248
{:timestamp=>"2015-02-17T18:15:17.043000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.174000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:15:17.973000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.604000-0800", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:03.732000-0800", :message=>"Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}
{:timestamp=>"2015-02-17T18:16:04.527000-0800", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones", :level=>:warn}

Now I can filter my traffic & map it in Kibana.