Friday, June 5, 2015

ruby job builder dsl is now open source

I am pleased that one of my works at Wonga, the rubyjobbuilderdsl, is now open source. It is a small internal domain-specific language that can be used to automate the provisioning and ongoing maintenance of a Jenkins ecosystem.
It has been used quite extensively within Wonga for a while and has proved to be an effective and convenient way to maintain a complex Jenkins system with hundreds of interconnected jobs.

Wednesday, February 25, 2015

Why bundler and chef-solo can solve your dependency nightmare

The problem
One of the problems when using chef as an automation tool for provisioning is managing the dependencies of upstream cookbooks (a consequence of reusing cookbooks). In a typical scenario a cookbook A depends on B, which in turn depends on C and D.
During development, berkshelf can be used to manage the dependency tree of a single cookbook, which is quite handy. With berkshelf we can pass our cookbook to others, who can then test the cookbook using the exact versions of its direct and indirect dependencies, which saves them the headaches caused by version mismatches.
Unfortunately there is no such facility during actual deployment, which is arguably critical to the success of any large-scale deployment of chef across multiple teams who constantly add, modify and reuse chef cookbooks.
When deploying a cookbook to production, we want chef to run the exact versions of the cookbook and all of its dependencies as they were tested (predictability is obviously important in production).
So far chef-client and chef-server fail to meet this critical requirement. Using
         berks apply ENVIRONMENT 
to pin cookbook versions in an environment solves the problem only partly. This is because people usually put more than one cookbook into the run list of a single node (either directly or via a role). This may cause version conflicts if different cookbooks in the run list depend on different versions of the same upstream cookbook (a problem known as dependency islands). In addition, there is no effective way to enforce that production uses the same versions of ruby, chef-client and the gems referenced by cookbooks in the dependency tree as were used in test. I myself had a problem with different versions of chef-client in different environments.

The solution
One quite elegant solution to this problem is a combination of bundler, chef-solo and a single top-level environment cookbook.
We use bundler to enforce the versions of ruby, berkshelf and chef-solo. The bundler Gemfile is e.g.
    source ""
    gem "chef"
    gem "berkshelf"
    ruby "2.0.0"
To avoid dependency islands we create a single top-level cookbook per node. That so-called environment cookbook is managed by berkshelf, which enforces the same versions of all the cookbook's dependencies in both test and production environments. To provision a node, we pull the desired version of the top-level cookbook from e.g. a web server or git server onto the node, then run
     bundle install --deployment
     bundle exec chef-solo -c config -j solo.json
which is the same as running a ruby application.
The benefit of this approach goes beyond solving the dependency issue: it reduces the steps required to deploy a chef cookbook by eliminating chef-server completely, and it makes running a cookbook in development and test exactly the same as in production.
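For reference, the Berksfile of such a top-level environment cookbook can stay minimal; the source URL and layout below are illustrative and depend on your setup:

# Berksfile of the top-level environment cookbook. berks resolves the whole
# dependency tree and pins exact versions in Berksfile.lock; committing the
# lock file is what makes test and production install identical cookbooks.
source ""

metadata  # direct dependencies are declared in this cookbook's metadata.rb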

Tuesday, September 10, 2013

Joining Wonga

After 5 years with ING, I decided to change the course of my professional journey and join Wonga, a UK startup behind cool technology in the micro-lending business. My family and I will move to Dublin, Ireland, where I will work in the Wonga Dublin Technology Center.

Friday, August 16, 2013

Some useful tips for Graphite

I have worked with Graphite for a while, and I can say that it is one of my favorite tools in the devops cultural movement, where spreading operational awareness among operations teams, development teams and other stakeholders is a key factor.
Graphite stores time series data and provides a powerful set of functions to manipulate that data, which is very useful for learning about trends in resource consumption as well as for finding problematic ones among a large number of physical/logical resources.

In general installation is not difficult. To set it up just follow the installation document.

How it works
Graphite is written entirely in Python. From a runtime point of view, Graphite consists of 1) the Carbon Cache (developed using Twisted) and 2) the WebApp (developed using Django). Metric data are stored in the file system, one metric per file, in a format designed specially for time series data.
We get a graph by sending an HTTP request to the URL of the WebApp. We use URL parameters to tell Graphite what kind of graph of which time period we want. We can request graphs not only of ordinary metrics but also, by using Graphite functions, combine, filter and transform these metrics to generate rich, powerful graphs useful for planning and troubleshooting.
The WebApp reads the data of the metrics involved from the corresponding per-metric data files, and also from the Carbon Cache for those values that have not yet been written to the data files.
We send metrics to the Carbon Cache process, which is responsible for writing data into the data files and serving queries from the WebApp. To scale up, many Carbon Caches may be configured behind a Carbon relay.

Getting Metrics into Graphite
It is pretty simple: just send the metric in the form metric_name metric_value seconds_from_epoch to the default Carbon port 2003, e.g.
$echo "cpu_load_per_min.`hostname`  `uptime | awk '{print $10}'` `date +%s`" | nc localhost 2003;
There is no need to create a metric definition in advance; just send data to Graphite and it will create the metric for you if it does not yet exist. Graphite uses patterns of metric names defined in storage-schemas.conf to determine how often (in terms of seconds) a data point of a certain metric is stored. For example, if the rule says that a metric is stored once per minute and we send two values of the same metric within the same minute, then the second one will overwrite the first. It is also obvious that Graphite does not support resolution finer than one second.
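As a sketch, a storage-schemas.conf rule matching the cpu_load_per_min metric above could look like this (the section name and retention are illustrative; older 0.9.x releases use the seconds:datapoints form such as 60:43200 instead of 60s:30d):

# store one point per minute, keep 30 days of data
pattern = ^cpu_load_per_min\.
retentions = 60s:30d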
A metric name is composed of nodes separated by dots, which allows us to organize metrics into a hierarchy and combine them using Graphite functions. Suppose we have N httpd servers and we want to track their consumption of network bandwidth in terms of megabytes incoming and outgoing each minute; we can use the following naming convention
front.mb_out_per_min.host0,..., front.mb_out_per_min.hostN
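The shell one-liner above can also be expressed in a few lines of Ruby; the host, port and metric names are assumptions matching the examples in this post:

require 'socket'

# Build one line of Carbon's plaintext protocol: "name value unix_timestamp"
def graphite_line(name, value, timestamp =
  "#{name} #{value} #{timestamp}\n"

# Push a single metric to the Carbon Cache plaintext listener (port 2003)
def send_metric(name, value, host: "localhost", port: 2003)
  TCPSocket.open(host, port) { |sock| sock.write(graphite_line(name, value)) }

# e.g. send_metric("front.mb_out_per_min.host0", 12.5)
raise unless graphite_line("a.b", 1, 100) == "a.b 1 100\n"
raise unless graphite_line("x.y", 2.5, 7) == "x.y 2.5 7\n"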

Asking questions

There are many questions that can be asked about the system. Given the example above, we can draw a graph of the total incoming (or outgoing) network bandwidth per minute in megabytes by using the function sumSeries:
target=sumSeries(front.mb_in_per_min.*)
To give a nice legend to the graph we can decorate the original expression with the function alias:
target=alias(sumSeries(front.mb_in_per_min.*),"incoming traffic in mb")
Every time series is a trend line; however, I feel that sometimes it is useful to draw two lines, one current and one from the past, so we can see the difference over a period of time, let's say one day. The function for doing that is timeShift. Let's compare today's traffic with the same day last week:
target=alias(sumSeries(front.mb_in_per_min.*),"now") &
target=alias(timeShift(sumSeries(front.mb_in_per_min.*),"-1w"),"now - 1 week")
Because the volume of traffic depends on the day of the week (weekend traffic is much lower), it is a good idea to compare one graph with another on the same weekday.
Comparing different metrics may be useful when we want to know why the value of a high-level metric (e.g. the response time of a URL) is higher now than in the past. Trying to relate the high-level metric to several low-level metrics can help us pinpoint where the issue is. Because different metrics have different scales, the function secondYAxis lets us draw one of them against a second y-axis:
target=alias(averageSeries(front.mb_in_per_min.*),"incoming traffic in mb") &
target=alias(secondYAxis(sumSeries(front.requests_per_min.*)),"total requests")
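Putting it together, a complete request to the WebApp's render endpoint is just such targets passed as URL parameters (the hostname and metric names are illustrative, and the quotes need URL encoding when the request is not typed into a browser):

http://graphite.example.com/render?width=800&from=-1d&target=alias(sumSeries(front.mb_in_per_min.*),"incoming traffic in mb")&target=alias(secondYAxis(sumSeries(front.requests_per_min.*)),"total requests")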

Performance hacks
Having dozens of dashboards, each with hundreds of graphs, requiring hundreds or thousands of metrics to render can bring the Graphite server to its knees. The following hacks may be helpful in such a situation.
When rendering a graph, the WebApp identifies the metrics involved; for each metric it finds the corresponding file, reads the file's content, queries the metric data from the Carbon Cache, and finally merges the two results.
As the cost of a remote query to the Carbon Cache is not negligible, a typical requested graph requiring hundreds of metrics will result in the same number of remote queries being sent to the Carbon Cache, which consumes a substantial amount of CPU. To mitigate this, I have patched both the WebApp and the Carbon Cache of Graphite version 0.9.x so the WebApp sends a bulk request for metrics in a single call to the Carbon Cache; see my commits on github.
Both the Carbon Cache and the WebApp generate a lot of IO. If the IO system is not fast enough, the server may sometimes hang waiting for IO completion. We can improve the situation a bit by tuning the file system where Graphite stores its metric files. My favorite options when mounting the file system are:
$cat /etc/fstab
/dev/sdb  /graphite-storage  ext3  defaults,noatime,data=writeback,barrier=0  1 2
The noatime option instructs the file system not to change the access time of a file when someone reads it (changing the access time modifies the disk block containing the inode). The other options trade file system safety for performance; see the ext3 documentation for details.


Wednesday, February 20, 2013

Making chef-client safer

One of the main issues we face when using chef in our environment is that a configuration file generated by the chef template, cookbook_file or file resource may exist in an incomplete state during the short period when chef-client is modifying it. Processes accessing the configuration file during this period may fail strangely, leaving no clear trace.

Let's look at a very simple hypothetical example. We create a chef template that generates /etc/hosts using data specified in a role. At some point we add more machines to this /etc/hosts. What happens behind the scenes is that chef-client creates a new temporary file with the required content, compares its checksum with that of the actual /etc/hosts, and overwrites /etc/hosts if it is not equal to the freshly generated temporary file. During the overwrite, any process may see an incomplete /etc/hosts, and thus may fail to resolve a hostname even though it already exists in the original file.

Luckily there is a fix for this problem. We can monkey patch the chef template, cookbook_file and file providers to follow the well-known pattern: create and write to a temporary file, then use File.rename to rename the temporary file to the final file.

Ruby's File.rename uses the rename syscall, which guarantees that any process accessing the configuration file at any point in time always sees either the complete old version or the complete new version of the file.

File.rename requires both the old name and the new name to be on the same mounted filesystem. So one way to make sure that File.rename will succeed is to create the temporary file in the same directory as the file being overwritten, which can be achieved easily by passing the directory of the overwritten file as the tmpdir argument of, tmpdir = Dir.tmpdir).
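A minimal sketch of the pattern (outside chef, with an illustrative atomic_write helper) looks like this:

require 'tempfile'

# Write content to a temporary file created in the destination's own
# directory (so both names are on the same filesystem), then atomically
# swap it into place with File.rename.
def atomic_write(path, content)
  dir = File.dirname(File.expand_path(path))
  tmp =["hosts", ".tmp"], dir)
  tmp.fsync            # make sure the data hit the disk before the rename
  File.rename(tmp.path, path)

atomic_write(File.join(Dir.tmpdir, "example.conf"), " localhost\n")

Any reader of the destination file sees either the old content or the new content in full, never a half-written mixture.
path = File.join(Dir.tmpdir, "gr_atomic_test.conf")
atomic_write(path, "hello\n")
raise unless == "hello\n"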

Sunday, June 24, 2012

Deterministic Chef template

It is very common in a chef recipe to generate a configuration file using a template. Let's look at a simple example:

template "/etc/hosts" do
  source "hosts.erb"
  owner "root"
  group "root"
  mode 0644


<% if @node[:hosts][:entries] %>
<% @node[:hosts][:entries].each do |ip, name| %>
<%= ip %> <%= name %>
<% end %>
<% end %>

Here we fill data into the template in the form of a hash, with IPs as keys and hostnames as values.

The caveat here is that by using a hash data structure we can potentially get a different /etc/hosts each time we run chef-client, even though the data has not changed. This is due to the lack of ordering guarantees for elements of a hash.

Whether it matters depends on the context. In the case of /etc/hosts, the order of hosts in the file may not be important. But if a change notification triggers other actions, like starting/stopping services, it can create unwanted outcomes. In addition, if some sort of compliance file-monitoring tool like tripwire is in place, it may create annoying alarms.

Anyway, we can simply avoid it by creating a more deterministic template. In the case of /etc/hosts, rewrite the template as follows:


<% if @node[:hosts][:entries] %>
<% @node[:hosts][:entries].keys.sort.each do |ip| %>
<% name = @node[:hosts][:entries][ip] %>
<%= ip %> <%= name %>
<% end %>
<% end %>
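We can check the determinism outside chef with plain ERB; the @node hash below mimics chef's node data, and the template mirrors the sorted version above (using explicit <%- -%> trim markers so tag-only lines produce no output):

require 'erb'

# Fake node data standing in for Chef's @node
@node = { hosts: { entries: { "" => "web2",
                              "" => "web1" } } }

template = <<~ERB
  <%- if @node[:hosts][:entries] -%>
  <%- @node[:hosts][:entries].keys.sort.each do |ip| -%>
  <%= ip %> <%= @node[:hosts][:entries][ip] %>
  <%- end -%>
  <%- end -%>

# Sorted keys give byte-identical output on every run
puts, trim_mode: "-").result(binding)

Running it any number of times prints the entries in the same sorted order, so the generated file's checksum never changes without a data change.
out =, trim_mode: "-").result(binding)
raise unless out == " web1\n web2\n"
raise unless out ==, trim_mode: "-").result(binding)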

Sunday, April 8, 2012

Dump backtrace of all threads in ruby

I have created a few lines of code that allow me to dump the stacktrace/backtrace of all threads of a running ruby process. If the ruby version supports Thread#backtrace, it will print the backtraces of all threads; otherwise it will print that of the current thread.
To use it, first create a file ruby_backtrace.rb with the following content:
require 'pp'

def backtrace_for_all_threads(signame)"/tmp/ruby_backtrace_#{}.txt", "a") do |f|
    f.puts "--- got signal #{signame}, dump backtrace for all threads at #{}"
    if Thread.current.respond_to?(:backtrace)
      Thread.list.each do |t|
        f.puts t.inspect
        PP.pp(t.backtrace.delete_if {|frame| frame =~ /^#{File.expand_path(__FILE__)}/},
              f) # remove frames resulting from calling this method
    else
      PP.pp(caller.delete_if {|frame| frame =~ /^#{File.expand_path(__FILE__)}/},
            f) # remove frames resulting from calling this method

Signal.trap(29) do   # 29 is the INFO signal used below
Then require this file from the ruby script you want to inspect, e.g. t2.rb:
require 'thread'
require './ruby_backtrace'

def foo

def bar
  sleep 100

thread1 = do

thread2 = do
  sleep 100

Finally run the script, send the INFO signal to it and look at the file /tmp/ruby_backtrace_pid.txt, where pid is the process id:
$ ruby t2.rb &
[2] 4719
$ kill -29 4719
$ kill -29 4719
$ cat /tmp/ruby_backtrace_4719.txt 
--- got signal INFO, dump backtrace for all threads at 2012-04-07 17:33:14 +0200
["t2.rb:21:in `call'", "t2.rb:21:in `join'", "t2.rb:21:in `<main>'"]
#<Thread:0x... sleep>
["t2.rb:9:in `bar'", "t2.rb:5:in `foo'", "t2.rb:13:in `block in <main>'"]
#<Thread:0x... sleep>
["t2.rb:17:in `block in <main>'"]
--- got signal INFO, dump backtrace for all threads at 2012-04-07 17:33:15 +0200
["t2.rb:21:in `call'", "t2.rb:21:in `join'", "t2.rb:21:in `<main>'"]
#<Thread:0x... sleep>
["t2.rb:9:in `bar'", "t2.rb:5:in `foo'", "t2.rb:13:in `block in <main>'"]
#<Thread:0x... sleep>
["t2.rb:17:in `block in <main>'"]

Wednesday, August 10, 2011

Craft a solid chef-server

The out-of-the-box chef server installation is NOT enough to support a large-scale deployment. To handle hundreds of nodes we need to craft it a bit. I highlight here some configuration tweaks.
1. Run multi instances with Apache or Nginx http server as front end.
chef-server-api and chef-server-webui are single-process applications; the only way to scale them is to run multiple instances of them behind a front-end reverse HTTP proxy.
The front-end HTTP server can also be configured to perform SSL encryption, making communication between nodes and chef-server-api secure. Note that chef-server-webui is also a chef-client, so it should use the URL of the reverse proxy server to communicate with chef-server-api.
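An illustrative (and deliberately minimal) Nginx fragment for this setup, assuming two chef-server-api instances on ports 4000 and 4001 and local certificate paths, might look like:

upstream chef_server_api {
    server;   # chef-server-api instance 1
    server;   # chef-server-api instance 2

server {
    listen 443 ssl;
    server_name chef.example.com;
    ssl_certificate     /etc/nginx/ssl/chef.crt;
    ssl_certificate_key /etc/nginx/ssl/chef.key;

    location / {
        proxy_pass http://chef_server_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;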
2. Use ruby 1.9
The chef server components are CPU-hungry; using ruby 1.9 can boost performance as much as twofold compared to ruby 1.8.
3. Couchdb and big hard disk
Avoid unnecessary headaches by installing the latest version of Couchdb. The version that comes with the OS is usually old and buggy, and often crashes under high load with a database of hundreds of GB. Hard disk is cheap; always keep around 30% of the space for Couchdb data files and log files free.
Also learn a bit of Erlang so you can read and understand crash dump files from Couchdb as well as change its configuration.
4. Solr server
Chef-solr is the de facto Solr server for full-text search. The default config is pretty small for any serious operation. As it is a java application based on Lucene, we need to increase the java heap size. You also need to modify the Solr config (see this tip) to make full-text indexing run faster. Believe me, you will have to rebuild the solr index more often than you think, and a proper solr config will save you days of frustrated waiting.
5. Rabbitmq-server
Follow the same advice as for Couchdb. Rabbitmq-server is also an Erlang application. It needs enough memory to work reliably. Give it a few GB of RAM if you do not want it to crash and then have to rebuild the solr index.
6. Not enough 
Distribute load by running chef-client at a different deterministic time on each node, avoiding running chef-client on all nodes at the same time.
Consider running each chef-server component on a separate box, or even one component on many boxes. But don't do it before you have tried all the options above: the more complex the configuration, the more time you have to spend on it, and you end up serving chef instead of it serving your infrastructure.
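One simple way to get a deterministic yet spread-out schedule is to hash the hostname into a minute offset; this helper is a sketch, not part of chef:

require 'digest'

# Map a hostname to a stable minute offset within the run interval, so
# every node always starts chef-client at its own fixed minute instead of
# all nodes hitting the server at once.
def chef_client_splay(hostname, interval_minutes = 30)
  Digest::MD5.hexdigest(hostname).to_i(16) % interval_minutes

# e.g. use the offset as the minute field of the node's cron entry:
# "#{chef_client_splay(`hostname`.strip)}-59/30 * * * * chef-client"

Because the offset is derived from the hostname, it never changes between runs, unlike a random splay.
raise unless chef_client_splay("node1") == chef_client_splay("node1")
raise unless (0...30).include?(chef_client_splay("node1"))
raise unless (0...10).include?(chef_client_splay("node2", 10))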
7. Put chef-server configuration in a cookbook
Create your own cookbook to automate all the changes you make on the chef-server so you can recreate an exact new chef-server using chef-solo in a matter of minutes (when needed). After all, who will believe that you can automate the infrastructure if you cannot automate your own stuff?

Wednesday, May 25, 2011

Migrate configuration to Chef

One of the big headaches in implementing Chef, or any automated configuration management tool, is migration. We usually have to face a large legacy configuration that is poorly designed and implemented, which makes things worse.
I want to share some of our own experiences in migrating a relatively large configuration in terms of both scale and complexity.
The tools and techniques
They are
1. A decent, fast and rock-solid version control system: I recommend git.
2. Semantic diff tools: a diff of text files is OK but not enough; depending on the format of your configuration (XML, properties, yaml, JSON), you may need to write your own semantic diff that can make an intelligent diff of these formats.
3. The source code of the applications that use the configuration: the migration involves legacy applications, and access to the source code is an absolute need in order to do any refactoring.
4. Collaboration: with the new tool, developers need to change/clean up the code that accesses configuration data, operations people need to change their practices, and we all need to learn what to use, what to trust and where to pay attention.
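For formats with an easy parser, the semantic diff from point 2 can start as small as parsing both files and comparing the data structures; this JSON sketch ignores key order and whitespace, which a plain text diff cannot:

require 'json'

# Two JSON documents are semantically equal when their parsed structures
# match, regardless of key order, indentation or whitespace.
def semantically_equal?(json_a, json_b)
  JSON.parse(json_a) == JSON.parse(json_b)

old_conf = '{"port": 8080, "hosts": ["a", "b"]}'
new_conf = "{\n  \"hosts\": [\"a\", \"b\"],\n  \"port\": 8080\n}"
semantically_equal?(old_conf, new_conf)  # => true, only the layout differs

A real tool would also report which keys differ, but even this equality check is enough to verify shadow configurations.
raise unless semantically_equal?('{"a":1,"b":2}', '{"b":2,"a":1}')
raise if semantically_equal?('{"a":1}', '{"a":2}')
raise unless semantically_equal?('[1,2]', "[1, 2]")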
Shadow configuration file
In order to minimize the impact on the existing system, at the beginning we let chef create shadow configurations, which are then verified against the existing configuration using diff tools. The shadow configuration generated by chef replaces the actual one only after we make sure that the two are semantically the same from the point of view of the program using them.
As the shadow configuration we can have a configuration file with a different prefix/suffix or in a different directory, e.g. if the actual configuration file is /etc/hosts then the shadow one will be /etc/hosts.chef or /etc-chef/hosts or /root-chef/etc/hosts. I personally prefer the complete shadow directory because it is easier to check and to remove when needed.
The cookbook needs one attribute (it can be the name of the directory where we are going to generate the configuration files) saying whether we are going to generate the shadow or the actual configuration.
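As an illustrative sketch (the attribute and path names are made up), such a recipe can prepend the directory attribute to every generated path:

# node[:config_root] selects where files are generated: "" (empty) writes
# the real configuration, "/root-chef" writes the shadow tree for diffing.
prefix = node[:config_root] || ""

directory "#{prefix}/etc" do
  recursive true
  only_if { prefix != "" }   # the real /etc already exists

template "#{prefix}/etc/hosts" do
  source "hosts.erb"
  owner "root"
  group "root"
  mode 0644

Switching between shadow and actual mode is then a one-attribute change in the role.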
Conflicting changes
In an ideal world, we would have just one configuration management system and one administrator modifying the system. But reality is different: there are usually more actors making changes, so there is potential for conflict. E.g. Chef modifies a file, and then an administrator, a script or another configuration system, not knowing this, also modifies it. The worst case is that everyone assumes the file has the content he expects, and things get broken: scripts that have worked for many years suddenly fail.
How do we minimize the conflicts? Communicate well with other team members about what is managed by Chef, which they should modify through Chef and not manually or with other tools. Various methods can be used, e.g. a) wiki pages, b) putting a clear notice as a comment at the beginning of every file managed by Chef, such as "This file is generated by Chef, you shall not modify it as it will be overwritten", c) notifying all people each time a file is modified by Chef.
Top-down vs. bottom-up
There are basically two approaches to starting the migration project: 1) top-down and 2) bottom-up.
1. In the top-down approach, you start by creating a cookbook/recipe for one configuration item (e.g. syslog-ng; the cookbook will be fairly generic so it is usable by all environments). Then you extract the specific data (e.g. the ip of the syslog server, the sources of logs) from the configuration file of each server and put it as attributes into a role. After finishing one configuration item, you continue with the others until you have all you need in Chef.
2. With the bottom-up approach, we first create one generic cookbook/recipe per server role (e.g. middleware), then add all configuration files from the servers of all environments to it. The structure of the cookbook files can look like:
      middleware/files/default             # files that are the same for all hosts of all envs
      middleware/files/default/dev         # files that are the same for all hosts of the development env
      middleware/files/default/dev/host1   # files that are specific to host1 of the development env
The recipe can easily be developed to copy these configuration files to a server depending on environment and hostname (node[:hostname]). After everything is in Chef and works well, we start refactoring and splitting this big generic cookbook into many smaller cookbooks, adding more roles and making things easy to reuse. This step is a kind of continual improvement process aimed at making our chef repository better and better.

In our project we took the first approach because it seemed more natural at first sight; however, we later changed to the second one, which I think has many benefits.
With the second approach, after we put everything into Chef (which can be done pretty quickly), we can announce that from now on all changes have to be done through Chef. The very first results are seen immediately: a) there is a single point of making changes, b) changes are well audited and communicated as the Chef repository is under version control, c) conflicting changes are minimized.