Testing clusters with infrataster

Infrataster is described by its author as an ‘Infrastructure Acceptance Testing Framework’. In this post I’ll show how it can be combined with chef-provisioning to test clusters built in Vagrant (although chef-provisioning has drivers for many other systems as well, including most major cloud providers).

This post is largely based on the code found in the DynInc GitHub repository. It includes a test_cluster rake task that will spin up two web servers and a test client, then run a basic test against each. I covered briefly how to spin up a simple cluster with chef-provisioning in a previous post, so I won’t go into it here.

Once your cluster is up and running, all that is required is to tell Infrataster about the servers and then write some tests. The first step is to define the servers. If you’ve used the example repository above then you should have servers that were set up by chef-provisioning using chef-zero. The node data for those servers will be stored in the chef_repo of the chef-zero server, which will be in provision/repo for my example. We can load this data into Infrataster using the infrataster-plugin-chef gem. The example spec_helper below defines one node, but the example repository contains a full spec_helper.rb.
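At its core the helper is just Infrataster’s server definitions. A minimal sketch is below; the server name and IP address are assumptions, and in the real repository the address would be derived from the chef-zero node data via infrataster-plugin-chef rather than hard-coded:

```ruby
# spec/spec_helper.rb
require 'infrataster/rspec'

# Define one of the web servers built by chef-provisioning.
# With infrataster-plugin-chef the address can come from the node
# data stored in provision/repo instead of being hard-coded here.
Infrataster::Server.define(
  :web1,              # name used to reference this server from specs
  '192.168.33.10',    # address of the Vagrant VM (assumption)
  vagrant: true       # route the connection through the Vagrant VM's SSH
)
```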

This is all that is needed to prepare for infratasting. The final step is to write some tests.
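A basic spec against one of the web servers might look like the following sketch; the server name matches the spec_helper definition and the assertion is deliberately simple:

```ruby
# spec/web_spec.rb
require 'spec_helper'

describe server(:web1) do
  # Infrataster's http resource makes a request from the test client
  describe http('http://192.168.33.10/') do
    it 'responds with status 200' do
      expect(response.status).to eq(200)
    end
  end
end
```

Run it with `bundle exec rspec` once the cluster is up.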

These are the only steps required to get basic functional testing of clusters. For a full example including multiple servers and ssh_exec tests, see the chef_infrataster example repository. The possibilities for testing multi-node clusters with infrataster are almost endless: the pluggable architecture allows you to add new types of tests easily. There are already gems available for testing things like DNS servers, MySQL or PostgreSQL servers and more, and they are also very simple to write. See https://github.com/dyninc/infrataster-plugin-ldap for a very basic example of a plugin.

Posted in Uncategorized

Building clusters with chef-provisioning

Chef has the wonderful test-kitchen tool for testing an individual cookbook on an individual server, but what if your application requires two or more servers to run? Perhaps a load balancer and two application servers? Right now there isn’t a clear solution to this problem, at least not one I have found, so I will describe an approach I have been experimenting with here.

The first part of the problem is creating the cluster, and chef-provisioning (the new name for chef-metal) can help with this. Chef-provisioning adds machine and machine_batch resources to Chef so that you can create machines the same way you create users or install packages.

Before you can use chef-provisioning to create the machines you’ll need to provide it with some data. Chef-provisioning uses chef-zero to converge chef-client on your newly created nodes, and for that to work you’ll need to provide the required cookbooks and tell Chef where to find them. For now, the easiest way to do this is to create a chef-repo folder with a cookbooks folder inside it and place the cookbooks there.

~/w/demo> mkdir -p repo/cookbooks
~/w/demo> cd repo/cookbooks
~/w/demo/repo/cookbooks> knife cookbook site download apt
~/w/demo/repo/cookbooks> tar -xzf apt-2.6.1.tar.gz

Once you’ve created the cookbooks folder you need to tell chef-zero where to find it. To do this, create a client.rb containing the ‘chef_repo_path’ setting pointing at your new folder, and pass it to the chef-client commands we are going to use later.

~/w/demo> echo "chef_repo_path File.join(File.expand_path(File.dirname(__FILE__)), 'repo')" > client.rb

With the above setup, let’s have a look at a basic example of using chef-provisioning to set up a single server.
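A minimal provision.rb might look like this sketch; the box name is an assumption:

```ruby
# provision.rb - create one Vagrant VM and apply the apt cookbook
require 'chef/provisioning'

with_driver 'vagrant'
with_machine_options vagrant_options: {
  'vm.box' => 'ubuntu/trusty64'   # box name is an assumption
}

machine 'web1' do
  recipe 'apt'
end
```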

Once created as provision.rb you can execute chef-client to provision the server for you.

~/w/demo> chef-client -z provision.rb -c client.rb

When executed with chef-client, provision.rb will create a single VM in Vagrant and run the apt::default recipe on it. In the real world you might separate the definition of the Vagrant driver from the description of the machine you want to create. This would allow you to have multiple different drivers, e.g. Vagrant, EC2 and VMware, for a single machine.

What about the example above with a load balancer and two web application servers? Let’s look at a more complicated example.
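A sketch of the two files follows; the cookbook and box names are assumptions:

```ruby
# vagrant.rb - the driver definition, kept separate so it can be swapped
# for another driver (EC2, VMware, ...) without touching the machines
require 'chef/provisioning'
with_driver 'vagrant'
with_machine_options vagrant_options: { 'vm.box' => 'ubuntu/trusty64' }

# provision-cluster.rb - two batches: the web servers, then the LB
machine_batch 'webs' do
  machine 'web1' do
    recipe 'mywebapp'          # cookbook name is an assumption
  end
  machine 'web2' do
    recipe 'mywebapp'
  end
end

machine_batch 'lb' do
  machine 'lb1' do
    recipe 'myloadbalancer'    # cookbook name is an assumption
  end
end
```

Machines inside one machine_batch converge in parallel, while the batches themselves run in order, which is what gives us “web servers first, then the load balancer”.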

~/w/demo> chef-client -z vagrant.rb provision-cluster.rb -c client.rb

Here I have split the Vagrant driver definition out into a separate file, as mentioned above. provision-cluster.rb, which contains the machine definitions, defines two batches of servers. Machines in the same batch are configured in parallel. In the example above we set up two web servers in the first batch, then once they are running we set up the load balancer in the second batch.

That’s all for now, but for an even more complete example, including infrataster tests, check out https://github.com/dyninc/chef_infrataster.

Last updated: February 3, 2015 at 17:58

Posted in Chef

Sniffing GMLAN with a Raspberry Pi

GMLAN is the system used in most modern GM cars to control the car’s many functions: electric windows, air conditioning, lights, even things like the accelerator position. There are many off-the-shelf ways to talk to the main CAN bus of the car, and software to do things like read your RPM or reset your fault light. However, GMLAN exists on a different physical pin and is a single-wire CAN bus, unlike those other systems which have CANH and CANL lines. I wrote a post previously on getting an Arduino to work with the GMLAN on a Vauxhall Vectra, and the same principles apply to the pinout and cable design here, as well as grounding the CANL line. You can read that article here for more details on the physical cable and pinouts required.

I used a ‘PICAN’ add-on board for my Pi to connect to the GM single-wire bus. Because this board uses the MCP2515 chip, the first step is to compile the kernel with the appropriate SPI and MCP2515 drivers enabled. The process is documented (slightly confusingly) here: http://elinux.org/RPi_CANBus. I also used the spi-config kernel module available at https://github.com/msperl/spi-config to make life a little easier.

Once all the modules are compiled you need to load them in the right order to get everything up and running. Because I’m using the PICAN to listen to GMLAN, I set the SPI speed to 16 MHz and the CAN bitrate to 33.333 kbit/s.

root@raspberrypi:~# modprobe spi-bcm2708
root@raspberrypi:~# insmod spi-config.ko devices=bus=0:cs=0:modalias=mcp2515:speed=16000:gpioirq=25:pd=20:pds32-0=16000000:pdu32-4=0x2002:force_release
root@raspberrypi:~# modprobe mcp2515
root@raspberrypi:~# ip link set can0 type can bitrate 33333
root@raspberrypi:~# ip link set up can0

This is basically all that is required to be able to send and receive on the GMLAN bus. You can use the binaries in the can-utils suite to dump (candump) and send (cansend) messages. I also strongly recommend checking out the GMLAN_Bible for a list of addresses and payloads for various functions people have discovered.
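Usage is then the standard can-utils fare; for example (the ID and payload below are purely illustrative, not a known GMLAN function):

```shell
# Log every frame seen on the single-wire bus
candump can0

# Send a frame: 11-bit ID 0x415 with a 5-byte payload
cansend can0 415#000400F35C
```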

Last updated: February 3, 2015 at 17:59

Posted in Car, Raspberry Pi

Running your own community site API

Berkshelf has a ‘site location’ option for use in the Berksfile which allows you to source cookbooks directly from the Opscode community site API. This is great if you want easy access to a lot of cookbooks, but as a business it’s likely you want to control which cookbooks are uploaded to your Chef servers, especially in production. In my previous post I spoke about creating a repository of all your cookbooks with Jenkins and Berkshelf, and thanks to the Opscode cookbook API documentation, presenting this as a site location for Berkshelf isn’t too hard.
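For reference, a Berksfile using the site location looks something like the sketch below; the internal URL is hypothetical, and the cookbook names are just examples:

```ruby
# Berksfile - source cookbooks from a community-site-style API
site 'http://cookbooks.opscode.com/api/v1/cookbooks'
# or, pointed at your own internal service instead:
# site 'http://cookbooks.internal.example.com/api/v1/cookbooks'

cookbook 'apt'
cookbook 'ntp'
```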

To get you started I have published a basic working Sinatra implementation of a community cookbook API on the DynInc GitHub account here. This implementation uses a regular Chef API endpoint, as used by knife, to source the cookbooks. Backing it with an internal verified cookbook repository makes it ideal for use with the methods described in my previous post for creating a tested internal cookbook repository. It is especially useful if you want to deploy your Chef servers or organisations using Jenkins and Berkshelf with your internal cookbook repository as a source; you can read more on that in another of my posts.

Posted in Chef

Managing a chef server or organisation with berkshelf

In my previous post I talked about creating a private cookbook repository with Jenkins and Berkshelf. In this post I’ll discuss briefly how to use very similar methods to manage open source Chef servers, hosted Chef accounts or Private Chef organisations.

At Dyn we use Private Chef and store each of our Private Chef organisations in version control using GitHub. Each repository is very lightweight and contains only the bare minimum of information required for Jenkins to assemble the required cookbooks, data bags, roles and environments and upload them to the destination organisation after running any necessary tests. The repo structure looks a lot like a basic berks layout; I’ve included it below with some of the cruft removed. We use the Berksfile to specify which cookbooks, at which versions, should be uploaded into the target organisation.
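The layout is roughly as follows. The tree is reconstructed from the paths used later in this post; the Thorfile is an assumption based on our use of thor-scmversion:

```text
.
├── .berkshelf/
│   └── target.json      # destination config for berks upload
├── .chef/
│   └── target.rb        # knife config pointing at the target organisation
├── Berksfile
├── Berksfile.lock
├── Thorfile             # thor-scmversion tasks (assumption)
└── chef-data.version    # git tag of the chef-data repo to deploy
```

And a Berksfile pinning cookbook versions might look like this sketch (names and versions are illustrative; chef_api: :config tells Berkshelf to reuse the Chef API details from your knife configuration):

```ruby
cookbook 'dyn_base', '= 1.2.0', chef_api: :config   # hypothetical internal cookbook
cookbook 'apt',      '= 2.6.1', chef_api: :config
```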


This should look fairly familiar to existing berkshelf users, with a couple of exceptions, so I’ll jump straight in to explaining what happens to translate this into an up to date Chef organisation on a Private Chef server, with a couple of initial notes first…

  • We keep all of our data bags, roles and environments in a separate repository. We version that repository using thor-scmversion, so its versions look much the same as a cookbook’s.
  • We do the same thing with these ‘organisation’ repositories; they are versioned with thor-scmversion, and each time we deploy to production we bump the version number.

Stage One: Check out the chef-data from git into the working directory, using the value stored in chef-data.version as the tag to check out. This ensures that we always get the correct chef data in each organisation and can be running different versions in different places, e.g. test, staging, prod.

Stage Two: Run berks install in the working directory to ensure all required cookbooks and dependencies are up to date within the ‘berkshelf’. We use the cookbook repo I described in my previous post to source the cookbooks.

Stage Three: As the first step of synchronising the chef data on the server, delete any roles, environments, data bags and data bag items that have been removed in this version of the chef data and are no longer required.

Stage Four: Upload new and modified data bags, roles and environments to the Chef server. We store the destination config for knife in .chef/target.rb, so we suffix our knife commands with -c $WORKSPACE/.chef/target.rb to ensure we act on the correct Chef server.

Stage Five: Run berks upload using the target information stored within the git repo. We store a config file, target.json, in the .berkshelf folder of the checkout. This specifies the destination we are uploading to, so the command run is “berks upload -c .berkshelf/target.json”.

Stage Six: Finally, once everything is completed and the Chef server is up to date, we run “thor version:bump auto --default patch” to mark the version in git. This way, if we want to revert for any reason, we know we can go back to the previous tag in the repository and get the state exactly as it used to be.

Finally, all together as a script for Jenkins to run:

bundle exec berks install
if [ -f chef-data.version ]; then
  git clone git@some.git.server:Organisation/dyn_chef_data.git
  TAG=`cat chef-data.version`
  cd dyn_chef_data
  git checkout $TAG
  # The chef data (data bags, roles, environments) lives in this checkout
  $WORKSPACE/dyn_chef_data/scripts/delete_orphaned.rb roles
  $WORKSPACE/dyn_chef_data/scripts/delete_orphaned.rb environments
  $WORKSPACE/dyn_chef_data/scripts/delete_orphaned.rb data_bags
  for i in `ls data_bags`; do
    jsonlint data_bags/$i/*.json
    bundle exec knife data bag create $i -c $WORKSPACE/.chef/target.rb
    bundle exec knife data bag from file $i data_bags/$i -c $WORKSPACE/.chef/target.rb
  done
  for i in `ls roles`; do
    bundle exec knife role from file roles/$i -c $WORKSPACE/.chef/target.rb
  done
  for i in `ls environments`; do
    bundle exec knife environment from file environments/$i -c $WORKSPACE/.chef/target.rb
  done
  cd $WORKSPACE
fi
bundle exec berks upload -c .berkshelf/target.json

Posted in Chef

Using Jenkins and Berkshelf to create a cookbook repository

Berkshelf suggests the idea of using your Chef server as an artifact server so that cookbooks and dependencies can be pulled directly from the Chef server using the chef_api location. As they note, this is especially useful if your organisation has its own internal cookbooks. In my new position at Dyn we have a number of internal cookbooks, such as those that define our base operating system configuration, that we need to keep track of. We have configured a pipeline that takes changes from proposed to production, and I’ll outline the full process over the next few posts here.

The first stage of the process is to test cookbooks and proposed changes; our tool of choice is Jenkins. Berkshelf really helps us out here by taking care of a lot of the heavy lifting, and combined with tools like thor-scmversion it’s possible to get a high degree of automation into the release process.

The workflow for cookbooks is fairly straightforward and should be familiar. We use GitHub, and features are developed in branches. Once an engineer is happy with their changes they send in a pull request. This triggers a build in Jenkins which performs some lightweight rudimentary checks (foodcritic correctness, are the dependencies resolvable with berks install), nothing too intensive. We use the Jenkins Pull Request Builder plugin, which feeds the result of the build back into the pull request for everyone to see.

Once everyone is happy, the pull request is merged to master, which triggers a full build of master in Jenkins. The test regimen is slightly more intensive now: all the pull request checks run again alongside some more involved checks.

If the build succeeds, the final post-build tasks are triggered. The first of these uses thor-scmversion to increment the version number of the cookbook. We default to a patch-level increment (0.0.X), but this can be overridden by engineers adding the #minor or #major tags to a commit message. Once the version is incremented, the cookbook is uploaded to an organisation on our Chef server dedicated as a cookbook repository. From here cookbooks can be pulled in using the chef_api location within a Berksfile, for use in other cookbooks and for deployment to testing or production organisations.
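A consuming Berksfile can then pull from that repository with the chef_api location; a sketch (server URL, names and key path are all hypothetical):

```ruby
# Berksfile - pull an internal cookbook from the repository organisation
cookbook 'dyn_base',
         chef_api: 'https://chef.example.com/organizations/cookbooks',
         node_name: 'jenkins',
         client_key: '/etc/chef/jenkins.pem'

# or reuse the Chef API details from your knife configuration:
# cookbook 'dyn_base', chef_api: :config
```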

My next post will focus on how we use the same workflow to version whole Chef organisations with scmversion and Berkshelf.

Posted in Chef

Ultrasonic ranging with a Raspberry Pi using Python and an SRF01

The GPIO pins on the RPi bring a range of possibilities for interfacing with sensors. One such sensor is the SRF01 ultrasonic ranger. The SRF01 has a serial interface, so to talk to it over the GPIO pins a quick reconfiguration is needed under Linux, because the serial port is normally configured for boot logging and a serial console.

The first change is in /boot/cmdline.txt. You need to remove the serial console from the command line; the part you are looking to remove should look like this…

console=ttyAMA0,115200 kgdboc=ttyAMA0,115200

The second change is in /etc/inittab. You need to comment out or remove the serial console here as well. Look for and comment out or remove the following line…

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Once this is done your RPi is ready to talk serial over the GPIO pins. For the hardware I used the arrangement below. The adaptor I’m using to get the GPIO pins onto the breadboard is a commonly available Adafruit T-Cobbler.

[Breadboard picture: SRF01 connected to a breadboard with a T-Cobbler]

Generally the setup is very simple, ignoring the LED, which is a multicolour type run off three of the other GPIO pins. The wires are connected to 3.3V (red) and ground (black), and the yellow wire is a single cable for both TX and RX, which is why the diode is included.

For the software side I used Python; if you want to use something like C, there is a code example on the Robot Electronics website. The code is a very simple test that initialises and takes a reading from the SRF01, but it provides all the basic building blocks to build upon.
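A minimal sketch of that kind of test code is below. The protocol details (9600 baud, default address 0x01, command 0x54 for “real ranging, result in centimetres”) are my reading of the SRF01 datasheet: treat them as assumptions and check the datasheet before relying on them. Note that because TX and RX share one wire, the sensor sees an echo of everything we transmit, which has to be read back and discarded.

```python
def parse_range(data):
    """Convert the SRF01's two-byte big-endian reply into centimetres."""
    return (data[0] << 8) | data[1]


def read_range(port="/dev/ttyAMA0"):
    """Take one reading from an SRF01 on the given serial port."""
    import serial  # pyserial is assumed to be installed

    srf = serial.Serial(port, 9600, timeout=1)
    # Address 0x01 (default), command 0x54 = real ranging in cm
    # (command values assumed from the SRF01 datasheet)
    srf.write(bytes([0x01, 0x54]))
    srf.read(2)                    # discard the echo of our own two bytes
    return parse_range(srf.read(2))

# Example usage (needs the serial console disabled as described above):
# print(read_range())
```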

Posted in Raspberry Pi

New IPMI knife plugin

I’ve just published to my GitHub and to Rubygems a knife plugin that integrates IPMI power control into knife.

Rubygems: https://rubygems.org/gems/knife-ipmi

GitHub: https://github.com/Afterglow/knife-ipmi

Available ipmi subcommands: (for details, knife SUB-COMMAND --help)

knife ipmi power off NODE
knife ipmi power on NODE
knife ipmi power reset NODE
knife ipmi power soft NODE
knife ipmi power status NODE

The plugin relies on the ruby-ipmitool gem for issuing commands, the caveat being that for it to work under Ruby 1.9.x you will need some fixes which have been merged on GitHub but are not currently present in the Rubygems release (https://github.com/ehowe/ruby-ipmitool/pull/5). I also suggest you run the ohai-ipmi plugin on your nodes to populate the node[ipmi][address] attribute. The final part of the puzzle is setting knife[:ipmi_user] and knife[:ipmi_pass] in your knife.rb. Obviously, to work well this plugin relies on you having a set IPMI user on all your machines for knife to log in with; fortunately my chef-ipmi cookbook can help you both with installing the plugin and creating the user.
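For reference, the knife.rb additions are just the two settings named above (the values here are placeholders):

```ruby
# knife.rb
knife[:ipmi_user] = 'ipmiuser'
knife[:ipmi_pass] = 'supersecret'
```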

Posted in Chef, Gems, Knife, Ruby

Configuring hardware with Chef

I published a couple of cookbooks on my GitHub recently for provisioning bare metal with Chef: that is to say, configuring IPMI and RAID controllers, creating arrays, that sort of thing. I use the cookbooks in conjunction with chef-solo to configure new servers in a bootstrap environment.

To date I’ve published two cookbooks, one for IPMI and one for RAID. Both are built on top of existing open source Ohai plugins that have been extended where necessary. The list of supported hardware is a little narrow, as I only really have one type of hardware to test on right now, but the beauty of IPMI is that it will probably work on anything that supports the standard; likewise the RAID cookbook should work with any MegaCLI-compatible controller. Extending it to further controllers should just require additional providers.

What’s available today:

https://github.com/Afterglow/chef-ipmi – A simple cookbook with LWRPs to configure the basic details of an IPMI controller, including setting up networking information and managing users. An example set of node attributes is included in the readme to show how to configure these settings.

https://github.com/Afterglow/chef-raid – This cookbook provides resources for RAID arrays and a MegaRAID provider to allow for configuring LSI controllers. It should be fairly trivial to expand this to support other controller types, such as HP’s SmartArray platform, with additional providers. The cookbook can reset and configure RAID controllers, building various types of array including RAID10 and RAID50 spans (in MegaRAID parlance).

Pull requests for features or fixes are welcomed through GitHub.

Posted in Chef

Talking to GMLAN on my Vectra-C with my Arduino

So I thought I would open this new blog with a success story: I got my Arduino Uno with the Sparkfun CAN-Bus shield to talk to my car (an 03-plate Vauxhall Vectra-C).

To do this there are a couple of things I found out from reading around the subject, and I have recorded them here for posterity.

You need a special cable to talk to GMLAN with the Sparkfun shield. I used this one and wired it up per the diagram here, with two exceptions: I connected the CAN-H line to pin 1 on the OBD connector, and I left the CAN-L line disconnected. For reference, the pins on the OBD connector are…


Pin 1 - SW-LS-CAN (33.3 kbit/s) or DW-FT-CAN (+) (<125 kbit/s)
Pin 2 - J1850
Pin 3 - MS-CAN (+) (95 kbit/s)
Pin 4 - Battery - (Chassis Ground)
Pin 5 - Signal Ground
Pin 6 - ISO 15765 HS-CAN (+) (500 kbit/s)
Pin 7 - K-Line
Pin 8 -
Pin 9 - DW-FT-CAN (-) (<125 kbit/s)
Pin 10 - PWM
Pin 11 - MS-CAN (-) (95 kbit/s)
Pin 12 - K-Line (KW82 Prot.)
Pin 13 - Reserved
Pin 14 - ISO 15765 HS-CAN (-) (500 kbit/s)
Pin 15 - L-Line
Pin 16 - Battery + (Constant 12V Power)


You need to connect the CAN-L pin of the MCP2515 on the shield to GND, because this is single-wire CAN (SWCAN). The easiest way to do this is probably by soldering a header onto the 5V/GND/CAN-H/CAN-L part of the shield and then linking CAN-L to GND there. I guess you could also connect the CAN-L line to ground in the OBD connector if you preferred.

With this taken into account you just need some code that works. There is some excellent discussion and example code on the Carmodder forums here. The code in that thread is a good starting point, as it mostly demonstrates how to correctly init the MCP2515 at the correct speed, as well as how to send and receive messages. Some caveats…

Most of the discussion in that thread is about 29-bit CAN and my Vectra is only 11-bit, so I had to use the (commented out) 11-bit CAN send function in the mcp2515.pde file. There are also some other differences, including smaller 2-byte IDs, no dedicated priority byte, and 8 bytes for data.

I don’t claim to understand the SPI init code, but it did not work for me; it just froze my Arduino. Instead of the provided initSPI I used something like this…
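The original snippet here is lost, so the following is a sketch of the kind of replacement init that works, using the standard Arduino SPI library rather than hand-rolled register code. The clock divider is an assumption; the MCP2515 is comfortable well below the Uno’s maximum SPI clock:

```cpp
#include <SPI.h>

void initSPI() {
  // Chip select for the MCP2515 on the Sparkfun CAN-Bus shield is D10
  pinMode(10, OUTPUT);
  digitalWrite(10, HIGH);

  SPI.setClockDivider(SPI_CLOCK_DIV4);  // 16 MHz / 4 = 4 MHz (assumption)
  SPI.setDataMode(SPI_MODE0);           // MCP2515 uses SPI mode 0,0
  SPI.setBitOrder(MSBFIRST);
  SPI.begin();
}
```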




I replaced the 11-bit send function with one that accepts a variable-length array for the data instead of fixed D0, D1, D2… arguments:


// Send a standard 11-bit CAN frame with a variable-length payload
byte can_send_11bit_message(uint16_t id, int length, byte packetdata[])
{
  // READ STATUS instruction (0xA0) returns, among others:
  //   bit 2 - TXB0 TXREQ, bit 4 - TXB1 TXREQ, bit 6 - TXB2 TXREQ
  byte status = SPI_ReadWrite(0xA0);

  // Search for a free transmit buffer
  byte buffer_free;
  byte address;
  if (bit_is_clear(status, 2)) {
    address = 0x31;       // TXB0SIDH - standard ID register, buffer 0
    buffer_free = 0x01;
  } else if (bit_is_clear(status, 4)) {
    address = 0x41;       // TXB1SIDH - standard ID register, buffer 1
    buffer_free = 0x02;
  } else if (bit_is_clear(status, 6)) {
    address = 0x51;       // TXB2SIDH - standard ID register, buffer 2
    buffer_free = 0x04;
  } else {
    return 0;             // No free buffer, message not transmitted
  }

  // Set the chosen buffer to the lowest internal priority (TXBnCTRL sits
  // one register below TXBnSIDH). This is independent of bus arbitration
  // priority (on the bus, a lower ID always wins).
  mcp2515_write_register(address - 1, 0x00);

  // Standard identifier: high 8 bits into SIDH, low 3 bits into SIDL[7:5]
  mcp2515_write_register(address, (uint8_t)(id >> 3));
  mcp2515_write_register(address + 1, (uint8_t)(id << 5));

  // Data length code, then the payload into TXBnD0 onwards
  mcp2515_write_register(address + 4, length);
  for (int c = 0; c < length; c++) {
    mcp2515_write_register(address + 5 + c, packetdata[c]);
  }

  // Send the CAN message: RTS (Request To Send) for the chosen buffer
  SPI_ReadWrite(0x80 | buffer_free);
  return 1;
}


So, armed with the working send and receive functions from the Carmodder forums, we can log data from the CAN bus and also inject our own packets. I used a small uSD card for logging, which seemed easiest.


void processMessage() {
  // Append one line per received frame: timecode, header, length, data bytes
  File myFile = SD.open("output.txt", FILE_WRITE);
  if (myFile) {
    myFile.print(millis(), DEC);   // timecode in ms since boot
    myFile.print(" : ");
    myFile.print(heady, HEX);      // frame header (ID)
    myFile.print(" | ");
    myFile.print(datalength, DEC);
    myFile.print(" | ");
    for (int i = 0; i < datalength; i++) {
      myFile.print(message[i], HEX);
      myFile.print(" ");
    }
    myFile.println();
    myFile.close();
  }
}

The code above writes every message to a file on the SD card, including a timecode, the header, the length and the data bytes. Like this…


108670 : 622 | 8 | 0 0 40 0 0 0 0 0
108692 : 360 | 3 | 0 0 0
108712 : 340 | 2 | 0 0
108730 : 626 | 8 | 0 11 0 0 0 0 0 0
108751 : 440 | 8 | A8 6C 40 FF 0 70 50 C
108775 : 170 | 1 | 0
109383 : 400 | 5 | 2 0 0 0 0
109619 : 230 | 5 | 5 0 40 0 0
109640 : 405 | 6 | 20 0 0 0 0 0
109691 : 420 | 5 | C2 0 10 0 0
109710 : 23A | 3 | 0 0 0
109731 : 305 | 3 | 0 81 90
109757 : 525 | 1 | 0
109775 : 445 | 2 | 0 62
110191 : 360 | 3 | 0 0 0
110211 : 350 | 2 | 2 0
110230 : 340 | 2 | 0 0
110247 : 170 | 1 | 0
110383 : 400 | 5 | 2 0 0 0 0


Sending messages is also relatively straightforward…


byte packet[] = {0x00, 0x04, 0x00, 0xF3, 0x5C};
if (can_send_11bit_message(0x0415, sizeof(packet), packet)) {
  Serial.println("Sent packet successfully");
}

On my Vectra-C, the CAN packet above operates the passenger front electric window. I tested this by binding the commands for up and down on the window to the joystick on the CAN-Bus shield.

Posted in Arduino, Car