Category Archives: Programming

First robot remote driving test

I programmed some remote control software: a Golang receiving program on the robot and a Ruby control client using my gamepad Ruby gem and an xbox1 controller. It worked OK, but it was a bit jerky: there's no PWM, so no acceleration; it's either go or stop, and anything not totally rigid on the robot wobbles. The position of the camera also doesn't show enough of the robot, so it's hard to get a real idea of where the robot is.

I was filming; the robot was being controlled by my wife, Morwenna, from upstairs.

The robot is also prone to shedding a track if the "half turn" is used too much, that is, one track running forwards or backwards while the other is stationary. I can fix this in software if I can work out a way to do PWM on the robot that doesn't tie up the Raspberry Pi CPU.


Secret project

Tiny motor

One of my new motors. It’s about 10mm in diameter

I’ve started work on a top-secret project. I can’t really hide the fact that it’s going to be a robot, but I’m not going to say what it is, at least not just yet.

So, last night I was designing a 3D-printed mount for the tiny 3-6V motors I bought, and I started to wonder if I could cobble something together using my old Technic Lego. I dug out the Lego, but on top of it was my dusty old Meccano set; even better!

Within a short amount of time I had some motor mounts and a frame made, including tensioning springs for the caterpillar tracks. All that was left was to take it for a spin. I hooked it up to my Raspberry Pi via my Custard Pi breakout board, a ULN2803A and a custom voltage regulator circuit.

Meccano wheel mount

Motor mount

Seeing if it drives in a straight line:

Hooked up to the Raspberry Pi, controlled by microswitches. You can see the top of the Custard Pi poking out over the mess of wires that is my breadboard and the cheap wireless dongle/antenna I got from eBay. The voltage regulator circuit makes an appearance being dragged along behind:

MotorPiTX

MotorPiTX kit

Right now it only goes forwards because I didn’t have the circuitry for anything else, but I got my MotorPiTX in the mail this morning so that will change soon.


go-piglow, a lib for controlling the piglow in Golang

piglow

A few days ago I got a Piglow. It's a fairly useless but fun add-on board for the Raspberry Pi that has 18 individually user-controllable LEDs arranged in arms/legs/tentacles (whatever you want to call them).

There are example programs out there to control the LEDs, but they are all in Python, and on my Pi they are all fairly slow so I wrote my own lib for Go:

https://github.com/wjessop/go-piglow

The API is fairly straightforward; this sample program just turns some of the LEDs on and off:

The lib API allows for controlling individual LEDs, the colour rings, or the tentacles, or for displaying a value, bar-graph style, on each tentacle.

I wrote some more complex example programs to go with the lib to demo these capabilities. A simple program to flash the LEDs, a CPU meter that displays 1, 5 and 15 minute load average on each of the tentacles, and the most fun, a disco program, here is me demonstrating them:

Right now I'm running this program on my Pi to slowly fade between the colour rings.

Creating BERT dicts in Go


I’ve been learning Go recently and have written a program to connect to an existing service (written in Ruby) that sends and receives messages serialised as BERT terms.

I’m posting this partly because I had quite a lot of fun figuring it out and partly to document creating BERT dicts in Go should anyone else need to do this in the future and hit the same issues I did.

Why BERT?

I’m a big fan of BERT. It’s compact, flexible, and there are good libs available for serialisation/de-serialisation. So far I’ve exclusively been using the bert gem (written by Tom Preston-Werner, author of the BERT spec).

Creating BERT dicts

One of the great features of BERT is the complex types it supports, including dicts. The equivalent to a dict in Ruby would be a hash, in Go a map. They are really simple to create in Ruby:

require 'bert'
BERT.encode({"key" => "val"})
=> "\x83h\x03d\x00\x04bertd\x00\x04dictl\x00\x00\x00\x01h\x02m\x00\x00\x00\x03keym\x00\x00\x00\x03valj"

We can pull this apart and see exactly what the bert gem did to our data. Let’s dump the string to an array of 8-bit unsigned integers:

BERT.encode({"key" => "val"}).unpack("C*")
=> [131, 104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116, 108, 0, 0, 0, 1, 104, 2, 109, 0, 0, 0, 3, 107, 101, 121, 109, 0, 0, 0, 3, 118, 97, 108, 106]

It’s hard to see exactly what happened, but with the BERT docs and the Erlang External Term Format docs we can see how the hash got encoded.

magic | tuple |       atom "bert"      |       atom "dict"      | list, 1 elem | tuple |      binary "key"       |      binary "val"       | nil
 131,   104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116,    108, 0, 0, 0, 1,  104, 2,  109, 0, 0, 0, 3, 107, 101, 121,  109, 0, 0, 0, 3, 118, 97, 108,  106

If the formatting of that breakdown is messed up here’s a raw gist that may be clearer.

What you can see here are what the bytes represent (you can see the breakdown of each data type on the External Term Format docs). This is great, but why write a blog post just about dicts? Well, they’re easy to create in Ruby:

BERT.encode(:complex => {"key" => [:data, {:structures => "are easy to serialise"}]})
=> "\x83h\x03d\x00\x04bertd\x00\x04dictl\x00\x00\x00\x01h\x02d\x00\acomplexh\x03d\x00\x04bertd\x00\x04dictl\x00\x00\x00\x01h\x02m\x00\x00\x00\x03keyl\x00\x00\x00\x02d\x00\x04datah\x03d\x00\x04bertd\x00\x04dictl\x00\x00\x00\x01h\x02d\x00\nstructuresm\x00\x00\x00\x15are easy to serialisejjjj"

but it’s not so obvious in Go, and I hit some issues when trying to create them.

Serialising to BERT in Golang

Serialising data to BERT/BERP in Go is pretty easy for simple cases using the gobert lib:

package main

import (
    "fmt"
    "bytes"
    "github.com/sethwklein/gobert"
)

func main() {
    var buf = new(bytes.Buffer)
    bert.MarshalResponse(buf, bert.Atom("foo"))
    for _, b := range(buf.Bytes()) {
        fmt.Printf("%d ", b)
    }
    fmt.Println()
}

This gives us:

0 0 0 7 131 100 0 3 102 111 111

If we run that through the Ruby lib decoder we get:

> BERT.decode([131, 100, 0, 3, 102, 111, 111].pack("C*"))
 => :foo

(The Ruby bert lib decodes atoms to symbols).

Serialising to BERT dicts in Golang

However, there is a little more effort involved serialising more complex data structures, in particular dicts, as I found out.

You might have thought that you could just pass in a map:

package main

import (
    "fmt"
    "bytes"
    "github.com/sethwklein/gobert"
)

func main() {
    message := map[string]string{"key1": "val1", "key2": "val2"}

    var buf = new(bytes.Buffer)
    bert.MarshalResponse(buf, message)
    for _, b := range(buf.Bytes()) {
        fmt.Printf("%d ", b)
    }
    fmt.Println()
}

We get the output:

0 0 0 1 131

Well, that doesn’t work: what we end up with is a BERP whose payload is just one byte. It seems that gobert doesn’t automatically serialise maps. No problem, we’ll build the dict up manually. A quick look at the BERT documentation shows the format of a dict:

“Dictionaries (hash tables) are expressed via an array of 2-tuples representing the key/value pairs. The KeysAndValues array is mandatory, such that an empty dict is expressed as {bert, dict, []}. Keys and values may be any term. For example, {bert, dict, [{name, <<“Tom”>>}, {age, 30}]}.”

So let’s create this special structure manually.

package main

import (
    "fmt"
    "bytes"
    "github.com/sethwklein/gobert"
)

func main() {
    message1 := []bert.Term{bert.Atom("key1"), bert.Atom("val1")}
    message2 := []bert.Term{bert.Atom("key2"), bert.Atom("val3")}
    keys_and_values := []bert.Term{message1, message2}

    dict := []bert.Term{bert.BertAtom, bert.Atom("dict"), keys_and_values}

    var buf = new(bytes.Buffer)
    bert.MarshalResponse(buf, dict)
    for _, b := range(buf.Bytes()) {
        fmt.Printf("%d ", b)
    }
    fmt.Println()
}

The result:

0 0 0 51 131 104 3 100 0 4 98 101 114 116 100 0 4 100 105 99 116 104 2 104 2 100 0 4 107 101 121 49 100 0 4 118 97 108 49 104 2 100 0 4 107 101 121 50 100 0 4 118 97 108 51

It looks better, but it doesn’t decode, using Ruby:

> BERT.decode([131, 104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116, 104, 2, 104, 2, 100, 0, 4, 107, 101, 121, 49, 100, 0, 4, 118, 97, 108, 49, 104, 2, 100, 0, 4, 107, 101, 121, 50, 100, 0, 4, 118, 97, 108, 51].pack("C*"))
TypeError: Invalid dict spec, not an erlang list

We’re still missing something. Let’s compare the output of the Ruby bert lib to the output of gobert for the same data structure:

> BERT.encode({:key1 => :val1, :key2 => :val2}).unpack("C*")
 => [131, 104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116, 108, 0, 0, 0, 2, 104, 2, 100, 0, 4, 107, 101, 121, 49, 100, 0, 4, 118, 97, 108, 49, 104, 2, 100, 0, 4, 107, 101, 121, 50, 100, 0, 4, 118, 97, 108, 50, 106]

We’re definitely missing some data in the gobert output.

If you follow along the byte sequences you can see that they start off the same until the 18th byte. In the Ruby output this is ‘108’, or LIST_EXT. In the gobert output it’s 104, a SMALL_TUPLE_EXT. We can see where this difference happens in encode.go in the gobert lib (in the writeTag func):

case reflect.Slice:
    writeSmallTuple(w, v)
case reflect.Array:
    writeList(w, v)

Let’s decode the BERT data to see where the divergence happens in the underlying data structures:

magic| tuple  |  atom   |       bert       |   atom   |    dict
  131, 104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116

We can see that the “bert” and “dict” atoms are encoded the same, but the keys_and_values array is being encoded as a SMALL_TUPLE_EXT by gobert when we wanted a LIST_EXT. Looking back at the gobert code, the decision to use SMALL_TUPLE_EXT over LIST_EXT depends on whether a slice or an array is present. We can use Go’s “reflect” package to inspect the arrays/slices we are creating and see what they are:

package main

import (
    "fmt"
    "reflect"
    "github.com/sethwklein/gobert"
)

func main() {
    array := [2]bert.Term{}
    slice := []bert.Term{}

    array_val := reflect.ValueOf(array)
    slice_val := reflect.ValueOf(slice)
    fmt.Printf("array is a: %v\n", array_val.Kind())
    fmt.Printf("slice is a: %v\n", slice_val.Kind())
}

array is a: array
slice is a: slice

The fix

So, in order to fix our data structure to get gobert to correctly encode the dict we need to change the keys_and_values slice to an array:

package main

import (
    "fmt"
    "bytes"
    "github.com/sethwklein/gobert"
)

func main() {
    message1 := []bert.Term{bert.Atom("key1"), bert.Atom("val1")}
    message2 := []bert.Term{bert.Atom("key2"), bert.Atom("val3")}
    keys_and_values := [2]bert.Term{message1, message2} // Now an array

    dict := []bert.Term{bert.BertAtom, bert.Atom("dict"), keys_and_values}

    var buf = new(bytes.Buffer)
    bert.MarshalResponse(buf, dict)
    for _, b := range(buf.Bytes()) {
        fmt.Printf("%d ", b)
    }
    fmt.Println()
}

The result:

0 0 0 55 131 104 3 100 0 4 98 101 114 116 100 0 4 100 105 99 116 108 0 0 0 2 104 2 100 0 4 107 101 121 49 100 0 4 118 97 108 49 104 2 100 0 4 107 101 121 50 100 0 4 118 97 108 51 106

But more importantly, can we decode the data we encoded?

> BERT.decode([131, 104, 3, 100, 0, 4, 98, 101, 114, 116, 100, 0, 4, 100, 105, 99, 116, 108, 0, 0, 0, 2, 104, 2, 100, 0, 4, 107, 101, 121, 49, 100, 0, 4, 118, 97, 108, 49, 104, 2, 100, 0, 4, 107, 101, 121, 50, 100, 0, 4, 118, 97, 108, 51, 106].pack("C*"))
 => {:key1=>:val1, :key2=>:val3}

Yes!

Announcing the (Unofficial) Yahoo groups public data API

The what?

All Yahoo groups have public metadata. The number of members, the category, various email addresses etc.

Yahoo doesn’t provide an API for this publicly available data (you can see it by visiting one of the group pages), so getting information about any particular group in your programs is hard.

I’ve filled this gap by releasing a third-party API to get the publicly available Yahoo groups metadata.

JSON API

The API itself provides a really simple interface for getting group data in JSON format: just stick the URL-encoded address of the Yahoo group you are interested in on the end of the (Unofficial) Yahoo groups public data API URL and request it. You get JSON back.

The URL you request looks like this:

http://yahoo-group-data.herokuapp.com/api/v1/group/http%3A%2F%2Ftech.groups.yahoo.com%2Fgroup%2FOneStopCOBOL%2F

…and the JSON you get back looks like this:

{
    "private": false,
    "not_found": false,
    "age_restricted": false,
    "name": "OneStopCOBOL",
    "description": "OneStopCOBOL - Official COBOL group",
    "post_email": "OneStopCOBOL@yahoogroups.com",
    "subscribe_email": "OneStopCOBOL-subscribe@yahoogroups.com",
    "owner_email": "OneStopCOBOL-owner@yahoogroups.com",
    "unsubscribe_email": "OneStopCOBOL-unsubscribe@yahoogroups.com",
    "language": "English",
    "num_members": 151,
    "category": "COBOL",
    "founded": "2008-06-24"
}

You can try it out and get sample code over at the homepage of the (Unofficial) Yahoo groups public data API.
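As an illustration, here is a minimal Ruby client sketch (the helper names are mine, not part of any official client, and the service needs to be reachable for the fetch itself to work):

```ruby
require 'cgi'
require 'json'
require 'net/http'

API_BASE = "http://yahoo-group-data.herokuapp.com/api/v1/group/"

# Build the full API request URL for a given Yahoo group URL
def api_url_for(group_url)
  API_BASE + CGI.escape(group_url)
end

# Fetch and parse a group's metadata (needs network access)
def group_data(group_url)
  JSON.parse(Net::HTTP.get(URI(api_url_for(group_url))))
end

puts api_url_for("http://tech.groups.yahoo.com/group/OneStopCOBOL/")
```

`CGI.escape` takes care of the URL encoding, so the generated request URL matches the example above.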

Motivation

I run the Recycling Group Finder, a site that makes extensive use of Yahoo Groups data. The (Unofficial) Yahoo groups public data API is an abstraction of the functionality I wrote to get group data for that site. I just figured it might be useful to other people.

Ruby’s Queue class, and ordered processing

I was writing a Ruby script recently that needed to download 43 2GB chunks of a database backup from a remote source, then decrypt each chunk, then finally concatenate the decrypted files together.

I knew I wanted to use threads, as that would speed up the overall process a great deal, and the downloading and decryption can be done in any order: it doesn’t matter if chunk 5 is downloaded before or after chunk 35, and the same goes for decryption. Those processes all operate on discrete files on the filesystem.

Where order does matter however is when the script is concatenating the files together into the final output file (in this case an lzop archive).

While looking at how to handle this I discovered Ruby’s Queue class, which “…provides a way to synchronize communication between threads“. Great, that’s exactly what I needed.

In my script I set up two thread pools, one for downloading and one for decrypting, each with its own queue. At the start of the script I push all the download jobs onto the download queue. The download pool workers download them, then push them onto the decrypt queue, where the decrypt pool can get to work. It flows a little like this:

[download queue] -> [download pool] -> [decrypt queue] -> [decrypt pool]
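A sketch of that flow using Queue (integers stand in for the real download and decrypt work on the backup chunks, and the pool sizes are arbitrary):

```ruby
require 'thread'

download_queue = Queue.new
decrypt_queue  = Queue.new
decrypted      = []
mutex          = Mutex.new

# Queue up all the "chunks" to be downloaded
(1..10).each { |chunk| download_queue << chunk }

download_pool = 3.times.map do
  Thread.new do
    loop do
      begin
        chunk = download_queue.pop(true) # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break # download queue drained, this worker is done
      end
      # ... download the chunk here ...
      decrypt_queue << chunk # hand the downloaded chunk to the decrypt pool
    end
  end
end

decrypt_pool = 2.times.map do
  Thread.new do
    while (chunk = decrypt_queue.pop) != :done
      # ... decrypt the chunk here ...
      mutex.synchronize { decrypted << chunk }
    end
  end
end

download_pool.each(&:join)
decrypt_pool.size.times { decrypt_queue << :done } # one stop sentinel per decrypt worker
decrypt_pool.each(&:join)

puts "decrypted: #{decrypted.sort.inspect}"
```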

However, one last step remained: the concatenation. I used a queue again for this, but needed to handle the jobs in order or I would end up with a useless lzop archive, so I came up with some code to help with this.

You can see from the output that though the work units appear on the queue in any order, they will always be processed in the correct order:

[1.9.2] ~ $ ruby queue.rb
popping the stack
vals is now [16]
popping the stack
vals is now [1, 16]
popping the stack
vals is now [1, 11, 16]
popping the stack
vals is now [1, 11, 16, 19]
popping the stack
vals is now [1, 6, 11, 16, 19]
popping the stack
vals is now [1, 6, 11, 16, 18, 19]
popping the stack
vals is now [1, 6, 11, 15, 16, 18, 19]
popping the stack
vals is now [1, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [0, 1, 6, 8, 11, 15, 16, 18, 19]
Processing 0
Processing 1
popping the stack
vals is now [5, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 7, 8, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [2, 3, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
Processing 2
Processing 3
popping the stack
vals is now [4, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
Processing 4
Processing 5
Processing 6
Processing 7
Processing 8
Processing 9
Processing 10
Processing 11
popping the stack
vals is now [14, 15, 16, 17, 18, 19]
popping the stack
vals is now [14, 15, 16, 17, 18, 19, 20]
popping the stack
vals is now [13, 14, 15, 16, 17, 18, 19, 20]
popping the stack
vals is now [12, 13, 14, 15, 16, 17, 18, 19, 20]
Processing 12
Processing 13
Processing 14
Processing 15
Processing 16
Processing 17
Processing 18
Processing 19
Processing 20

The code to do something similar:
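The embedded snippet is gone, but the idea can be reconstructed from the output above: buffer work units as they arrive in arbitrary order, and only process the head of the buffer when it is the next unit in sequence. A sketch (my reconstruction, not the original gist):

```ruby
require 'thread'

queue = Queue.new

# Work units arrive in an arbitrary order, as if from a pool of threads
producer = Thread.new do
  (0..20).to_a.shuffle.each { |i| queue << i }
end

vals       = [] # out-of-order units we are holding on to
next_index = 0  # the unit we need next to keep the output in order
processed  = []

until next_index > 20
  if vals.first == next_index
    # The next unit in the sequence has arrived: process (concatenate) it
    puts "Processing #{next_index}"
    processed << vals.shift
    next_index += 1
  else
    puts "popping the stack"
    vals = (vals << queue.pop).sort
    puts "vals is now #{vals.inspect}"
  end
end

producer.join
```

However the units are shuffled, `processed` always ends up in sequence, which is what the concatenation step needs.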


Fixing the Gemfile not found (Bundler::GemfileNotFound) error

I was working on an app (bundler, unicorn, Rails 3) that had a strange deploy issue. Five deploys (using Capistrano) after our unicorn processes had started, unicorn would fail to restart. This is the Capistrano output:

* executing `deploy:restart'
* executing `unicorn:restart'
* executing "cd /u/apps/dash/current && unicornctl restart"
servers: ["stats-01"]
[stats-01] executing command
** [out :: stats-01] Restarting pid 15160...
** [out :: stats-01] PID 15160 has not changed, so the deploy may have failed. Check the unicorn log for issues.

I checked the unicorn log for details:

I, [2011-08-02T15:59:32.498371 #11790] INFO -- : executing ["/u/apps/dash/shared/bundle/ruby/1.9.1/bin/unicorn", "/u/apps/dash/current/config.ru", "-Dc", "/u/apps/dash/current/config/unicorn.conf.rb", "-E", "production"] (in /u/apps/dash/releases/20110802155921)
/opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler/definition.rb:14:in `build': /u/apps/dash/releases/20110802152815/Gemfile not found (Bundler::GemfileNotFound)
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:136:in `definition'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:124:in `load'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:107:in `setup'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler/setup.rb:17:in `'
from :29:in `require'
from :29:in `require'
E, [2011-08-02T15:59:32.682270 #11225] ERROR -- : reaped # exec()-ed

Sure enough there’s the exception, “Gemfile not found (Bundler::GemfileNotFound)“, and the file referenced (/u/apps/dash/releases/20110802152815/Gemfile) didn’t exist. The directory that was being looked in (20110802152815) was from a previous deploy and had been rotated off the filesystem. We keep five historical deploys so that explained why the problem only happened five deploys after a full unicorn restart.

I suspected an environment variable was getting set somewhere, and never updated, so I added some debugging to our unicorn.conf.rb file:

ENV.each { |k, v| puts "#{k}:\t\t#{v}" }

I then restarted the unicorns fully and tailed the unicorn log file while deploying the app. Sure enough one of the environment variables stuck out:

BUNDLE_GEMFILE: /u/apps/dash/releases/20110802165726/Gemfile

I deployed again; it remained the same, still pointing to /u/apps/dash/releases/20110802165726/Gemfile. I continued to deploy until release 20110802165726 was rotated off the filesystem, and up popped the error again. This looked like the problem.

I committed a change to our unicorn.conf.rb that set the BUNDLE_GEMFILE variable explicitly in the before_exec block:

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "/u/apps/dash/current/Gemfile"
end

More than five deploys later the env var is still set to /u/apps/dash/current/Gemfile and there are no more errors. Let me know if you found this useful!

Notes

  • There may be other issues that cause errors of this type; this was just the solution that worked for us, YMMV.
  • There may be better places to set the environment variable than unicorn.conf.rb; I’m open to suggestions (we’re using bluepill, so I may be able to set it there, for instance).
Update: I’ve changed this on our systems so the environment variable is set in bluepill; it works the same.

Rails 2 -> 3 undefined method `html_safe’ for nil:NilClass error

I am converting the Recycling Group Finder site from Rails 2 to Rails 3 and, though it has mostly gone to plan, I was temporarily held up by an “undefined method `html_safe’ for nil:NilClass” error that I was getting on some pages.

The error was hard to track down as the error message wasn’t very descriptive, but in the end it turned out to be caused by a comment. I am using content_for blocks to generate sections of page content, and for a long section I had added a comment to the end of the block to help me know which block was closing.

It turns out that this ‘# some_section’ comment was the problem, possibly because of the change to Erubis in Rails 3. Removing the comment made the page work again.

I hope this page helps short-cut the debugging for anyone else that is bitten by this issue.


Running Unicorn under supervise/daemontools

I’m running a Rails 3 app under daemontools; this is how I did it.

First, install and then start daemontools and Unicorn; this is left as an exercise for the reader (it was easy under CentOS 5). You should end up with a /service directory on your server.

Run “mkdir /service/my_service_name && cd /service/my_service_name && ls”. If you have the right processes running you should have a “supervise” subdirectory automatically created for you under your new directory.

You should be in the “/service/my_service_name” directory. In order to get anything to run you need to create a “run” script. Here’s mine, you can copy and modify it for your own use. If you’re running Unicorn and you improve the script I’d appreciate feedback in the comments:

#!/bin/sh
# exec replaces the shell so that supervise controls the Unicorn process
# directly, rather than a wrapping sh process
cd /home/www/foo/current && exec unicorn_rails -E staging -c /home/www/foo/shared/config/unicorn.conf

Make sure the script is executable. If you have done everything right you should have a Unicorn process running. You should see something similar to this in the output of “ps auxww -H”:

If that’s not the sort of thing you see, check the logs. Lastly I added this cap snippet to kill the process on deploy and let supervise handle re-spawning it:
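The cap snippet itself was embedded as a gist that hasn't survived. A sketch of what such a task might look like in Capistrano 2 (the PID file path is an assumption; use the path from your own unicorn.conf):

```ruby
# Hypothetical Capistrano 2 task: kill the running Unicorn on deploy and
# let supervise re-spawn it with the freshly deployed code.
namespace :deploy do
  task :restart, :roles => :app do
    run "kill `cat /home/www/foo/shared/pids/unicorn.pid`"
  end
end
```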

Done!

Why daemontools?

There is some propaganda on the daemontools site, but I used it because it’s quick to get going and in my experience reliable. I use monit a lot at Engine Yard, but there are no recent monit packages easily available for CentOS that I have found. One drawback is that you don’t get any of the resource monitoring that monit provides, such as http checks, memory checks etc. You’d have to implement this yourself, but in this instance that’s the tradeoff you make for simplicity.

CentOS 5 update:

If you get an error that looks like this:

Follow these instructions to fix the issue:

Simple http ping program in Ruby

Just a little HTTP ping program I wrote to check request latency during my testing for zero-downtime Rails deploys with Unicorn.
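The script itself was embedded as a gist; a minimal sketch of such an HTTP ping in plain Ruby (using Net::HTTP and Benchmark, which matches the timing format in the output below) might look like this:

```ruby
#!/usr/bin/env ruby
# httping.rb: repeatedly GET a host and print the response code
# plus the request timing in Benchmark's format
require 'net/http'
require 'benchmark'

# Perform one GET against the host and return "status : timings"
def ping(host)
  status = nil
  timing = Benchmark.measure do
    status = Net::HTTP.get_response(URI("http://#{host}/")).code
  end
  "#{status} : #{timing.to_s.chomp}"
end

if (host = ARGV[0])
  loop do
    puts ping(host)
    sleep 1
  end
end
```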

Example:

pinky:~ will$ ruby httping.rb willj.net
200 :   0.000000   0.000000   0.000000 (  0.761449)
200 :   0.010000   0.000000   0.010000 (  0.535888)
200 :   0.000000   0.010000   0.010000 (  0.800904)
200 :   0.000000   0.000000   0.000000 (  0.530763)
200 :   0.000000   0.000000   0.000000 (  0.811362)
200 :   0.000000   0.000000   0.000000 (  0.557995)
200 :   0.000000   0.010000   0.010000 (  0.774484)
