Category Archives: sysadmin

Fixing the chef “Error Syncing Cookbooks – EOFError: end of file reached” error

I ran into a problem with some corrupt chef cookbooks after a recent chef upgrade. Every chef run on every node was failing with an “Error Syncing Cookbooks – EOFError: end of file reached” error:

================================================================================

Error Syncing Cookbooks:
================================================================================

Unexpected Error:
-----------------
EOFError: end of file reached

[2013-01-21T18:17:27+00:00] ERROR: Running exception handlers
[2013-01-21T18:17:27+00:00] FATAL: Saving node information to /var/cache/chef/failed-run-data.json
[2013-01-21T18:17:27+00:00] ERROR: Exception handlers complete
[2013-01-21T18:17:27+00:00] FATAL: Stacktrace dumped to /var/cache/chef/chef-stacktrace.out
[2013-01-21T18:17:27+00:00] FATAL: EOFError: end of file reached

I’d already deleted the cookbooks:

knife cookbook delete cookbook_name

and re-uploaded them, but this didn’t work. It took a purge to completely delete the cookbook from the chef server:

knife cookbook delete --purge cookbook_name

In the end I had so many cookbooks that were somehow corrupt that I bulk purged them and re-uploaded them all:

knife cookbook bulk delete --purge '.+'
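
After purging, everything has to be uploaded again; assuming a standard chef-repo layout, something like this re-uploads the lot:

knife cookbook upload --all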


Testing http over socket connections with socat

Sometimes I need to test an http server that is listening on a unix socket. It’s really easy to do this using socat, but the socat man page is pretty long. Here it is for anyone who needs it in the future, and me when I inevitably forget.

In this case the server is unicorn, but this will work for any http server listening on a socket, for instance thin. The lines beginning with “-->” are the lines I typed (the 4 lines at the start); remove the “-->” when you try this.

$ socat - UNIX-CONNECT:/u/apps/app/shared/sockets/unicorn.sock,crnl
-->GET /session/new HTTP/1.1
-->Host: thehostname.com
-->X-Forwarded-Proto: https
-->
HTTP/1.1 200 OK
Date: Fri, 02 Dec 2011 14:37:23 GMT
Status: 200 OK
Connection: close
Strict-Transport-Security: max-age=31536000
Content-Type: text/html; charset=utf-8
X-UA-Compatible: IE=Edge,chrome=1
ETag: "2346c47c7cb3bc37729e42fc8b20c821"
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: _x_session=blablabla; path=/; HttpOnly; secure
X-Request-Id: c0a374f460d1b1205df450ab77dd2328
X-Runtime: 0.159219

<!DOCTYPE html>
<html lang="en" data-behavior="wallpaper">
<head>
etc.

For those interested in the relevance of the crnl option at the end of the socket path, this is from the man page:

Converts the default line termination character NL (‘\n’, 0x0a)
to/from CRNL (“\r\n”, 0x0d0a) when writing/reading on this
channel (example). Note: socat simply strips all CR characters.
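
If you’d rather not type the request interactively you can pipe it in instead; because printf lets you supply the CRLF line endings yourself, the crnl option isn’t needed (same example socket path as above):

printf 'GET /session/new HTTP/1.1\r\nHost: thehostname.com\r\nX-Forwarded-Proto: https\r\n\r\n' | socat - UNIX-CONNECT:/u/apps/app/shared/sockets/unicorn.sock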


Ruby’s Queue class, and ordered processing

I was writing a Ruby script recently that needed to download 43 2GB chunks of a database backup from a remote source, decrypt each chunk, and finally concatenate the decrypted files together.

I knew I wanted to use threads, as they would speed up the overall process a great deal. The downloading and decryption can be done in any order: it doesn’t matter whether chunk 5 is downloaded before or after chunk 35, and the same goes for decryption, since those steps all operate on discrete files on the filesystem.

Where order does matter however is when the script is concatenating the files together into the final output file (in this case an lzop archive).

While looking at how to handle this I discovered Ruby’s Queue class, which “…provides a way to synchronize communication between threads”. Great, that’s exactly what I needed.

In my script I set up two thread pools, one for downloading and one for decrypting, each with its own queue. At the start of the script I push all the download jobs onto the download queue. The download pool workers download them and then push them onto the decrypt queue, so the decrypt pool can get to work. It flows a little like this:

[download queue] -> [download pool] -> [decrypt queue] -> [decrypt pool]
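
A rough sketch of that layout, where download_chunk and decrypt_chunk stand in for the real work and the pool sizes are arbitrary:

require 'thread'

CHUNKS = 43
download_queue = Queue.new
decrypt_queue  = Queue.new

# All download jobs go onto the download queue up front; order doesn't matter.
CHUNKS.times { |i| download_queue << i }

download_pool = 4.times.map do
  Thread.new do
    loop do
      i = download_queue.pop(true) rescue break   # stop once the queue is drained
      download_chunk(i)      # hypothetical helper: fetch chunk i to disk
      decrypt_queue << i     # hand the finished chunk to the decrypt stage
    end
  end
end

decrypt_pool = 4.times.map do
  Thread.new do
    while (i = decrypt_queue.pop) != :done
      decrypt_chunk(i)       # hypothetical helper: decrypt chunk i on disk
    end
  end
end

download_pool.each(&:join)
4.times { decrypt_queue << :done }    # one sentinel per decrypt worker
decrypt_pool.each(&:join)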

However one last step remained: the concatenation. I used a queue again for this, but needed to handle the jobs in order or I would end up with a useless lzop archive, so I came up with some code to help with this (shown below, after the sample output).

You can see from the output that though the work units appear on the queue in any order, they will always be processed in the correct order:

[1.9.2] ~ $ ruby queue.rb
popping the stack
vals is now [16]
popping the stack
vals is now [1, 16]
popping the stack
vals is now [1, 11, 16]
popping the stack
vals is now [1, 11, 16, 19]
popping the stack
vals is now [1, 6, 11, 16, 19]
popping the stack
vals is now [1, 6, 11, 16, 18, 19]
popping the stack
vals is now [1, 6, 11, 15, 16, 18, 19]
popping the stack
vals is now [1, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [0, 1, 6, 8, 11, 15, 16, 18, 19]
Processing 0
Processing 1
popping the stack
vals is now [5, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 11, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 8, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 7, 8, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [3, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
popping the stack
vals is now [2, 3, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
Processing 2
Processing 3
popping the stack
vals is now [4, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 18, 19]
Processing 4
Processing 5
Processing 6
Processing 7
Processing 8
Processing 9
Processing 10
Processing 11
popping the stack
vals is now [14, 15, 16, 17, 18, 19]
popping the stack
vals is now [14, 15, 16, 17, 18, 19, 20]
popping the stack
vals is now [13, 14, 15, 16, 17, 18, 19, 20]
popping the stack
vals is now [12, 13, 14, 15, 16, 17, 18, 19, 20]
Processing 12
Processing 13
Processing 14
Processing 15
Processing 16
Processing 17
Processing 18
Processing 19
Processing 20

The code to do something similar is sketched below.
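
In this sketch the producer threads push their index onto the queue as they finish (a random sleep simulates work finishing out of order), and a single consumer keeps a sorted buffer, only processing a value once it is the next one in sequence:

require 'thread'

queue = Queue.new
total = 21          # work units 0..20, as in the output above
vals  = []
next_index = 0

# Producers finish in a random order and push their index onto the queue.
producers = total.times.map do |i|
  Thread.new do
    sleep(rand)
    queue << i
  end
end

consumer = Thread.new do
  while next_index < total
    puts "popping the stack"
    vals = (vals << queue.pop).sort
    puts "vals is now #{vals.inspect}"
    # Only process a work unit once everything before it has been processed.
    while vals.first == next_index
      puts "Processing #{vals.shift}"
      next_index += 1
    end
  end
end

producers.each(&:join)
consumer.join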


Fixing the Gemfile not found (Bundler::GemfileNotFound) error

I was working on an app (bundler, unicorn, Rails 3) that had a strange deploy issue. Five deploys (using capistrano) after our unicorn processes had started, unicorn would fail to restart. This is the capistrano output:

* executing `deploy:restart'
* executing `unicorn:restart'
* executing "cd /u/apps/dash/current && unicornctl restart"
servers: ["stats-01"]
[stats-01] executing command
** [out :: stats-01] Restarting pid 15160...
** [out :: stats-01] PID 15160 has not changed, so the deploy may have failed. Check the unicorn log for issues.

I checked the unicorn log for details:

I, [2011-08-02T15:59:32.498371 #11790] INFO -- : executing ["/u/apps/dash/shared/bundle/ruby/1.9.1/bin/unicorn", "/u/apps/dash/current/config.ru", "-Dc", "/u/apps/dash/current/config/unicorn.conf.rb", "-E", "production"] (in /u/apps/dash/releases/20110802155921)
/opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler/definition.rb:14:in `build': /u/apps/dash/releases/20110802152815/Gemfile not found (Bundler::GemfileNotFound)
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:136:in `definition'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:124:in `load'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler.rb:107:in `setup'
from /opt/ruby/lib/ruby/gems/1.9.1/gems/bundler-1.0.15/lib/bundler/setup.rb:17:in `'
from :29:in `require'
from :29:in `require'
E, [2011-08-02T15:59:32.682270 #11225] ERROR -- : reaped # exec()-ed

Sure enough there’s the exception, “Gemfile not found (Bundler::GemfileNotFound)”, and the file referenced (/u/apps/dash/releases/20110802152815/Gemfile) didn’t exist. The directory that was being looked in (20110802152815) was from a previous deploy and had been rotated off the filesystem. We keep five historical deploys so that explained why the problem only happened five deploys after a full unicorn restart.

I suspected an environment variable was getting set somewhere, and never updated, so I added some debugging to our unicorn.conf.rb file:

ENV.each{ |k, v| puts "#{k}:\t\t#{v}" }

I then restarted the unicorns fully and tailed the unicorn log file while deploying the app. Sure enough one of the environment variables stuck out:

BUNDLE_GEMFILE: /u/apps/dash/releases/20110802165726/Gemfile

I deployed again and it remained the same, still pointing to /u/apps/dash/releases/20110802165726/Gemfile. I continued to deploy until release 20110802165726 was rotated off the filesystem, and up popped the error again. This looked like the problem.

I committed a change to our unicorn.conf.rb that set the BUNDLE_GEMFILE variable explicitly in the before_exec block:

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "/u/apps/dash/current/Gemfile"
end

More than five deploys later the env var is still set to /u/apps/dash/current/Gemfile and there are no more errors. Let me know if you found this useful!

Notes

  • There may be other issues that cause errors of this type; this was just the solution that worked for us, YMMV.
  • There may be better places to set the environment variable than unicorn.conf.rb; I’m open to suggestions (we’re using bluepill, I may be able to set it there, for instance).

Update: I’ve changed this on our systems so the environment variable is set in bluepill, and it works the same.

Setting up PostgreSQL for Ruby on Rails development on OS X

One of the reasons people used to give for using MySQL over PostgreSQL (just ‘Postgres’ from here on in) was that Postgres was considered hard to install. It’s a shame, because it’s a great database (I’ve been using it for personal and some work projects for years, like my current side project, sendcat). Luckily it’s now really simple to get it going on your Mac to give it a try. This is how you do it.

What this guide is

This is a guide to getting PostgreSQL running locally on your Mac, then configuring Rails to use that for development.

What this guide is not

  • An advanced PostgreSQL guide.
  • Suitable for using in production.
  • Anything to do with why you might want to use PostgreSQL over any other database.

Installation

You can get binaries for most systems from the PostgreSQL site, but it’s even easier if you’ve got homebrew installed; if you haven’t got homebrew it’s worth it, pick it up here. I’m going to assume you are installing from homebrew for this post, but you should find the information useful even if you are installing directly or using Macports.

With homebrew just run:

$ brew install postgres

You will get a load of output, but the most important part is this:

If this is your first install, create a database with:
    initdb /usr/local/var/postgres

If this is your first install, automatically load on login with:
    mkdir -p ~/Library/LaunchAgents
    cp /usr/local/Cellar/postgresql/9.0.4/org.postgresql.postgres.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/org.postgresql.postgres.plist

If this is an upgrade and you already have the org.postgresql.postgres.plist loaded:
    launchctl unload -w ~/Library/LaunchAgents/org.postgresql.postgres.plist
    cp /usr/local/Cellar/postgresql/9.0.4/org.postgresql.postgres.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/org.postgresql.postgres.plist

Or start manually with:
    pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start

And stop with:
    pg_ctl -D /usr/local/var/postgres stop -s -m fast

If you want to install the postgres gem, including ARCHFLAGS is recommended:
    env ARCHFLAGS="-arch x86_64" gem install pg

There’s a lot to read, but don’t worry, you don’t need most of the information there. You can get to that information again by running:

brew info postgres

As the instructions say, if this is your first install, create a database with:

$ initdb /usr/local/var/postgres

Do this now. You should see output like this:

$ initdb /usr/local/var/postgres
The files belonging to this database system will be owned by user "will".
This user must also own the server process.

The database cluster will be initialized with locale en_GB.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".

creating directory /usr/local/var/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 2400kB
creating configuration files ... ok
creating template1 database in /usr/local/var/postgres/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.

Success. You can now start the database server using:

    postgres -D /usr/local/var/postgres
or
    pg_ctl -D /usr/local/var/postgres -l logfile start

Again, there’s a lot of output, but you can pretty much ignore most of it.

Startup/Shutdown

Next, as the instructions suggest, you can set Postgres to start and stop automatically when your Mac starts. Run these three commands to have this happen (Postgres will start when you run the last command, so there is no need to start it manually if you do this):

mkdir -p ~/Library/LaunchAgents
cp /usr/local/Cellar/postgresql/9.0.4/org.postgresql.postgres.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.postgresql.postgres.plist

I’ve done this because I use Postgres for all my personal projects. If you’re just experimenting and want to control when it is running you can start and stop Postgres with these commands (perhaps with a shell alias). EDIT: Someone on the Hacker News thread suggested Lunchy for managing launchctl stuff; I’ve not tried it, but it looks useful.

Start Postgres:

pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start

Stop Postgres:

pg_ctl -D /usr/local/var/postgres stop -s -m fast
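
If you go down the shell-alias route mentioned above, something like this in your shell profile will do (the names are arbitrary):

alias pg_start='pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start'
alias pg_stop='pg_ctl -D /usr/local/var/postgres stop -s -m fast'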

That’s it, Postgres is up and running. You can see it in the process list. Run “ps auxwww | grep postgres” and you should see output like this:

$ ps auxwww | grep postgres
will     33206   0.4  0.0  2435116    528 s004  S+    6:52pm   0:00.00 grep postgres
will     33011   0.0  0.0  2445360    880   ??  Ss    6:41pm   0:00.14 postgres: writer process
will     33007   0.0  0.1  2445360   2412   ??  S     6:41pm   0:00.25 /usr/local/Cellar/postgresql/9.0.4/bin/postgres -D /usr/local/var/postgres -r /usr/local/var/postgres/server.log
will     33014   0.0  0.0  2441392    420   ??  Ss    6:41pm   0:00.03 postgres: stats collector process
will     33013   0.0  0.0  2445492   1460   ??  Ss    6:41pm   0:00.03 postgres: autovacuum launcher process
will     33012   0.0  0.0  2445360    504   ??  Ss    6:41pm   0:00.10 postgres: wal writer process

Create a user and database

Now that the Postgres server is running we need to create a database for use in our Rails app. This is really simple using the shell commands that ship with Postgres. First let’s create a new user. Running the createuser command you will get an interactive prompt asking some questions about the user; answering ‘n’ is fine for all of them:

$ createuser shawsome
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Next create the two databases you will need, development and test. Here the options are given on the command line: -O specifies the owner of the database (the user we just created) and -E specifies the character encoding to be used in the database.

$ createdb -Oshawsome -Eutf8 shawsome_development
$ createdb -Oshawsome -Eutf8 shawsome_test

You can verify everything worked by connecting. Postgres ships with a shell just as MySQL does, it’s called ‘psql’. Run the following command, you should find yourself at a database prompt:

$ psql -U shawsome shawsome_development
psql (9.0.4)
Type "help" for help.

shawsome_development=>

That’s the DB all done with. Hit ctrl-d to exit the shell. Next install the postgres gem.

$ sudo env ARCHFLAGS="-arch x86_64" gem install --no-ri --no-rdoc pg
Building native extensions.  This could take a while...
Successfully installed pg-0.11.0
1 gem installed

For Macports you might have more luck with:

$ sudo env ARCHFLAGS="-arch x86_64" gem install pg -- --with-pg-config=/opt/local/lib/postgresql84/bin/pg_config

If you want to read further on these commands check out the docs for createuser, createdb and psql.

Create the Rails app

Now we need to create the app. Run “rails new”, specifying --database=postgresql to get a database.yml pre-configured. We won’t need to edit the generated database.yml, but it does contain some information that could be useful if you’re using Macports, so open it up and see what got generated.

$ rails new shawsome --database=postgresql
… Much output
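
The development section of the generated config/database.yml will look something like this; note that the generated username is simply the app name, which conveniently matches the user we created earlier:

development:
  adapter: postgresql
  encoding: unicode
  database: shawsome_development
  pool: 5
  username: shawsome
  password: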

Head into the new app and create a scaffold. It’s not going to be anything fancy, just enough to get some data into the database:

$ cd shawsome
$ rails g scaffold Post title:string author:string body:text
      invoke  active_record
      create    db/migrate/20110528180734_create_posts.rb
      create    app/models/post.rb
      invoke    test_unit
      create      test/unit/post_test.rb
      create      test/fixtures/posts.yml
       route  resources :posts
      invoke  scaffold_controller
      create    app/controllers/posts_controller.rb
      invoke    erb
      create      app/views/posts
      create      app/views/posts/index.html.erb
      create      app/views/posts/edit.html.erb
      create      app/views/posts/show.html.erb
      create      app/views/posts/new.html.erb
      create      app/views/posts/_form.html.erb
      invoke    test_unit
      create      test/functional/posts_controller_test.rb
      invoke    helper
      create      app/helpers/posts_helper.rb
      invoke      test_unit
      create        test/unit/helpers/posts_helper_test.rb
      invoke  stylesheets
      create    public/stylesheets/scaffold.css

You will now have a migration. We’re going to edit it a bit from the default to add some sensible restrictions and an index.
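
An edited migration along these lines (Rails 3 syntax; the exact constraints are up to you) gives the NOT NULL columns, the default author and the index that show up in psql later:

class CreatePosts < ActiveRecord::Migration
  def self.up
    create_table :posts do |t|
      t.string :title,  :null => false
      t.string :author, :null => false, :default => 'Anonymous'
      t.text   :body

      t.timestamps
    end
    add_index :posts, :author
  end

  def self.down
    drop_table :posts
  end
end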

Run your super shiny migration:

$ rake db:migrate
(in /Users/will/shawsome)
==  CreatePosts: migrating ====================================================
-- create_table(:posts)
NOTICE:  CREATE TABLE will create implicit sequence "posts_id_seq" for serial column "posts.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "posts_pkey" for table "posts"
   -> 0.0060s
-- add_index(:posts, :author)
   -> 0.0033s
==  CreatePosts: migrated (0.0097s) ===========================================

Now start up a Rails server.

$ rails s

Poking around in the database

Add a few posts and you can re-run the database shell (see above) and start poking around. You can use \? to get help in the shell, but we will jump straight to \dt to get a list of tables:

shawsome_development=> \dt
               List of relations
 Schema |       Name        | Type  |  Owner
--------+-------------------+-------+----------
 public | posts             | table | shawsome
 public | schema_migrations | table | shawsome
(2 rows)

Great, our posts table is there. Let’s take a look at it in more detail:

shawsome_development=> \d posts
                                     Table "public.posts"
   Column   |            Type             |                     Modifiers
------------+-----------------------------+----------------------------------------------------
 id         | integer                     | not null default nextval('posts_id_seq'::regclass)
 title      | character varying(255)      | not null
 body       | text                        |
 author     | character varying(255)      | not null default 'Anonymous'::character varying
 created_at | timestamp without time zone |
 updated_at | timestamp without time zone |
Indexes:
    "posts_pkey" PRIMARY KEY, btree (id)
    "index_posts_on_author" btree (author)

You can see we get a fair amount of detail here, including column types, null/not null flags, and default values, as well as any indexes on the table. Let’s select some data. This should be familiar to anyone who has used a relational database before (hint: try tab completion, it’s really good in the Postgres shell):

shawsome_development=> select id, title, author, created_at from posts;
 id |     title      |  author   |         created_at
----+----------------+-----------+----------------------------
  1 | Book 1         | Anonymous | 2011-05-28 18:09:13.965425
  2 | Bobski's dream | Anonymous | 2011-05-28 18:09:30.122767
(2 rows)

Done!

What now?

Check out the Postgres docs, they’re really good, and go forth and develop excellent sites on top of PostgreSQL!


Running Unicorn under supervise/daemontools

I’m running a Rails 3 app under daemontools; this is how I did it.

First, install and then start daemontools and Unicorn. This is left as an exercise for the reader; it was easy under CentOS 5. You should end up with a /service directory on your server.

Run “mkdir /service/my_service_name && cd /service/my_service_name && ls”. If you have the right processes running you should have a “supervise” subdirectory automatically created for you under your new directory.

You should be in the “/service/my_service_name” directory. In order to get anything to run you need to create a “run” script. Here’s mine, you can copy and modify it for your own use. If you’re running Unicorn and you improve the script I’d appreciate feedback in the comments:

#!/bin/sh
cd /home/www/foo/current && unicorn_rails -E staging -c /home/www/foo/shared/config/unicorn.conf

Make sure the script is executable. If you have done everything right you should have a Unicorn process running. You should see something similar to this in the output of “ps auxww -H”:

If that’s not the sort of thing you see, check the logs. Lastly I added this cap snippet to kill the process on deploy and let supervise handle re-spawning it:
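
Something along these lines does the job; svc -t sends the service a TERM and supervise immediately re-spawns it (the service name is the example one from above, and you may need sudo depending on who owns /service):

namespace :deploy do
  task :restart, :roles => :app do
    # Ask daemontools to TERM the unicorn master; supervise starts a fresh one.
    run "svc -t /service/my_service_name"
  end
end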

Done!

Why daemontools?

There is some propaganda on the daemontools site, but I used it because it’s quick to get going and in my experience reliable. I use monit a lot at Engine Yard, but there are no recent monit packages easily available for CentOS that I have found. One drawback is that you don’t get any of the resource monitoring that monit provides, such as http checks, memory checks etc. You’d have to implement this yourself, but in this instance that’s the tradeoff you make for simplicity.

CentOS 5 update:

If you get an error that looks like this:

Follow these instructions to fix the issue:

Simple http ping program in Ruby

Just a little http ping program I wrote to check request latency during my testing for zero-downtime rails deploys with Unicorn.
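
A minimal version of such a script is just Net::HTTP wrapped in Benchmark.measure, printing the response code followed by the timings (the host comes from the command line):

require 'net/http'
require 'benchmark'

host = ARGV[0] || 'localhost'
uri  = URI.parse("http://#{host}/")

loop do
  response = nil
  timing = Benchmark.measure { response = Net::HTTP.get_response(uri) }
  # Benchmark::Tms#to_s is "user system total (real)" with a trailing newline
  print "#{response.code} : #{timing}"
  sleep 1
end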

Example:

pinky:~ will$ ruby httping.rb willj.net
200 :   0.000000   0.000000   0.000000 (  0.761449)
200 :   0.010000   0.000000   0.010000 (  0.535888)
200 :   0.000000   0.010000   0.010000 (  0.800904)
200 :   0.000000   0.000000   0.000000 (  0.530763)
200 :   0.000000   0.000000   0.000000 (  0.811362)
200 :   0.000000   0.000000   0.000000 (  0.557995)
200 :   0.000000   0.010000   0.010000 (  0.774484)


Upgrading PHP 5.1 to 5.2 on CentOS 5.4 for SugarCRM

If you have the misfortune to need to install SugarCRM on CentOS 5.4 you will need PHP 5.2 at a minimum; unfortunately the current version of CentOS (5.4) only comes with PHP 5.1. Follow these instructions to upgrade to PHP 5.2. They worked fine for me, I just wish I’d found them earlier.


Protecting yourself against the WordPress login page exploit

Anyone who runs a wordpress blog will hopefully be aware of the recent exploit against the login page:

“You can abuse the password reset function, and bypass the first step and
then reset the admin password…”

and

“An attacker could exploit this vulnerability to compromise the admin
account of any wordpress/wordpress-mu <= 2.8.3″

There’s no fix in any released version yet but you can protect yourself with a bit of Apache config until one is released. Just add this to your wordpress virtualhost replacing “you.re.ip.add” with the IP address you want to access the login page from:

<Location /wp-login.php>
Order deny,allow
Deny from all
Allow from you.re.ip.add
</Location>

This will present any user not accessing your login page from that IP with a 403 Forbidden error. If you want to block all IPs until a fix comes out just miss out the Allow line:

<Location /wp-login.php>
Order deny,allow
Deny from all
</Location>


How to stop running out of memory when working on your server

It’s a fairly simple thing to do, but I have seen a lot of people drive their servers really far into swap and kill performance due to an administrative action they are performing in the shell. Just open up another terminal on your server and run:

# watch free -m

This is pretty useful if you’ve not got much free memory to play with and you’re installing a gem, using irb, or syncing portage or whatever. As long as you keep an eye on it you can terminate your process if it starts to eat too far into swap.
