Chef Metal 0.2 Release

John Keiser | Releases

Introducing Chef Metal 0.2! Chef Metal is a framework that lets you manage your clusters with Chef the same way you manage machines: with recipes. Combined with the power of Chef, Metal’s machine resource helps you to describe, version, deploy and manage everything from simple to complex clusters with a common set of tools.

To get it, run gem install chef-metal. To hack on it, go to https://github.com/opscode/chef-metal. Currently supported provisioners include LXC containers, EC2, DigitalOcean, and Vagrant. If you just want to skip all that, follow the quick start in the README.

The current release is an alpha. You can see a lot of our plans, and some concrete examples, in the requirements doc. Issues can be filed on the GitHub issue tracker.

Imagine …

Imagine you didn’t have to try to remember a bunch of arcane, one-off commands, in the correct order, to provision and bootstrap your infrastructure.

Imagine instead that you write a recipe describing your many servers with directives like this:

machine 'server_of_doom' do
  recipe 'apache2'
  recipe 'my_server'
end

And then you deploy it to EC2 by running that recipe along with this one:

with_fog_ec2_provisioner

Now you have your cluster on EC2. Run the same recipes with a different provisioner and, presto, your cluster runs on VMs too!
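
As a sketch, pointing the same machine definition at local Vagrant VMs is just a matter of swapping the provisioner recipe. The helper name below is hypothetical, mirroring with_fog_ec2_provisioner above; see the README for the exact Vagrant setup.

# Hypothetical helper, by analogy with with_fog_ec2_provisioner above;
# consult the chef-metal README for the real Vagrant provisioner recipe.
with_vagrant_provisioner

machine 'server_of_doom' do
  recipe 'apache2'
  recipe 'my_server'
end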

It’s All Chef

The best part: since it’s Chef, it gets all the benefits and flexibility of Chef.

It’s Convergent

Because it’s in Chef, Metal is convergent. If something fails, re-run it. If you just realized you need to run a new recipe everywhere or add a new machine, just change the Metal recipe and re-run. Only new machines are created, and only new changes are applied.
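
For example, adding a recipe to the earlier machine definition and re-running converges only the difference; the machine that already exists is reused rather than recreated ('my_monitoring' below is a hypothetical recipe name):

# Re-running this after adding 'my_monitoring' applies only the new recipe;
# the existing instance is left in place.
machine 'server_of_doom' do
  recipe 'apache2'
  recipe 'my_server'
  recipe 'my_monitoring' # newly added; only this change is converged
end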

It’s Code

One of Chef’s advantages is Ruby: you can describe the order in which you want things set up, and you can even loop! Take a look at this:

1.upto(10) do |i|
  machine "hadoop#{i}" do
    recipe "hadoop"
  end
end

Chef Metal lets you handle minor orchestration scenarios, too. Witness two servers, neither of which can fully come up unless both already exist:

# Define the first machine without 'theserver', which cannot run until both machines are defined
machine 'server_a' do
  recipe 'base_recipes'
end
# Define the second machine
machine 'server_b' do
  recipe 'base_recipes'
  recipe 'theserver'
end
# Now 'theserver' on the first machine will work
machine 'server_a' do
  recipe 'theserver'
end

And of course, you can express client/server relationships the same way.
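
A minimal sketch, using only the machine and recipe directives already shown (the machine and recipe names below are illustrative):

# The server machine converges first; the client can then be configured to point at it.
machine 'db_server' do
  recipe 'my_database'
end

machine 'app_client' do
  recipe 'my_app_client' # e.g. a recipe that reads db_server's connection details
end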

It’s Powerful

Your deployment process may involve more than just setting up machines. It may involve copying something from one machine to another, or grabbing some keys from a database. Wouldn’t it be nice to have a remote_file resource handy to download the right boxes before bootstrap? Since it’s Chef, you have all of its resources available to fill out your cluster.

# Download the tarball locally before the machine exists
remote_file 'mytarball.tgz' do
  source 'https://myserver.com/mytarball.tgz'
end
# Create the machine
machine 'x'
# Upload the tarball
machine_file '/tmp/mytarball.tgz' do
  machine 'x'
  path 'mytarball.tgz'
  action :upload
end
# Get it running
machine 'x' do
  recipe 'untarthatthing'
end

It’s Cron-able

If you’re running these recipes anyway, why not consider running your Metal cluster definition continuously from a machine inside your cluster? When you want to change the cluster, just upload the Metal recipe and your machine will run it and make the changes. Let the cluster self-heal when machines go offline (there’s that idempotency)! Consider auto-scaling: bump up the number of servers when you notice that CPU monitor getting a bit too hot.

# NOTE: This is a terrible auto-scaling algorithm, for demonstration purposes only; you can do better!
# get_average_cpu is a placeholder for however you query your monitoring system.
num_clients = 10 # starting point; in practice this would come from your own state
# Bump the number of clients if CPU across the cluster is pegged
average_cpu = get_average_cpu('tags:client')
if average_cpu > 0.9
  num_clients += 1
end

# Declare the clients
1.upto(num_clients) do |i|
  machine "client#{i}" do
    recipe 'myclient'
    tag 'client'
  end
end
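
To actually run this on a schedule from inside the cluster, Chef’s built-in cron resource is one option. The chef-client command line and recipe name below are assumptions; adapt them to however you distribute the Metal recipe.

# Illustrative only: re-converge the cluster definition every 15 minutes.
# Adjust the chef-client invocation and run list for your own setup.
cron 'converge_cluster' do
  minute '*/15'
  command 'chef-client -o "recipe[my_cluster]" >> /var/log/cluster-converge.log 2>&1'
end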

Contact Us

Please follow along at https://github.com/opscode/chef-metal. If you are interested in hacking on chef-metal, contact jkeiser@opscode.com.