Immutable Infrastructure: Practical or Not?

Julian Dunn

With the continued popularity of Docker and containerization generally, the concept of immutable infrastructure has again come to the fore. Immutable infrastructure is generally defined as a stack that you build once (be it a virtual machine image, container image, or something else), run one or many instances of, and never change again. The deployment model is to terminate the instance/container and start over from step one: build a new image and throw old instances away.
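
As a concrete illustration, here is a minimal sketch of that build-and-replace cycle, assuming a local Docker daemon, a reasonably recent Docker CLI, and a hypothetical image called myapp; it is just the loop immutable infrastructure advocates have in mind, not a recommended deployment script.

    # A minimal sketch of the "replace, never repair" cycle, assuming a local
    # Docker daemon and a hypothetical image named "myapp".
    new_tag = "myapp:#{Time.now.to_i}"

    # 1. Build a brand-new image from scratch.
    system("docker build -t #{new_tag} .") or abort("build failed")

    # 2. Find the instances currently running the old image.
    old_ids = `docker ps -q --filter ancestor=myapp`.split

    # 3. Start replacements from the new image, then throw the old instances away.
    system("docker run -d #{new_tag}")
    old_ids.each { |id| system("docker stop #{id} && docker rm #{id}") }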

Many people have been curious about Chef’s position on immutable infrastructure and on Docker specifically. Well, here it is:

  • We believe that containers are likely to be used both as lightweight virtual machines and for per-process isolation, and we will support both.
  • We also believe that configuration management is complementary to, rather than contraindicated by, containerization.

On the last point, we’re pleased that others, especially at Docker, are of the same opinion. If you haven’t seen it yet, check out Eric Windisch’s presentation from DevOpsDays Pittsburgh, which validates our approach.

Our nascent chef-init and knife-container projects are allowing us to do a lot of interesting research & development work here at Chef on how to make our approach a reality. We’ll be making more announcements once those projects are ready.

In the interim, I also wanted to take a moment to dispel a few myths about immutable infrastructure and why we feel that full immutability is not the most practical approach.

In the Early Days of Computing…

The earliest computers were the ultimate in immutable infrastructure. When you had an IBM 1401 — which, by the way, is an impressive chunk of equipment restored to working order at the Computer History Museum in Silicon Valley — and you needed to enter your entire program on a stack of punch cards, one line at a time, you were pretty much guaranteed that you had an immutable application stack. Nothing was going to go wrong with your program — as long as the card reader was working that day!

As computers evolved, however, we deliberately gave up immutability for greater flexibility & usability. Thus full-blown operating systems like UNIX were born, with entire application stacks and middleware running on top of them. The world was good, because you no longer needed to restart the 1401 from instruction 0 whenever you wanted to make a change in your program, or when the aforementioned card reader ate half your program. So in a way, it’s a little disappointing to see some part of the world obsessed with the notion that immutability is the solution to eliminating deploy-time and run-time application errors.

From Where Do the Benefits of Immutability Supposedly Arise?

Developers frequently take the view that “the environment” is what breaks their applications. There is certainly some truth to that; otherwise, configuration management tools like Chef would never have arisen. For a developer who doesn’t see the need to learn a configuration management tool, containerization seems like the perfect solution: it makes the whole application equivalent to the runtime image.

Let’s look back a little at our experience operating PaaSes, however. In IT, you don’t get something for nothing; you inevitably trade flexibility for manageability. If an immutable container instance goes berserk, how do you debug it? How do you manage it? A glib answer is “just kill the container,” but what if it happens again? (For more on this topic, read John Vincent’s pull-no-punches blog post from last week criticizing the idea that PaaSes are a panacea for everything that ails us.)

Immutability says nothing about operational concerns. It simply asserts that the original runtime was built correctly. And the operational cost of containerization lies not only in managing the runtime, but in managing the toolchain as well: correctly building, spinning up, and tearing down the immutable containers while providing 24×7 service to customers.

In the worst case, immutability can actually work against DevOps culture. Instead of throwing WAR files over the fence (and might I say, a WAR file is itself an immutable artifact), developers can throw immutable images. So unless there’s some other benefit, we haven’t really solved anything except to bloat the size of the artifact one throws over the wall.

Systems Are Not Really Immutable; We Just Pretend They Are

Full immutability is actually a mirage: no system can be made fully immutable. Your customer data, for example, is constantly being appended to and mutated, and can’t simply be blown away and rebuilt from a known good image, because there is no such thing. Even etcd (and solutions like it), as a way of distributing configuration changes to running containers, is by definition a tool that allows for consistent, structured data mutation. If you didn’t ever need to change the keys and values in etcd, what good would it be? So fixating on immutability as a solution for application errors is the wrong approach.
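
To make that point concrete, here is a tiny, hypothetical sketch of exactly that kind of mutation, assuming an etcd instance exposing the v2 keys API on localhost:2379; the endpoint, port, and key name are illustrative only.

    require "net/http"
    require "uri"

    # Mutating a piece of shared configuration via etcd's v2 keys API.
    # Assumes etcd is reachable on localhost:2379; adjust for your deployment.
    uri = URI("http://127.0.0.1:2379/v2/keys/myapp/feature_flag")

    put = Net::HTTP::Put.new(uri.request_uri)
    put.set_form_data("value" => "enabled")   # the "immutable" fleet just changed state
    Net::HTTP.new(uri.host, uri.port).request(put)

    # Any running container that watches or re-reads this key now sees new data.
    puts Net::HTTP.get(uri)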

This essay is not a criticism of container technology. It’s merely a criticism of people who want to use containers to build “immutable infrastructure.” Container technology is fundamentally sound, and in fact the concepts have been around for years (in the form of FreeBSD jails or Solaris zones). It’s just that the user interface for building, running, and managing containers has sucked so far; if Docker brings nothing else to the table, at least you no longer need a Ph.D. in LXC to reap the benefits.

Practical Immutability: From Purism to Application

At Chef, we believe that our customers will ultimately use containers both as lightweight, fast-starting virtual machines and for per-process isolation. Distributed container management and connectivity are still difficult problems without an easy-to-implement solution, although recent announcements from Docker indicate that they’re improving in this space. There is also a real use case around maximizing the utilization of cloud compute resources by packing them full of resource-constrained containers.

It is almost inevitable that using containers as lightweight virtual machines will lead to those containers living far longer than immutable infrastructure advocates would recommend. Moreover, those who choose the lightweight virtual machine approach will likely not have the patience to invest in and operate a toolchain that can safely (and quickly) build, deploy, and undeploy containers at scale.

None of this detracts from the real value and interesting concepts espoused by immutable infrastructure proponents. For example, the idea that you should be able to terminate any container or virtual machine at any time (à la Netflix’s Chaos Monkey) is still good, resilient system design. Building systems that can’t easily be replaced introduces non-recoverable failure points across your infrastructure.

Ultimately, the goal of all application designers and operational maintainers is to increase the confidence level in the stability and maintainability of all systems. You achieve nirvana when all failures are viewed as normal operations and not as apocalyptic events.

Intelligent use of configuration management tools like Chef to bring machines or containers under consistent control is often a “good enough” solution for maximizing that confidence. With Chef, you don’t have to describe the state of every single resource on your system (every package, every config file, every service that should or should not be running); you describe the desired state of only the resources you care about, and in practice that is good enough that nobody worries about the components left unmanaged. “Perfect” systems rarely work in practice, largely because of the high operational cost of maintaining that perfection at scale.
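
As a sketch of what that looks like in practice, a recipe declares only the handful of resources you care about; the package, template, and service names below are purely illustrative.

    # A minimal, hypothetical Chef recipe: declare the desired state of the
    # resources you care about, and let chef-client converge them on every run.
    package 'nginx'

    template '/etc/nginx/nginx.conf' do
      source 'nginx.conf.erb'            # assumes a template shipped in this cookbook
      notifies :reload, 'service[nginx]'
    end

    service 'nginx' do
      action [:enable, :start]
    end

Everything not named in the recipe is simply left alone, which is exactly the “good enough” trade-off described above.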

Conclusions

To sum up:

  • Pure immutability is neither a practical nor a desirable end-state. Even so-called “immutable infrastructure” today still mutates as your customers use your applications.
  • When you don’t have full immutability, you need configuration management to keep the parts of the system you are interested in at a known state.
  • We believe the world will ultimately use containers both as lightweight virtual machines and for per-process isolation.
  • Configuration management lets you mix and match containerized and non-containerized infrastructure in a consistent way. With chef-init, we are making that a seamless experience.
  • You can also use configuration management to construct your initial baseline images and manage their lifecycle. With knife-container, we are building exactly that, using Chef recipes.

We think containers are an exciting development in the world of IT infrastructure, and we’ll have more code and words on this topic as the technology evolves.