If you read this blog, you’ve no doubt seen us profile what our customers are achieving with Opscode Chef. However, past posts have at times glossed over some of the coolest parts of our customers’ stories.
Beginning with this entry, we hope to provide more depth to our user stories with a series of Awesome Chef profiles on Community members, Chef contributors, and Opscode customers making awesomeness happen.
Of course, business benefits like increased development speed and more agile operations will still be an important part of these profiles. We sell commercial products, after all, and our customers have jobs to do and businesses to run. However, we're also going to try to dig deeper into how our users work, the technical and business challenges they face, why they make the decisions they do, and how they achieve success in their jobs and careers.
We hope these posts help shine a little light on all of you in the Community who give your time and talents to Chef, and we look forward to profiling the contributions, ideas, and innovations of as many of you as we can.
So, let’s get to it.
Here's the scene: A buzzworthy startup combining the best of Facebook and LinkedIn to create a professional network that lets users share their life at work. Like many startups, they found the public cloud made a lot of sense because it's cheap, easy to use, and lets them get their platform into production much faster than with a traditional data center build-out. So far, so good.
Then, in a matter of months, the platform takes off and now a small Ops team is facing the content uploads and demands of 30 million users.
This is the challenge Jeremy Koerber and the small Ops team at BranchOut faced. To address the compute challenge, they scaled out their AWS deployment, using AWS Elastic Compute Cloud (EC2) High-CPU instances paired with AWS Simple Storage Service (S3): data is stored in MongoDB or MySQL on the compute instances, while files and objects are stored in S3. This gave them the resources to manage user-generated content spikes, which are obviously hard to predict. But even with the near-limitless scale of AWS, Jeremy and his team faced two challenges:
1) Empower developers to be self-reliant and make app updates without waiting on Ops
2) Achieve #1 without sacrificing system consistency or resource control
BranchOut’s Dev team needed to move fast, often faster than Ops could manually configure and deploy AWS instances. So Jeremy established a new configuration and deployment process using Hosted Chef, Scalr, and GitHub. Jeremy and his team combined Chef and Scalr to automate resource configs and create a central portal for viewing the entire infrastructure across environments. Then, they customized the Chef Community cookbook for Apache Tomcat to integrate additional Java components, giving them a push-button code deployment for all their AWS servers. Now, BranchOut’s Ops team can expose its server configuration code to the Dev team, so Devs can simply choose the configuration they need from GitHub, copy and paste the code into their AWS environment, and deploy the resources they need. This workflow makes Devs more self-reliant, while the infrastructure remains consistent on rock-solid config code.
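We can't share BranchOut's actual cookbook code, but the "customize a community cookbook" pattern described above often looks something like the following sketch: a small wrapper cookbook that includes the community tomcat recipe and then layers an extra Java component on top. The cookbook name, attribute keys, and JAR path here are hypothetical placeholders, not BranchOut's real configuration.

```ruby
# Hypothetical wrapper recipe (e.g., branchout-tomcat/recipes/default.rb).
# Its metadata.rb would declare:  depends 'tomcat'

# Pull in the community Tomcat cookbook to install and configure Tomcat.
include_recipe 'tomcat'

# Layer an additional Java component onto the Tomcat install.
# The attribute key and destination path are illustrative only.
remote_file '/usr/share/tomcat6/lib/extra-component.jar' do
  source node['branchout']['component_jar_url']
  owner  'tomcat'
  group  'tomcat'
  mode   '0644'
  # Restart Tomcat when the component changes, assuming the community
  # cookbook defines a service[tomcat] resource (it did at the time).
  notifies :restart, 'service[tomcat]'
end
```

A wrapper like this keeps the customization in version control (GitHub, in BranchOut's case), so developers can grab a known-good configuration rather than hand-editing servers.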
Here’s what Jeremy had to say:
“Hosted Chef makes it easy for anyone on our team to deploy the resources they need. Because it integrates so well with Scalr, we have an additional level of transparency that gives us a real DevOps mentality – we’re all on the same page about what’s been done and what needs to be done.”
Very cool stuff. A big thanks to Jeremy and BranchOut for letting us tell their Chef story. To go deeper into BranchOut’s infrastructure, you can read the full case study here.