Keynote
Evolving from Infrastructure as Code to Policy as Code
Why Chef sees Policy as Code as the future of infrastructure automation
Today, there isn’t a company out there that doesn't worry about security, but traditional infrastructure as code (IAC) approaches no longer scale to meet the needs of modern security-minded organizations. Traditional approaches to IAC fail to account for regulatory or business security and compliance needs and still require manual interactions between DevOps and compliance teams.
During this session Chef Infra Product Manager Tim Smith describes what Policy as Code is and how Chef is streamlining the delivery of Policy as Code with new product enhancements like Chef Infra Compliance Phase and Role Based Policies.
We're going to talk about Policy as Code using Chef Infra with the Chef Infra Compliance Phase. That is a new feature in Chef Infra Client 17. And then we're going to talk about some additional really great new stuff we have coming in the coming months, nearly ready to ship to you, that's going to make this Policy as Code vision a reality for a lot of people.
So with that, let's all jump in our time machines. I want to head back to the year 2008. Why 2008? Well, a lot of really amazing stuff happened in 2008 for technology as I went and put this list together.
First of all, the LHC came online at CERN-- really amazing. If you're into particle physics, if you think finding the Higgs boson is great, this is an absolutely monumental change in science for us. Android 1.0 ships. Even if you are an iPhone user, you probably have to respect the fact that Android coming onto the market really disrupted how mobile development worked and the entire mobile landscape, and really changed the way we use personal devices.
At the same time, Google had a really great year here because Chrome 1.0 ships. And again, it was a really disruptive event, brought a modern WebKit based browser to the masses and really made it so we could use all this fantastic new functionality we have on the web-- would not have been possible without Chrome 1.0 shipping.
Also back to science. SpaceX managed after a few failures to launch a Falcon 1 rocket. This is the start of their path towards reusable rocketry and got the US back into space, so very, very cool stuff all the way back in 2008.
But since we're at ChefConf, since we're all Chef users, it's really important also to note a very monumental thing that happened in 2008 for us, which is the first release of Chef ships. I tried, I scoured the internet to find a blog post-- I know Adam Jacob posted something somewhere telling people about this new thing. I'm sure it's in a Usenet archive, buried somewhere I'll never find it again, but you'll just have to take my word for it: Chef 0.1 did ship in 2008. And that really set us on the path for Infrastructure as Code, Compliance as Code, and now Policy as Code.
But Chef shipping 0.1 is not what I want to talk about actually, I want to talk about my very first post-college job that I would be willing to put on a resume. And that was also in 2008 when I worked as a PC technician at McMurdo Station in Antarctica. Why would we talk about being a PC technician? Why would we talk about desktop support in this web operations conference?
We're all hip DevOps engineers and SREs-- what is desktop support? That's not us. But I think there are a lot of great lessons to be taken from desktop support. And that's really because we have similar problems.
We just have a significantly slower process when we're talking about desktop support. We have a world that is harder to automate, a world that is more customer-oriented, a lot more break-fix situations, but there are some of the same processes and some of the same problems that we experience in web operations. And this job really shaped how I see the problems that we experience. I took away a lot of key things from it.
And those are really learnings that I took from that manual process. McMurdo Station, more than anything, being at the bottom of the planet, makes desktop support even slower than it would be if I was doing this in an office park in the US somewhere. Everything is slower: getting systems takes years, getting parts takes years, you can't just walk down to a cubicle. Things involve helicopters, and airplanes, and remote camps like this one here, and it makes you understand those processes a lot more. It makes you think about the manual things you're doing, about the inefficiencies, and really see them at a much higher scale than I think you get through a lot of jobs.
So these are the learnings that I took away and how that shaped how I see operations and why I think this shapes where we're going as an industry. The first is this concept of scalability. I don't mean this like the way we usually think of scalability and computer systems, not talking about how many IOPS a system can handle, or how many transactions per second it's processing, what I mean is really process scalability.
Process scalability is incredibly important to anything we do in computers. As we go about our jobs as operators in our systems, we're always trying to automate things. Part of that automation is really understanding that process, understanding where we have inefficient manual processes, whether they scale, whether they don't, and what we can do to improve those.
This picture here is from when I hopped on a helicopter to take a nice 20-minute, 20-mile ride up to an island called Black Island. Black Island is just north of McMurdo Station, and it's where all of McMurdo's communications happen out of. On this day a help desk ticket was put in, and my boss figured I would be the lucky one to go hop on this helicopter to solve that problem.
Somebody was having trouble syncing their iPad to their iTunes. This is a critical emergency. Absolutely the thing we need to solve immediately. So I hopped on a helicopter, took this fantastic ride out, spent a day, fixed this person's iTunes-- and fixed some other things too, just so we're clear. But this is really, in my mind, the definition of a process that doesn't scale, a process that requires literal boots on the ground, someone in cold weather gear taking a helicopter ride out to solve a problem.
How many of those could we do in a day? Obviously only one. This is a thing we hope doesn't happen, but how could we optimize that process, how could we optimize our systems that we have in place to make sure that we don't have to do a thing like this again?
Now, I'm really thankful that it happened, obviously, thank you anyone that paid taxes around 2008. It was a fantastic trip. But how can we make sure we don't have things like this in our business?
The second big takeaway from my time as a PC technician was the importance of testing. You might think, what does desktop support have to do with testing? Testing, that's just not a thing we do in desktop support? It is a thing we do in desktop support. It's a thing we do in a very painful, manual way.
And at McMurdo Station, the desktops that we're supporting, they're not just people in accounting, they're not just web designers or marketing people. A lot of those desktops that you're supporting are the systems that keep you alive: they're the systems that support the doctors, they're the systems that provide you with power, they keep the generators running, they keep your water clean. And without those systems you may have trouble living, so it's very important, obviously, that these systems be handled with the utmost care.
How do you do that? Well, you test any change you make to those systems. You're also deathly afraid of changing them, but any time you're deploying a new application, or a security update, or a new piece of hardware into those environments, you need to make sure that you're absolutely confident that it works.
And this is all through manual processes, things that are passed down from employee to employee, Excel docs, Word docs, manual paper in a filing cabinet somewhere that tells you how to have confidence in making those changes. And this really slows the business down to the point that people are often afraid to make those changes because they're afraid of the manual testing process. Again going back to process scalability, not having scalable testing really keeps us from doing what we need to do as a business.
Next thing I learned was the importance, and the hindrance, of security and compliance in getting your job done. I was working at McMurdo Station for the National Science Foundation, but there are a lot of other government agencies at McMurdo. This sign here is for NASA. This is a restricted area I couldn't get into, because there's communications equipment that at the time would allow NASA to communicate with the space shuttle and with the International Space Station. This is military-grade equipment, obviously beyond what I had any clearance for.
NASA had security requirements on the station, as did NOAA, the US Air Force, and the New York Air National Guard. They all brought their own different security requirements, their own different compliance requirements, and they often conflicted. It was often hard to understand who had jurisdiction over what and which security and compliance rules you had to apply as you did your job.
But it really slowed things down. It made it difficult for us to work, it made it difficult for us to be appropriately compliant and to achieve real security. One of my jobs was to be the human InSpec engine. As systems came onto the station-- any scientist traveling from a university, or a new person from the Air Force or NASA-- they would have to bring those systems to me. I would do my hand evaluation, my brain InSpec scan, to determine if the system was compliant.
When I was done with it, I'd sign my name on a little sticker and put it on there and magically that system would be forever compliant. Couldn't possibly be that the moment that system left my office another person undid everything I changed to it. But this was our Security and Compliance. This was the National Science Foundation rule and I did it by hand.
It was very painful. I spent hours and hours doing this, but it brought security and compliance to our business. And it made me think about how we could do that better, how we can achieve real security, real compliance in continuous way without slowing down other work that needed to get done.
The last big takeaway I had was the concept of visibility. And again, this isn't visibility looking through a snowstorm or anything like that, this is visibility into the state of my infrastructure. Our desktops and how they're deployed, that is infrastructure just like the infrastructure of your website. It's a little different, obviously-- it's printers, and systems, and monitors, and applications that are deployed instead of ELBs and S3 buckets-- but you need to understand it just the same.
You need to have that visibility into the state of your infrastructure. You want to understand where applications have been deployed, you want to understand where security updates have been deployed, and it's a lot harder in a desktop world. You're making golden master images, you're burning them onto DVDs, you're sending them out to sites, reimaging systems by hand.
The systems are a really slow moving amoeba of change into maybe some eventually consistent state, but realistically, everything is always in a slightly outdated state. How do you know what it is? And that really slows the business down. Again, makes it really difficult to do your job.
So the four life lessons I really got out of this job, that I took forward with me and still think about today, are the importance of scalability; the importance of testing, particularly scalable and quick testing; the importance of security and compliance, done in a meaningful, continuous way; and that visibility into what our infrastructure looks like.
How do we know what is going on? How do we know what the state is? How do we know if we are scaling our systems? How do we know if we are testing our systems? How do we know we are in fact secure and compliant?
So as I waited for my flight off ice, I thought about what I wanted to do next. This was just a temporary gig, I needed to find a new job, and I thought I don't want to do desktop support. I don't want to do this again, it's far too manual. I want some sort of automation lifestyle, I want a fast moving job.
So of course, it's going to be a tech job. It's going to be big tech. I need to work for one of these cool large companies, I need the slide between the floors, I need the nerf guns between cubicles, I need the scooters, I wanted all that.
And I did get that job. It really helps to have a really funky thing on your resume to get that attention, but I managed to become an operations engineer, and I was working with wildly large systems that were globally distributed. And unfortunately, what I found out was everything was the same mess. There was no magical world where a company had everything right; there is no perfect company. Everyone had some level of manual tire fire. Thankfully, at about this time, 2008/2009, this really great new thing hit the market, we could say.
This new dawn happened, and that is DevOps. And that really changed how our industry operated and changed how people like me-- operations engineers-- did their jobs. There's a great, very lengthy quote out of Gartner on exactly why DevOps is important. And it's pretty big.
And I'll be honest. At the time-- 2009-- I didn't read this whole thing. I got really, really excited about that second sentence here where we said DevOps implementations utilize technology especially automation tools. Boom, sold, done, I want DevOps.
Unfortunately, I totally skipped over the really far more important part, which is agile practices, culture, all those changes-- lean, breaking down silos, all that stuff. Straight to automation, I was hooked, and I really started paying attention to everything that was happening in this space. And three people really changed how I saw DevOps and how I saw computers in general and their management. And I think everything that came out of that time is still 100% applicable in 2021, absolutely worth reading up on.
First off, Patrick Debois coined the term DevOps, put out some amazing literature, and gave some amazing talks about the importance of automating our systems and bringing Dev practices into operations-- all very exciting for somebody like me. It was automation-focused.
Next was John Willis. John Willis was an early employee of Opscode, which later became Chef. John just traveled around the US and would talk to anybody that would listen about this new magical thing, DevOps.
I was lucky enough to go to an event with him in a tiny conference room in a Marriott in Portland, where he talked about this magical thing where we could have continuous integration of changes into our environment. Where I was working, we were doing a deploy once a quarter. Deploys usually took about two days, which I literally slept in our office for, so this was all really exciting to me.
How can we automate all these things? How can we do this? It didn't seem possible outside of the Googles of the world, but John was insistent that anyone could do this with technologies like Chef.
And then, I started following everything that was going on with Adam Jacob. Adam founded Chef, wrote Chef, but also has an amazing insight into the problems operators have. Adam, like myself, came out of a system administration operations world and really wanted to fix those problems in the industry, wanted to make this better for everyone that experiences it. And a great thing came out of all these teachings from everyone here. And that's this concept of Infrastructure as Code.
Infrastructure as code, again, maybe this seems obvious now, but in 2009/2010 maybe, this was an absolute mind-blowing new concept. How we could treat our operations work, our infrastructure the same way we treat our development of software products, how we can bring those same development practices into the world of operations, codify our infrastructure, do everything we do as a code base, very radical new concept at the time. And it was a dream.
It was a dream for me that I wanted to achieve, where my Dev teams and my Ops teams could all work together in Chef. We could all contribute to the same pipeline, the same repository, we could all work with Chef, and we could rapidly get changes into production. We could have CI, we could have CD, everybody would be magical and happy, we wouldn't hate each other anymore, and we would have DevOps.
And this is really possible because it solves those problems that I experienced: it solves the scalability, it solves our testing, our compliance, our visibility. How does it apply to those problems? It was absolutely a game changer for me.
First of all, scalability: the ability to write a three-line code change like this and know that it pushes out Nginx to all my systems. That's a massive process-scaling force multiplier, really. I don't have multiyear deploys to desktops, I don't have weekend deploys to production systems in my web operations job. I write three lines of code, I commit it, everyone's happy with it, we all give thumbs up, we merge it, it goes to production, and boom, we have the latest Nginx. A really massive increase in the scalability of processes.
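As a rough sketch of the kind of tiny recipe change described above (the exact change from the talk's slide isn't shown here, so resource details are illustrative):

```ruby
# Chef Infra recipe sketch: upgrade Nginx to the latest available
# package and make sure the service is enabled and running.
package 'nginx' do
  action :upgrade
end

service 'nginx' do
  action [:enable, :start]
end
```

Committed to the cookbook repository and merged through review, a change this small converges across every node the next time the Chef Infra Client checks in.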
At the same time, we get testing of our infrastructure for the first time. We're not manually bringing changes into development environments and staging environments, we're not constantly battling with the question of whether our staging environment really looks like our production environment. We can have PRs as they get opened, we can use these amazing tools in Chef like Test Kitchen and Cookstyle, we can bring these into CI pipelines, and we can have InSpec test runs.
So as I want to make that change to upgrade Nginx, does it work? Well, let's spin a bunch of systems up, let's spin different operating systems up, let's spin different systems up in my environment, different places where I would have Nginx. Let's install that, let's upgrade that, let's make sure it works.
I can write InSpec tests, using Test Kitchen to execute those tests, and I can see: did port 80 actually work? Did the server start? Was it enabled? Is it set up the way I need it to be so that I can continue delivering business value? An absolutely amazing leap forward here in the ability for me to get testing into my environments.
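Those checks map directly onto InSpec's resource DSL. A minimal sketch of what such a test might look like, run by Test Kitchen's verifier after the converge:

```ruby
# InSpec checks mirroring the questions above: installed, enabled,
# running, and answering on port 80.
describe package('nginx') do
  it { should be_installed }
end

describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```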
And then I get visibility. I can log in to Automate, I can see those changes, I can have absolute confidence that my environment looks a certain way, I can see as people update policy files, I can see how systems converge that they are in fact updating Nginx, the runs are successful and really have this visibility into what's going on in my systems, and have that visibility into a change that takes about 30 minutes not 30 days or 30 months, that's a 30-minute change now.
Now, you might have noticed that I did leave out one really critical part here, no mention of security and compliance. And that's because there's a really big catch. We don't have a security and compliance solution when we're talking about Infrastructure as Code.
The 2009 Infrastructure as Code DevOps movement really just did not address this. It didn't address it, in my mind, for two reasons. First of all, it came out of a lot of smaller startups, places that didn't have security and compliance concerns. But also, in general, as an industry, we just did not think about security. That was an afterthought. That wasn't a problem for us, that was a problem maybe for a security team, if we even had one. But it can't be anymore.
In 2021, even in 2020 or 2019, security can't be an afterthought if we want to stay in business. We can't be the business that has multiple breaches. Those are game-ending events for many companies. So what that meant was that the Infrastructure as Code dream that I had, had a big old stop sign in the middle. There's a big blocker that makes it just not possible anymore.
I still have Dev teams and Ops teams committing to that single pipeline. They're still giving each other high fives as they go to launch, they're talking to each other, we've got all the great parts of that cultural shift, we've got the technology shift, we've got the automation in place, but we can't push to production rapidly.
We can't push to production rapidly because even though we have all of these fantastic automation practices in place with Chef, and testing practices, we can't get it through compliance, we can't get it through security. Those need to go through change review boards, they need Jira tickets, they need meetings, they need spreadsheets, they need lots and lots of painful manual processes. And, fortunately slash unfortunately, there is a circular dependency here: compliance requires configuration changes, and all of our configuration changes require compliance. And this is just the reality of what a modern infrastructure looks like.
Compliance and security are just an absolute part of our business, and that's something we can't avoid even if we wish it away sometimes. And at Chef, we solved that problem, I think, for a lot of people. We rubbed some of the same magic that we brought with DevOps and with Infrastructure as Code, and some of the great innovations that Chef brought to market there, and we brought Chef InSpec to market. And this is now this concept of Compliance as Code that we are absolutely at the forefront of.
And this brought our compliance teams and our security teams together. It brought them into that single pipeline the same way we did with Dev and Ops teams. We rapidly increased the productivity of these teams by getting them out of those spreadsheets, getting them out of Jira tickets, and having them work in code. We took compliance from a thing that you do once a year or once a quarter to a thing that you can do continuously. And this is a huge change. And you think for a moment, like, I guess we solved it all: we automated our infrastructure, we automated our compliance.
The problem is we obviously skipped that first sentence of the Gartner description of what DevOps was all about, because we created silos again. Now we have a DevOps team that's working together on one CI pipeline, and we have a compliance and security team that's working in an entirely different product, with different pipelines, completely separate in how they behave. And because of that, we're still blocked.
We've increased our velocity, we've maybe slightly unblocked ourselves, accelerated how long it takes us to achieve this concept of security and compliance. But just the same, if we are a Dev team or an Ops team and we want to contribute a change, it still goes into a pipeline, it gets checked, it gets tested, it gets merged, but it can't go to production, because then it has to go off to a security and compliance team. They might have to make InSpec changes, as security and compliance profiles are rewritten to deal with new systems we're deploying or new changes to those systems.
And that's automated, and it's certainly a lot faster, but it's still very manual. You still have this giant stop in the middle. And because of that, I think it's safe to say at this point that Infrastructure as Code is a fundamentally flawed concept. That doesn't mean that I'm against Infrastructure as Code. Again, Infrastructure as Code absolutely changed my life.
Concepts like Chef gave me many, many raises over my career, but Infrastructure as Code does not scale into a world where we have security concerns as well. And in 2021, we all have some level of security concerns whether we want to or not. So really, the future for us, and what you're going to see us talking more and more about, is the concept of Policy as Code.
And Policy as Code is really about making DevSecOps a reality, taking that marketing buzzword and bringing it into a thing that we can really achieve, that we can really get our Dev teams and our Ops teams and our security compliance teams all working together. We're doing that by breaking up those silos we created. We're getting rid of that single InSpec silo and we're moving everyone into a common tool, a common pipeline, a common framework, everyone contributing to Chef and pushing to production in a rapid way.
What that looks like is a little bit like Chef Infra. If you are a seasoned Chef Infra user, you might look at this graph, look at the flow of how changes go out, and think that looks a lot like how I produce changes in my Infrastructure as Code world right now-- and that's the point. There's nothing inherently wrong with the process we developed. We think a lot of the Infrastructure as Code principles and a lot of the Chef Infra principles were really fantastic. We just need to bring security and compliance into the mix there instead of putting them into an automated pipeline of their own.
So on the left, we have Chef Workstation. There, we're going to be creating and testing our policies. What I mean by policies, well, that could be an infrastructure policy like a Chef recipe, that could be a compliance policy like an InSpec profile. The policies for how we want our systems to behave whether we're talking about how they're set up or how they're secured doesn't matter.
We're going to test that, we're going to write that content there, we're going to have helpers and tooling locally to really iterate on it and make sure it is looking great before we contribute it. Don't be me in 2010/2011; don't contribute tons and tons of bad, buggy code that makes your co-workers question your chops. Write something great, iterate on it locally, get it perfect, push it up in a PR, have that approval process happen, test it again through CI across all your systems-- hopefully a very large fan-out test so you have absolute confidence in it now-- and then we move on to system state enforcement.
So system state enforcement is the Chef Infra Server and the Chef Infra Client, and this is where we take that policy that you've uploaded and we really make it happen. The Chef Infra Client is going to check in continuously, it's going to pull down that policy, it's going to see new Cookbooks and new InSpec profiles, and it's going to execute that policy. It's going to make sure that Nginx is upgraded on that system, make sure that Nginx is running, that it's secure, that it has all the best SSL ciphers, that it's not listening on port 80-- and I know that was my example previously, but please don't miss an opportunity to secure things.
We're going to take all of that-- all that change we just went through-- and we're going to push it up into our data aggregation and validation layer. This is Chef Automate. This is where we can see all of it; we get the ability to log in as operators and have visibility into our infrastructure.
We can see not only the infrastructure side of it, see those upgrades that are happening and chef-client checking in making package upgrades successfully completing, but we can also see the compliance side of it. We can see how those same systems are compliant. Make sure that change we're pushing out doesn't take systems out of compliance anywhere.
We can also push that data out of Chef Automate. We don't want to hold your data hostage. We want to allow you to get that data out with APIs, push it out to things like ServiceNow or Slack. We want to make sure this data is available for you to view, but also to push into other systems, because we know you have tons and tons of tools in your environment.
So this is Policy as Code. Like I said, it looks a lot like the previous infrastructure flow. That is the point.
We want to make this as easy as a concept to grasp and to move into as we possibly can. And we'll talk a little bit about why that's important and how we've achieved that in the next section. But first, I want to discuss the importance of having the single package of change.
We talked about that system state enforcement and how the Chef Infra Client is running. Why is it so important that we have a single place where we're making these changes? It's really because we're working to meld together our tools and our workflow.
On the left here, we have Chef Infra with Cookbooks. Those contain recipes, resources, and attributes. And then Chef InSpec. It's shipping profiles. Those have controls, waivers, inputs.
They're similar in what they do-- one is scanning the system and determining system state, and the other is also scanning the system and determining system state, but in order to make changes to it. We want to bring those together. We want to have a more cohesive product that works together. And what we've done is bring everything under the Chef Infra Client.
And for a long time, Chef Infra Client has shipped with InSpec embedded. We're really utilizing that here. We're giving you two distinct phases that happen within that client run.
The Infra phase is probably what you're more familiar with. That's the traditional Chef Infra Client run. This is where we're executing Cookbooks and recipes, and resources using those attributes to tune how those things execute.
After that though, we have a compliance phase. And this is the new part of this concept. The Compliance Phase runs still within the Chef Infra Client. It's part of that run.
It's integrated into how everything works through all of our process-- through Chef Infra Server, through Chef Automate-- but it pulls down InSpec profiles, kicks off controls, looks at waivers you might have in place, and uses those inputs to decide how they should run. And this all works together in a single client. And why that is really, really great is it allows us to safely and easily promote changes to our environments.
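As a hedged sketch of what wiring this up can look like: the Compliance Phase is driven by `node['audit']` attributes in the cookbook. The profile name, source, and waiver file here are illustrative assumptions, not the exact configuration from the talk:

```ruby
# In a cookbook attributes file: enabling the Compliance Phase by
# declaring where scan results go and which profiles to run.
default['audit']['reporter'] = %w(cli chef-automate)

# Hypothetical profile fetched from git; swap in your own source.
default['audit']['profiles']['linux-baseline'] = {
  'git' => 'https://github.com/dev-sec/linux-baseline.git'
}

# Waivers and inputs can be supplied the same way.
default['audit']['waiver_file'] = 'waivers.yml'
```

Because this is just attribute data in a cookbook, it rides through the same PR, CI, and promotion process as any other infrastructure change.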
We already have the concept of policy files. It's a fantastic name-- thank you to whoever did that five or six years ago; it really helps this migration. We have the concept of policy files where we can lock specific Cookbook versions in; we can create these immutable artifacts. And that means what you test on your desktop is the same thing that you push to a staging environment, the same thing you promote to production, and you can have confidence that it's going to work everywhere.
It gets rid of that "works on my desktop" excuse-- if it works on your desktop it should work everywhere, because we're all using the same immutable artifact. Someone makes a change to Cookbooks later? If they change environments, or roles, or whatnot, it doesn't matter, because we've locked a policy file, and none of those things are going to have any impact on the policy file and how it executes.
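A minimal, illustrative Policyfile sketch (the policy name, run list, and version pin are assumptions) showing how a run list and cookbook versions get locked into that immutable artifact:

```ruby
# Policyfile.rb -- compiled with `chef install` into Policyfile.lock.json,
# the immutable artifact that gets promoted between environments.
name 'webserver'

# Where unlocked cookbooks are resolved from.
default_source :supermarket

# The run list this policy enforces on its nodes.
run_list 'nginx::default'

# Pin the cookbook so every environment converges the same code.
cookbook 'nginx', '~> 12.0'
```

The lock file, not the Policyfile itself, is what gets pushed, so every environment converges exactly the versions that were tested.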
But that was a Cookbook-centric world. It was an Infra-centric world. By bringing InSpec profiles into the mix and shipping our InSpec profiles directly in our Cookbooks, we can get all the same benefits for our compliance as well. And this means that we can really have that concept of policy that we promote-- not just an infrastructure change that we safely promote through our environments, but also a security and compliance change.
As we roll out new infrastructure changes, we also want to make sure they're secure. We want that to be part of that process. And what that does is that really lets us shift all of our security concerns as left as possible. We don't have that world we talked about previously, where someone's making changes, they're editing Cookbooks, and then they're having a security team look at it after the fact.
Security is part of this process. It's how we develop it. It's how we develop locally on our workstation. And that same change that we develop on our workstation goes out everywhere.
We can see this in Test Kitchen, we can run Chef Infra Client locally, we can see the output of our compliance scans right in the client output, and we can know as we develop — just like we know that our Chef Infra Client runs are successful — that the compliance side is also successful.
This also brings all those best practices from the DevOps world into the security world. We want to bring CI pipelines in, we want to bring code reviews in. By bringing this all together into the same pipeline, we can make sure that as we make infrastructure changes and dev and ops teams review them, security teams are reviewing those same changes.
And we have operations and dev teams checking out our security changes as well, understanding whether we're doing the right thing. We get a lot of fantastic benefits by bringing these dev principles in. We think this really gives you the best of both worlds: Policy as Code has a lot of ease of use and ease of migration, but also massive power behind it with our InSpec engine.
On the ease-of-use side, this is the same language you've been using for cookbook testing. If you're in the operations world, you're using Chef, you're writing cookbooks, and you've got InSpec tests that run at the end of those cookbook runs — looking to see if Nginx is enabled, looking to see if it's started, looking to see if port 80 is responsive. That is the exact same engine you'll use to write security and compliance rules, which is fantastic. You don't have to retrain teams, you don't have to have them learn a new tool — they've already used it to a certain extent. They just have to learn a little bit more about how they can use it for a new thing.
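The Nginx checks described above, written as an InSpec control, might look something like this (the control name and metadata are illustrative):

```ruby
# Illustrative InSpec control: the same resource language used for cookbook
# testing, applied as a security/compliance rule.
control 'nginx-available' do
  impact 1.0
  title 'NGINX should be enabled, running, and answering on port 80'

  # Is the service enabled at boot and currently running?
  describe service('nginx') do
    it { should be_enabled }
    it { should be_running }
  end

  # Is anything actually listening on port 80?
  describe port(80) do
    it { should be_listening }
  end
end
```

The same file works in a post-converge Test Kitchen verify and in a compliance profile, which is the point: one engine for both worlds.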
There are no new clients to deploy, no new pipelines to deploy — we're bringing this all into a single pipeline. That's the real benefit here: getting everybody to collaborate and work together. Achieving DevSecOps means we don't have anything new to deploy, and that really simplifies our path to production.
We don't have different things that have to happen, we don't have to coordinate through tickets. We can have our dev teams, our operations teams, and our security teams all working together in that single pipeline and all being part of that deployment to production. The power really comes from InSpec itself. There are about 500 built-in InSpec compliance resources right now — an amazing number — and every week we're doing new releases, so you're seeing new resources.
We got, I think, three or four added just this last week, giving you more database capability: the ability to look at additional things in MongoDB and Postgres and MySQL. That really makes it so you can write simple resources that do powerful things under the hood. We're the ones doing all the work of understanding what commands you have to run to check compliance — you just get the benefit of it.
Even better, if you're using our content — the content that comes with Automate — you get CIS profiles and DISA STIG profiles out of the box. You can apply those to systems and achieve that compliance without having to author anything yourself. And we're utilizing that immutable content delivery system with Policyfiles. This is an amazing way to be able to securely, safely push out changes to systems and do it with confidence. That's what we want: confident deployments.
We're not going to roll things back — we want to roll forward, and we want to be able to roll forward with confidence. For that we have to have testing and immutable deployments. The best part is that this works with all the existing tools that enable that. It works with Cookstyle, it works with Test Kitchen — those tools are all being modified to support additional compliance functionality — and it brings this all into the workflow you already have. The powerful testing workflow you already use is now ready for the security and compliance you're going to add to it.
So with that, I want to talk a bit about what this looks like in Chef 17. In Chef 17 this April, we shipped the Compliance Phase. What that allowed you to do is set a few attributes, grab some profiles — in this case we're grabbing a profile from Automate — push that data up to Automate, and run the Compliance Phase. And you can see at the bottom that in every Chef run we're checking a very simple thing: whether /tmp is a directory and whether it's owned by root.
That is actually an important thing. CIS profiles do include checks on permissions and root ownership of temp directories, so that is definitely a valid compliance check — a bit of a silly example, but totally valid. By setting these attributes, we get the Compliance Phase running with every Chef Infra Client run, pushing that data up into Automate. And again, you'll also see it on the command line here.
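The attribute setup described above might look roughly like this in a cookbook's attributes file (the profile and org names here are placeholders, not the actual values from the demo):

```ruby
# Illustrative attributes enabling the Chef Infra Compliance Phase,
# fetching an InSpec profile from Chef Automate and reporting back to it.
default['audit']['compliance_phase'] = true

# Where to fetch profiles from and where to send scan results.
default['audit']['fetcher']  = 'chef-automate'
default['audit']['reporter'] = 'chef-automate'

# Run a profile stored in Automate; 'my_org/tmp-compliance' is a placeholder.
default['audit']['profiles']['tmp-compliance'] = {
  'compliance' => 'my_org/tmp-compliance'
}
```

With these set, every Chef Infra Client run executes the profile and reports the results both on the command line and into Automate.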
And this is really an evolution of the Audit Cookbook — moving from the simple audit capabilities of the Audit Cookbook to this world of Policy as Code and everything we talked about with that. We think taking the approach of iteratively improving on the Audit Cookbook is really great because it gives so many benefits. First of all, there's nothing to migrate to: you set one attribute — compliance_phase to true — and you get the Compliance Phase instead of the legacy Audit Cookbook.
And there's nothing to manage or dependency-solve anymore. You don't need the Audit Cookbook — all that functionality is built in — so you can get rid of managing that dependency: putting it on Supermarket, bringing it in with Berkshelf, bringing it in with Policyfiles. That means there's nothing to sync; you don't have to bring it down to nodes that might be on slow WAN links. And there's nothing to upgrade: you never have to worry about whether the Audit Cookbook is going to work with Chef 18, 19, or 20, because that compliance capability is just built in. It's always going to work, we're always going to test it, and we're going to make sure it is absolutely the best as we go on.
But I don't want you to think this is just a lift-and-shift of the Audit Cookbook, because this is really us taking the capabilities and the API of the Audit Cookbook and giving you a much more powerful solution. The first part of that seems simple but is really a big departure in how this works: command line output. Getting Compliance Phase output as you run your Chef Infra Client means that in testing, you can see the compliance state of your systems.
Through Test Kitchen runs, you can view and see exactly what's happening within those systems. As you run them manually if you need to on a system in production or staging, you'll be able to see that output, see what your compliance state looks like, and also obviously continue to report that up into systems like Automate.
The next thing we built in is the ability to really meld the Infra side together with compliance, bringing Infra attributes into your controls and making them something you can access. We collect a huge amount of data: all the data you've already set in your Policyfiles and cookbooks about how you deploy your application, plus everything Ohai gathers. Those are all available to Chef Infra, and when you're writing policy in your cookbooks you can use them — you can check whether you're on EC2, you can look at IAM profiles, you can pull all kinds of fantastic information about the state of that system.
And we want to expose that to you so you can write controls that utilize all that same data. We're bringing that in as an input in InSpec. The chef_node input allows you to make checks like this, where you can actually go and ask: what is my Chef environment? If I'm in production, I may only want to run a certain control there — maybe my production environment has very different requirements than other environments.
This also lets you write great controls like: am I in AWS? Am I in Azure? I need different controls for those — I have different clouds with different requirements. This is really, really great when you're in massively scaled-out hybrid setups, with data centers and different clouds: you write controls that are reactive to each of those, so you can have compliance across everything in the environment you're responsible for.
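An environment-aware control using the chef_node input described above might be sketched like this (the control name, attribute paths, and file check are illustrative):

```ruby
# Illustrative InSpec control consuming the chef_node input that the
# Compliance Phase passes in from Chef Infra.
chef_node = input('chef_node')

control 'prod-only-hardening' do
  impact 0.7
  title 'Extra hardening that only applies in production'

  # Skip this control entirely outside the production Chef environment.
  only_if('not a production node') do
    chef_node['chef_environment'] == 'production'
  end

  describe file('/etc/shadow') do
    its('mode') { should cmp '0000' }
  end
end
```

The same pattern works for cloud detection: branch on the Ohai data exposed through chef_node to apply AWS-specific or Azure-specific controls only where they make sense.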
Next, we put waivers into your Infra recipes. Waivers are a fantastic feature, especially applicable if you're using our CIS and DISA STIG content: you apply one of the CIS profiles, 500 different CIS controls run through, and ten of them are simply not appropriate for your system.
You need a waiver for those — you need to set a waiver and explain to a potential auditor why you're applying it. This allows you to set that justification, have it go up into Automate, and have it be available programmatically through the Automate API and in the UI. And now you can set that directly in your infrastructure cookbooks.
So we have this inspec_waiver_file_entry resource. You add it to a recipe, and then as InSpec runs as part of the Compliance Phase, it automatically picks up the waiver files that have been set. You get those waivers nicely wired up for you without any additional work.
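In a recipe, that resource might be used roughly like this (the control name, justification text, and date are placeholders):

```ruby
# Illustrative use of the inspec_waiver_file_entry resource in a recipe.
# The Compliance Phase picks this waiver up automatically on the next run.
inspec_waiver_file_entry 'tmp-permissions-01' do
  justification 'Approved exception per internal audit review; see ticket'
  expiration '2022-06-30'   # waiver stops applying after this date
  run_test false            # skip running the waived control entirely
  action :add
end
```

The justification travels with the scan results up into Automate, so an auditor can see both the waived control and the reason for the waiver.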
And then the next thing, which is a little harder to explain and talk about, is increased stability and error handling. We've done a huge number of under-the-hood improvements. Like I said, this isn't just a lift-and-shift: we brought all this new functionality, plus tons of new error handling, improved error messages, and improved execution, to really make this experience a lot better.
And the best thing, I think, of all this is that it's all available today. We shipped it in Chef 17.0 and we've improved it in 17.1, 17.2, 17.3, and 17.4. You're going to see us continuously improve this every month as we do releases, and that's why I want to talk about what's next: what's coming in 17.5 and beyond, and things you'll see in the coming months. It's all very, very real, and it's all really, really exciting.
The first part is that we're bringing compliance code directly into your cookbooks. I talked about setting attributes and pulling InSpec profiles from Automate or from Supermarket. That definitely solves a huge problem for people today, but to really get to that world of a single artifact that you test on your system and promote safely, we have to ship that compliance content directly in the cookbook. And that's why we've added this functionality here.
You can see in the top third there that we're loading InSpec profile files, InSpec input files, and InSpec waiver files — bringing those directly out of the cookbook and making them available to the author. And at the end, we're running a compliance report using this tmp profile. We can ship one profile, or a hundred, directly in our cookbooks, and use the cookbook as the container that we ship through our environments, immutable with Policyfiles.
This is done through a new folder in the cookbook called compliance. You put things into the inputs directory, the profiles directory, and the waivers directory. They don't automatically get loaded by the client, because we want to give you control: the same way we expose all those Chef Infra attributes, we want you to have the power to decide which profiles, inputs, and waivers you apply based on where you are. So we added new Chef Infra language helpers that let you include those when you want: you can include inputs, include profiles, and include waivers the same way you include recipes right now.
And the really great part with this is that you could say include_waiver for control A, if ec2?. Then we only kick that off when we're on EC2. This lets us put logic into how those profiles run and which profiles run where, because no one has a simple environment where we can just lay down one blanket security statement. It almost never works that way — everyone has complex environments, and we want to give you the power to handle that. So we're going to ship all that content to you and then let you decide where you want to use it, how you want to use it, and when you want to load it.
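Putting the helpers and the conditional waiver together, a recipe might look roughly like this (the cookbook and content names are illustrative):

```ruby
# Illustrative recipe loading compliance content shipped in this cookbook's
# compliance/ directory via the new Chef Infra language helpers.
include_input   'my_cookbook::base_inputs'
include_profile 'my_cookbook::tmp_profile'

# Only apply this waiver on EC2 nodes, as described above.
include_waiver 'my_cookbook::ec2_waivers' if ec2?
```

Because the content ships inside the cookbook, it is locked by the same Policyfile and promoted through environments as part of the same immutable artifact.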
The next thing we're going to give you is generators in Chef Workstation, to make this really easy to jump into. We already have generators like this one: chef generate cookbook my_important_cookbook. I get a cookbook created, with all the content, all the best practices, the metadata, the chefignore — all the boilerplate gets created for me, with everything filled out with that name.
We're adding new generators for InSpec. That's going to allow you to run commands like chef generate inspec-profile or chef generate inspec-input. If you're in a standalone directory, you'll get a standalone profile. If you're already in a cookbook directory, you'll get that compliance directory created, with a profile created for you directly in there — letting you jump right in and start creating security content without having to create directories and profiles by hand.
We're also working on deeper integration with Test Kitchen right now. You already get that output — you can see how your cookbook runs and how the compliance report looks at the end. We want to make it so you can fail those runs when compliance doesn't go as planned. You want that in your CI pipelines: as compliance fails, runs fail and builds go red, with the appropriate exit codes, and with tunables in your kitchen.yml to decide how that happens. Those are all coming really soon.
And the best part here is that all of this ships this year. This isn't some vaporware thing, nobody likes vaporware. We're bringing together all this new functionality, bringing together InSpec and Chef Infra into the single package all this year.
And just to summarize why this is important to us — why do we build this? Why is this what we do? It's really about everyone out there using our products.
Like I said, I come from an operations background myself, so this is very, very personal to me. I can picture myself using these tools in my past jobs. Lots of people at Chef are in the same boat. We've all used Chef, we've all experienced the pain — we've had pages, we've been up in the middle of the night, we've been through change reviews. We've gone through all the pain of these manual processes, and we don't want that to be your reality.
We don't want a world in which we have mostly automated systems with tons and tons of manual blockers, we want to give you that 2021 dream. This is about enabling that dream again, renewing it, and allowing you to have all those teams working together in that single pipeline through Chef taking things to production in a very rapid fashion. And that's just making DevSecOps a reality. Again, not a buzzword, it can be a real thing. And we're making it a real thing with Policy as Code.
And with that, I want to say thanks for coming. Feel free to reach out again with any questions about anything I've talked about here. You can find me on Slack, you can find me on Twitter, you can shoot me an email.
If you think this is just the worst idea ever, I would love to hear that. If you think it's fantastic and wonderful, you should tell me that too — that would also be great. But thanks for coming, and I hope you enjoy all the other great talks going on at ChefConf.
Great. Thanks, Tim. Before you run off for a gelato, we had a few questions come in to ask you.
Sure.
OK, first one: are ChefDK and Chef Workstation the same thing?
Yeah — yes and no would be the right answer to that one. We renamed ChefDK a few years ago, I think about three years at this point, into Chef Workstation. We brought a bunch of new functionality into it, effectively forked ChefDK off, and made a fantastic new thing — bringing in tools like chef-run and integration with VMware vCenter — all into this new product, Chef Workstation. And since then, we've left ChefDK behind.
So Chef Workstation is really our single tool going forward. It has all the things you need for Chef — and by Chef, I mean Chef the company: the Chef Infra Client, the chef command line, things like knife, Cookstyle, Test Kitchen, Habitat, InSpec — all in one package and all running the latest versions. If you're on ChefDK, it's pretty dated at this point; realistically, all the things we talked about here are not possible with ChefDK. So you should absolutely go grab Chef Workstation, install it, and check out all the cool stuff there.
Great. A couple more here, while we have a bit of time. Do you have to have a working Chef Infra Server and Chef Automate to use the Compliance Phase?
Yeah — you can absolutely use the Compliance Phase without Automate. The command line output that I showed will work without Automate. In the first example I showed of what's new in the Compliance Phase, I actually did set Automate as the output.
We're sending data into Automate there, but we could choose to just send it to the CLI. That would still give you some security confidence; what you'd lack is that real visibility. You'd have security and compliance in your pipelines, potentially, and on your workstation as you develop content — but you wouldn't have continuous compliance.
So if I logged into a system and made a bunch of terrible changes that took it completely out of compliance, you wouldn't have continuous compliance scanning sending that into Automate. You wouldn't be able to push it into a ServiceNow ticket, or alert in Slack that a change had happened and a system had gone out of compliance. So you really do want Automate there as the part that gives you that visibility.
Chef Infra Server is part of Automate now, so you can install just Automate and get the Chef Infra Server with it. You can also run Infra Server standalone — if you already have that, that's fine as well, either way. But yeah, for the complete solution you would want both of those.
Great. And while we're on the subject of Automate, there was one other question I'll ask. Is Chef Manage going away?
Yeah, you might have noticed we've had a whole bunch of Automate releases recently — the team there is just cranking out new stuff. One of the great things they're working on is the new Infra Views functionality. You can go to the Infrastructure tab at the top in Automate, add your Chef Infra Servers and each of your orgs, and get — I would say — the views you had in Manage, plus a lot more.
We've added all kinds of great new Policyfile features there. You can see your nodes, clients, environments, and roles; you can edit attributes; you can view Policyfiles and policy groups and change nodes, putting them in different policy groups. You get all that functionality. And that's really because, yeah, we are moving away from Manage.
We don't want you to have to install a third thing. We're putting Chef Infra Server into Automate, and Automate is going to include not just that view portion, but also the ability to modify your infrastructure — which is going to be really great. And every week you're going to get more of that, if you check our release notes.