Chef Management Console 1.11.0 Release

Manage 1.11.0 is now available from the Chef downloads site.

This release includes fixes for reporting dashboard errors, changes to make running Manage behind a load balancer more usable, and various other bug fixes and improvements.

As always you can see the public changelog on hosted Chef at https://manage.chef.io/changelog.

Chef Server 12.0.5 Released

Today we have released Chef Server 12.0.5. This release includes further updates providing API support for key rotation, Policyfile updates, and LDAP-related fixes for user updates.

You can find installers on our downloads site.

Updating Users

This release fixes Issue 66. Previously, users in LDAP-enabled installations would be unable to log in after resetting their API key or otherwise updating their user record.

This resolves the issue for new installations and currently unaffected user accounts. However, if your installation has users who have already been locked out, please contact Chef Support (support@chef.io) for help repairing their accounts.

This fix has resulted in a minor change in behavior: once a user is placed into recovery mode to bypass LDAP login, they will remain there until explicitly taken out of recovery mode. For more information on how to do that, see this section of chef-server-ctl documentation.

Key Rotation

We’re releasing key rotation components as we complete and test them. This week, we’ve added API POST support, allowing you to create keys for a user or client via the API.
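For illustration, the body of the new POST is a small JSON document. The field names below follow the key-rotation API as we understand it, but treat this as a sketch: the PEM content is a placeholder, and "rotation-key-1" is a made-up key name.

```ruby
require "json"

# Hypothetical body for POST /users/USERNAME/keys (or the client
# equivalent under /organizations/ORGNAME/clients/CLIENTNAME/keys).
# The public key here is a placeholder; "infinity" requests a key
# that never expires.
body = {
  "name"            => "rotation-key-1",
  "public_key"      => "-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----",
  "expiration_date" => "infinity"
}
puts JSON.generate(body)
```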

Key Rotation Is Still A Feature In Progress

Until key rotation is feature-complete, we continue to recommend that you manage your keys via the users and clients endpoints, as has traditionally been done.

Policyfile

Work on Policyfile support continues at a rapid pace. This update adds GET and POST support for named cookbook artifact identifiers. Policyfile is disabled by default, but if you want to familiarize yourself with what we’re trying to do, this RFC is a good place to start.

Release Notes

As always you can view the release notes for more details, and the change log for even more.

Bento Box Update for CentOS and Fedora

This is not urgent, but you may encounter SSL verification errors when using Vagrant directly, or Vagrant through Test Kitchen.

Special Thanks to Joe Damato of Package Cloud for spending his time debugging this issue with me the other day.

TL;DR: we found a bug in our Bento boxes where the SSL certificates for AWS S3 couldn’t be verified by OpenSSL and yum on our CentOS 5.11, CentOS 6.6, and Fedora 21 “bento” boxes, because the VeriSign certificates were removed from the upstream curl project’s CA bundle. Update your local boxes: first remove them with vagrant box remove, then rerun Test Kitchen or Vagrant in your project.

We publish Chef Server 12 packages to a great hosted package repository provider, Package Cloud. They provide secure, properly configured yum and apt repositories with SSL, GPG, and all the encrypted bits you can eat. In testing the chef-server cookbook for consuming packages from Package Cloud, I discovered a problem with our Bento-built base boxes for CentOS 5.11 and 6.6.

[2015-02-25T19:54:49+00:00] ERROR: chef_server_ingredient[chef-server-core] (chef-server::default line 18) had an error: Mixlib::ShellOut::ShellCommandFailed: packagecloud_repo[chef/stable/] (/tmp/kitchen/cache/cookbooks/chef-server-ingredient/libraries/chef_server_ingredients_provider.rb line 44) had an error: Mixlib::ShellOut::ShellCommandFailed: execute[yum-makecache-chef_stable_] (/tmp/kitchen/cache/cookbooks/packagecloud/providers/repo.rb line 109) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of yum -q makecache -y --disablerepo=* --enablerepo=chef_stable_ ----
...SNIP
  File "/usr/lib64/python2.4/urllib2.py", line 565, in http_error_302
...SNIP
  File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl
    return m2.ssl_connect(self.ssl, self._timeout)
M2Crypto.SSL.SSLError: certificate verify failed

What’s going on here?

We’re attempting to add the Package Cloud repository configuration and rebuild the yum cache for it. Here is the yum configuration:

[chef_stable_]
name=chef_stable_
baseurl=https://packagecloud.io/chef/stable/el/5/$basearch
repo_gpgcheck=0
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-packagecloud_io
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt

Note that the baseurl is https – most package repositories probably aren’t going to run into this because most use http. The thing is, despite Package Cloud having a valid SSL certificate, we’re getting a verification failure in the certificate chain. Let’s look at this with OpenSSL:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect packagecloud.io:443
CONNECTED(00000003)
depth=3 /C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
verify return:1
depth=2 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
verify return:1
depth=1 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
verify return:1
depth=0 /OU=Domain Control Validated/OU=EssentialSSL/CN=packagecloud.io
verify return:1
... SNIP
SSL-Session:
    Verify return code: 0 (ok)

Okay, that looks fine, why is it failing when yum runs? The key is in the python stack trace from yum:

File "/usr/lib64/python2.4/urllib2.py", line 565, in http_error_302

Package Cloud actually stores the packages in S3, so it redirects to the bucket, packagecloud-repositories.s3.amazonaws.com. Let’s check that certificate with openssl:

$ openssl s_client -CAfile /etc/pki/tls/certs/ca-bundle.crt -connect packagecloud-repositories.s3.amazonaws.com:443
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=*.s3.amazonaws.com
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
   i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
   i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
...SNIP
SSL-Session:
    Verify return code: 20 (unable to get local issuer certificate)

This gets to the root of why yum was failing: OpenSSL can’t find a local issuer certificate for the S3 chain. But why is it missing?

As it turns out, the latest CA certificate bundle from the curl project appears to have removed two of the VeriSign certificates, which are used by AWS for https://s3.amazonaws.com.

But wait, why does this matter? Shouldn’t CentOS have the ca-bundle.crt file that comes from the openssl package?

$ rpm -qf ca-bundle.crt
openssl-0.9.8e-27.el5_10.4

Sure enough. What happened?

$ sudo rpm -V openssl
S.5....T  c /etc/pki/tls/certs/ca-bundle.crt

Wait a second – why is the file different? Well, this is where we get back to the TL;DR. In our Bento boxes for CentOS, we had a line in the ks.cfg that looked like this:

wget -O/etc/pki/tls/certs/ca-bundle.crt http://curl.haxx.se/ca/cacert.pem

I say “had” because we’ve since removed this line from the ks.cfg on the affected platforms and rebuilt the boxes. The issue was particularly perplexing at first because it didn’t happen on our CentOS 5.10 box: when that box was built, the cacert.pem bundle still contained the VeriSign certificates, but they had been removed by the time we retrieved cacert.pem for the 5.11 and 6.6 base boxes.

Why were we retrieving the bundle in the first place? It’s hard to say – that wget line has always been in the ks.cfg in the bento repository. At some point it may have been a workaround for invalid certificates in the distribution’s default package, or some other problem. The important thing is that the distribution’s package has working certificates, and we want to use those.

So what do you need to do? Remove your opscode-centos Vagrant boxes and re-add them. You can do this:

for i in opscode-centos-5.10 opscode-centos-5.11 opscode-centos-6.6 opscode-centos-7.0 opscode-fedora-20
do
vagrant box remove $i
done

Then wherever you’re using those boxes in your own projects – cookbooks with Test Kitchen, for example – you can simply rerun Test Kitchen and it will download the updated boxes.

If you’d like to first check if your base boxes are affected, you can use the test-cacert cookbook. With ChefDK 0.4.0:

% git clone https://github.com/jtimberman/test-cacert-cookbook test-cacert
% cd test-cacert
% kitchen test default

ChefConf Talk Spotlight: A sobering journey from Parse / Facebook

We’re five weeks away from ChefConf 2015 and we’re filling seats fast, so, if you haven’t already, register today and guarantee your seat at the epicenter of DevOps.

Continuing our series of spotlights on the tremendous talks, workshops, and sponsors at this year’s show, today we focus on Awesome Chef Charity Majors – a production engineer at Parse (now part of Facebook) – and her session, “There and back again: how we drank the Chef koolaid, sobered up, and learned to cook responsibly” – a must-see based on the title alone!

Here’s the download on Charity’s talk:

When we first began using Chef at Parse, we fell in love with it. Chef became our source of truth for everything. Bootstrapping, config files, package management, deploying software, service registration & discovery, db provisioning and backups and restores, cluster management, everything. But at some point we reached Peak Chef and realized our usage model was starting to cause more problems than it was solving for us. We still love the pants off of Chef, but it is not the right tool for every job in every environment. I’ll talk about the evolution of Parse’s Chef infrastructure, what we’ve opted to move out of Chef, and some of the tradeoffs involved in using Chef vs other tools.
This will be a great session for all of you looking for guidance on tooling, or even a friendly debate about the subject. It will also provide patterns of success from some seriously smart and active Chefs over at Parse/Facebook.

As for the presenter herself, Charity is happily building out the next generation of mobile platform technology. She likes free software, free speech and single malt scotch.

See you at ChefConf!


Pauly Comtois: My DevOps Story (Pt. 3)

This post concludes our bi-weekly blog series on Awesome Chef Pauly Comtois’ DevOps Story. You can read the final part below; part one is here and part two is here. Thank you to Pauly for sharing his tale with us!

Leveling Up the Sys Admins

The last hurdle was that, even with all we’d accomplished, we still weren’t reaching the sys admins. I had thought they would be my vanguard, we would charge forward, and we were going to show all this value. Initially, it turned out they didn’t want to touch Chef at all! Jumpstart, Kickstart and shell scripts were still the preferred method of managing infrastructure.

About the same time that the release team was getting up to speed, the database team decided that they wanted a way to get around the sys admin team because it took too long for changes to happen. One guy on the database team knew a guy on the apps team who had root access and that guy began to make the changes for the database team with Chef. The sys admins were cut out and the apps team felt resentful because the sys admins weren’t doing their job.

That started putting pressure on the sys admins. The app team was saying, “Hey, you guys in sys admin, you can’t use that shell script any more to make the change for DNS. Don’t use Kickstart and Jumpstart because they only do it once, and we don’t have access. We need to be able to manage everything going forward across ALL pods, not one at a time and we need to do it together.” It was truly great to see the app team take the lead and strive to help, rather than argue.

Read more ›

IBM InterConnect and Chef

This week our friends at IBM are hosting their InterConnect 2015 conference, and we’re pleased to announce expanded (and existing) support for a wide variety of their products. IBM is synonymous with the Enterprise, and they have embraced Chef in a big way. By using Chef across your IBM infrastructure, you improve efficiency and reduce risk, because you can pick the right environment for each application. Whether it’s AIX, POWER Linux, or an OpenStack or SoftLayer cloud, Chef has you covered, providing one tool to manage them all.

In Chef 12 we officially added AIX support and there has been tremendous interest because many large enterprise customers have a significant investment in the platform. By providing full support for AIX Resources such as SRC services, BFF and RPM packages and other platform-specific features, AIX systems become part of the larger computing fabric managed by Chef. The AIX cookbook expands functionality and there is even a knife-lpar plugin for managing POWER architecture logical partitions.

In addition to supporting AIX on POWER, we’re also currently working on providing official Chef support for Linux on POWER for Ubuntu LE and Red Hat Enterprise Linux 7 BE and LE. We plan to release initial Chef client support for all 3 platforms by ChefConf. Once the clients are available the Chef server will be ported to these platforms and we expect to release it early this summer.

Chef is core to IBM’s OpenStack offerings and IBM is very active in the Chef OpenStack community. Chef is used to both deploy and consume OpenStack resources through knife-openstack, kitchen-openstack, Chef Provisioning, and OpenStack cookbooks. Support for Heat is under active development and new features are being released and supported all of the time.

IBM’s SoftLayer Cloud also has great Chef support. The knife-softlayer plugin allows you to easily launch, configure and manage compute instances in the IBM SoftLayer Cloud. There is a Chef Provisioning plugin for SoftLayer under development and they even have a Ruby API for further integrations.

With the Chef client on AIX, the client and server on Linux on POWER, and nodes being managed on OpenStack and SoftLayer clouds; administrators with IBM systems have many options when it comes to managing their infrastructure with Chef. We’ve enjoyed working with them and expect to continue making substantial investments integrating IBM’s platforms to meet Chef customers’ automation needs across diverse infrastructures.

Chef and Microsoft to Bring Further Automation and Management Practices to the Enterprise

New Agreement Empowers Enterprises to Automate Workloads Across On-Premises Data Centers and Microsoft Azure to Become Fast, Efficient, and Innovative Software-Driven Organizations

SEATTLE – Feb 23, 2015 – Today it was announced that Chef and Microsoft Azure have joined forces to provide global enterprises with the automation platform and DevOps practices that increase business velocity to meet customer demand in the digital age. This agreement builds on 12 months of engineering work to integrate Chef’s IT automation platform with the Microsoft stack, helping customers rapidly move Windows and Linux workloads to Azure.

According to research firm IDC, DevOps will be adopted (in either practice or discipline) by 80 percent of Global 1000 organizations by 2019 (IDC MaturityScape Benchmark: DevOps in the United States, Dec. 2014).

Working together, Chef and Microsoft will provide enterprises with the tools, skills, and guidance to make IT innovation more rapid and frequent within Azure. By automating both compute resources and applications, Chef enables developers and operations to best collaborate on rapidly delivering high-quality software and services.

“IT is shifting from being an infrastructure provider to becoming the innovation engine for the new software economy. Key elements of the new, high-velocity IT include automation, cloud, and DevOps,” said Barry Crist, CEO, Chef. “Our partnership with Microsoft is about bringing these elements to enterprises in all industries and geographies. This is a big investment for both Chef and Microsoft, bringing to bear the expertise and resources to transform IT into an innovation engine using Microsoft technology.”

“Microsoft is excited to extend our work with Chef to help customers rapidly move their workloads into the Azure cloud,” said Jason Zander, Corporate Vice President, Microsoft Azure. “Through this collaboration, we are not only enabling faster time to innovation in the cloud, but we are also underscoring Microsoft’s commitment to providing a first-class cloud experience for our customers regardless of whether they are using Windows or Linux.”

Key components of the collaboration include:

  • Engineering Collaboration: Chef and Microsoft will further enhance native automation experiences for Azure, Visual Studio, and Windows PowerShell DSC users. Microsoft Open Technologies has its own collection of Chef Cookbooks, providing rock-solid code for automating the provisioning and management of compute and storage instances in Azure. 2015 will bring additional deliverables across Windows, Azure, and Visual Studio with a focus on empowering customers to automate heterogeneous workloads and easily migrate them to Azure.
  • Sales Training and Customer Support: Chef will deliver hundreds of hours of DevOps education across Microsoft’s expansive ecosystem of industry events, digital channels, and community meetups. Chef will work with Microsoft to enable their field sales organization to support customers embracing automation, DevOps practices, and Microsoft Azure. Microsoft users interested in Chef and DevOps can already access a wealth of content, including a new online training tutorial for the Windows platform and a webinar series on automating Azure with Chef.
New Webinar: Automating the Microsoft Stack with Chef

On March 19th, Microsoft’s Kundana Palagiri and Chef’s Steven Murawski and Michael Ducy will showcase the technical integrations between Chef and Azure. This webinar will demonstrate real-world use cases, providing attendees with a step-by-step guide to achieving the benefits of Chef within Microsoft environments. Register today.


Chef Server 12.0.4 Released

Today we released Chef Server 12.0.4. This release includes cookbook caching, continued development of the key rotation feature, and some LDAP improvements.

Cookbook Caching

Cookbook caching lets you serve cookbook files to Chef clients faster by allowing a cache (such as nginx or your load balancer) to serve them instead of fetching them from storage on every request. This feature is off by default, but can be enabled. See this blog post for the full low-down on cookbook caching.

Continued Key Rotation Work

Key rotation is a feature that is still under development. With the last Chef Server release, we implemented basic key rotation support via chef-server-ctl with the promise that API support was coming soon. We have implemented the first endpoint of the API in this release, with more to come in releases scheduled for the near future.

GET Me Some Keys

A GET to the Chef Server endpoints, /organizations/ORGNAME/clients/CLIENTNAME/keys or /users/USERNAME/keys, will return a list of keys for a client or user, respectively.

If you haven’t used the key rotation chef-server-ctl commands, for now, this will simply return the default key for a client or user. The same key is still returned via GET to the users and clients endpoints.
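To make the shape of that response concrete, here is a hand-assembled sketch of what a GET against the keys endpoint for an untouched user returns – only the default key. This is illustrative, not captured server output, and the hostname and username are made up.

```ruby
require "json"

# Sketch of GET /users/alice/keys output before any key rotation:
# a list with just the default key. The URI is an invented example.
default_keys = [
  {
    "name"    => "default",
    "uri"     => "https://chef-server.example/users/alice/keys/default",
    "expired" => false
  }
]
puts JSON.pretty_generate(default_keys)
```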

Key Rotation Is Still A Feature In Progress

While we finish the rest of the API, we recommend you continue to manage your keys via the users and clients endpoints, as has traditionally been done. However, if you can’t wait to get started with rotation, we recommend you do not delete the default key for now.

See the docs for additional information on key rotation.

LDAP Improvements

Brian Felton added support for filtering LDAP users by group membership. To restrict Chef logins to members of a particular group, use the ldap['group_dn'] configuration option in /etc/opscode/chef-server.rb to specify the DN of the group. This feature filters based on the memberOf attribute and only works with LDAP servers that provide such an attribute.
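For example, a chef-server.rb entry restricting logins to a single group might look like the following (the DN is a made-up example; substitute your own group’s DN):

```ruby
# /etc/opscode/chef-server.rb -- only members of this LDAP group may log in.
# Requires an LDAP server that exposes the memberOf attribute.
ldap['group_dn'] = 'cn=chef-users,ou=groups,dc=example,dc=com'
```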

A number of other LDAP bugs have also been fixed. Check the release notes for details.

Cookbook Caching

If you’ve configured the cookbook S3 URL TTL in your chef-server.rb configuration file (opscode_erchef['s3_url_ttl']), then you’ve been creating cookbook URLs that expire that many seconds (28800 by default) from “now”, i.e. the time of the request. That’s great for Chef Client runs, but terrible for caching!

Each signed URL has a query string with an expiration time, which means that every time a signed URL is generated, it’s unique. To cache these responses, we need URLs that don’t change as frequently.

Today in Chef Server 12.0.4, we’re introducing a new setting: opscode_erchef['s3_url_expiry_window_size']. Don’t need it? Just set it to :off and close this browser tab (you have too many open, tbqh). Actually, it’s :off by default, so you don’t have to do anything.

If you want less uniqueness in URLs, you can set s3_url_expiry_window_size to be the length of time for which a URL should be unique. For example, let’s set it to "15m". Now, every URL generated in a 15 minute window will be the same. The price we pay is that the link will actually take a little longer to expire than you’ve configured in s3_url_ttl, but no more than 15 minutes longer.

You can also set s3_url_expiry_window_size to a percentage of the s3_url_ttl. With the default TTL of 28800 seconds (8 hours), an s3_url_expiry_window_size of "10%" would mean a window size of 48 minutes.

Here’s a walkthrough of the "15m" setting:

[Timeline diagram: letters mark URLs generated across successive 15-minute windows]

Each letter represents a unique URL, with the capital letter being the first time that URL is seen. Look at URL A, generated at 1:03: until 1:15, every request returns that same URL, set to expire at 2:15 (the lowercase a’s).

Once it’s 1:15 we’re in a new interval, but no URL is requested until 1:25. That one is set to expire at 2:30 (1 hour plus the remaining time in the interval).

Nobody asks for a URL until after 1:30, so the URL B is only used once and is never asked for again. Oh well. We played the odds and lost this time. It’s not the end of the world.

At 1:33, URL C is generated, and this interval gets used a lot, so it’s good that we have this feature.

You get the idea. Over the course of the day we will only ever generate 96 unique expiration times, as opposed to a new expiration time for every URL requested.
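The walkthrough above can be sketched in a few lines of Ruby. This is an illustration of the windowing idea, not erchef’s actual implementation: the expiry is anchored to the end of the window the request falls in, so every URL generated inside one window shares one expiration time.

```ruby
# Sketch of the windowing idea (not erchef's actual code). Times are in
# seconds; the expiry is the end of the current window plus the TTL, so
# all requests inside one window share the same expiration timestamp.
def windowed_expiry(now, ttl, window)
  window_end = (now / window + 1) * window  # end of the window "now" is in
  window_end + ttl
end

TTL    = 3600      # the 1-hour TTL used in the walkthrough
WINDOW = 15 * 60   # the "15m" window size

# URL A, requested at 1:03 past noon (3780s): expires 1:15 + 1h = 2:15 (8100s).
puts windowed_expiry(3780, TTL, WINDOW)  # => 8100
# URL B, requested at 1:25 (5100s): expires 1:30 + 1h = 2:30 (9000s).
puts windowed_expiry(5100, TTL, WINDOW)  # => 9000
```

With a 15-minute window there are 24 × 4 = 96 windows in a day, which is where the 96 unique expiration times come from.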

Now, 15m may not be the optimal window size. If we went with "60m", we’d only generate 24 unique expiration times per day. That’s why we’ve made it configurable.

If you’ve got an F5 load balancer, or even if you just want nginx to serve cached cookbook content instead of hitting S3 or Bookshelf, well, now you can!

After you’ve enabled the s3_url_expiry_window_size, you have another choice to make. If you’re using nginx to cache cookbooks:

opscode_erchef['nginx_bookshelf_caching'] = :on

Then nginx will serve up the cached content instead of forwarding the request to s3 or Bookshelf. If you’re using an F5 or other load balancer, turn that setting off like this:

opscode_erchef['nginx_bookshelf_caching'] = :off

and your load balancer will take care of serving up cached content.

Chef 12.1.0 chef_gem resource warnings

Apologies for the new Warn SPAM

Chef 12.1.0 will be released shortly, and commits have been merged to master that will result in the following warning banners being output for every use of the chef_gem resource:

[2015-02-17T23:59:35+00:00] WARN: chef_gem[fpm] chef_gem compile_time installation is deprecated
[2015-02-17T23:59:35+00:00] WARN: chef_gem[fpm] Please set `compile_time false` on the resource to use the new behavior.
[2015-02-17T23:59:35+00:00] WARN: chef_gem[fpm] or set `compile_time true` on the resource if compile_time behavior is required.

Sorry for the annoyance, but this is being done for a good reason. Making these warnings go away will also be substantially easier than the CHEF-3694 errors.

Background

When Omnibus was first created, we needed a way to distinguish between installing gems into the Omnibus-embedded Ruby (so Chef could use them internally) and installing them into the base system, so the chef_gem resource was created. Back then we didn’t know any better than to put require "mysql" directly into recipe code, and since that require ran at compile time, it would always blow up with a LoadError if installation of the chef_gem was delayed until converge time. We decided to bake into the resource that chef_gem would always install at compile time to avoid this problem.

Now, fast forward two or three years, and best practice is to make require lines lazier. Library cookbooks that use chef_gem should expose LWRPs, and the require should occur in the provider code, which moves it to converge time. That eliminates the need to install the chef_gem at compile time. We now somewhat regret forcing chef_gem to run at compile time.
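The converge-time pattern can be mimicked in plain Ruby: wrap the require in a block that only runs when called. In a real library cookbook the block body would live inside an LWRP provider action, and the require would name the gem installed by chef_gem; the stdlib "set" library stands in here so the sketch is runnable.

```ruby
# Deferred require: nothing is loaded at definition time ("compile time");
# the library loads only when the block is actually called ("converge time").
fetch_ids = lambda do
  require "set"              # stand-in for e.g. require "mysql" in a provider
  Set.new([101, 102, 103])   # pretend this came from a database query
end

# Defining the lambda above cost nothing; the require happens here, on call.
puts fetch_ids.call.size  # => 3
```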

This causes issues: even if you’re doing things right and don’t require gems at compile time, you’re still forced to install chef_gems at compile time. If you are installing a native gem, this forces build-essential to be installed at compile time. If your native gem depends on zlib, libxml2, or libxslt, then you need those libraries installed first, at compile time, and now those cookbooks need compile_time flags exposed as well. It creates a bit of a race to install things at compile time where there is no longer any need to do so.

Changes in Chef 12.1.0

The chef_gem resource gets a compile_time boolean property in 12.1.0, so compile_time false will shut this behavior off. This is now the recommended setting. In Chef 12.1.0 the default is still to install at compile time, so no breaking change ships in 12.1.0.

However, the intent is to change this default. To make that future transition less painful, the warning lets you know that you must be explicit about whether you want your chef_gem resources to run at compile time or converge time. There are two switches you can use.

Per-Resource Flag

This will be a bit tedious, since it requires updating every chef_gem resource, but to adopt the new behavior you can just add compile_time false:

chef_gem "aws-sdk" do
  compile_time false
end

If your recipes wind up throwing a LoadError somewhere (test with fresh builds, or manually delete the gem from your Omnibus install, to surface this problem), then you may need compile_time true:

chef_gem "aws-sdk" do
  compile_time true
end

Global Config Flag

A global Chef::Config[:chef_gem_compile_time] flag was added which can be used to globally switch behavior and it has effects on the warnings. It has three different values:

  • nil: this is the current default and results in all the spammy warnings
  • false: this will be the future default, it removes ALL warnings, but will flip the behavior of chef_gem and some recipes may fail
  • true: this maintains the current default behavior, suppresses the individual warnings, but WARNs once per chef run that this is set to true
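For example, an early adopter could flip the future default on globally by putting this in client.rb (a sketch; it assumes your cookbooks already avoid compile-time requires):

```ruby
# client.rb -- opt into the future default now. Any chef_gem that truly
# needs a compile-time install must then set compile_time true itself.
chef_gem_compile_time false
```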

So, nil is for spammy warnings; false is for early adopters (some cookbooks will certainly break, and will need compile_time true flags on their chef_gem resources or will need to be fixed); and true means you’re too busy to care. The tradeoff with true is that when the default does change at some point in the future, you will likely get broken – both when the default changes and when you start pulling cookbooks that assume the new default.

Community Cookbooks

All the community cookbooks will need fixes to suppress these warnings. Since they require backwards compatibility with prior Chef versions, the preferred PR to submit will look something like this:

chef_gem "aws-sdk" do
  compile_time false if Chef::Resource::ChefGem.method_defined?(:compile_time)
end

Again, Sorry

Sorry for the WARN spam, but it’s for a good cause. The ultimate goal is far less need to force anything to run at compile time, which is vastly more annoying than these warnings.