Is Chef vulnerable to CVE-2021-44228 (Log4j)?


On December 9, 2021, Progress Software was made aware of a critical vulnerability, CVE-2021-44228, in Log4j, a widely used Java logging library. Links to additional resources describing the vulnerability and its origin are included at the end of this post. 


While Chef does not use Log4j in its own codebase, Chef Automate, Chef Infra Server, and Chef Backend do use Elasticsearch for data storage. 

Elasticsearch uses Log4j and, in response to this reported vulnerability, Elastic has evaluated the product and provided guidance that it is not impacted. Please see their communication here.

UPDATE: December 15, 8:00 PM EST 

Elastic has since updated its guidance with additional specifics. Elasticsearch 6.x and 7.x are still considered safely mitigated, but Elasticsearch 5.x has now been identified as vulnerable to CVE-2021-44228. 

Chef Infra Server and Chef Automate contain Elasticsearch 6.x and Java 11. Elastic has reaffirmed that these versions are not susceptible to CVE-2021-44228 or CVE-2021-45046, and no changes are required to mitigate the vulnerability. We have released Chef Infra Server 14.11.21 with Elasticsearch 6.8.21, which as a precaution sets the "-Dlog4j2.formatMsgNoLookups=true" system property and removes the JndiLookup class from Log4j. We will make a similar release of Chef Automate with Elasticsearch 6.8.21 soon. 

Chef Backend 2.2.0 contains Elasticsearch 5.6.16 and now requires a configuration change to mitigate the vulnerability while we prepare an updated release. Please refer to the Chef Backend 2.2.0 Mitigation directions at the bottom of this post. 

Additional Information: 

For additional information on this vulnerability as it relates to other Progress products, refer to the Progress Security Center:   


Chef Backend 2.2.0 Mitigation

### Verify the Chef Backend cluster is healthy 

Connect to one node in your Chef Backend cluster and run both of these commands: 
$ sudo chef-backend-ctl status 
$ sudo chef-backend-ctl cluster-status 

Ensure that all 3 nodes are healthy in the output. If there are any issues, do not proceed with these steps until all cluster nodes are returned to a healthy state. The output should look similar to this: 

$ sudo chef-backend-ctl status 
Service        Local Status        Time in State   Distributed Node Status 
leaderl        running (pid 2818)  4d 19h 58m 6s   leader: 1; waiting: 0; follower: 2; total: 3 
epmd           running (pid 2641)  4d 19h 58m 23s  status: local-only 
etcd           running (pid 2552)  4d 19h 58m 31s  health: green; healthy nodes: 3/3 
postgresql     running (pid 5264)  4d 19h 47m 2s   leader: 1; offline: 0; syncing: 0; synced: 2 
elasticsearch  running (pid 2700)  4d 19h 58m 18s  state: green; nodes online: 3/3 

System  Local Status                                          Distributed Node Status 
disks   /var/log/chef-backend: OK; /var/opt/chef-backend: OK  health: green; healthy nodes: 3/3 

$ sudo chef-backend-ctl cluster-status 
Name            GUID                              Role      PG        ES          Blocked      Eligible 
ip-10-0-12-194  a0db9989fe02e36fe396251c32669427  follower  follower  master      not_blocked  true 
ip-10-0-10-34   4549c51f96bada0e02a1fd664645b462  follower  follower  not_master  not_blocked  true 
ip-10-0-8-11    076a56710dfe1b611ab74d131a6ffe2d  leader    leader    not_master  not_blocked  true 

### Update the configuration on the followers 

Verify that the cluster node you are currently logged in to is NOT the node identified as 'leader' in the 'Role' column of the 'sudo chef-backend-ctl cluster-status' output; that node will be updated last. It is okay if the follower node is the PostgreSQL (PG) leader or the Elasticsearch (ES) master; those roles will be moved. 

Add the following line to the end of the "/etc/chef-backend/chef-backend.rb" file: 
elasticsearch.jvm_opts = [ "-Dlog4j2.formatMsgNoLookups=true" ] 

Run the following command to apply the configuration change: 
$ sudo chef-backend-ctl reconfigure 
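Taken together, the edit and the reconfigure can be scripted as in the sketch below. This is an illustration only, not a supported tool: it assumes the default configuration path and should be run on one follower at a time, verifying cluster health in between. This is a configuration fragment, so adapt it to your environment before use:

```shell
# Sketch: append the JVM option to the default config path, then apply it.
echo 'elasticsearch.jvm_opts = [ "-Dlog4j2.formatMsgNoLookups=true" ]' \
  | sudo tee -a /etc/chef-backend/chef-backend.rb
sudo chef-backend-ctl reconfigure
```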

Verify that the change has been applied by running this command and verifying that '-Dlog4j2.formatMsgNoLookups=true' was added to the java process as shown below: 
$ ps axww | grep log4j2.formatMsgNoLookups 
21505 ?        Ssl    0:12 /opt/chef-backend/embedded/open-jre/bin/java -Dlog4j2.formatMsgNoLookups=true -Xmx1292m -Xms1292m -XX:NewSize=80M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -Des.path.home=/opt/chef-backend/embedded/elasticsearch -cp /opt/chef-backend/embedded/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -Epath.conf=/var/opt/chef-backend/elasticsearch/config 
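The same check can be scripted rather than inspected by eye. The snippet below runs the test against a captured process line so it is self-contained; in practice you would pipe `ps axww` in place of the sample string:

```shell
# Demo: detect the mitigation flag in a process listing. The sample line
# stands in for real `ps axww` output on a mitigated node.
sample='21505 ? Ssl 0:12 /opt/chef-backend/embedded/open-jre/bin/java -Dlog4j2.formatMsgNoLookups=true org.elasticsearch.bootstrap.Elasticsearch'
if printf '%s\n' "$sample" | grep -q -- '-Dlog4j2.formatMsgNoLookups=true'; then
  verdict="mitigation flag present"
else
  verdict="mitigation flag MISSING"
fi
echo "$verdict"
```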

Run the following two commands to re-verify that the cluster status is still healthy: 
$ sudo chef-backend-ctl status 
$ sudo chef-backend-ctl cluster-status 

Repeat these steps on the other follower node. 

### Update the configuration on the leader last 

Log in to the node identified as the 'leader' in the 'Role' column of the 'sudo chef-backend-ctl cluster-status' output. 

Verify the cluster is healthy: 
$ sudo chef-backend-ctl status 
$ sudo chef-backend-ctl cluster-status 

Demote the leader, then confirm that another node is now the leader and that the cluster is still healthy: 
$ sudo chef-backend-ctl demote 
$ sudo chef-backend-ctl status 
$ sudo chef-backend-ctl cluster-status 

Now repeat all the steps listed in the previous section on this node. 

The mitigation process is now complete. 

Aaron Kraft

Aaron Kraft was the VP of Engineering for Chef at Progress.