Until now, there have only been two approaches to dealing with business-critical legacy applications written in a pre-cloud era: rewrite them, which is a painful, risky, and drawn-out process (and in some cases not possible at all); or lift-and-shift, which doesn’t address the underlying technical debt or the difficulty of managing and deploying the software. In our latest webinar, Jason Layn and I showed you a third option, enabled by Habitat: lift, shift, and modernize.
In our presentation, we reviewed the challenges application teams face in achieving speed and flexibility while adopting modern cloud technologies. We then brought in Habitat, which takes a fundamentally different approach from either rewriting or simply lifting and shifting. Habitat lets you decouple an application’s business value from the underlying infrastructure to improve its manageability. Using Habitat, you package and build your application in a single, consistent way, then have the flexibility to deploy it anywhere.
On top of this, Habitat’s management interface makes our applications extremely portable by letting us defer decisions about tunables until runtime. Habitat also lets us take advantage of cloud-native capabilities, such as RESTful API endpoints for health checks, even if our application does not natively support them.
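As a concrete sketch of how runtime tunables work (the package and setting names here are illustrative, not from the webinar): a Habitat plan declares default tunables in a `default.toml` file, and the running Supervisor can override them without a rebuild or redeploy.

```toml
# default.toml — default tunables for a hypothetical "myapp" package.
# These values can be referenced from configuration templates as
# {{cfg.port}} and {{cfg.worker_count}}, and overridden at runtime.
port = 8080
worker_count = 4
```

At runtime, an operator can push new values to every member of a service group with `hab config apply myapp.default 1 updated.toml`, and the Supervisor re-renders the application’s configuration in place.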
Sounds cool, right? Watch below to see how Habitat can help future-proof your application portfolio and bring your legacy applications into the modern era and beyond!
During the webinar, we had some great audience questions that I’ve addressed below.
How is this different from Docker?
Docker is a great tool for managing applications running in containers. Habitat supports exporting Habitat-packaged applications to Docker for use with Docker’s scheduling and runtime utilities. However, some applications cannot or will not (e.g., for ISV support reasons) run in Docker containers. You can still take advantage of Habitat’s packaging interface and leverage the Habitat Supervisor to run those applications in the same consistent, repeatable, and reliable way.
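For example, turning a Habitat package into a Docker image is a single export step (the origin and package name below are illustrative):

```sh
# Build the package inside the Habitat studio, then export the
# resulting artifact as a runnable Docker image:
hab pkg build .
hab pkg export docker myorigin/myapp
```

The same package can later be exported to a different target instead, which is what makes the "package once, deploy anywhere" claim above concrete.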
I’ve used Chef for configuration management before. Why couldn’t I just use Chef to migrate my applications to the cloud?
As we outlined in the slide “Why is this different?”, Chef takes the traditional approach of managing applications from the infrastructure up. While we have had a lot of success with customers automating their deployments using Chef, the reality is that unless you can account for every permutation of infrastructure in your Chef cookbooks, you still carry the same risks associated with deploying applications in this bottom-up way. The great news is that Chef and Habitat are complementary: you can use Chef to ensure consistent configuration of your Supervisor interface across your different cloud providers, and use InSpec to ensure that configuration is compliant.
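As a hedged sketch of that complementary pairing, an InSpec control can verify that the Habitat Supervisor is up on every node (the port shown is the Supervisor HTTP gateway’s default; the control name is illustrative):

```ruby
# InSpec control: verify the Habitat Supervisor is running and
# its HTTP gateway is listening on the default port (9631).
control 'habitat-supervisor' do
  title 'Habitat Supervisor is running'

  describe processes('hab-sup') do
    its('entries.length') { should be >= 1 }
  end

  describe port(9631) do
    it { should be_listening }
  end
end
```

Running this as part of a compliance profile gives you continuous assurance that the Supervisor interface Chef configured is actually in place.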
My company is using Kubernetes as a way to be cloud native. What’s so special about Habitat?
As with the Docker question, Kubernetes is at its core a container scheduler with some workflow benefits for declaring your application’s requirements. Using Kubernetes does force you into its workflow: if you ever have a requirement to run your applications outside of Kubernetes, you’ll have to refactor your application for that deployment target.
If dependent libraries are packaged with the app, do you run the risk of running several copies of the same library version in memory on the same host?
No. Habitat only installs a given dependency once. For example, if five apps all use libcurl v5.0.1, the first application to run will install it (assuming no app with this dependency has run before). Apps 2-5 will see that the package is already installed and reuse it.
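This works because every Habitat package is installed into a single, fully qualified path on the host, keyed by origin, name, version, and release, and dependents link against that shared location. A sketch of the on-disk layout (the release timestamp is illustrative):

```
# One copy of the dependency on the host, shared by all dependents:
/hab/pkgs/core/libcurl/5.0.1/20180101000000/

# Each app package records its dependencies as fully qualified
# identifiers, so apps 2-5 resolve to the directory app 1
# already installed rather than carrying their own copy.
```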
My organization is currently looking at solutions akin to Habitat, like Pivotal Cloud Foundry. Are habitat packages compatible with PCF? What advantage does Habitat have over PCF?
Habitat packages are completely compatible with PCF as a deployment target. The additional benefits you get by using Habitat are:

1. The applications you automate using Habitat can be used in PCF or in another deployment target. For example, if you want to run an app originally targeted for PCF on a different cloud provider instead, Habitat gives you the flexibility to simply export that application for the preferred provider.
2. Habitat gives you a consistent interface to automate all of your applications. BOSH only works with PCF, so you are tied to BOSH as the means to interface with PCF.
3. Habitat gives you the ability to automate applications WITHOUT requiring a rewrite to conform to the twelve-factor app (12FA) methodology. Additionally, you can take advantage of cloud-native primitives like RESTful API endpoints for health checks even if your app does not natively support them, and you can define more complex deployment topologies, like leader/follower, for more advanced applications.
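The health-check primitive mentioned above is implemented as a `health_check` hook in the package: the Supervisor runs it periodically and exposes the result through its HTTP API. A minimal sketch, assuming the app serves HTTP on a `port` tunable (the URL checked is illustrative):

```sh
#!/bin/sh
# hooks/health_check — run periodically by the Habitat Supervisor.
# Exit codes map to health states:
#   0 = ok, 1 = warning, 2 = critical, 3 = unknown
if curl -sf "http://localhost:{{cfg.port}}/" > /dev/null; then
  exit 0   # ok
else
  exit 2   # critical
fi
```

Note that the legacy app itself needs no code changes: the hook wraps whatever liveness signal the app already has, and the Supervisor turns it into a RESTful health endpoint.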
Do you have samples where multiple large databases are managed? Enterprise applications using multiple large DBs?
We have had many customers automate their databases using Habitat. How large or small the database is becomes purely a function of the resources you allocate to it. One of the advantages of Habitat is that if your database deployment starts small, you may choose to deploy it to a container or container service. As the database requires more resources, you may make a business decision to move it to dedicated hardware. With Habitat, that decision is as simple as re-exporting your application to the deployment target of your choice!
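For databases in particular, the leader/follower topology mentioned earlier is a flag at load time rather than application code. A hedged example (the package name and group are illustrative):

```sh
# Load a database service under the Supervisor in a
# leader/follower topology; members of the same service
# group elect a leader once a quorum of three is present.
hab svc load core/postgresql --topology leader --group production
```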
Is there any performance penalty when running under Habitat vs natively on the original deployment?
Aside from the small footprint of the Supervisor process itself, there’s no extra overhead to running your apps under Habitat.