Immutable Infrastructure and Containers

It seems like a long time ago that I attended a Docker meetup in NYC where Michael Bryzek (Gilt’s CTO) talked about the benefits of immutable infrastructure using Docker. Some people are still skeptical, but we at Tutum are big supporters of this model, one that has benefited greatly from the rise of containers.

What is ‘Immutable Infrastructure’?

When you deploy an update to your application, you should create new instances (servers and/or containers) and destroy the old ones, instead of trying to upgrade them in-place. Once your application is running, you don’t touch it! The benefits come in the form of repeatability, reduced management overhead, easier rollbacks, etc. Its advantages have been written about in-depth by Chad Fowler, Michael DeHaan, and Florian Motlik.

In order to achieve this, you need to architect your application to meet two primary requirements:

  • Your application processes are stateless. Their state is stored in a service outside of the “immutable infrastructure” scope (the exception being stateful containers which use volumes, but we’ll get to that).

  • You have a template and/or a set of instructions that can deploy an instance of your application from scratch.

This last requirement is the key, and although it can be met in many ways, containers were born for exactly this.
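As a rough illustration of that second requirement, here is a minimal Dockerfile sketch for a hypothetical Node.js web application; the base image, file names and port are assumptions for illustration, not a prescription:

    # Everything needed to build a fresh, runnable instance of the app
    # from scratch lives in this image definition.
    FROM node:0.10

    # Install dependencies first so this layer is cached between builds
    WORKDIR /app
    COPY package.json /app/package.json
    RUN npm install --production

    # Add the application code itself
    COPY . /app

    # The process is stateless: it writes no data inside the container
    EXPOSE 8000
    CMD ["node", "server.js"]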

Using Configuration Management Software

Do you need containers in this model? Technically, no, but they help tremendously.

Without using containers, you could achieve immutability by deploying new virtual machines that either use a VM template with the new version of your application (which can be automated), or are configured using configuration management software, like Chef or Puppet. The objective is to be able to deploy fresh application instances from scratch, ready to handle traffic.

Once this is done, you can switch your load balancer to start sending requests to the new instances and terminate the old ones. Using this model, even the complexity of your ‘recipes’ is reduced, since you can remove all the code needed to upgrade your application in place.

But, let’s be honest: building a VM template for every version of your application that only works on a specific cloud provider is not ideal (even if you automate it, it’s a cumbersome process), and continuously testing configuration management scripts is an experience to be avoided.

Containers to the Rescue

So, why use containers then? Because they are much, much faster to build, test and deploy than snapshotting VMs or running scripts to configure servers. Once your application image has been built, tested and tagged, deploying it is a very efficient process since you have basically removed the configuration of the underlying OS from the equation.

The heavy lifting can be delegated to your cloud provider of choice by deploying their base template for a vanilla up-to-date OS. It might even be more performant if it’s patched and optimized for their hypervisor – you don’t care, as long as it runs Docker.

Containers also remove the need to deploy new servers every time you want to push a new version of your application. Because all your application’s dependencies and logic are inside containers and are completely replaced with every new version, your servers can persist and still reap all the benefits from the ‘immutable infrastructure’ model. This dramatically reduces deployment time.
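As a rough sketch with plain Docker, replacing the application on a host that stays in place might look like this (the registry, image name, tag and port are hypothetical):

    # Pull the new, already-tested version of the application image
    docker pull registry.example.com/myapp:1.3.0

    # Stop and remove the container running the previous version
    docker stop myapp
    docker rm myapp

    # Start a fresh container from the new image; the server itself is untouched
    docker run -d --name myapp -p 8000:8000 registry.example.com/myapp:1.3.0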

Best of all, you still get all the benefits of containers, such as not being tied to any cloud provider or any Linux distribution (as long as they run Docker). If it works locally, it will work on any provider. Isn’t this the dream?

How do you automate your new version deployment using containers? There are two main steps:

  1. Building your new image. Although you can use a wide range of methods to build your application image (manually, using configuration management software, etc.), using a simple optimized Dockerfile is the way to go in most cases. You can use your CI/CD platform to test it before pushing it. For production deployments, tagging it with a version number will let you roll back your application easily if needed (a minimal build sketch follows this list).

  2. Deploying your new containers. You can deploy (manually or automatically) your new containers on new or existing servers, and switch the load balancer to send traffic to your newly deployed containers. This can be at the instance level (for example, using one application container per AWS EC2 instance and using an Elastic Load Balancer), or at the container level (using a haproxy or nginx container that forwards traffic to your application containers).
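Here is a minimal sketch of step 1, reusing the hypothetical image name from the earlier Dockerfile example; your registry, CI platform and versioning scheme will differ:

    # Build the image from the Dockerfile at the root of the repository,
    # tagging it with an explicit version number
    docker build -t registry.example.com/myapp:1.3.0 .

    # (Your CI/CD platform runs its test suite against this image here)

    # Push the version-tagged image so any host can deploy it
    docker push registry.example.com/myapp:1.3.0

Because every version keeps its own tag, rolling back is just a matter of redeploying a previous tag.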

How do you automate step 2 using only containers, across multiple hosts for your application? Use Tutum!

Tutum to the Rescue

Using Tutum, deploying a new version of your application becomes a trivial task. You just need to change your image tag in your service definition, and hit Redeploy.

For non-production deployments where rolling back to a specific version is not that important, you can even automate the redeploy process using our autoredeploy feature, or redeploy triggers with Docker Hub.

Once the redeploy process is started, Tutum will replace existing containers with new ones, one by one. We provide a tutum/haproxy image that is automatically configured based on its linked containers. Whether you deploy it locally using Docker links or launch it inside Tutum, it will automatically reconfigure itself when linked services scale up or down, or get redeployed.

If you want to run both new and old containers side by side so you can roll back quickly, you don’t need something as over-complicated as Asgard.

Deploy a service that uses your new image tag. Add a link from your tutum/haproxy service. It will detect the change and automatically start forwarding requests to both old and new services. When you are ready to make the switch, just terminate the old service. By default, tutum/haproxy will detect dead application containers and redispatch requests to healthy containers.
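Run locally with plain Docker links, that side-by-side setup might look something like this; the container names, tags and ports are made up for illustration:

    # Old and new versions of the application running side by side
    docker run -d --name webapp-v1 registry.example.com/myapp:1.2.9
    docker run -d --name webapp-v2 registry.example.com/myapp:1.3.0

    # tutum/haproxy configures itself from its linked containers and
    # balances traffic across both versions
    docker run -d --name lb -p 80:80 \
        --link webapp-v1:webapp-v1 \
        --link webapp-v2:webapp-v2 \
        tutum/haproxy

    # When you are ready to switch completely, stop the old version;
    # health checks stop sending traffic to the dead backend
    docker stop webapp-v1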

What if your containers use data volumes? I know I said earlier that the application needs to be stateless, but in Tutum, volumes are persisted across deployments. So if you redeploy a tutum/mysql container, which by default creates a data volume for /var/lib/mysql, Tutum will reuse this volume and keep all the data.
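With Tutum this reuse happens automatically on redeploy; as a rough plain-Docker approximation (it requires a recent Docker with named volumes; names are illustrative and database credentials are omitted for brevity):

    # Create a named volume to hold the database files
    docker volume create mysql-data

    # The running container writes its data into the volume
    docker run -d --name db -v mysql-data:/var/lib/mysql tutum/mysql

    # Replacing the container does not touch the volume: the new container
    # picks up the same /var/lib/mysql data
    docker stop db
    docker rm db
    docker run -d --name db -v mysql-data:/var/lib/mysql tutum/mysql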

Once your containers are running, don’t touch them! Use docker exec (or Tutum’s “terminal” feature) only to troubleshoot and run one-off admin tasks, not to change your application code! Changes to your application should only be done in your image and environment variables, not to your running instances.
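For example, a one-off troubleshooting session might look like this (the container name is arbitrary):

    # Fine: open a shell in a running container to inspect it
    docker exec -it myapp sh

    # Not fine: editing code or configuration inside that shell.
    # Those changes belong in a new image version (or in environment
    # variables), followed by a redeploy.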

What’s Next?

We are working hard to make the process of going from code to production as intuitive, simple and powerful as possible, and we have some exciting new features to announce in the upcoming weeks. We are also listening to our community’s suggestions and welcome your thoughts and ideas!

CTO & Co-founder of Tutum
