Blue-Green Deployment using Containers


Introduction

I’ll show you how to set up and execute a blue/green deployment, a red/black push, and an A/B deployment – all using containers.

Continuous Integration and Continuous Delivery are generally well-understood concepts. The goal of CI/CD is to automate the delivery of sound software to production, quickly. Many articles have been written in recent months about CI, CD, and CI/CD with containers.

“One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production. You usually need to do this quickly in order to minimize downtime.” – Martin Fowler

At Tutum, we see many of our users leveraging redeploy triggers to automate the deployment of a service. These webhooks are automatically triggered after tests pass in a CI system (e.g. CircleCI) or a build completes on Docker Hub.

While this is a clean and simple way to get your latest code deployed to a staging environment, it should not be used in a production environment.

Why? Downtime.

In the simplest scenario, when you redeploy, the existing containers are destroyed and new ones are created. Even if this operation takes only a few seconds, that’s a few seconds of downtime. Adding a load-balancer and doing a rolling deployment (containers are destroyed and created sequentially) can minimize downtime, but not eliminate it. There will still be downtime if the application takes too long to start after the container itself has started, resulting in the LB sending requests to the app before it is ready (this can be mitigated with app-level health checks).

You could also accidentally deploy a malfunctioning or buggy version of your app, or the deployment process could fail altogether. And, needless to say, given how rolling deployments work, all of this is only feasible if two versions of your app can co-exist without conflicting.

How blue-green deployments help

Blue-green deployments solve the issues above by running two production services/environments simultaneously, with only one of them (e.g. green) live at any given time.

As you get ready for a new release of your application, you update the service that is not live (e.g. blue). Once you have tested and verified that everything is in order, you scale the idle service (blue) up with enough containers to handle current production traffic levels.

Lastly, you flick the switch to have the load-balancer send traffic to the idle service. Now blue is live, and green is idle.

This set-up also makes it easy to roll back if necessary. If anything goes wrong, simply switching the LB back to green brings you back to your previous state.

Once you are satisfied with your blue service/environment, you can start pushing changes to green and use that as your staging environment.
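The flow above can be sketched in a few lines of Python. This is purely a mental model – the `Deployment` class and service names are illustrative, not a Tutum API; on Tutum the “switch” is the LB’s link configuration:

```python
# Minimal model of the blue-green switch described above.
# All names here are illustrative, not part of any real API.

class Deployment:
    def __init__(self):
        self.live = "green"                   # green starts out live
        self.idle = "blue"
        self.versions = {"green": "v1", "blue": "v1"}

    def release(self, version):
        """Update and verify the idle service, then flick the switch."""
        self.versions[self.idle] = version    # deploy new version to idle side
        self.live, self.idle = self.idle, self.live  # LB now targets new live

    def rollback(self):
        """Point the LB back at the previous, still-intact service."""
        self.live, self.idle = self.idle, self.live

d = Deployment()
d.release("v2")
print(d.live, d.versions[d.live])   # blue v2
d.rollback()
print(d.live, d.versions[d.live])   # green v1
```

Note that a rollback touches nothing but the switch itself: the previous version’s containers were never destroyed.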


Containerizing blue-green deployments

In order to do this with a fully containerized setup in which not only the app, but also the load-balancer is containerized, we find ourselves with two challenges:

  1. The LB cannot rely on Docker links’ environment variables for its configuration. If it did, we’d have to redeploy the LB every time we updated the app, and that would result in downtime.

  2. We need to be able to update its hostname-based link configuration without downtime.

The first challenge is easy to solve: simply have your load-balancer rely on hostnames instead of environment variables for its linking. More info can be found here.

The second challenge is trickier. Today, it is not possible to add or remove links in a running container using Docker, so flicking the LB switch as described earlier would not be possible, at least not without modifying the LB’s configuration in some other way.

But what if you COULD modify links?

Honglin Feng recently wrote an article about the pure awesomeness of the tutum/haproxy image. tutum/haproxy is an open-source containerized load-balancer developed to take full advantage of Tutum’s Stream API, dynamically updating its configuration as containers are created and destroyed.

Today we take it one step further by dynamically modifying links without downtime.

And why is this exciting you ask?

By being able to modify service links without the need to redeploy your service, it is now trivially easy to get your own blue-green deployments on Tutum!

Let me show you how to accomplish this using a very simple example stackfile (docker-compose file):

lb:
  image: 'tutum/haproxy:latest'
  restart: always
  links:
    - web-green
  ports:
    - '80:80'
  roles:
    - global

web-green:
  image: 'borja/bluegreen:v1'
  restart: always
  target_num_containers: 3
  deployment_strategy: high_availability

web-blue:
  image: 'borja/bluegreen:v1'
  restart: always
  target_num_containers: 1
  deployment_strategy: high_availability

You can try this by clicking on the button below:

Deploy to Tutum

After the stack is successfully deployed, click on the Endpoints tab. Then point your browser to the URL of the service or container endpoints published by this stack.

You’ll see that green is live, with 3 containers being load-balanced.

Next, I’ll show you how to do blue-green deployments using Tutum’s CLI tool. Keep in mind that everything can be accomplished just as easily using the Web UI.

If you haven’t installed the CLI yet but would like to try it out, learn more here.

Throughout the next steps, you can ignore the “Service must be redeployed to have its configuration changes applied” messages from the CLI. A redeploy is no longer necessary when updating service links 🙂

Updating Blue to V2

$ tutum service set --image borja/bluegreen:v2 --redeploy --sync web-blue

This changes the tag of the image being used to v2 and redeploys the web-blue service. After the update is complete, you’ll want to ensure that blue was deployed successfully. There are a number of ways to do this.

One way to check manually is to launch a staging LB and link it to blue (which has no publicly-facing ports). Another is to exec into one of blue’s containers and check from within.
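However you choose to verify blue, you’ll want to poll it until it reports healthy rather than checking once, since the app may still be starting inside its containers. A small, hypothetical sketch of that retry loop (in practice `check` would be an HTTP GET against your staging LB’s URL):

```python
import time

def wait_until_healthy(check, attempts=5, delay=1.0):
    """Poll a health check until it passes or we give up.
    `check` is any zero-argument callable returning True when
    the service is ready (e.g. an HTTP probe of the staging LB)."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Example with a stub that becomes healthy on the third poll;
# a real check would hit your staging endpoint instead.
state = {"polls": 0}
def stub_check():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until_healthy(stub_check, delay=0.1))  # True
```

Only flick the LB switch once this kind of check passes; otherwise you’re back to the downtime problem described earlier.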

Scaling Blue up to 6 containers

After you have verified that blue was deployed successfully, scale it to match green (at a minimum).

$ tutum service scale web-blue 6

This may be a good time to check once more that everything is in order with web-blue before we flick the switch.

Switch the LB to Blue v2

$ tutum service set --link web-blue:web-blue lb

Oh no! Where did the Tutum logo go? We were not careful with our update, and we deployed a buggy version of our app to blue! Let’s fix this quickly by rolling back to v1.

Rollback to Green v1

$ tutum service set --link web-green:web-green lb

That was easy!

Fix, update Blue to v2.1, and switch LB to Blue

$ tutum service set --image borja/bluegreen:v2.1 --redeploy --sync web-blue
$ tutum service set --link web-blue:web-blue lb

Use Green as Staging

$ tutum service scale web-green 1
$ tutum service set --image borja/bluegreen:v3 --redeploy --sync web-green

Scale up and switch LB to Green v3

$ tutum service scale web-green 6
$ tutum service set --link web-green:web-green lb

This is cool, but what else can you do? Red/black push

A couple of years back, Ben Schmaus wrote a blog post about how Netflix handled deploying the Netflix API. He referred to it as “red/black push”.

The graphic below, borrowed from the original blog post, illustrates the basic flow of this approach – do keep in mind that it uses AMIs (Amazon Machine Images), not containers:

While it is in many ways similar to the blue/green deployment we’ve covered in this article, there is a key difference: in Step 2 of the process, both v1 and v2 are live. This is akin to having the LB send traffic to both blue and green during the update process. Netflix had its reasons to run both v1 and v2 concurrently, and you might, too.

Let’s see how to do that with Tutum. Starting from where we left off:

Use Blue as Staging

$ tutum service scale web-blue 1
$ tutum service set --image borja/bluegreen:v4 --redeploy --sync web-blue

Scale Blue up to 6 containers

$ tutum service scale web-blue 6

Switch LB to Green v3 and Blue v4

$ tutum service set --link web-green:web-green --link web-blue:web-blue lb

You now have two versions of your application deployed alongside each other.

Awesome! Anything else? A/B deployments

Google and Amazon are known to run countless tests on their live production systems, often having a small percentage of their users unknowingly experience a new version of an app before it is rolled out to everyone else.

To do this effectively, not only must you be able to run two or more versions of your app concurrently, you must also be able to define the percentage of users routed to each version of your application.

After the red/black change we made to our application earlier, you should already be running two different versions of your application simultaneously: v3 and v4. To define the percentages, we simply scale blue and green separately.
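This works because the LB balances across all linked containers, so the traffic split is simply the ratio of container counts. A tiny sketch of the arithmetic (the function name is mine, not a Tutum command; it assumes the LB distributes requests evenly):

```python
def split_containers(total, live_fraction):
    """Given a total container budget and the fraction of traffic the
    current live version should keep, return (live, candidate) counts.
    Assumes the LB round-robins evenly across all linked containers."""
    live = round(total * live_fraction)
    return live, total - live

print(split_containers(10, 0.8))  # (8, 2): 80% green, 20% blue
print(split_containers(10, 0.4))  # (4, 6): 40% green, 60% blue
```

The two examples below are exactly these splits, applied with `tutum service scale`.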

80% Green v3 and 20% Blue v4

If we are not yet sure about the stability of v4, for example, we’d want most of the traffic (80%) sent to v3, and a smaller share (20%) sent to v4.

$ tutum service scale web-green 8
$ tutum service scale web-blue 2

This results in 8 containers running v3 and 2 containers running v4, so with the LB balancing evenly across all 10 containers, roughly 80% of requests hit v3.

40% Green v3 and 60% Blue v4

As your confidence in v4 increases, you can scale accordingly, increasing the share of your traffic handled by Blue v4 containers.

$ tutum service scale web-green 4
$ tutum service scale web-blue 6

That’s all folks!

It is by providing these basic building blocks of functionality that Tutum gives users the flexibility to deploy and manage their applications as they see fit.

Questions or comments are always welcome, here or in our Slack community channel.

Thank you!

Borja is a co-founder and the CEO of Tutum. He holds an MSc in Information Security from Carnegie Mellon, an MSc in Applied Informatics from the University of Hyogo, and a BSc in Computer Engineering from Georgia Tech. In a previous life he worked as an R&D engineer developing location-based services, and later as a tech consultant for large telecom providers around the world. Borja describes himself as a tech entrepreneur, hacker, and DIYer. When not working on Tutum, he likes to tinker with hardware and build things.

Posted in Features, Tutorial
Comments
  1. robbydooo says:

    Awesome post! I did not know you could use multiple links on the HAProxy to load-balance different versions – that is really useful for A/B testing.

  2. robbydooo says:

    One question though, if you are using this for A/B testing you would need to make sure that traffic sticks to those containers that are using the newer version otherwise each refresh would display a different version.

    Is the HAProxy image sticky by default? I do not see any reference to that in the deployment file.

  3. Borja Burgos says:

    Haven’t tried it, but you should be able to enable stickiness with the SESSION_COOKIE env var.

