Load Balancing – The Missing Piece of the Container World


Today, more and more applications are shipped using container technology. It’s obvious how powerful the technology is when you can set up a web application with a single docker command.

However, when your applications need to scale, your load balancer must be reconfigured. Of course, you can update the config file and reload the load balancer manually, but there should be an automatic way to do this. Developers aren’t Ops.

As Simple as Running a Container

With the official Docker haproxy image, you can add your config file on top of the image, then build and run it; alternatively, you can use a volume to mount a config file from the host into the container. Either way, you end up editing the config file every time your applications change.
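For the first approach, a minimal Dockerfile is all it takes (the config file name and image tag here are illustrative; the destination path is the one the official image reads from):

```dockerfile
# Bake a host-side config file into a custom image
# built on top of the official haproxy image.
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```

The volume-mount alternative skips the build step entirely: `docker run -d -v /path/on/host/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.5`.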

Ideally, haproxy should be smart enough to configure itself to load balance traffic between application containers. To make this possible, we created a custom image: tutum/haproxy.

How it Works

We’ll use tutum/hello-world as a basic web application to show how tutum/haproxy works.

First, we will create two instances of tutum/hello-world:

docker run -d --name web1 tutum/hello-world
docker run -d --name web2 tutum/hello-world

Then, run the load balancer linking to the two hello-world instances:

docker run -d -p 80:80 --link web1:web1 --link web2:web2 --name lb tutum/haproxy

Now the load balancer, named lb, is running: it listens on port 80 and forwards requests to both the web1 and web2 containers using the default roundrobin algorithm.
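Conceptually, the container turns the two links into a config along these lines (a simplified sketch, with illustrative section and server names; the actual generated file carries more settings). The hostnames web1 and web2 resolve inside the container because --link adds them to its /etc/hosts:

```
listen webapp
    bind *:80
    mode http
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
```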

Scale Your Application

Perfect. We have set up an haproxy container load balancing two applications with only three commands. But what happens if I need another instance of my application?

We need to remove the current haproxy container and create a new one linked to all three application containers:

docker run -d --name web3 tutum/hello-world
docker rm -f lb
docker run -d -p 80:80 --link web1:web1 --link web2:web2 --link web3:web3 --name lb tutum/haproxy

Now, our haproxy load balances across all three containers.
Again, just three commands achieve our goal. Not bad.

Even Better with Tutum

The above approach to load balancing works well enough. However, we still have to redeploy the haproxy container manually, which also causes a short period of downtime. We can solve both issues by deploying everything in Tutum. Let me show you how:

1) Install tutum cli:

pip install tutum

Mac users can install tutum using brew:

brew install tutum

2) Create a service of your web application:

tutum service run --name web tutum/hello-world

3) Create the load balancer service:

tutum service run --link web:web --role global -p 80:80 tutum/haproxy

Now you have one haproxy container load balancing a hello-world application.

Note: It is very important to add the option --role global when launching haproxy. The global role gives the container the privilege to access the Tutum API.

4) Scale your application:

tutum service scale web 3

Now you have three instances of your application container running.

Note: If you have more than one node running in Tutum, Tutum will deploy containers to the emptiest node by default. If you want the containers spread evenly across nodes, specify the HIGH_AVAILABILITY deployment strategy in step 2:

tutum service run --name web --deployment-strategy HIGH_AVAILABILITY --role global tutum/hello-world

5) Update the load balancer:

There is nothing to do here: the load balancer updates itself automatically.

Job’s done!

SSL Support

Adding SSL support to tutum/haproxy is really easy. All you need to do is set the environment variable SSL_CERT.

The content of SSL_CERT is the private key and its public certificate concatenated together. Suppose your private key and public certificate are named private.key and public.csr; to combine them into cert.pem, you can run:

cp private.key cert.pem
cat public.csr >> cert.pem

Now, the last step is to replace each newline in cert.pem with the literal string “\n” (two characters, “\” and “n”, not an actual newline character). You can convert it by running:

awk 1 ORS='\\n' cert.pem

Note the doubled backslash: with a single backslash, awk interprets '\n' as an actual newline and the command changes nothing.

The output is the exact content that you need to pass to the environment variable SSL_CERT.
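To see the conversion in action, here is a self-contained sketch using a throwaway stand-in for cert.pem (the real file would contain your key and certificate):

```shell
# Create a two-line stand-in for the combined PEM file.
printf -- '-----BEGIN-----\n-----END-----\n' > cert.pem

# Replace real newlines with the two-character sequence "\n".
SSL_CERT="$(awk 1 ORS='\\n' cert.pem)"

# The whole file is now a single line, joined by literal \n.
printf '%s\n' "$SSL_CERT"
```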

Putting it together, you can run tutum/haproxy using:

docker run -d --link web1:web1 --link web2:web2 -p 443:443 -e SSL_CERT="$(awk 1 ORS='\\n' cert.pem)" tutum/haproxy

Or when using Tutum:

tutum service run --link web:web --role global -p 443:443 -e SSL_CERT="$(awk 1 ORS='\\n' cert.pem)" tutum/haproxy

Now, tutum/haproxy will load balance your applications on port 443 with your certificate passed in.

Behind the Scenes

The principle is straightforward. When you scale a service in Tutum, Tutum notices the change and notifies tutum/haproxy, which regenerates its config file automatically. Once the new config is written, haproxy is reloaded immediately.
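The config-generation step can be pictured with a small sketch (the function, section, and server names here are made up for illustration; the real image derives backend addresses from container links or the Tutum API):

```shell
# Given a list of backend address:port pairs, emit a haproxy
# backend section with one server line per container.
generate_backend() {
    printf 'backend webapp\n    balance roundrobin\n'
    i=1
    for addr in "$@"; do
        printf '    server web%d %s check\n' "$i" "$addr"
        i=$((i + 1))
    done
}

# Regenerating the section for three containers:
generate_backend 172.17.0.2:80 172.17.0.3:80 172.17.0.4:80
```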

More Info

tutum/haproxy has many more options to adjust the behavior of haproxy. We also support features like VirtualHost. I’ll be writing another article covering advanced usage of the tutum/haproxy image.

For more information about tutum/haproxy, please visit: https://github.com/tutumcloud/tutum-docker-clusterproxy

