Today, more and more applications are shipped using container technology, and it’s obvious how powerful that technology is when you can set up a web application with a single docker command.
However, when your application needs to scale, your load balancer must be reconfigured. Of course, you can update the config file and reload the load balancer manually, but there should be an automatic way to do this. Developers aren’t Ops.
As Simple as Running a Container
With the official Docker haproxy image, you can add your config file to the image, then build and run it, or you can use volumes to mount a config file from the host into the container. Either way, you end up editing the config file whenever your applications change.
Ideally, the haproxy should be smart enough to configure itself to load balance traffic between application containers. To make this possible, we created a custom tutum/haproxy image.
How it Works
We’ll use tutum/hello-world as a basic web application to show how tutum/haproxy works.
First, we will create two instances of tutum/hello-world:
docker run -d --name web1 tutum/hello-world
docker run -d --name web2 tutum/hello-world
Then, run the load balancer linking to the two hello-world instances:
docker run -d -p 80:80 --link web1:web1 --link web2:web2 --name lb tutum/haproxy
Now the load balancer, named lb, is running. It listens on port 80 and forwards requests to both the web1 and web2 containers using haproxy’s default roundrobin algorithm.
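Under the hood, the container generates an haproxy configuration roughly equivalent to the following sketch (the exact file tutum/haproxy produces may differ; the frontend/backend names here are illustrative):

```
frontend default_frontend
    bind 0.0.0.0:80
    default_backend default_service

backend default_service
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
```

The `--link` flags are what make this possible: linking injects each web container’s address into the haproxy container, so it knows where to send traffic.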
Scale Your Application
Perfect. We have set up a haproxy container load balancing two application containers with only three commands. But what happens if we need another instance of the application?
We need to remove the current haproxy container and create a new one linking to all three applications:
docker run -d --name web3 tutum/hello-world
docker rm -f lb
docker run -d -p 80:80 --link web1:web1 --link web2:web2 --link web3:web3 --name lb tutum/haproxy
Now, our haproxy load balances across all three containers.
Just three more commands achieve our goal. Not bad.
Even Better with Tutum
The above approach to load balancing works, but we still have to manually redeploy the haproxy container, which also causes a short period of downtime. We can solve both issues by deploying everything in Tutum. Let me show you how:
1) Install tutum cli:
pip install tutum
Mac users can install tutum using brew:
brew install tutum
2) Create a service of your web application:
tutum service run --name web tutum/hello-world
3) Create the load balancer service:
tutum service run --link web:web --role global -p 80:80 tutum/haproxy
Now you have one haproxy container load balancing a hello-world service.
Note: It is very important to add the option --role global when launching the haproxy. The global role gives the container the privilege to access the Tutum API.
4) Scale your application:
tutum service scale web 3
Now you have three instances of your application container running.
Note: If you have more than one node running in Tutum, by default Tutum deploys new containers to the emptiest node. If you want containers spread evenly across nodes instead, specify the HIGH_AVAILABILITY deployment strategy in step 2:
tutum service run --name web --deployment-strategy HIGH_AVAILABILITY --role global tutum/hello-world
5) Update the load balancer:
You don’t have to do anything here: the load balancer is updated automatically.
SSL Support
Adding SSL support to tutum/haproxy is easy: all you need to do is set the environment variable SSL_CERT.
The contents of SSL_CERT are the private key and its public certificate combined. Suppose your private key and public certificate are named private.key and public.csr; to combine them into cert.pem, run:
cp private.key cert.pem
cat public.csr >> cert.pem
Now, the last step is to replace each newline in cert.pem with the literal string “\n” (two characters: “\” and “n”, not an actual newline character). You can convert the file by running:
awk 1 ORS='\\n' cert.pem
The output is the exact content that you need to pass to the environment variable SSL_CERT.
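To see what the conversion does, here is a minimal, self-contained sketch using a dummy file (the real private.key and public.csr come from your certificate authority):

```shell
# Create a dummy two-line file standing in for the real cert.pem
printf -- '-----BEGIN DUMMY-----\nabc123\n' > cert.pem

# Replace real newlines with the literal two-character sequence \n.
# Note the doubled backslash: awk interprets escape sequences in
# command-line assignments, so ORS='\n' would be a real newline,
# while ORS='\\n' yields the literal characters "\" and "n".
SSL_CERT="$(awk 1 ORS='\\n' cert.pem)"

# Prints the whole file on a single line:
# -----BEGIN DUMMY-----\nabc123\n
echo "$SSL_CERT"
```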
Putting it together, you can run tutum/haproxy using:
docker run -d --link web1:web1 --link web2:web2 -p 443:443 -e SSL_CERT="$(awk 1 ORS='\\n' cert.pem)" tutum/haproxy
Or when using Tutum:
tutum service run --link web:web --role global -p 443:443 -e SSL_CERT="$(awk 1 ORS='\\n' cert.pem)" tutum/haproxy
Now tutum/haproxy will load balance your applications on port 443 using the certificate you passed in.
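Behind the scenes, the container writes the value of SSL_CERT back out to a pem file (restoring the real newlines) and binds it on the HTTPS frontend, roughly like this sketch (the file path and frontend name are illustrative, not the image’s exact output):

```
frontend https_frontend
    bind 0.0.0.0:443 ssl crt /certs/cert.pem
    default_backend default_service
```

This is why the key and certificate must be combined into a single file: haproxy’s `crt` option expects one pem containing both.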
Behind the Scenes
The principle is straightforward: when you scale a service, Tutum notices the change and notifies tutum/haproxy, which regenerates its config file automatically. Once the new config is written, haproxy is reloaded immediately.
tutum/haproxy has many more options to adjust the behavior of haproxy. We also support features like VirtualHost. I’ll be writing another article covering advanced usage of the tutum/haproxy image.
For more information about tutum/haproxy, please visit: https://github.com/tutumcloud/tutum-docker-clusterproxy