In my previous blog post, I described how easy it is to run a load balancer using the
tutum/haproxy image. However, real-world use cases require more control over how the load balancer behaves. I am going to cover some advanced topics in this article, but before starting, I would like to introduce the concept of a "service", which serves as the basic building block of our load balancer.
Service vs Container
What is a service?
A service is a group of containers that run from the same image with the same parameters. For example, if you run
docker run -d tutum/hello-world
3 times, you could say the 3 containers created belong to the same service.
The concept of a service perfectly matches the function of a load balancer: a load balancer dispatches requests across servers running the same application, which in the docker world are application containers. For instance, if we link service A (containing 3 containers) and service B (containing 2 containers) to a load balancer, the load balancer will balance the traffic across 3 containers when accessing service A, and across 2 containers when accessing service B, respectively.
How to setup services?
- Just as with
tutum/haproxy, services are the basic building block of Tutum too. This means that if you run your application on Tutum, the service for your application has already been set up by Tutum natively.
- If you run
tutum/haproxy outside of Tutum, say using docker only, the link alias of your application container matters. Any link aliases that share the same prefix followed by "-" and an integer are considered to be from the same service. For instance,
web-1 and
web-2 are from the same service
web, whereas
web1-1 and
web2-1 are from two different services (web1 and web2).
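As a sketch of this naming convention (the image name <your_app> is a placeholder, as in the later examples), two containers linked under aliases that share a prefix form a single balanced service:

```shell
# Two containers whose link aliases share the prefix "web" followed by
# "-<integer>": haproxy treats them as one service named "web"
docker run -d --name web-1 <your_app>
docker run -d --name web-2 <your_app>

# Link both; requests arriving on port 80 are balanced between the
# two containers of the single service "web"
docker run -d --link web-1:web-1 --link web-2:web-2 -p 80:80 tutum/haproxy
```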
Virtual host and virtual path
When you link multiple web application services to
tutum/haproxy, you can specify an environment variable
VIRTUAL_HOST in each web application service, so that accessing the load balancer with different host names takes you to different services. Here is an example:
docker run -d --name web1-1 -e VIRTUAL_HOST="www.example.com" <your_app_1>
docker run -d --name web1-2 -e VIRTUAL_HOST="www.example.com" <your_app_1>
docker run -d --name web2 -e VIRTUAL_HOST="app.example.com" <your_app_2>
docker run -d --link web1-1:web1-1 --link web1-2:web1-2 --link web2:web2 -p 80:80 tutum/haproxy
When you access
www.example.com,
tutum/haproxy takes you to your first application, balancing between its two instances, and when you access
app.example.com, you are brought to your second web application.
Apart from the domain name, you can also tell HAProxy to select services based on the path of the URL you are accessing. For example, if your application is set with
-e 'VIRTUAL_HOST=*/static/, */static/*'
all the URLs whose path starts with
/static will go to that service. Similarly, if you specify
-e 'VIRTUAL_HOST=*/*.php'
all the requests to a URL that ends with
.php will be directed to your PHP application service.
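Putting both rules together, a sketch of path-based routing might look like this (the image names <static_app> and <php_app> are placeholders):

```shell
# Static assets service: matches any host, any path under /static/
docker run -d --name static -e 'VIRTUAL_HOST=*/static/, */static/*' <static_app>

# PHP service: matches any URL whose path ends with .php
docker run -d --name php -e 'VIRTUAL_HOST=*/*.php' <php_app>

# Link both; HAProxy picks the backend by matching the request path
docker run -d --link static:static --link php:php -p 80:80 tutum/haproxy
```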
For more information on the usage of
VIRTUAL_HOST, please see Github: tutum/haproxy.
Affinity and session stickiness
There are three environment variables you can use to set affinity and session stickiness in your application services:
- BALANCE=source. When it is set, HAProxy hashes the IP address of the visitor, which makes sure that visitors with the same IP address are always dispatched to the same application container. It works for both
http and
tcp modes.
- APPSESSION=<appsession>. HAProxy uses the application session to determine which application container a visitor should be directed to. It works only in
http mode. A possible value of this variable is
JSESSIONID len 52 timeout 3h.
- COOKIE=<cookie>. HAProxy uses a cookie to stick a visitor to a particular application container. It works only in
http mode. A possible value of this variable is
SRV insert indirect nocache.
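As a sketch of source-IP affinity (the image name <your_app> is a placeholder), the first of these variables can be used like this:

```shell
# Each visitor's IP is hashed, so repeat requests from the same
# address land on the same backend container
docker run -d --name app-1 -e BALANCE=source <your_app>
docker run -d --name app-2 -e BALANCE=source <your_app>
docker run -d --link app-1:app-1 --link app-2:app-2 -p 80:80 tutum/haproxy
```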
Multiple SSL certs termination
As mentioned in the previous article, you can activate SSL termination by simply adding an SSL_CERT environment variable to
tutum/haproxy. But in many cases, you may have multiple SSL certs bound to different domains. For example, you have cert A with common name
prod.example.com and cert B with
staging.example.com. What you expect is that when a user accesses
prod.example.com, HAProxy terminates SSL with cert A, and SSL of
staging.example.com is terminated with cert B. To achieve this, you only need to set SSL_CERT environment variables together with the
VIRTUAL_HOST settings on your application services:
docker run -d --name prod -e SSL_CERT="<cert_A>" -e VIRTUAL_HOST="https://prod.example.com" <prod_app>
docker run -d --name staging -e SSL_CERT="<cert_B>" -e VIRTUAL_HOST="https://staging.example.com" <staging_app>
docker run -d --link prod:prod --link staging:staging -p 443:443 tutum/haproxy
TCP load balancing
tutum/haproxy runs in
http mode by default, but it can also load balance TCP connections using the
TCP_PORTS environment variable set in your application service. Below is an example:
docker run -d --name web -e VIRTUAL_HOST=www.example.com --expose 80 <web_app>
docker run -d --name git -e VIRTUAL_HOST="https://git.example.com" -e SSL_CERT="<cert>" -e TCP_PORTS=22 --expose 443 --expose 22 <git_app>
docker run -d --link web:web --link git:git -p 443:443 -p 22:22 -p 80:80 tutum/haproxy
In the example above, when you access
http://www.example.com, you will visit your
<web_app>; when you access
https://git.example.com, you will go to
<git_app> with SSL termination. In addition, port 22 is load balanced as a plain TCP connection.
tutum/haproxy also supports SSL termination on TCP. To enable it, instead of setting
TCP_PORTS=22, simply set
TCP_PORTS=22/ssl together with an SSL_CERT environment variable.
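A sketch of what that might look like (the image name <tcp_app> and the cert value are placeholders):

```shell
# The /ssl suffix marks port 22 for SSL termination: HAProxy decrypts
# the stream before forwarding plain TCP to the container
docker run -d --name tcpapp -e TCP_PORTS=22/ssl -e SSL_CERT="<cert>" --expose 22 <tcp_app>
docker run -d --link tcpapp:tcpapp -p 22:22 tutum/haproxy
```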
In the above sections, we introduced some basic examples of the advanced functions of
tutum/haproxy. Using these functions in combination with one another can be very powerful. To find more information, please visit Github: tutum/haproxy.