Introducing Overlay Networking for Containers and Dynamic Links in Tutum


At Tutum, our objective is to provide you with the best tools to develop, test and deploy applications in a faster, simpler, yet flexible way. To achieve that, we make use of containers, which provide great benefits in terms of portability and minimal footprint.

We are pleased to announce that we have taken another step towards this objective with the implementation of dynamic links and private overlay networks for containers (using Weave).

Background on Docker links and service discovery

Early in its development, Docker introduced the notion of container links, with the purpose of providing a simple service discovery mechanism between containers. When you link containers, Docker injects connection information (IP and port) for the services available in one container into another. It provides this information in two ways: via environment variables (e.g. $DB_PORT_3306_TCP_ADDR), and via hostnames (e.g. db) written to /etc/hosts.
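As a quick sketch of how this looks in practice (image and container names here are illustrative):

```shell
# Start a database container named "db" (the mysql image is just an example)
docker run -d --name db mysql

# Start an application container linked to it under the alias "db".
# Docker injects environment variables such as DB_PORT_3306_TCP_ADDR
# and DB_PORT_3306_TCP_PORT, and adds a "db" entry to /etc/hosts.
docker run -d --name web --link db:db my-web-app

# Inspect the injected connection information from inside the web container
docker exec web env | grep DB_
docker exec web cat /etc/hosts
```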

The benefits of service discovery are well known: it removes the need to manually specify the network topology (IPs and ports) of an environment (dev, prod, etc.) when launching containers.

Docker links force you to use environment variables or “placeholder” hostnames in the application image’s configuration files. This is a great way to keep the image reusable across different environments. Both environment variables and DNS hostnames are supported in virtually every programming language, requiring very little change to the application codebase.

It also allows you to store this service discovery information in the form of “links” in a “stack definition” (such as a docker-compose.yml file), which can still be reused between environments. It becomes especially powerful when every service in the stack runs in containers (the benefit grows as the number of services increases).
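A minimal stack definition using links might look like the following docker-compose.yml sketch (service names, image names, and the password are illustrative):

```yaml
web:
  image: my-web-app
  links:
    - db
  ports:
    - "80:80"
db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=example
```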

Dynamic links

At Tutum, we like this simple yet powerful approach to service discovery, and we have made it even better when you deploy your containers in Tutum with what we call dynamic links.

In Tutum, containers use an internal DNS service to resolve hostnames. From the application’s point of view, everything works exactly as if it were running locally, but instead of resolving names through a static /etc/hosts file, it issues a query against Tutum’s DNS service, which returns the appropriate IP of the linked container at that moment (hence “dynamic”). This of course works across nodes thanks to the new overlay network for containers, discussed next.

If your linked service consists of more than one container, you get a per-link hostname that resolves to the IPs of all linked containers using DNS round robin (i.e. an A record for web), as well as per-container hostnames that resolve to each linked container’s IP (i.e. A records for web-1, web-2, and so on). More information about how it all works is available on our support site.
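Assuming a linked service named web scaled to two containers, you could observe this behavior from inside a linked container (a sketch; the hostnames follow the naming described above):

```shell
# The per-link hostname resolves to the IPs of all containers (round robin)
getent ahosts web

# Per-container hostnames resolve to each individual container's IP
getent hosts web-1
getent hosts web-2
```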

With dynamic links, if you redeploy a service, scale it, or deploy a new linked service, hostnames will automatically be added/updated on all running containers, removing the need to redeploy any of them.
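As a sketch, assuming the Tutum CLI and a service named web (the exact subcommand and arguments may differ from this illustration):

```shell
# Scale the linked service to 3 containers; DNS records on all
# containers linking to it are updated automatically, no redeploy needed
tutum service scale web 3
```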

And best of all, this is 100% compatible with Docker links: if your stack runs on your laptop, it will seamlessly run at scale in Tutum without any modifications.

Private overlay network for containers

All nodes deployed using Tutum are now connected to one another, forming a secure overlay network for the containers running on them. This is true regardless of the geographical location, IaaS provider, or type of node (bring-your-own-node works too!). Every new node launched or added to Tutum via BYON will automatically join the private network. This private network is encrypted and requires authentication; only nodes belonging to the same account can connect and communicate with each other.

What this means is that every container launched now gets its own universal IP. This IP is reachable from any other container on any other node (belonging to the same account, of course). Even better, when you redeploy a service, its containers keep the same IP addresses, even if the new containers are scheduled to a different node. Thanks to this behavior, there is no longer any need to redeploy an entire stack when redeploying a single service.
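A quick way to see this cross-node connectivity (a sketch; the container names are illustrative): from a container on one node, reach a container running on another node by its hostname:

```shell
# "db" may be running on an entirely different node, in a different
# region or provider; the overlay network routes the traffic
docker exec web ping -c 3 db
```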

In order for existing services to take advantage of this new functionality, they must be redeployed. Existing services will automatically be connected to the overlay network during the redeployment process. If you use Tutum’s BYON (bring-your-own-node) functionality, make sure your nodes have ports 6783/tcp and 6783/udp open in any firewalls.
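If you manage your own firewall on BYON nodes, opening those ports might look like this (a sketch assuming ufw; use your firewall’s equivalent otherwise):

```shell
# Allow the overlay network's inter-node traffic
ufw allow 6783/tcp
ufw allow 6783/udp
```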

What’s next

These new features, coupled with the new stacks functionality, will make the deployment of multi-service architectures a far more rewarding experience.

How can we make service discovery work better for you? Looking forward to your feedback!

Sr Engineering Manager @ Docker

