Continuing with our objective to provide you with the best container tooling to work simply and flexibly, I’d like to discuss the topic of service composability: the current state of affairs in the OSS world (docker-compose), what we’re doing at Tutum, and what’s coming in the near future.
Not long ago service-oriented architectures (SOA) were all the rage. Today microservices architectures are in everyone’s timelines and newsfeeds. Yet the idea of decoupling a system into smaller components focused on doing a small task is anything but new. One early and very successful example of combining “small, sharp tools” to accomplish larger tasks was the UNIX OS. You may be left wondering, what does this have to do with containers and composability? Everything.
There are plenty of reasons why you would want to build a highly decoupled (or loosely coupled) modular system, where each component focuses on doing just a small task and doing it right. That discussion is, however, beyond the scope of this article. But here’s one reason: such an approach lends itself to a continuous delivery software development process (push an update to a component without affecting the entirety of the system).
This is precisely one of the reasons containers are so attractive from a development and operations standpoint. Containers allow engineers to easily package (or contain) these "components" (i.e., basic units of functionality). But what about building complex systems made up of many such components? That's where container composability comes into the picture.
So what exactly is composability?
Composability is a system design principle that deals with the inter-relationships of components.
The very first step towards building a complex system is to get two containers communicating. Easy, right? You’ve got docker links to do just that.
Did you know that Docker containers did not always have links? One of the perks of having started Tutum in the very early days of this space is remembering just how rudimentary the tooling was at the beginning, and how far things have come in these 2 years. Just for curiosity’s sake, I went to the changelogs. Docker links did not exist until v0.6.5, introduced on October 29th, 2013, 7 months after the initial public release.
Back to the topic at hand, docker links were the first way to compose complex systems from containers and remain (arguably) the most straightforward way to do so using docker. Here’s an excerpt from Docker’s own documentation:
Links allow containers to discover each other and securely transfer information about one container to another container. When you set up a link, you create a conduit between a source container and a recipient container. The recipient can then access select data about the source.
— Docker Documentation
Docker links work by leveraging environment variables and /etc/hosts. It's simple: say that container B links to container A. B gets a pre-defined set of environment variables describing A (network address, ports, etc.), and also inherits A's environment variables. Additionally, an entry pointing to A is created in B's /etc/hosts file.
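To make that concrete, here's a quick sketch of what linking looked like with the docker CLI of that era (container names and the `redis` alias here are made up for illustration):

```shell
# Start the source container ("A" in the example above)
docker run -d --name myredis tutum/redis

# Start the recipient container ("B"), linking it to myredis under the alias "redis"
docker run -it --link myredis:redis ubuntu:14.04 bash

# Inside B, docker injects environment variables describing the link,
# e.g. REDIS_PORT_6379_TCP_ADDR, REDIS_PORT_6379_TCP_PORT, and REDIS_ENV_*
# variables carrying A's own environment:
env | grep REDIS

# ...and /etc/hosts gains an entry mapping the alias "redis" to A's IP:
cat /etc/hosts
```

Note that the link is established at container creation time, which is exactly why links are static: if A is redeployed with a new IP, B's environment variables go stale.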
Links were the first tool to compose multi-container systems, but as many came to realize, managing linked containers quickly becomes a burden.
This is the problem docker-compose (which grew out of Fig) set out to solve. The stack (i.e., a group of services) is defined in a YAML file. In this file, all of the services of the stack are defined, as well as their inter-relationships (what is connected to what). The ability to define an entire stack in this manner allows you to launch tens of containerized and interconnected services with a single command.
Here’s an example docker-compose YML:
```yaml
web:
  image: tutum/quickstart-python:latest
  links:
    - "cache:redis"
  environment:
    NAME: blog readers
  ports:
    - "80:80"
cache:
  image: tutum/redis
  environment:
    REDIS_PASS: safe_password
```
The YAML above defines a stack with 2 services: a web service linked to a cache service. It also defines the source of these services (image), a port to be published in the web service, and an environment variable to be used to define a password for the cache service.
The implementation of docker-compose is rather straightforward: the tool parses the contents of the YML file, translates it into the corresponding set of docker commands in the correct execution order, and executes them. When using docker-compose, the inter-relationships of the services defined are implemented using the same docker links described earlier.
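For the two-service stack above, that translation would boil down to something along these lines. This is a sketch of the idea, not docker-compose's literal output; the `myapp_*` container names assume compose's project_service_N naming convention:

```shell
# cache has no dependencies, so it is started first
docker run -d --name myapp_cache_1 \
    -e REDIS_PASS=safe_password \
    tutum/redis

# web links to cache, so it is started second, once its dependency exists
docker run -d --name myapp_web_1 \
    --link myapp_cache_1:redis \
    -e NAME="blog readers" \
    -p 80:80 \
    tutum/quickstart-python:latest
```

The dependency ordering matters: a link can only be created against a container that already exists, which is how the YAML's `links` entries determine execution order.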
Despite the value docker-compose provides with its definition/template file to describe an entire stack, there are a number of constraints. These constraints arise from its dependency on docker links. This was discussed in a previous post. The static nature of today’s links prevents services from being modified or redeployed without breaking or affecting other services in the stack.
Just last week we announced support for stacks – Tutum’s take on service composability. When we set forth on implementing this feature we did so with the following requirements:
- Must be backwards compatible with Fig/docker-compose YML templates
- Must leverage Tutum’s dynamic linking
- Must leverage Tutum’s private container overlay network
- Must support heterogeneous applications (see next section)
I’m happy to say that our initial offering already comes close to achieving all that. Today, our users can deploy their existing fig.yml* templates using Tutum. Unlike with docker-compose, Tutum’s services and containers can be deployed and balanced across any number of hosts, anywhere in the world – following a true hybrid cloud approach. You are no longer constrained to a single host.
Furthermore, thanks to Tutum’s dynamic linking and overlay networking capabilities, services can be modified, updated, redeployed and scaled without other services being affected or links breaking. Try it out and let us know what you think.
*Tutum does not have build capabilities today, but we’re hard at work on this feature and you’ll soon be able to easily build all your docker images from within the platform.
Composing complex container-based systems or stacks, coupled with Tutum's dynamic linking and networking capabilities, is powerful. However, what about services that are not containerized? Having had the opportunity to talk with hundreds of our users, we soon realized that the majority of use cases leveraging containers are heterogeneous, with some services in the stack not being containerized.
At Tutum we believe that true service composability should account for both containers and non-containers – this is what we meant by the requirement in the previous section: support heterogeneous applications.
For example, say that you already have a MySQL DB available in your production environment. For dev/test, it makes perfect sense to use a containerized MySQL. But when deploying containerized services to production, you need those services to seamlessly reference the production, non-containerized MySQL DB. How do you do this today? The answer is: it’s complicated.
With that in mind we’ve started working on user-defined external services. The objectives are as follows:
- Avoid hard-coding credentials for non-containerized services
- Same user-experience between containerized and non-containerized services
- Single stack definition to account for both containerized and non-containerized services
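To make these objectives concrete, here is a purely hypothetical sketch of what a heterogeneous stack definition might look like. The `external` syntax below is illustrative only; it is not an actual Tutum (or docker-compose) feature today:

```yaml
web:
  image: tutum/quickstart-python:latest
  links:
    - "db:mysql"   # same linking experience, containerized or not

# hypothetical: a user-defined external, non-containerized service
db:
  external: true
  host: mysql.prod.example.com
  ports:
    - "3306"
```

The idea is that `web` would reference `db` exactly as it would any containerized service, with no hard-coded credentials or hostnames in the web service itself.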
What do you think about this? As with all of our features, we want to hear from you! This is your opportunity to help shape Tutum into a service you love <3.