Why S6, why not Supervisor?
I know a lot of people have been using Supervisor in their containers, and it’s a great system! It’s very easy to learn and has a lot of great features. Phusion produces a very popular, very solid base image for Ubuntu based around it.
UPDATE 2014-12-09: I mistakenly wrote that Phusion uses Supervisor, when they in fact use Runit.
When Docker launches a container, your
ENTRYPOINT process is launched as “process id 1” – and on nearly any Linux system, whether it’s a physical install, a virtual machine, or a container, “process id 1” has a special role. When any orphaned/disowned process exits,
PID 1 is supposed to clean up after it.
Supervisor explicitly mentions that it is not meant to be run as your
init process. If you have some subprocess fork itself off, it won’t be cleaned up by Supervisor. Phusion dealt with this by writing their own init process, which is fine – I don’t see anything particularly alarming or bad in their code – but it seems like overkill compared to just using a process supervisor that can run as PID 1.
S6 is meant to run as “process id 1”, so I figure it’s a good way to cut out the middle man. Another benefit to S6 – it’s written in C, so I can easily produce static binaries and use it with any image, even the busybox image.
Getting S6 into an image
There’s two ways to get S6 into your image – either build it within your Dockerfile, or statically build it and include it via
ADD directives. I prefer the latter, since I can reduce my image’s build time and keep the image size smaller.
I’ve become a fan of using Docker to create “build images,” where I create an image that compiles code and spits out a tarball. I have an image for S6 on github that produces static binaries of the S6 suite. Feel free to look at that (or just use it) to get an idea of how to compile S6.
I usually start by making a base image that includes S6 and other programs I tend to use in most of my images (for example, I can’t live without curl installed), then continue building images from that base. I’ve taken to making a folder named
root in my project directory, laying out my filesystem in it, then having
COPY root / towards the end of my Dockerfile. This lets me bring in S6 and my own configuration files as one layer.
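A minimal sketch of what that base-image Dockerfile might look like, assuming the s6 binaries have already been unpacked into the root folder (the base tag is an assumption):

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*
# s6 binaries, service directories, and my own config files
# all land in the image as a single layer:
COPY root /
```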
Here’s a simplified version of what my
root directory looks like.
root
|-- etc
|   |-- s6
|   |   |-- cron
|   |   |   |-- finish
|   |   |   |-- run
|   |   |-- syslog
|   |   |   |-- finish
|   |   |   |-- run
|   |   |-- .s6-svscan
|   |   |   |-- finish
|-- usr
|   |-- bin
|   |   |-- (s6 binaries)
Like I said, when I run
docker build, all these files are copied into the image as a single layer.
Using S6 to start services
S6’s init-like program is
s6-svscan – when launched, it will scan a directory for “service directories”, and launch
s6-supervise on each of those. In my example above, I’m using
/etc/s6 as my “root” s6 directory, so
cron and syslog are “service directories.” That
.s6-svscan directory is not a service directory, that’s a directory used by s6-svscan itself (more on that below).
Each service directory has two files – run and finish. The
s6-supervise program will call your
run program, and when the
run program exits, it will call your
finish program, then start over (by calling your
run program). The
run program can be anything – a shell script, or if a program requires no arguments/setup, I can just symbolically link to it, and the same goes for the
finish program. If I don’t have any particular clean-up to do when my
run program exits, I’ll just make
finish a symlink to /bin/true.
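As a sketch, wiring up such a service directory by hand looks something like this (the paths are illustrative – in a real image this would live under /etc/s6):

```shell
# Create a service directory for cron with a run script and a
# no-op finish "script" that is just a symlink to /bin/true.
mkdir -p svc/cron
printf '#!/bin/sh\nexec cron -f\n' > svc/cron/run
chmod +x svc/cron/run
ln -sf /bin/true svc/cron/finish

# The symlinked finish runs and exits 0, which is all s6 needs:
svc/cron/finish && echo "finish ok"
```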
When it comes to actually running a program, S6 is similar to Supervisor, Upstart, or Systemd – S6 will “hold on” to a program, instead of say, writing out
PID files like SysV Init does. So I have to make sure each of my
run scripts launches programs in a foreground/non-daemonizing mode.
This is usually pretty easy to do – here’s my
run script for cron:
#!/bin/sh
exec cron -f
And here’s my run script for syslog:
#!/bin/sh
exec rsyslogd -f /etc/rsyslog.conf -n
And here’s the
CMD directive from my Dockerfile:
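The snippet itself is missing here; a minimal sketch of what that CMD would look like, assuming the s6 binaries were copied into /usr/bin:

```dockerfile
# Launch s6-svscan as PID 1, scanning /etc/s6 for service directories:
CMD [ "/usr/bin/s6-svscan", "/etc/s6" ]
```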
That’s all! I’m now cruising along with S6, and running multiple processes inside a container.
Earlier, I mentioned the .s6-svscan directory, and it’s actually pretty important. When Docker stops a container, it sends a
TERM signal to “process id 1”, which in my case is s6-svscan. When
s6-svscan gets a
TERM signal, it will send a
TERM to all the running services, then try to execute .s6-svscan/finish.
The important thing to note: it will not try to run the
finish script in each of your service directories.
Since the container is about to be stopped (and probably destroyed), this isn’t a problem. I still like to run my ‘finish’ scripts, though, just in case I write one where I do something of importance. Here’s my .s6-svscan/finish script:
#!/bin/sh
for file in /etc/s6/*/finish; do
  $file
done
UPDATE 2015-03-01: Laurent reached out to me and pointed out I was incorrect – when
s6-svscan gets a
TERM signal, it will:
- Send a TERM signal to each instance of s6-supervise (each of your monitored processes has a corresponding s6-supervise process)
- Each s6-supervise will send a TERM signal to its monitored process, then execute your service’s finish script
- After that, s6-svscan will run your .s6-svscan/finish script
The important detail: when s6-supervise receives that TERM signal, it runs
finish with stdin/stdout pointed to /dev/null – meaning you won’t see any text output from those finish scripts. But they are in fact running, meaning the script above, where I manually call each finish script, is not necessary.
Laurent is going to try and come up with a solution for that, since that behavior is confusing.
Playing nice in the Docker ecosystem
In my previous article, I mentioned that I like to pick some process and call that the “key” process – if that dies, then my container should exit. I do this because most Docker containers do exactly that – they run a single process, and if that process calls it quits, the container calls it quits, too.
For example, let’s say I’m running a NodeJS program (for kicks, I’ll go with Ghost),
cron, and syslog in a container. I don’t particularly care if cron or
syslog die – I’ll just have S6 restart the process. But if Ghost dies, I want the container to exit, and let my host machine handle alerting me and restarting it. So my
finish script for Ghost would be:
s6-svscanctl -t /etc/s6
This will instruct
s6-svscan to bring everything down and exit.
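Sketching out what that “key” service directory might look like for Ghost (the install path and start command are assumptions – adjust them to wherever Ghost actually lives; the directory is created relative here for illustration, but in the real image it’s /etc/s6):

```shell
# Hypothetical service directory for the "key" Ghost process.
mkdir -p etc/s6/ghost

cat > etc/s6/ghost/run <<'EOF'
#!/bin/sh
cd /opt/ghost
exec npm start --production
EOF

cat > etc/s6/ghost/finish <<'EOF'
#!/bin/sh
# The key process died: tell s6-svscan to bring everything down,
# which in turn stops the container.
exec s6-svscanctl -t /etc/s6
EOF

chmod +x etc/s6/ghost/run etc/s6/ghost/finish
```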
Ideas for future projects
There’s a few things I want to implement in the future.
I think S6 is capable of this, I just haven’t figured out how!
S6 has an interesting way to handle logs – if I create a directory named
log and place a
run script in it, the output of my program is piped into that
run script. There’s a
s6-log program that’s meant to be used as that piped-into program, that handles log rotation, can pipe logs into other processes, and so on.
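A sketch of what that looks like on disk, for a hypothetical service named “myservice” – the log directory sits inside the service directory, and its run script execs into s6-log (the rotation numbers and log path are my own example values):

```shell
#!/bin/sh
# /etc/s6/myservice/log/run -- s6 pipes the service's stdout into this.
# Keep up to 20 rotated files of ~1MB each in /var/log/myservice:
exec s6-log n20 s1000000 /var/log/myservice
```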
I see a lot of images that just dump all output to stdout and let Docker handle it. I think there’s potential to come up with something better with these tools – I’m not sure what “better” is yet, but it’s something I’m going to be thinking about.
I think S6 is a really interesting, efficient alternative to Supervisor, and I especially like that I can include it on any image, even the
busybox image. I really hope you enjoyed reading this – do you have any neat ideas? Have you been working on something similar? Use the comments to let me know. Thanks so much!
If you’re interested in building on top of what I’ve created, I have a collection of images here.
Everything except the “base” image I still consider pretty volatile right now. I keep all images in their own branches (and within a folder within that branch), so the latest version of the “base” image would be at
/base in the “base-14.04” branch. You can find the base image here.
The base image is actually a bit more complicated than what I’ve written about, but it still follows the same basic structure/layout. However, I just run a few more services, and they have more complicated startup scripts.
You can find my Arch Linux image with s6 installed here.