You can find the original article here: http://www.centurylinklabs.com/the-future-of-linux-container-hosting/
Below is a full transcript of the interview:
Lucas Carlson: Today we're talking about the idea of having Docker containers, instead of virtual machines, as your basic foundation in the cloud. It's a new way of thinking, and my guest is on the bleeding edge of this new approach to hosting.
Borja Burgos: For the last nine months I have been actively working as the CEO and co-founder of Tutum.
LC: So tell us about Tutum. What is it, and who should use it?
BB: The way we think of Tutum, it's a single endpoint, an infinite Docker host. People who are familiar with Docker should relate to this: with Docker you have a number of individual hosts. Instead, think of a single, infinite endpoint where I can just run containers regardless of what the infrastructure is. That's what Tutum is, plus a bunch of value-add services on top.
Who should use it? Really anybody, from a developer to a DevOps engineer. Anybody that's running backend infrastructure software is a potential candidate for Tutum and what we're building.
LC: Why not just run your own Docker? The whole premise of Docker is that you can set it up very easily: any Linux distribution can run a Docker daemon, so you should just be able to set up Docker and run containers anywhere. Why not just set up a Digital Ocean or a CenturyLink Cloud virtual machine, put Docker on it, and deploy your apps that way? Why use a hosted Docker?
BB: Right. Docker is great: they've been able to put a great interface on some primitives and build a great open source project. But at the end of the day the Docker container is nothing but a building block. An awesome building block, but a building block. The moment you start trying to run containers at scale, you start running into problems. How do I run containers on two different hosts for redundancy purposes? How do I get visibility into which containers are deployed onto which hosts? How am I supposed to load balance the traffic that's coming to the different containers running across multiple hosts, across multiple clouds? These problems are not solved by the basic building block, which is the Docker container. Hence the reason for having something on top: that layer of orchestration, management, and deployment. That is what Tutum is.
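To make the multi-host pain concrete, here is a minimal sketch of what DIY container redundancy looks like with plain Docker. It assumes two Docker daemons already listening on TCP; the host addresses and image name are hypothetical, and the commands are printed rather than executed.

```shell
# Two hypothetical Docker hosts, each running its own daemon on TCP.
HOSTS="tcp://host-a:2375 tcp://host-b:2375"
IMAGE="myuser/myapp:latest"   # hypothetical application image

# For redundancy you must target each host yourself: there is no single
# endpoint, no placement logic, and no shared view of what runs where.
for h in $HOSTS; do
    echo "DOCKER_HOST=$h docker run -d -p 80 $IMAGE"
done
```

Every question in the answer above (placement, visibility, load balancing) is left to whoever maintains a script like this, which is exactly the gap an orchestration layer fills.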
LC: Got it. So it runs more than a Docker daemon. It provides a sort of mesh, so you don’t have to worry about which Docker host runs your container, is that right?
BB: Right, the whole infrastructure as we know it today is abstracted away, in such a way that you're running and managing containers. You don't have to worry about that T2 instance, or that Digital Ocean droplet, or that CenturyLink machine that's running my infrastructure. Now all that I see is containers, and I can link my applications in containers. That's the beauty of it.
LC: So how is Tutum different from a PaaS like Heroku? Or even some of the Docker PaaSes like Deis or Flynn? How does Tutum look different from a PaaS?
BB: Tutum tends to sit more on the infrastructure end of the spectrum. Think of Tutum right in between IaaS and PaaS; the ones you mentioned tend to be more on the PaaS end. If you think of Heroku, their basic building block is code. My code needs to fit a specific box, and if my code doesn't fit that box, then I can't deploy it to Heroku. So there are constraints in place that prevent me from deploying my code to a PaaS.
Also, the paradigm most of these PaaSes follow is that the platform will run your code or application, and any services the application needs to consume have to come in as external services. Meaning I will have to pay for or set up a MySQL somewhere that my application will then consume. We see the potential of also containerizing those services and having them be part of the infrastructure. So we are focusing more on the infrastructure aspects: a bare-bones container infrastructure with the value adds of a PaaS, where services can be containerized alongside my application in a single solution, as opposed to being treated as different citizens within this ecosystem.
LC: So how is it different than Deis or Flynn? So some of our audience is familiar with the Docker PaaS approach. Does Tutum do more, less, or something different than some of those Docker PaaS solutions?
BB: Ultimately, the biggest differentiator is that Tutum is a service, whereas Deis and Flynn are open source initiatives that you have to manage yourself. Even where there is overlap, that's going to be a differentiator. I do see Deis and Flynn leaning more towards developer-centric tooling than towards the infrastructure and DevOps portion of it, whereas we see Tutum leaning more towards the infrastructure side. But I agree that there's a big overlap in feature set and end goal, and I would be hard-pressed to think that one would run Flynn, Deis, and Tutum together, or a combination of those. It's hard to imagine that happening.
LC: And just to clarify for the audience, would you ever be able to run Flynn or Deis on Tutum?
BB: So that would be a great conversation for me to have with Jeff Lindsay or the guys from Deis. I don’t see why not as far as how they handle the container deployments. And today absolutely not, but moving forward, why not? It may be a possibility. It may be something to explore.
LC: So where are Tutum Docker containers actually hosted?
BB: Today we are running on top of AWS, in US East. And I say today, because from the get-go we built everything with the intention of being infrastructure agnostic. Any services we're using from AWS could be replaced by another infrastructure, or by bare metal in a private cloud, say OpenStack or something along the lines of VMware's solutions. So it's more about the layer on top than about the hosting services for Tutum.
LC: Are you planning on going to multiple data centers within Amazon?
BB: Absolutely, so as we develop and as Tutum grows, it makes absolute sense to be in as many places as possible for our customer base.
LC: And you might not know this yet, would you be able to pick which data center you’d be in? Or is that going to be abstracted away?
BB: From a user perspective?
BB: Right, so there are use cases where, for privacy or legal reasons, data must not leave a specific geographical region, in which case you have to be able to choose. As we satisfy more and more complex use cases, one should be able to choose where your applications, code, and data are stored and running.
LC: How much does it cost to get started with Tutum?
BB: With our current options, our smallest container is a $4/mo container. Unlike Amazon, we charge by the minute. And I believe we go all the way up to $64/mo, which has 4 GB of RAM. We're still in beta, so this is our pricing model for today. Everybody gets $4 of free credit when they sign up, so you can run a container for up to a month for free.
LC: So what hardware do you get for $4? That’s less than Digital Ocean. So what’s the underlying server? What is it running on?
BB: The way Tutum works today, and again we're looking at how Tutum grows from here, is that we run containers in a multi-tenant fashion. Meaning there are hosts that are running multiple containers from different customers. Containers range from a fraction of a compute unit up to 4 ECUs, and from 256 MB of RAM up to 4 GB of RAM. There's no reason why we can't eventually offer larger and smaller sizes, but for today we don't want to make things more complicated than they have to be, so that's what we're running.
LC: So how many containers per server on your hosts?
BB: We’ve been experimenting with different host sizes. Anywhere from a couple of dozen to a few hundred is possible.
LC: Great. And what happens if one of the hosts goes down?
BB: One of the things Tutum does now, and abstracts away: if you have what Heroku calls a 12-factor stateless application, it is very easy to scale horizontally, and you can do that with Tutum. You choose your Docker image and deploy it into, say, four containers. Those containers are actually going to be deployed onto different hosts in Amazon, and those hosts are going to be running in different availability zones. So as long as your application is running in two or more containers, if a host were to go down, you'd still be up and running, in the sense that there would still be a host with your application. Even if a whole availability zone goes down, which has happened with Amazon, your application would still be running. We're also looking at things like live migration: how do we migrate containers within a node to different nodes, and how do we store state, if we stay on Amazon, on EBS. That way, if the host goes down, we can create a new host, and Docker would be able to bring those containers back up based on the EBS data. So we're looking at solutions for that. Right now the simplest way to avoid downtime is to make sure your application is running in two or more containers.
LC: Got it. So does Tutum offer a load balancing service that works with those four containers? Or do you have to build that as a fifth container, a load balancer?
BB: This is something that we asked ourselves early on. We love control, but we also like flexibility and ease of use, so we tried to do something that satisfies both. If your containerized application listens on port 80, then we will automatically register it with our load balancer. So if your application is deployed to three containers, you will get a single endpoint that is load balanced automatically by Tutum. But if you don't like our load balancer for whatever reason and you want to run your own, then you can simply containerize HAProxy or Hipache, or whatever load balancer you prefer, configure it however you want, deploy that in a container on Tutum, and then link your containerized load balancer to your application that's listening on port 80 and load balance that way. That's an approach we've seen some people take, and something we want to keep supporting as we move forward.
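The DIY load balancer path can be sketched in a few lines. This is a minimal example, not Tutum's own implementation: the backend container addresses are hypothetical, and the final docker command is printed rather than executed.

```shell
# Minimal HAProxy config that round-robins across two app containers.
# In a linked-container setup these addresses would come from the link
# environment variables rather than being hard-coded.
cat > haproxy.cfg <<'EOF'
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server app1 172.17.0.2:80 check
    server app2 172.17.0.3:80 check
EOF

# Run HAProxy itself as just another container, mounting the config.
echo "docker run -d -p 80:80 \
  -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy"
```

Because the load balancer is just another container, it can be deployed, linked, and replaced with the same tooling as the application itself.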
LC: Very cool. So that’s really neat, that’s above and beyond the Docker container hosting. It’s that additional automatic load balancing feature. Are there any other features that go above and beyond the container hosting?
BB: Absolutely. Going back to what I said earlier, we're looking to provide value add on top of just running containers. Zero-downtime deployments: you can very easily update all of your containers to an image and an image tag, whether it's one prior to the one you're running today or one you just built. So with one click, you can tell it to redeploy the new image to all of my containers, and we're looking at different ways to do that in a zero-downtime fashion. You mentioned load balancing, and high availability. Being able to do a git push, do a build, do my CI/CD, deploy to Tutum, so that whole integration from code all the way to up and running. That's something we're looking into. So yes, value adds on top of basic Docker hosting are very close to our end goal here.
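A naive version of that redeploy, done by hand with plain Docker, looks something like the loop below. The container names and tag are hypothetical, the commands are printed rather than executed, and true zero downtime would additionally require draining each container from the load balancer before replacing it.

```shell
NEW_IMAGE="myuser/myapp:v2"   # hypothetical new build to roll out

echo "docker pull $NEW_IMAGE"

# Replace containers one at a time so the others keep serving traffic.
for name in app1 app2 app3 app4; do
    echo "docker stop $name && docker rm $name"
    echo "docker run -d --name $name -p 80 $NEW_IMAGE"
done
```

Collapsing this loop, plus the draining logic it omits, into one click is the kind of value add being described.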
LC: Yeah, that's great. Just last week we were talking to Avi at Shippable, a hosted CI/CD service for Docker, and that was very interesting. I'm also friends with the guy behind Drone.io. How does one do CI/CD in the context of Tutum?
BB: The way I see and envision the big picture here: say I'm a developer and I do a git push. Now this git push would trigger a Docker build somewhere. We have an open source project called Boatyard.io that you're welcome to check out. So somewhere there's a web hook that triggers the build, and that build will then trigger the tests, the continuous integration, which tells you everything is working with your code. In that case, it would trigger an automatic redeployment of your updated application, ideally with no downtime. So in that whole scheme you have things like GitHub, a Docker builder like Docker Hub, something like Shippable or Drone.io, and finally the running, management, and deployment of the application.
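That chain of webhooks can be sketched end to end. The Docker Hub build-trigger URL shape below matches what the registry exposed at the time, but the user, repo, and token are placeholders, and every command is printed rather than executed.

```shell
# 1. Developer pushes code; GitHub fires a webhook.
echo "git push origin master"

# 2. The webhook kicks off an automated image build, e.g. via a
#    Docker Hub build trigger (myuser/myapp and TOKEN are placeholders).
TRIGGER_URL="https://registry.hub.docker.com/u/myuser/myapp/trigger/TOKEN/"
echo "curl -X POST $TRIGGER_URL"

# 3. CI (Shippable, Drone.io, ...) pulls the fresh image and runs tests.
echo "docker pull myuser/myapp:latest"

# 4. On green, the hosting layer redeploys the updated application.
```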
LC: Very cool. So I think this is the ultimate vision for the Docker developer workflow: to have the exact same container that you built on your laptop be what you test in CI/CD and QA, and also be the exact same environment that goes over to Tutum. Is this why developers might pick something like Tutum over Digital Ocean? Being able to have the same environment on their laptop as in production?
BB: Absolutely. Ultimately, the way I see it, Digital Ocean is great. I use them myself: I can pay $5 and I have a development environment in the cloud that I can go into, configure once, and work with. Now, personally I would much rather be able to run my code or application and all of its dependencies, meaning the services it needs, on my laptop. I can do this with Fig, where I can have MySQL containerized, a Redis cache, my application, my web server, everything containerized. Have some sort of manifest, like Fig's, that defines this environment, this stack, and with a single command run it locally and see how it behaves. Then once I see it's working, I can push that to the cloud and have the exact same environment, that exact same application and all of its services, running at scale in the cloud, redundant, highly available, and load balanced. That, in my mind, is a developer's dream. So that is something we definitely have in mind: how do we get to this point?
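A Fig manifest for the stack described above might look like this. The service names and images are illustrative, and the final command is printed rather than executed since it needs Fig and Docker installed.

```shell
# fig.yml: one file describing the whole local stack.
cat > fig.yml <<'EOF'
web:
  build: .        # the application's own Dockerfile
  ports:
    - "80:80"
  links:
    - db
    - cache
db:
  image: mysql
cache:
  image: redis
EOF

# One command brings the whole environment up on a laptop.
echo "fig up -d"
```

The same containers the manifest describes are what would ship to the cloud, which is the whole point of the workflow.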
LC: That’s awesome. That does sound like a dream. It seems like next generation kind of stuff. So that all sounds great, so what isn’t great about Tutum yet?
BB: There are a number of things still in the pipeline. Volumes, data storage, persistent storage, and dealing with data in general are a challenge. It's something we're looking at and actively working to solve. Today, anything that fits the 12-factor stateless application model is a great use case and works seamlessly. But if I want to run MySQL on Tutum today, the data stored in MySQL would die when that container dies, and that isn't great for anything close to production-level systems. So that is the one thing we need to keep developing and working on, and we hope to have a solid solution in the immediate future: in the next two to three months, a persistent storage solution. Also, as I mentioned, we don't want to tie ourselves to a single IaaS. We see Tutum as more of an IaaS 2.0. We want to remain agnostic to the infrastructure underneath and allow the end user to bring in their own infrastructure, so we're looking at the best options for how to move forward with that, and at allowing people to choose whatever infrastructure best satisfies their use case.
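With plain Docker, the usual stopgap for the MySQL problem is a host-mounted volume, so the data outlives any one container. A minimal sketch, with an illustrative host path and the commands printed rather than executed:

```shell
DATA_DIR=/srv/mysql-data   # host directory that outlives containers

# Mount the host directory over MySQL's data directory.
echo "docker run -d --name db -v $DATA_DIR:/var/lib/mysql mysql"

# If that container dies, a replacement reattaches the same data.
echo "docker run -d --name db-new -v $DATA_DIR:/var/lib/mysql mysql"
```

This only works on a single host, which is exactly why a multi-host persistent volume solution is the hard part being described.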
LC: Can you tell us anything about the technology that you’re going to use for persistent storage? or is that not public yet?
BB: It’s not public yet, so I can’t talk much about it. But we have a number of initiatives that we have in place, and we’re talking to different groups, some of which are very active in the Docker ecosystem and looking to partner with them as we look for something that we can develop that will benefit us and ultimately anyone in the Docker ecosystem can use as a multi-host persistent volume solution.
LC: Cool. That’s very exciting. So are Docker containers going to replace virtual machines? it sounds like Infrastructure 2.0 is all going to be containerized. What’s the future of virtual machines and that world?
BB: I don't think so. The way I see it, if you rewind a few years, provisioning hardware sucked. The virtual machine solved a hardware problem: we had these big machines, and now we can cut them up into little pieces and make many virtual machines. So we have an operating system that thinks it's running on its own hardware. It's really not; it just thinks it is. Now move that up the stack, and you take an application and make it think that it's operating on its own operating system. But really it's not; it's a shared operating system. So for containers and virtual machines there may be some overlap in use cases, but ultimately one is not going to replace the other. Some things are going to be done better with one or the other, and there's going to be a need for both of them. One solves a hardware problem, the other a software problem. Now, can these problems be solved with just containers or just VMs? Absolutely. But ultimately it's about choosing the right tool for the right thing.
LC: Got it. So what about security? You mentioned that these are multi-tenant containers, and there has been some discussion recently about the security concerns of Linux containers. Is there a worry about multi-tenant security?
BB: If you ask the guys from the virtual machines camp, they'll say of course: we have the hypervisor, no one can defeat the hypervisor, and that's another layer of security. The way we see it, there's going to be a lot of work done, and a lot has already been done in the last 18 months, on increasing the level of privacy and security isolation that containers enjoy today. We ourselves are building things around and on top of containers to ensure that containers remain isolated. So we have support for user namespaces, to make sure the user in the container doesn't map to the root user, or to a user with any real privilege, outside of the container. How do I put AppArmor around the container and limit the syscalls? So there are many measures you can take to run a secure multi-tenant container cloud. But absolutely, as time moves forward, and as we saw during Solomon's keynote at DockerCon, security, authentication, and privacy are things where we'll see a lot more development over the next 12 to 24 months, so it's interesting.
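Some of those measures are already exposed as flags in recent Docker releases. The flags below (`--cap-drop`, `--cap-add`, `--security-opt`) are real, though availability depends on your Docker version and host configuration; the image name is hypothetical and the command is printed rather than executed.

```shell
# Drop all root capabilities except binding low ports, and apply the
# default AppArmor profile to constrain what the container can do.
RUN_FLAGS="--cap-drop ALL --cap-add NET_BIND_SERVICE \
--security-opt apparmor:docker-default"

echo "docker run -d $RUN_FLAGS myuser/myapp"
```

A hosted platform can apply hardening like this uniformly, which is part of the multi-tenant isolation story described above.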
LC: Yeah, that’s very cool. So what’s possible when you combine Tutum with GitHub and Docker Hub altogether. How does that change the ecosystem for developers?
BB: I think it goes back to what we mentioned earlier. If you're a developer, and you know that a single git push is ultimately going to result in your code, your custom application with all of its services, running at scale in the cloud, and you don't have to worry about where it's running, how it's running, or whether it's running? That's a game changer. The exact same code that was running on my laptop is the code that I was able to test, and ultimately I have the peace of mind that what's running in the cloud is exactly the code I want. With a couple of hacks, that's possible today with Tutum, Docker Hub, and GitHub. One can trigger an automatic build from GitHub, and one can set up, using our API, an automated deployment or redeployment of an application using Docker Hub and Tutum. So the possibility is really a true build once, deploy, and forget about it, until you build again, kind of solution moving forward.
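The "couple of hacks" might look like a Docker Hub post-build webhook pointed at a small script that calls the hosting API. The endpoint below is purely illustrative, not Tutum's documented API; the identifier and credentials are placeholders, and the command is printed rather than executed.

```shell
APP_UUID="0000-example-uuid"   # placeholder application identifier

# A post-build webhook could run a redeploy call against the API:
echo "curl -X POST -H 'Authorization: ApiKey user:key' \
  https://api.example.com/v1/application/$APP_UUID/redeploy/"
```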
LC: What does the future of Linux containers look like?
BB: That's an interesting question. I think one of the reasons why Docker has become so successful is that they standardized containers in general, and the fact that they were able to gather the support of the big players in the industry behind this standardization. So they defined the standard container and the tools to interact and collaborate with these containers, and then you have the big guys, the Googles, Amazons, and IBMs of the world, saying they support it. That's huge. So is there a need for more container technology, like other proprietary container-specific projects? It's hard to tell. I still think there's a lot of work to be done on the containers we have today. Things like live migration and security, like we mentioned earlier, are things that containers need to support out of the box, and I think that's the direction things are going to go for containers. Seeing initiatives like CoreOS, a container-native operating system, where I no longer have to apt-get a piece of software, but all of my software comes in the form of a container. That's the future of containers I see, more so than new container technologies coming into this space. We may see a couple here and there, but I'd like to see Docker succeed, with everybody embracing this technology and building on top of it, as opposed to other projects coming up on the sides.
LC: Do you think other big players are going to start doing native Docker hosting?
BB: Yes, I wouldn't be surprised if they started enabling and making it easier to deploy containers. We've already seen it from the likes of AWS Elastic Beanstalk: I can now, through a text file, deploy a container using their service, one container per VM. Google likewise has announced a number of integrations with Docker, and it's very likely that those integrations will move forward. Ultimately, the way I see these players, their business model revolves around computing cycles. They make money off of the VMs that you're running, and they have to provide value adds and different services for you to use their cloud computing services; that's how they monetize. How much of that value they provide moving forward will be interesting to see. But absolutely, I do see these players moving into this space.
LC: So that's why you're building on top: beyond pure Docker hosting, you're offering these services to keep yourselves differentiated.
BB: Right. Tutum is not in the business of competing on infrastructure against Google or Amazon. We have seen over the last few years that compute, memory, and storage are commodities that are becoming cheaper and cheaper. Let's be realistic: Tutum is great, we're doing something awesome, we love it, but ultimately I cannot compete with AWS or GCE or any of the big players when it comes to commoditized infrastructure. The way we see it, Docker brings a lot of new things to the table, and there's a lot of value add to be implemented on top of the bare infrastructure. That's where we see the value Tutum provides.
LC: What is the biggest barrier to entry to adopting Docker for most businesses today?
BB: I think it's a low barrier to entry. Until recently, maybe it was the fact that the technology was so new; there's just a sense of fear around something that's only been around for fifteen months. Will this be supported? What kind of support can we expect? There's also the shift in paradigm. You even see this with people who have heard of Docker and sort of tried it out, maybe gone through a tutorial. It takes anywhere from a couple of minutes to a few days until it really clicks and you say, "Wow, now I get containers, now I get Docker, and I see the benefit." Some people will read articles online about containers and Docker, but until they experience it and try it out, it doesn't click, and they don't see the value add it provides. They say, "Well, I can do that with virtual machines and X or Y technology." So those two things: the technology might not be mature enough, having been around for only 15 or 16 months, and it's a shift in mentality for how developers, DevOps, and sysadmins, how these teams work and how they deploy new software.
LC: So it’s been great talking to you. It’s been great picking your brain. What’s next for Tutum?
BB: Lots of things. Stay tuned, we have a number of services and cool features coming up. We'll be transitioning from just doing the multi-tenant cloud hosting to really focusing on the value adds that our users are requesting from us. We're working hard on things like persistent storage and SSL support, and there are lots of exciting new developments coming in the next two to three months. I think everybody will be pleasantly surprised. And now that we've grown the team a bit, we're also looking to be more active with the community. We love this space, we love being part of the Docker ecosystem, and as much as possible it's about giving back. So expect to see some awesome contributions from us.
LC: I cannot wait. It’s been a pleasure talking to you, and hope to keep in touch and I’d love to hear when you make these announcements. And talk to you about your new open source projects. Thank you so much for your time.
BB: Absolutely, thank you so much Lucas.