Deploying a Redis Cluster on Google Compute Engine with Docker


Redis Cluster is a solution for running Redis on a distributed infrastructure, where the stored data is automatically sharded across multiple nodes. In this tutorial, we are going to install it on a Google Compute Engine virtual machine using Tutum’s API and the recently announced Bring Your Own Node feature.

Creating the GCE instance

Head over to the Google Cloud Console, create a new project, and spin up a new Ubuntu 14.04 virtual machine (if you don’t have a Google account yet, you can create one and opt into the free trial):

[Screenshot: creating the VM instance]

Ensure that you’ve enabled HTTP traffic in the Firewall section:

[Screenshot: allowing HTTP traffic]

Also, ensure that you’ve created and assigned a static IP address to the instance:

[Screenshot: reserving a static IP address]

Next, install the Google Cloud SDK and use the provided gcloud utility to set the working project:

gcloud config set project [project-name]
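
If you prefer the command line to the console, the same instance and static IP can also be created with gcloud. The following is only a sketch: the machine type and image flags are illustrative, and the exact flags may differ across SDK versions (the instance name and zone match the ones used later in this post):

gcloud compute addresses create redis-cluster-ip --region us-central1
gcloud compute instances create redis-cluster-test --zone us-central1-a --machine-type n1-standard-1 --image-family ubuntu-1404-lts --image-project ubuntu-os-cloud --address redis-cluster-ip --tags http-server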

Installing the Tutum agent

We now need to SSH into our newly created instance. Head over to the instances page and copy the command we’ll use to SSH into the machine, as shown below:

[Screenshot: the SSH command for the instance]

It should be something like:

gcloud compute --project "[project-name]" ssh --zone "us-central1-a" "redis-cluster-test"

Docker’s REST API is, by default, accessible through port 2375. To allow traffic through that port, head over to the instance’s Network section, select the default network, and add a tcp:2375 entry under the allow-default-http firewall rule:

[Screenshot: opening the Docker API port]

Be sure to also add port 80, which we’ll use to serve the demo web app.
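
If you’d rather do this step from the CLI as well, equivalent firewall rules can be created with gcloud (the rule names below are arbitrary):

gcloud compute firewall-rules create allow-docker-api --network default --allow tcp:2375
gcloud compute firewall-rules create allow-demo-http --network default --allow tcp:80

Keep in mind that exposing the unauthenticated Docker API on port 2375 to the whole internet is risky; for anything beyond a throwaway test instance, restrict the source ranges.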

Having enabled traffic through the Docker and web ports, we can now link this node to Tutum. Simply click Bring your own node in the Nodes section, copy the given installation command, and run it inside the machine we just accessed over SSH.

Running the test application

We will be deploying this test app, a demo project that contains two services:

  • a set of redis cluster nodes
  • a redis cluster manager with a simple web front-end

It’s a simple todos application backed by Redis Cluster. The application can be run locally using Fig:

fig build
fig up
fig scale node=6
fig up
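
Once Fig has scaled the service, you can sanity-check that all containers are up and inspect the manager’s output (this assumes the services are named node and manager in the project’s fig.yml):

fig ps
fig logs manager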

The web app runs on port 5000. If you’re using boot2docker, run boot2docker ip to retrieve the IP address to use to access the app. The manager service, on boot, first creates the cluster and then starts the web server. The fig scale node=6 command increases the number of containers in the node service, giving the cluster 3 masters, each backed by a single slave. One thing to note is that we link the node service to the manager service in order to advertise the Redis connection ports. We then discover these addresses when starting the manager application:

// lib/manager/bin/run

// Collect the addresses of the linked Redis node containers from the
// environment variables that Docker links inject.
var nodes = [];
for (var k in process.env) {
  if (/REDISCLUSTERTEST_NODE_(\d+)_PORT_6379_TCP$/.test(k)) {
    nodes.push(process.env[k]);
  }
}
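
For each linked container, Docker injects environment variables of exactly this shape, which is what the regular expression above matches. A matching variable looks roughly like this (the IP address is illustrative):

REDISCLUSTERTEST_NODE_1_PORT_6379_TCP=tcp://172.17.0.2:6379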

The addresses are then passed to the redis-trib.rb command, which creates the cluster before starting the web server:

// lib/manager/bin/run

// Pass the discovered addresses to redis-trib.rb, piping in "yes" so the
// cluster is created non-interactively. redis-trib expects a
// space-separated list of host:port pairs.
var cmd = spawn(
  'bash',
  [
    '-c',
    'echo yes | ../redis-3.0.0-rc1/src/redis-trib.rb create --replicas ' +
      replicas + ' ' + nodes.join(' ')
  ]
);
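
With six nodes and one replica per master, the spawned command effectively expands to something like the following (the addresses are illustrative and assume the tcp:// prefix from the link variables has already been stripped):

echo yes | ../redis-3.0.0-rc1/src/redis-trib.rb create --replicas 1 172.17.0.2:6379 172.17.0.3:6379 172.17.0.4:6379 172.17.0.5:6379 172.17.0.6:6379 172.17.0.7:6379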

Deploying the application

Our instance is now ready to be managed using the Tutum CLI or the official APIs. Before deploying, we need to add a deployment tag to our node so that we can target our deployments at that specific node. This can be done easily on the node’s detail page:

[Screenshot: adding a deploy tag to the node]
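
Tagging can also be done from the Tutum CLI; the exact syntax may vary between CLI versions, but it is roughly the following (the UUID is your node’s):

tutum tag add -t gce [node-uuid]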

Fig is a great utility for local development that can also be used to run services in production. However, Tutum provides an easier way to manage services on multiple nodes and clusters in production. We will be using tutum-deploy, which provides functionality similar to Fig’s. It requires a configuration file, similar to Fig’s, that expresses the expected state of the clusters or nodes we wish to deploy to:

---
nodes:
  - uuid: "{{REDIS_CLUSTER_TEST_NODE_UUID}}"

services:
  - name: redis-cluster-node
    image: "tutum.co/{{USER}}/redis-cluster-node"
    build: lib/node
    containers: 6
    tags:
      - gce

  - name: redis-cluster-manager
    image: "tutum.co/{{USER}}/redis-cluster-manager"
    build: lib/manager
    containers: 1
    ports:
      - inner_port: 5000
        outer_port: 80
        port_name: http
        protocol: tcp
        published: true
    env:
      PORT: 5000
      REPLICAS: 1
    require:
      - redis
    tags:
      - gce
In order to deploy the demo application to GCE, we first need to log in to our private Docker registry with:

docker login tutum.co

Once authenticated, we proceed to set the following 3 environment variables:

  • TUTUM_USER
  • TUTUM_APIKEY
  • TUTUM_REDIS_CLUSTER_TEST_NODE_UUID (the custom node’s UUID)
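
For example (the values are placeholders; the API key comes from your Tutum account page and the UUID from the node’s detail page):

export TUTUM_USER=[username]
export TUTUM_APIKEY=[apikey]
export TUTUM_REDIS_CLUSTER_TEST_NODE_UUID=[node-uuid]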

The configuration declares that we wish to deploy the 2 services to a single custom node. To deploy the stack, assuming you’ve already installed tutum-deploy, simply run:

make deploy

The above command builds the images, pushes them to the registry, and finally deploys the services to the custom node. You should then be able to access the application via the static IP we assigned earlier.

To verify that the application is indeed using the Redis cluster, head over to the manager container and find one of the master nodes listed in its logs. Deleting a single master should not bring the cluster down, since Redis Cluster will automatically promote the corresponding slave to be the new master. Deleting more nodes, however, will eventually cause the cache service to fail; redeploying the app should recover the lost nodes and therefore the service. Note that the number of masters or slaves can be increased by updating the manager service’s REPLICAS environment variable and the node service’s containers attribute.
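
One convenient way to watch the cluster state while experimenting is to run redis-cli against one of the node containers from the SSH session. The container name below is only an example (use docker ps to find the real one), and this assumes the node image ships with redis-cli:

docker ps
docker exec -it [node-container] redis-cli cluster nodes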

Conclusion

This post was meant to show how Tutum’s services can be a great complement to Fig when managing large numbers of production nodes.
