Abstract: In this post I’ll describe how to create a private cloud with Docker, Varnish and a lot of shell scripting on a set of private (virtual) servers. This post assumes intermediate knowledge of the related techniques. I will still go into setup details once in a while because, well, this is also an exercise for me 🙂
Overview
Despite not having much to show for myself when it comes to operating IT infrastructure, almost all my projects require bits of operations knowledge here and there; in a desperate attempt to fill my last holidays’ hot noons with productive intent, and also to delve a little deeper into operations, I set out to conquer the holy grail of ops: creating my own, private cloud.
My current job at AMOS is about PaaS and, all benefits regarding efficiency, performance and availability put aside, I value the substantial simplification of cloud resource provisioning as opposed to ordering resources from an internal department. Whereas requesting a new (virtual) server or extra storage space can easily become an exercise in patience in large enterprises, cloud infrastructure provisions virtual resources in minutes, with added benefits such as isolation for extra security. Thus, the main goal of this exercise is to develop a platform that simplifies container allocation to the greatest possible extent.
Here’s an all-in-one picture of the cloud’s ingredients and how network traffic is routed between them:

Virtualbox for providing a set of virtual servers and isolating networks. This is just a convenience since I don’t want to set up a real data centre with real hardware. I could be using real and/or remote virtual servers, but I often prefer working offline (for instance, during my holiday when I wrote this post) and having local VMs to play with comes in quite handy. There is a single master VM which routes external network traffic to the node VMs and manages deployment tasks of applications on node VMs. Node VMs run applications and connect to the manager VM via a private, internal network.
Docker will be used for packaging applications and running them in containers on the node VMs. I was toying with the thought of using QEMU instead because it would allow running entire operating systems in the cloud rather than Linux containers, but packaging applications for QEMU might have been a bit more of a challenge than packaging Docker images. The master VM will run a Docker registry which will store all application images that run somewhere on a node VM.
Dnsmasq for providing node VMs with an IP and making them accessible under DNS names.
Squid for routing network traffic from the virtual machines to the outside world. For the scope of this exercise it wasn’t terribly useful because there isn’t a universal Linux standard for telling applications to use an HTTP proxy; in short, having node VMs use Squid was messy and I resorted to connecting the node VMs directly to the internet for the few times I needed to download something to them.
Varnish for routing HTTP traffic from the outside world to applications running in the various VMs. It’s important to get version 4 because of its ability to declare backend groups programmatically; version 3 is the default in Ubuntu 14.04.
Creating the virtual machines
In this cloud there will be two types of VMs: managers and nodes. In particular, there will be one active manager that runs Varnish, Docker and the less important Squid, and several nodes that receive work from the manager.
Setting up the manager
Let’s first set up the manager: create a new virtual machine with Virtualbox, give it a NATed Ethernet interface for talking to the internet (quite useful for installing all the software we’ll need) and an internal network interface for talking to the nodes, which we’ll set up later.
We’ll download an Ubuntu 14.04 server ISO image and install it into the manager VM. We’ll call the host “matrix-manager”. Then we map a local port on the host (e.g. 2222) to the SSH port 22 of the manager VM:
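If you prefer the command line over the Virtualbox GUI, here is a sketch of the port forwarding rule with VBoxManage, assuming the VM is named “matrix-manager”, its NAT adapter is adapter 1 and the VM is currently powered off (“cloud” being whatever account you created during the Ubuntu installation):
# forward host port 2222 to guest port 22 on NAT adapter 1
VBoxManage modifyvm "matrix-manager" --natpf1 "guestssh,tcp,,2222,,22"
# then, from the host:
ssh -p 2222 cloud@localhost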
We’ll probably need more ports (like HTTP and HTTPS) later on, but for now SSH shall suffice.
Let’s update Ubuntu (apt-get upgrade, you know the drill) and give the internal Ethernet card a fixed IP address:
/etc/network/interfaces:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.10.10.1
netmask 255.255.255.0
The internal network will be the private 10.10.10.x subnet with the domain name “matrix”. Any nodes running in that subnet won’t be able to see the internet, yet.
An important setting in /etc/hosts: we’ll need to convince Ubuntu not to associate “matrix-manager” with the loopback network, because Dnsmasq would otherwise advertise “matrix-manager” as 127.0.0.1, creating quite a bit of confusion among the node VMs (which we haven’t set up yet). No, this is not clever foresight, it is super-human hindsight, as I’m writing this post after having stumbled and bruised myself. Unfortunately I haven’t found a way to have dnsmasq automatically generate host names, so I had to hardcode them:
/etc/hosts:
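A sketch of what this file could look like; the node names and addresses below are assumptions and should match whatever you want dnsmasq to hand out:
127.0.0.1       localhost
# matrix-manager must resolve to its internal address, not to a loopback address
10.10.10.1      matrix-manager.matrix matrix-manager
# hardcoded node names; dnsmasq serves these to the node VMs as DNS entries
10.10.10.101    node101.matrix node101
10.10.10.102    node102.matrix node102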
Now let’s install and configure dnsmasq:
/etc/dnsmasq.conf:
interface=eth1
domain=matrix,10.10.10.0/24
dhcp-range=10.10.10.2,10.10.10.255,255.255.255.0,12h
enable-ra
log-queries
log-dhcp
The trick here is that we’ll use Dnsmasq as both a DNS and DHCP server.
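After editing the configuration, dnsmasq needs a restart (as root); a quick sanity check with dig (from the dnsutils package) could then look like this, assuming the matrix-manager entry from the /etc/hosts sketch above:
service dnsmasq restart
# query dnsmasq directly on the internal address; this should print 10.10.10.1
dig +short @10.10.10.1 matrix-manager.matrix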
Now let’s create a local SSH key:
ssh-keygen
We’ll later copy this key to the node VMs.
Now we’ll install our own, private Docker registry on matrix-manager, mostly following this tutorial. We won’t go as far as working with security certificates, so we’ll run an insecure registry:
/etc/default/docker:
DOCKER_OPTS="--insecure-registry matrix-manager.matrix:5000"
The idea is that we’ll tag and push any image first to the matrix-manager registry and then have the node VMs pull those images from the registry, similar to a Maven Nexus repository.
docker run -d -p 5000:5000 --restart=always --name registry registry:2
starts the private Docker registry on the master VM and will make sure it always runs when the VM is rebooted.
Setting up the node VMs
In short: set up another virtual machine and give it a single Ethernet network card which maps to an internal network. The idea is that node VMs can’t “break out” of the cloud; any network communication with either the intranet or the internet happens through proxies installed on the manager VM. Users can access web applications running as Docker containers on the node VMs through a Varnish proxy running on the master VM.
After installing Ubuntu on the node VM (let’s call it “node1”), briefly give it internet access for updates and to install Docker, then switch the Ethernet card back to the internal network. Tell Docker where (and how) to find the Docker registry for our own images on the matrix-manager VM:
/etc/default/docker:
DOCKER_OPTS="--insecure-registry matrix-manager.matrix:5000"
Now let’s copy our SSH key over to the node VM: ssh-copy-id cloud@node1.matrix, where “cloud” is my local Ubuntu account; yours is probably named differently. Also, I’d like the node VM to get its host name from DHCP. This script will do it (with some minor modifications): https://nullcore.wordpress.com/2011/12/09/setting-the-system-hostname-from-dhcp-in-ubuntu-11-10/
#!/bin/sh
# Filename: /etc/dhcp/dhclient-exit-hooks.d/hostname
# Purpose:  Used by dhclient-script to set the hostname of the system
#           to match the DNS information for the host as provided by DHCP.
#
# Do not update hostname for virtual machine IP assignments
if [ "$interface" != "eth0" ] && [ "$interface" != "wlan0" ]
then
    return
fi
if [ "$reason" != "BOUND" ] && [ "$reason" != "RENEW" ] \
   && [ "$reason" != "REBIND" ] && [ "$reason" != "REBOOT" ]
then
    return
fi
echo "dhclient-exit-hooks.d/hostname: Dynamic IP address = $new_ip_address"
hostname=$(host "$new_ip_address" | cut -d ' ' -f 5 | cut -d '.' -f 1)
echo "$hostname" > /etc/hostname
hostname "$hostname"
echo "dhclient-exit-hooks.d/hostname: Dynamic Hostname = $hostname"
So far there is nothing else to do for the node VM. Shut it down and clone it a few times (e.g. node2, node3), making sure each clone gets a new MAC address (see the sketch below). You should be able to start them without any problems; note that each node VM gets a unique network name and host name (e.g. node107, node202, etc.). On the matrix-manager VM you can also check all known nodes: cat /var/lib/misc/dnsmasq.leases
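Cloning works through the Virtualbox GUI (tick “reinitialize MAC addresses”) or from the shell; a sketch with VBoxManage, assuming the first node VM is named “node1”:
# clone node1 twice; by default clonevm generates fresh MAC addresses for the clones
VBoxManage clonevm "node1" --name "node2" --register
VBoxManage clonevm "node1" --name "node3" --register
# boot the clones without a GUI window
VBoxManage startvm "node2" --type headless
VBoxManage startvm "node3" --type headless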
Milestone 1: a first test
Let’s see whether we can do this very basic thing: install a public Docker image in the manager’s registry and then have a node pull it from there. SSH to the manager, then:
docker pull hello-world && docker tag hello-world matrix-manager.matrix:5000/cloud/hello-world
docker push matrix-manager.matrix:5000/cloud/hello-world
Then ssh to a node, like node1.matrix and:
docker run matrix-manager.matrix:5000/cloud/hello-world
This should run a “hello-world” instance which you can verify by running:
docker ps -a
CONTAINER ID   IMAGE                                     COMMAND    CREATED         STATUS                     PORTS   NAMES
8721f9846d03   matrix-manager.matrix:5000/hello-world    "/hello"   9 seconds ago   Exited (0) 8 seconds ago           tender_hopper
Milestone 2: Pushing images to nodes with scripts
So far nothing of what we did is really cloud-like: there are just a bunch of manually set-up virtual machines. Let’s try to automate a deployment process with these goals:
1. As a developer, I’d like to install web applications in the matrix.
2. As a tester, I’d like to be able to use web applications in the matrix.
Linux shell scripting is quite powerful; after all this time I’m still amazed at how much it can accomplish, and here it will accomplish no less than the full deployment lifecycle:
1. picking up a custom application
2. packaging it as a Docker image
3. pushing it to our private registry
4. logging in to a node VM and running the application image
5. registering the application’s URL with Varnish and routing HTTP traffic to the node VMs
6. all of this with failover and load balancing
The scripts we’ll be discussing below are located in the code repository:
https://github.com/ggeorgovassilis/cloudmatrix/tree/master/home/cloud/bin
The first script, create-image, packages an application into a Docker image. The script would typically be executed on a developer’s workstation, requires a Docker installation on the workstation and will store the application image in the local registry.
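I won’t reproduce the repository scripts verbatim here; instead, a minimal sketch of what create-image might boil down to, assuming (my assumption, not necessarily the repository’s) that each application directory ships its own Dockerfile and that the image name is passed as the second argument:
#!/bin/sh
# Usage: create-image <application-directory> <image-name>
# Builds a Docker image for the application on the developer's workstation.
set -e
APP_DIR="$1"
IMAGE="$2"
docker build -t "$IMAGE" "$APP_DIR"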
The next script, install-image, is executed on the master VM and pulls the application image to our master registry.
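How the image travels from the workstation to the master VM depends on your setup; the following sketch assumes it is already available to the Docker daemon on the master VM (for example via docker save / docker load) and only needs to be tagged for and pushed to the private registry, with the registry host name hardcoded as an assumption:
#!/bin/sh
# Usage: install-image <image-name>
# Tags a locally available image for the private registry and pushes it there.
set -e
IMAGE="$1"
REGISTRY="matrix-manager.matrix:5000"
docker tag "$IMAGE" "$REGISTRY/cloud/$IMAGE"
docker push "$REGISTRY/cloud/$IMAGE"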
The run-container script logs into a node VM via SSH, pulls an application image from the master VM registry and runs it as a container. Since we’re interested in web applications, the script will also bind the container’s HTTP port to a local TCP port through which the application can be reached.
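Again only a sketch, assuming the node account is “cloud”, the container serves HTTP on port 8080 and the desired host port is passed as the third argument:
#!/bin/sh
# Usage: run-container <node> <image-name> <host-port>
# Pulls the image from the private registry on the node and runs it there,
# publishing the container's HTTP port on <host-port>.
set -e
NODE="$1"
IMAGE="$2"
PORT="$3"
REGISTRY="matrix-manager.matrix:5000"
# -n keeps ssh from swallowing stdin (important when called from a loop)
ssh -n "cloud@$NODE" "docker pull $REGISTRY/cloud/$IMAGE && \
  docker run -d --restart=always -p $PORT:8080 --name $IMAGE $REGISTRY/cloud/$IMAGE"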
Another interesting script is deploy-applications-plan, which reads instructions from a file describing which application images to install on which node VMs.
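The real plan format is defined by applications.plan in the repository; for illustration, assuming a hypothetical one-line-per-deployment format of “<node> <image-name> <host-port>” and the scripts being on the PATH (they live in ~/bin in the repository), the script could be as small as:
#!/bin/sh
# Usage: deploy-applications-plan <plan-file>
# Each non-comment line is assumed to read: <node> <image-name> <host-port>
set -e
PLAN="$1"
grep -v '^#' "$PLAN" | while read -r NODE IMAGE PORT
do
    [ -z "$NODE" ] && continue
    run-container "$NODE" "$IMAGE" "$PORT"
done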
Last but not least, update-proxy reads the same application deployment plan and creates a VCL file for Varnish which maps HTTP requests to containers running on specific node VMs.
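The generated VCL is the interesting part. The following sketch shows roughly what update-proxy might emit for one application running on two nodes; the backend names, ports, health probe and URL-prefix routing are my assumptions, not necessarily what the repository script produces:
vcl 4.0;
import directors;

# one backend per (node, application) pair from the deployment plan;
# the probe lets Varnish mark dead nodes as sick and skip them (failover)
backend node1_simplewebapp1 {
    .host = "node1.matrix";
    .port = "8081";
    .probe = { .url = "/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }
}

backend node2_simplewebapp1 {
    .host = "node2.matrix";
    .port = "8081";
    .probe = { .url = "/"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }
}

sub vcl_init {
    # a round-robin director per application provides the load balancing
    new simplewebapp1 = directors.round_robin();
    simplewebapp1.add_backend(node1_simplewebapp1);
    simplewebapp1.add_backend(node2_simplewebapp1);
}

sub vcl_recv {
    if (req.url ~ "^/simplewebapp1") {
        set req.backend_hint = simplewebapp1.backend();
    }
}
The round-robin director only hands out backends it considers healthy, which together with the probes gives us both load balancing and failover; update-proxy would then activate the generated file with varnishadm vcl.load and vcl.use (or simply restart Varnish).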
So let’s get started! The action plan looks like this: deploy to the cloud two Java web applications that output a simple “Hello world”, then access them via a web browser.
Step 2: run the create-image script on a workstation with Docker installed, like this:
create-image simplewebapp1 test-image1
create-image simplewebapp2 test-image2
Step 3: install the images on the master VM registry:
install-image simplewebapp1
install-image simplewebapp2
Step 4: create an application deployment plan (for inspiration, look at applications.plan in the code repository). You must know the node names for this, so maybe run the master VM and a few node VMs first in order to determine their DNS names.
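Purely for orientation (the authoritative format is whatever applications.plan uses), a plan in the hypothetical node/image/port format sketched earlier might read:
# node          image            host-port
node1.matrix    simplewebapp1    8081
node2.matrix    simplewebapp1    8081
node2.matrix    simplewebapp2    8082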
Step 5: run the deploy-applications-plan script which will execute the application plan on node VMs.
Step 6: run the update-proxy script which will have Varnish map application URLs to the respective nodes.
Step 7: map a convenient TCP port via Virtualbox networking to Varnish’s port 6081. For this example, I’ll use a 1:1 mapping and stay with 6081 (see the forwarding rule sketched below).
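As with the SSH port earlier, a sketch of the rule, assuming the manager VM’s NAT adapter is adapter 1:
# with the VM powered off:
VBoxManage modifyvm "matrix-manager" --natpf1 "varnish,tcp,,6081,,6081"
# or, while the VM is running:
VBoxManage controlvm "matrix-manager" natpf1 "varnish,tcp,,6081,,6081"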
Step 8: open the web applications in a browser through the forwarded port (http://localhost:6081 in this example, with whatever URLs update-proxy mapped to them) and verify that they respond.
Step 9: verify that failover works. Shut down one of the node VMs via the Virtualbox control panel and verify that Step 8 still works.