Simple blue-green deployments with Docker and Nginx

Last week I attended a talk about blue-green deployment with Jenkins and Docker, hosted by the AWS Barcelona Meetup. The speakers presented a simple approach using HAProxy and two Docker images, where the whole deployment process was managed by a Jenkins instance with a job triggered after a git push. AWS infrastructure was not used extensively: the EC2 Container Registry (ECR) was the only AWS-specific component involved.

I wanted to investigate whether the EC2 Container Service (ECS) could make the deployment task easier. To my surprise, it really didn’t. I spent some hours trying to set up the service, creating IAM components, tagging and pushing images, creating clusters… but I found the whole system really awkward. I was unable to find the images I had pushed, the console was not very informative and the documentation didn’t help much either.

So I dropped it. Containers are supposed to ease DevOps tasks, but I believe the poor job some platforms like AWS do of fitting the container ecosystem into their mammoth platforms can make a lot of people stay away from containers and keep managing code deployments the old way, overwhelmed by the excess of bells and whistles they have to learn to use.

Inspired by another talk I attended about containers and Kubernetes, I decided to try Google Cloud Platform. To be honest, it shines a lot more than its AWS equivalent. The Kubernetes documentation is much better maintained and clearer, looking like something you could really use in production. In theory you can use Kubernetes with any provider, but in practice it works better with Google Cloud, to the point that the basic tutorial asks you to use Google Cloud. Setting up a Kubernetes cluster on your own infrastructure by hand is certainly not trivial.

While the Kubernetes marketing promises a lot and the platform certainly looks awesome, it still seemed rather complex, and I wanted to try out a simpler solution for blue-green deployment.

Simple blue-green deployment with Docker and Nginx

First, for those unfamiliar with what a blue-green deployment is, it basically consists of the following. We have a web application constantly serving live requests and we want to roll out an update. Downtime is very expensive for us and we want to avoid it at all costs, so stopping the current application, updating the code and starting it up again is a no-go because some requests might be dropped during the switch. What we do instead is keep two instances of the application (Blue and Green): one serving live requests and a second one on hold, which will be updated on deploy. Now imagine Blue is live and we want to deploy a new version. We update the code of the Green instance and restart it. After checking that everything is OK with Green, we switch new incoming requests to the Green instance, putting the Blue one on hold. This way we don’t drop any requests.

In our setup, we will use Nginx to route requests to two application instances running as Docker containers on two different ports: 8080 and 8081. The containerized applications are really dumb Bottle apps, for simplicity’s sake.

Get the repo with all config files mentioned here from GitHub.

We start from an Ubuntu 16.04 droplet in Digital Ocean. I chose Digital Ocean because it is cheap, simple and fast to spin up, but any other provider will work, including a local Vagrant box.

Install nginx and docker:

apt install nginx docker.io

Disable the default nginx site, which we don’t need:

rm /etc/nginx/sites-enabled/default

Clone the bottle app repo:

git clone https://github.com/dukebody/docker-blue-green bottle

It contains a very simple Dockerfile that will install the app requirements (bottle and gunicorn) and will make gunicorn run the app with two workers, listening on port 8080 in the container.

FROM python:3-onbuild
EXPOSE 8080
CMD [ "/usr/local/bin/gunicorn", "-w", "2", "-b", ":8080", "app:app" ]

The bottle app itself is extremely simple as well. When the root “/” path is requested, it returns a hello world message.

from bottle import route, run, default_app

@route('/')
def index():
    return '<b>Hello world!</b>!'

app = default_app()
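If you want to sanity-check the app outside Docker (assuming you have bottle and gunicorn installed locally), you can run the same command the Dockerfile uses, from the repo directory:

gunicorn -w 2 -b :8080 app:app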

Add the bottle-a and bottle-b config files to nginx’s available sites (/etc/nginx/sites-available/). This is where the routing magic happens. Both files are very similar, with one difference: in bottle-a, the app served under “/” is bottle-a, listening on port 8080, while in bottle-b it is bottle-b, listening on port 8081. In both cases, the other app is served under “/stage/”, which will allow us to smoke test the newly deployed application before making it go live.

# /etc/nginx/sites-available/bottle-a
upstream bottle_app_a {
    server localhost:8080 fail_timeout=0;
}
upstream bottle_app_b {
    server localhost:8081 fail_timeout=0;
}

server {
    listen 80;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://bottle_app_a/;
    }

    location /stage/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://bottle_app_b/;
    }
}
# /etc/nginx/sites-available/bottle-b
upstream bottle_app_a {
    server localhost:8080 fail_timeout=0;
}
upstream bottle_app_b {
    server localhost:8081 fail_timeout=0;
}

server {
    listen 80;

    location /stage/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://bottle_app_a/;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://bottle_app_b/;
    }
}

Build the current version of the app, tagging the image as “bottle”. It might take some time to download all the python:3-onbuild Docker layers.

docker build -t bottle .

Run a container from the image we just built, naming it “bottle-a” and publishing port 8080:

docker run -d -p 8080:8080 --name bottle-a bottle
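You can quickly check that the container is up by hitting gunicorn directly on its port, bypassing nginx; it should return the hello world message:

curl http://localhost:8080/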

Enable the “bottle-a” site as the live one and reload nginx:

ln -s /etc/nginx/sites-available/bottle-a /etc/nginx/sites-enabled/bottle-a
service nginx reload
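As a side note, nginx can validate its own configuration, which is handy to run before any reload:

nginx -t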

Now if you point your browser to the IP of your droplet, you should see the “Hello world!” message. Nice!
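You can also check both routes with curl. With only bottle-a running, “/” is served by the live container, while “/stage/” points at the not-yet-started bottle-b, so nginx should answer with an error (typically a 502) until we deploy it:

curl http://YOUR_DROPLET_IP/
curl http://YOUR_DROPLET_IP/stage/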

That was the initial bootstrapping. To perform the blue-green deployments, we use a Fabric fabfile. We could do this with a bash script on the target host, but Python is easier to understand. :)

from time import sleep

from fabric.api import run, cd, task, env

env.repo_path = '/root/bottle'

def _git_update(branch):
    with cd(env.repo_path):
        run('git fetch --all')
        run('git checkout {}'.format(branch))
        run('git reset --hard origin/{}'.format(branch))

@task
def build_docker_image():
    with cd(env.repo_path):
        run('docker build -t bottle .')

@task
def switch_color():
    old_color = run('ls /etc/nginx/sites-enabled | grep bottle-')
    if old_color == 'bottle-a':
        new_color = 'bottle-b'
        new_port = '8081'
    else:  # old_color == 'bottle-b'
        new_color = 'bottle-a'
        new_port = '8080'

    run('docker run -d -p {new_port}:8080 --name {new_color} bottle'.format(
        new_port=new_port, new_color=new_color))

    # health check
    # wait 1 sec to give time for the container to start
    sleep(1)
    response = run('curl -L http://localhost/stage/')

    if 'Hello world' in response:
        run('rm /etc/nginx/sites-enabled/{}'.format(old_color))
        run('ln -s /etc/nginx/sites-available/{new_color} '
            '/etc/nginx/sites-enabled/{new_color}'.format(
            new_color=new_color))
        run('service nginx reload')

        run('docker stop {}'.format(old_color))
        run('docker rm {}'.format(old_color))
    else:
        run('docker kill {}'.format(new_color))
        run('docker rm {}'.format(new_color))

@task
def deploy(branch="master"):
    _git_update(branch)
    build_docker_image()
    switch_color()

It is pretty straightforward:

  1. Update the app repo to the latest version using git.
  2. Build a new Docker image with the latest version.
  3. Check which version of the app is live, which is the name of the file present in /etc/nginx/sites-enabled/. It should be “bottle-a”, according to where we left it.
  4. Run a new Docker container using the image previously built, with the opposite name to the live instance and the appropriate port. In the example above, we would run a container named “bottle-b” listening on 8081.
  5. Perform a smoke health check using curl on /stage/, checking whether the string “Hello world” is contained in the response.
    1. If it is, swap the config file present in /etc/nginx/sites-enabled/, reload nginx, and stop and remove the old container. This makes the new container start serving the new incoming requests. Note that the “docker stop” command sends a SIGTERM signal to the old container, allowing it to finish serving its requests for 10 seconds before SIGKILLing it (you can extend this timeout if you have longer-running requests; see the example after this list).
    2. If it isn’t, simply stop and remove the new container, leaving the live app unharmed.

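For instance, to give the old container up to 30 seconds (an arbitrary value, for illustration) to finish its in-flight requests before being killed, the stop line in the fabfile could become:

run('docker stop --time 30 {}'.format(old_color))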
To run this fab task from your local system:

fab -H remote_user@remote_host deploy

Where “remote_user” and “remote_host” depend on your target machine. You will see some commands running on the remote host. When Fabric is done, you should have the latest version of the application running as “bottle-b”.

To see how everything is handled gracefully when trying to deploy a broken app, you can run:

fab -H remote_user@remote_host deploy:branch=broken

The health check will fail and the broken app will not be deployed live. Nice!

Future work

This was a purposely simple system to perform blue-green deployments while minimizing the number of new concepts and tools you have to learn and use, but there are quite a few rough edges to polish:

  • The image is built on the target system from the application git repo. Ideally we should build the image locally and push it to a private Docker image registry (there are many options for this), so we can pull it later from the target system. Having to build an image on a live system might consume valuable resources that could otherwise be used to serve the real app.
  • If we want to go back to a previous image, it is not easy to see which image corresponds to which version of the app in the git repository. A solution might be to tag the built images with the git commit hash (see the sketch after this list).
  • The Fabric script above is pretty dumb. If something fails in the middle, the system might be left in an inconsistent state and we will have to fix it manually before being able to deploy automatically again.
  • The setup above only works for a single container. If we have to manage multiple containers in our stack, we could use docker-compose with two yml files.
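As a rough sketch of the tagging idea mentioned above (hypothetical, not part of the current fabfile), the build task could additionally tag each image with the commit hash it was built from, so rolling back becomes a matter of running a container from the right tag:

@task
def build_docker_image():
    with cd(env.repo_path):
        run('docker build -t bottle .')
        # also tag the freshly built image with the current commit hash,
        # so we can later map images back to git versions and roll back
        commit = run('git rev-parse --short HEAD')
        run('docker tag bottle bottle:{}'.format(commit))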
