Hosting a Ghost blog with NGINX and Docker (Part 1)

Part 1: Docker and NGINX

After reading Jake Hamilton's interesting post on hosting a Ghost blog with Docker, I thought it would be good to share my own Ghost + Docker blog setup.

This will be a two-part post: this one focuses on automated NGINX setup, and the next (much shorter) post will cover integrating Ghost.

This post was written for Docker Compose version 2. Version 3 removes some options included below. An updated post will follow once I have tested it.

Introduction

This post assumes a basic understanding of Docker and how to work with Docker containers.

If you haven't worked with Docker much, make sure to check out Codeship's guide to the Docker ecosystem.

I use this same basic setup to run a whole suite of Docker-powered apps from a small hosted VPS (I'm very cheap), including ownCloud, this blog, my website(s), OpenVPN, and a few other tools and apps. It combines self-built scripts with a few third-party images for convenience.

You'll need to have Docker and Docker Compose installed to emulate this setup!

Before we start

Make sure you have Docker and Docker Compose installed and available on the PATH. Since this is how I use it myself, the examples below will mostly work out of an /apps directory that I create on all my app servers.
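If you want to mirror that layout, something like the following will get you started (the /apps/web path is just my convention; adjust it to taste):

# check the tools are available
docker --version
docker-compose --version

# create a working directory for the proxy setup
sudo mkdir -p /apps/web
cd /apps/web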

Setting up NGINX

Now, you can manually create your own NGINX sites and configurations to proxy (and this is how Jake did it), but there is another, easier option: Use Jason Wilder's awesome nginx-proxy image. This simple image (explained in full in this blog post) uses Docker's events and inspection APIs to generate new reverse proxy configurations for NGINX whenever a Docker container is started. Basically, you just need to start the nginx-proxy container first, then any containers started with a VIRTUAL_HOST environment variable will be reverse proxied through the running NGINX container.
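For reference, the standalone docker run equivalent from the nginx-proxy README looks roughly like this; below we'll achieve the same thing (plus SSL) with Compose instead:

# run the proxy on its own, watching the Docker socket for new containers
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy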

But what about SSL?

Ah, but of course you were going to ask that, because it's 2017 and you're using SSL for all of your sites now, right?! (if not, you should be!)

To fit in with our automated approach to proxy sites, it makes sense to use Let's Encrypt, the free and automated certificate authority. Fortunately for us, there is a "companion" container available for nginx-proxy to enable automated retrieval of SSL certificates!

The docker-letsencrypt-nginx-proxy-companion container (from Yves Blusseau/JrCs) connects directly to the running NGINX proxy container and adds SSL certificates to generated sites.

Introducing Compose to the mix

Okay, so we now have a few more moving parts and are clearly angling towards full automation, so let's set up Docker Compose to simplify running and managing our containers.

Our compose file is going to define two services: proxy for the NGINX reverse proxy and ssl-companion for the Let's Encrypt companion. This way, we can define the dependency between them and bring both up and down together easily.

While you can do these steps in a text editor directly on the server, I strongly recommend using an editor with YAML support locally and then copying the file to your server, even if you don't have Docker installed locally.

So first, the very basics, let's define our nginx-proxy container in a docker-compose.yml file:

version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

If you were to run docker-compose up now, you would get a new container (named nginx-proxy) built, created and started. The container would have NGINX listening on ports 80 and 443, but with no sites to proxy. We also don't have SSL support yet, so let's add that to the same docker-compose.yml file:

To enable docker-letsencrypt-nginx-proxy-companion you need to add a few volume mappings. Check the repo for more info.

version: '2'
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro #same as before
      - /etc/nginx/vhost.d # to update vhost configuration
      - /usr/share/nginx/html # to write challenge files
      - /apps/web/ssl:/etc/nginx/certs:ro # update this to change cert location
  ssl-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: ssl-companion
    volumes:
      - /apps/web/ssl:/etc/nginx/certs:rw # same path as above, now RW
      - /var/run/docker.sock:/var/run/docker.sock:ro # the companion watches Docker events too
    volumes_from:
      - proxy
    depends_on:
      - proxy

Thanks to tiby for noticing an error in the volumes options above. Should work better now!

That may seem like a lot, but there's not much to it:

  • The proxy service is largely the same, but now with a couple of extra mappings to cooperate with the SSL companion
  • The SSL companion uses the docker-letsencrypt-nginx-proxy-companion image.
  • Using the volumes_from option means that ssl-companion will mount all the volumes from proxy
  • Using the depends_on option will only start ssl-companion after proxy has started (when running docker-compose up)

If you haven't already, run docker-compose up from the same directory and watch the messages as your containers are built and created. You can use docker-compose ps to check on the current containers (useful if docker ps includes a lot of unrelated containers).

If you've already run docker-compose up, you can use docker-compose up --force-recreate to force re-creating a running service.
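For reference, the day-to-day commands look something like this, run from the directory containing docker-compose.yml (the -d flag keeps the containers running in the background):

docker-compose up -d                   # create and start both services in the background
docker-compose ps                      # list only the containers from this compose file
docker-compose up -d --force-recreate  # recreate the services after editing the file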

Running sites

Now, with that container running in the background, any containers you start with the VIRTUAL_HOST environment variable defined will get added to NGINX as a new host to proxy. When running from docker run, you can specify environment variables using the -e option:

docker run -e "VIRTUAL_HOST=foo.bar.com" <image>

Or when using Compose, you can include the environment node:

services:
  service_name:
    environment:
      - VIRTUAL_HOST=foo.bar.com

The nginx-proxy container proxies to the port your container EXPOSEs if there's only one, and defaults to port 80 otherwise. You can also set the VIRTUAL_PORT environment variable to choose the port explicitly.

Enabling SSL

However, setting VIRTUAL_HOST will only get you an HTTP proxy entry, with no SSL certificates. To also enable certificate generation for your container, you need to include two more environment variables: LETSENCRYPT_HOST and LETSENCRYPT_EMAIL.

  • LETSENCRYPT_HOST will generally be the same as VIRTUAL_HOST and will be the host to generate a certificate for
  • LETSENCRYPT_EMAIL is sent to Let's Encrypt as the contact for the domain, and must be defined.

If you also set these two variables, again either with the -e option or the environment node, the ssl-companion service we defined earlier will retrieve an SSL certificate from Let's Encrypt and inject it into the reverse proxy entry for the site in the proxy service.
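Putting that together, a proxied, SSL-enabled service in the same compose file might look something like this (the service name, image and domains are placeholders; we'll wire up Ghost properly in the next post):

services:
  # ...proxy and ssl-companion as defined above...
  blog:
    image: ghost # placeholder; any web container is handled the same way
    environment:
      - VIRTUAL_HOST=blog.example.com
      - VIRTUAL_PORT=2368 # only needed if the image exposes more than one port
      - LETSENCRYPT_HOST=blog.example.com # usually the same as VIRTUAL_HOST
      - LETSENCRYPT_EMAIL=you@example.com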

Taking Stock

So what do we have now? So far, with about 20 lines of YAML, we have a completely automated, SSL-enabled, NGINX-powered solution for serving any number of virtual hosts from Docker containers on a single host!

Check back in the next post (in around 2 days' time) to see how we integrate Ghost blogging (and other services) into this solution!
