Part 1: Docker and Nginx
After reading Jake Hamilton's interesting post on hosting a Ghost blog with Docker, I thought it would be good to share my own Ghost + Docker blog setup.
This will be a two-part post: this one focuses on automated NGINX setup, while the next (much shorter) post will cover integrating Ghost.
This post was written for Docker Compose version 2. Version 3 removes some options included below. An updated post will follow once I have tested it.
This post assumes a basic understanding of Docker and how to work with Docker containers. If you haven't worked with Docker much, make sure to check out Codeship's guide to the Docker ecosystem.
I use this same basic setup to run a whole suite of Docker-powered apps from a small hosted VPS (I'm very cheap), including ownCloud, this blog, my website(s), OpenVPN, and a few other tools and apps. It combines self-built scripts with a few third-party images for convenience.
You'll need to have Docker and Docker Compose installed to emulate this setup!
Before we start
Make sure you have Docker and Docker Compose installed and available on the PATH. Since this is how I use it myself, the examples below will mostly be working out of an /apps directory I create on all my app servers.
Setting up NGINX
Now, you can manually create your own NGINX sites and configurations to proxy (and this is how Jake did it), but there is another, easier option: use Jason Wilder's awesome `nginx-proxy` image. This simple image (explained in full in this blog post) uses Docker's events and inspection APIs to generate new reverse proxy configurations for NGINX whenever a Docker container is started. Basically, you just need to start the `nginx-proxy` container first; then any containers started with a `VIRTUAL_HOST` environment variable will be reverse proxied through the running NGINX container.
But what about SSL?
Ah, but of course you were going to ask that, because it's 2017 and you're using SSL for all of your sites now, right?! (if not, you should be!)
To fit in with our automated approach to proxy sites, it makes sense to use Let's Encrypt, the free and automated certificate authority. Fortunately for us, there is a "companion" container available for `nginx-proxy` to enable automated retrieval of SSL certificates!
Introducing Compose to the mix
Okay, so we now have a few more moving parts and are clearly angling towards full automation, so let's set up Docker Compose to simplify running and managing our containers.
Our compose file is going to define two services: `proxy` for the NGINX reverse proxy and `ssl-companion` for the Let's Encrypt companion. This way, we can easily define the dependency and bring both up and down together.
While you can do these steps with a text editor directly on the server, I strongly recommend using a YAML-aware editor locally and then copying the file to your server, even if you don't have Docker locally.
So first, the very basics: let's define our `nginx-proxy` container in a `docker-compose.yml` file.
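Here is a minimal sketch of what that file might look like (the `jwilder/nginx-proxy` image name comes from the project's repo; the `restart` policy and quoting style are my own assumptions):

```yaml
version: '2'

services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # nginx-proxy watches Docker's events API through this socket
      - /var/run/docker.sock:/tmp/docker.sock:ro
```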
If you were to run `docker-compose up` now, you would get a new container (named `nginx-proxy`) built, created and started. The container would have NGINX listening on ports 80 and 443, but with no sites to proxy for. We also don't have SSL support, so let's add that to the same Compose file. To use `docker-letsencrypt-nginx-proxy-companion`, you need to add a few volume mappings. Check the repo for more info.
```yaml
- /var/run/docker.sock:/tmp/docker.sock:ro # same as before
- /etc/nginx/vhost.d # to update vhost configuration
- /usr/share/nginx/html # to write challenge files
- /apps/web/ssl:/etc/nginx/certs:ro # update this to change cert location
- /apps/web/ssl:/etc/nginx/certs:rw # same path as above, now RW
```
Thanks to tiby for noticing an error in the volumes options above. Should work better now!
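For reference, the combined file ends up looking roughly like this (a sketch: the image names come from the respective repos, and `/apps/web/ssl` matches the certificate path used in the mappings above — adjust to taste):

```yaml
version: '2'

services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /etc/nginx/vhost.d        # shared so the companion can update vhost config
      - /usr/share/nginx/html     # shared so the companion can write challenge files
      - /apps/web/ssl:/etc/nginx/certs:ro

  ssl-companion:
    image: jrcs/letsencrypt-nginx-proxy-companion
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /apps/web/ssl:/etc/nginx/certs:rw
    volumes_from:
      - proxy
    depends_on:
      - proxy
```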
That may seem like a lot, but there's not much to it:

- The `proxy` service is largely the same, but now with a couple of extra mappings to cooperate with the SSL companion
- The SSL companion uses the Docker socket to watch for containers that need certificates
- Using the `volumes_from` option means that `ssl-companion` will mount all the volumes from `proxy`
- Using the `depends_on` option means `ssl-companion` will only start once `proxy` has started (when running `docker-compose up`)
If you haven't already, run `docker-compose up` from the same directory and watch the messages as your containers are built and created. You can use `docker-compose ps` to check on the current containers (useful if `docker ps` includes a lot of unrelated containers). If you've already run `docker-compose up`, you can use `docker-compose up --force-recreate` to force re-creating a running service.
Now, with that container running in the background, any containers you start with the `VIRTUAL_HOST` environment variable defined will get added to NGINX as a new host to proxy. When running from `docker run`, you can specify environment variables using the `-e` flag:
```
docker run -e "VIRTUAL_HOST=foo.bar.com"
```
Or when using Compose, you can include the `environment` node in your service definition. The `nginx-proxy` container defaults to proxying to port 80, or (if there's only one) the port your container `EXPOSE`s. You can also set `VIRTUAL_PORT` to set this explicitly.
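For example, a Compose service entry for an app listening on port 2368 inside its container might look like this (the `blog` name, image and hostname here are purely illustrative):

```yaml
services:
  blog:
    image: ghost
    environment:
      - VIRTUAL_HOST=foo.bar.com
      - VIRTUAL_PORT=2368   # the port the app listens on inside the container
```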
`VIRTUAL_HOST` will only get you an HTTP proxy entry, no SSL certificates. To also enable certificate generation for your container, you need to include two more environment variables:

- `LETSENCRYPT_HOST` will generally be the same as `VIRTUAL_HOST`, and is the host to generate a certificate for
- `LETSENCRYPT_EMAIL` is sent to Let's Encrypt as the contact for the domain, and must be defined
If you also set these two variables, again either on the command line or in the `environment` node, the `ssl-companion` service we defined earlier will retrieve an SSL certificate from Let's Encrypt and inject it into the reverse proxy entry for the site in the NGINX configuration.
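So a fully SSL-enabled service just needs all three variables set; the hostnames and email below are placeholders:

```yaml
    environment:
      - VIRTUAL_HOST=foo.bar.com
      - LETSENCRYPT_HOST=foo.bar.com     # usually identical to VIRTUAL_HOST
      - LETSENCRYPT_EMAIL=you@example.com
```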
So what do we have now? So far, with about 20 lines of YAML, we have a completely automated, SSL-enabled, NGINX-powered solution for serving any number of virtual hosts from Docker containers on a single host!
Check back in the next post (in around 2 days' time) to see how we integrate Ghost blogging (and other services) into this solution!