1 min read

Ghost Part 2

My god, that was a drama setting up a hardened install of Ghost. This was supposed to be easy, but it has proved to be anything but. The issues were mostly network-related. The architecture was to have nginx act as a proxy to the outside world with TLS/SSL configured, which also handled redirecting visitors so the site always lands on the HTTPS version. The idea was to only expose the proxy externally and keep Ghost, the cert service, and the backend database on the internal network. In principle this is a great idea, but it caused one of many problems that have taken some time to straighten out.
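A rough sketch of the proxy arrangement described above, assuming Ghost runs under its default port 2368 and a container name of `ghost` (the domain and certificate paths are placeholders, not my real setup):

```nginx
# Redirect all plain HTTP traffic to HTTPS
server {
    listen 80;
    server_name example.com;            # placeholder domain
    return 301 https://$host$request_uri;
}

# TLS-terminating proxy in front of Ghost
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # "ghost" resolves via Docker's embedded DNS on the shared network
        proxy_pass http://ghost:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Only nginx publishes ports to the host; everything behind it is reached over the Docker network.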

Problem 1 - strong passwords caused problems in the Docker Compose file: in certain circumstances, name-value pairs are not taken as literal values, so any special characters need to be escaped for the values to be interpreted correctly. This sent me round in circles for a few hours, as nothing would connect. I ended up changing the passwords to remove the reserved symbols so they would be parsed correctly.
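To illustrate the interpolation problem: in a `docker-compose.yml`, a `$` in a value is treated as the start of a variable reference, so a literal dollar sign has to be doubled. The password below is a made-up example:

```yaml
services:
  db:
    image: mysql:8
    environment:
      # For a literal password of "pa$$word", each "$" must be escaped
      # as "$$" - otherwise Compose tries to interpolate "$word" as a
      # variable and the container gets the wrong credential.
      MYSQL_ROOT_PASSWORD: "pa$$$$word"
```

The alternative, which I took, is simply to generate strong passwords without the reserved symbols in the first place.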

Problem 2 - working across two networks and DNS. This was the biggest challenge, as I didn't want to change the design to a single network, which would weaken the security stance. After a lot of digging, the underlying cause became clear: Docker containers that sit only on an internal network cannot resolve or reach external servers, because internal networks have no outbound route. This caused Ghost problems when trying to update the site with a new theme. The fix was to attach the Ghost container to both the internal and external networks, while only exposing its ports on the internal one. This lets the Ghost container reach external sites (to pull new themes) and still talk to the database container, which resides on the internal network only.
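A minimal sketch of the resulting network layout (service and network names here are my own placeholders): the internal network is marked `internal: true`, which cuts it off from the outside world, and Ghost sits on both networks so it can fetch themes while still reaching the database:

```yaml
networks:
  outside:            # normal bridge network with internet access
  inside:
    internal: true    # no route out - containers here cannot reach the internet

services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"     # the proxy is the only thing published to the host
    networks:
      - outside
      - inside

  ghost:
    image: ghost:5
    networks:
      - outside       # lets Ghost resolve DNS and pull external themes
      - inside        # lets Ghost talk to the database
    # no "ports:" entry - Ghost is only reachable via the proxy

  db:
    image: mysql:8
    networks:
      - inside        # the database never touches the outside network
```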

It has been a painful experience, but as you can see, the site is up and running with a non-default theme. I have learnt a lot about Docker, networking within Docker, and how DNS does or does not work in the Docker architecture. I now need to tidy up all the configuration files, sort out the backup scripts, and version control the setup. This architecture is going to come in handy further down the line.