If you went to blog.rendle.io to get here, you’ll have been redirected. I’ve moved to my rendlelabs.com business domain, because business. All the old content is still here, although I might be missing some images.
The blog, along with my website, is now running on Azure Container Service (AKS), which is the one that gives you a managed Kubernetes cluster. It’s a good deal: you pay for the nodes that are running your containers, but not for the management infrastructure. And Azure takes care of upgrades and all that scary stuff too.
Getting Ghost running in AKS, and nested within my main site, was interesting, so I thought I’d make my first post in my new home about that.
Running Ghost in Docker
OK, so, a thing you need to know about Ghost. It supports MySQL; it does not like any other databases. So I spun up the cheapest Azure Database for MySQL instance that I could, which is £10/month, and I pointed a Ghost container at it by setting environment variables in a `docker-compose` file, which looked like this:
```yaml
version: '3'
services:
  ghost:
    build: .
    image: ghost:1.22.0-alpine
    ports:
      - 2368:2368
    environment:
      database__client: mysql
      database__connection__host: [myserver].mysql.database.azure.com
      database__connection__user: [myuser]
      database__connection__password: [mypassword]
      database__connection__database: ghost
```
After making sure I’d added my local IP address to the MySQL server’s firewall, I ran `docker-compose up`; the database migration ran and I had a Ghost blog. Yay!
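(I added the firewall rule through the portal, but it can also be scripted with the Azure CLI. A sketch, with hypothetical resource-group and server names:)

```shell
# Hypothetical names throughout: substitute your own resource group and server.
# Look up your current public IP, then allow it through the server firewall.
MY_IP=$(curl -s https://ifconfig.me)

az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name myserver \
  --name AllowMyIp \
  --start-ip-address "$MY_IP" \
  --end-ip-address "$MY_IP"
```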
Next, I wanted to add the Ghostium theme that I use. For that, I decided to create a simple Dockerfile based on the official image, and add my cloned copy of Ghostium to it, like this:
```dockerfile
FROM ghost:1.22.0-alpine
COPY ghostium /var/lib/ghost/content/themes/ghostium/
```
(Sometimes Dockerfiles are really small.)
I ran this image and set Ghostium as the theme in settings, and all was good. Now it was time to try to get this running in my AKS cluster. And then the murders began.
Running Ghost in AKS
You see, containers are transient, ephemeral things, especially in orchestration clusters like Kubernetes. A node might go offline, so all the pods running on it get rebalanced to other nodes. You want at least a couple of instances of each pod running, in case that happens. So storing state inside containers is A Bad Idea. And for most software, that’s not a problem: everything is stored in a database or some other persistent storage. But not Ghost. Oh no. Here’s another thing you need to know: Ghost stores images in its `content` directory, along with everything else that isn’t plain text data. I looked into this a bit, and yes, there are pluggable storage providers for Ghost, and yes, there is one for Azure Storage, but no, it doesn’t work. At least, not for me.
This is not the end of the world, though, because Kubernetes also supports pluggable storage providers, and they work just fine, after a bit of dithering around. The best provider for working in Azure is the File Storage one, which will allow multiple instances of a pod to claim read/write access to a file share. Here is an exhaustive list of the dithering around I went through to get it working:
- Create a new Azure Storage Account
- Follow the instructions to create an Azure Files volume
- Deploy the Service to AKS
- Read the logs (via Containership) to find that the permissions are wrong on the mounted volume
- Research changing the permissions on mounted volumes
- Decide you can’t change the permissions on mounted volumes
- Notice that you can change the permissions on Persistent Volumes
- Follow the instructions to create a Persistent Azure Files volume
- Set the permissions to 0777 (read/write for everyone)
- Redeploy the Service to AKS
- Read the logs to find it still isn’t working
- Check the PersistentVolumeClaim to find it “can’t be bound”
- Discover that you have to create the Storage Account inside the automatically-generated AKS Resource Group to be able to use it
- Create another new Azure Storage Account
- Recreate the StorageClass and PersistentVolumeClaim
- Redeploy the Service to AKS
- Read the logs to find that Ghost can’t start because Ghostium has vanished
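(The log-reading and claim-checking steps above map onto a few `kubectl` commands. A sketch, assuming the namespace, deployment, and claim names used later in this post:)

```shell
# Tail the Ghost container's logs (names here match the YAML further down)
kubectl logs --namespace website deployment/blog-deployment

# See why the PersistentVolumeClaim "can't be bound"
kubectl describe pvc azurefile --namespace website

# Confirm the StorageClass actually exists
kubectl get storageclass azurefile
```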
OK, that last one, that’s on me. I’m mounting an external volume onto the pod over `/var/lib/ghost/content`, which means anything that was in the container’s content directory is now gone. I needed to upload the Ghostium directory into the File Share, and that’s when I was reminded of something really neat: you can now `net use` Azure File Storage from your Windows PC. In the Azure Portal, I browsed to the File service, found the Share that Kubernetes had automatically created, clicked the “Connect” icon and it just gave me a command I could paste into PowerShell, and now I have a `Z:` drive that is the file share my Ghost service is using for `content`. I literally just copied and pasted the Ghostium theme using File Explorer. And since then I’ve opened the theme directory from the `Z:` drive using VS Code to edit settings. It’s like being back in 1999 with a text editor that can open files over FTP!
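(For reference, the command the portal hands you looks roughly like this — the storage account name, share name, and key are all placeholders here; use whatever the “Connect” blade gives you:)

```powershell
# Placeholders throughout: substitute your own storage account, share, and key.
# The share name is whatever Kubernetes generated for the PersistentVolumeClaim.
net use Z: \\mystorageaccount.file.core.windows.net\myshare /u:AZURE\mystorageaccount <storage-account-key>
```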
For posterity, here are the (redacted) Kubernetes files:
The Persistent Volume
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
  namespace: website
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - mfsymlinks
parameters:
  storageAccount: [redacted, just the name, not the full URL]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
  namespace: website
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```
Because I’m not adding in themes anymore, I can just use the base image here, overriding configuration with environment variables:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-deployment
  labels:
    app: blog
  namespace: website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - image: ghost:1.22.0-alpine
        imagePullPolicy: Always
        name: blog
        env:
        - name: url
          value: "https://rendlelabs.com/blog/"
        # Elided MySQL connection settings, coming from Secrets
        ports:
        - containerPort: 2368
        volumeMounts:
        - name: azure
          mountPath: /var/lib/ghost/content
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: azurefile
```
Making it visible
At this point, when I checked the logs, the container reported that it had run the migration and Ghost was listening on `http://0.0.0.0:2368`. I had to take its word for that, because it wasn’t accessible from outside the cluster. Fixing that is really easy, though, thanks to Kubernetes Ingress and Nginx.
See, the first thing to set up on any cluster you’re going to be running web things on is an Ingress Controller, and the Nginx one is really easy. The instructions are here, and consist of downloading a YAML file, patching it with something Azure needs, and deploying it. Then put that YAML file in your Dropbox Folder of Useful YAML Files, because it’s the same every time. You can also deploy Nginx Ingress using Helm, which I haven’t tried yet; I’ll post about it when I do.
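(Deploying it boils down to something like the commands below. The file names are the ones the ingress-nginx instructions used at the time, so treat them as an assumption — they may well have moved since:)

```shell
# Deploy the ingress-nginx controller and its RBAC objects
kubectl apply -f mandatory.yaml

# Azure-specific bit: expose the controller through an Azure load balancer
kubectl apply -f cloud-generic.yaml
```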
Once you’ve got the Ingress Controller running, adding services to it is just a couple more bits of config:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: website
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
    targetPort: 2368
  selector:
    app: blog
  sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog
  namespace: website
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: 16m
spec:
  rules:
  - host: rendlelabs.com
    http:
      paths:
      - path: /blog
        backend:
          serviceName: blog
          servicePort: 80
```
The key things in there are the `ports` section of the `Service` spec, where we map port 80 on the service to port 2368 on the pod. I tried just running Ghost on port 80, but it didn’t like it. And then we have the `Ingress` spec, which is where we tell the Nginx proxy what domain and path this service will provide; in this case, `rendlelabs.com/blog`. (Note that I also told Ghost that was its address using the `url` environment variable earlier.) The `path: /blog` setting tells Nginx to route all requests below that path to this service; anything else will be handled by the main website service, which doesn’t have a `path` setting. Finally, I had to add the `nginx.ingress.kubernetes.io/proxy-body-size: 16m` annotation to the `Ingress` configuration because it defaults to `1m` (one megabyte), which stopped me from importing the content from my old blog.
Kubernetes Ingress running in AKS automatically integrates with the Azure Load Balancer, so you don’t have to set anything up at all there. Just find the IP address it’s using for ports 80 and 443 and use those in your DNS Zone file.
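(Finding that IP is one `kubectl` command away. A sketch, assuming the controller was deployed into the default `ingress-nginx` namespace:)

```shell
# The EXTERNAL-IP column of the LoadBalancer service is the address
# to point your DNS records at.
kubectl get service --namespace ingress-nginx
```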
And that was basically it. I don’t need SSL at this level because I use Cloudflare and they provide it for free, which is nice. For more secure requirements I’d use Let’s Encrypt to make sure that things are encrypted all the way to the origin, but that’s for another post.
I guess now I’ve gone to all this trouble, I should probably post more often…