Deploying Ghost CMS on Kubernetes

Been wanting to start a blog for a while, you know, a sort of documentation for the things I do but tend to forget. Never got around to it because it felt like an extra layer on top of what I'm already up to, plus I'm not a fan of wrestling with WordPress and all its vulnerabilities. Had two main criteria in mind: pack everything, DB included, into a single container with minimal resource hogging, and have full Markdown support via some slick text editor.
Then I stumbled upon Ghost, and I thought, "Yep, this is it for my blog." But, of course, it came with its own set of challenges. No official docs on how to run it on Kubernetes, and they really frown upon using SQLite in production. Well, I'm not signing up for the database maintenance party, so I did a bit of a workaround. I've got my own Kubernetes cluster with nodes chillin' in 7 regions, running k3s – a lightweight Kubernetes distribution. So this blog? It's all about how I got Ghost up and running on k3s – deploy and forget, my friend.
Alright, hold on tight because we've got a to-do list. First up, this blog assumes you know your way around Kubernetes a little. And let me spill the beans on my k3s server specs – it runs with wireguard-native networking, traefik is disabled so I'm exposing things through ingress-nginx (the one maintained by the Kubernetes project), and cert-manager is set up with staging and production SSL issuers. I won't dive too deep into the nitty-gritty details, but you get the gist. Ready? Let's roll.
Configuration
So, their docs spilled the beans on using a custom config file to show the defaults who's boss. Nice move. My plan? Create a Secret, mount it as a volume, and voila – configuration magic. This is also how I hush Ghost's grumbling about SQLite for a smooth production deployment. Here's the backstage pass to what it should look like:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost-conf-prod
  namespace: tahmid
  labels:
    app.kubernetes.io/name: tahmid-blog
    app.kubernetes.io/component: ghost-prod-conf
    app.kubernetes.io/version: "1.0"
type: Opaque
stringData:
  config.production.json: |
    {
      "url": "https://blog.tahmid.org",
      "server": {
        "port": 2368,
        "host": "0.0.0.0"
      },
      "database": {
        "client": "sqlite3",
        "connection": {
          "filename": "/var/lib/ghost/content/data/ghost.db"
        },
        "useNullAsDefault": true,
        "debug": false
      },
      "caching": {
        "contentAPI": {
          "maxAge": 3600
        },
        "publicAssets": {
          "maxAge": 3600
        }
      },
      "mail": {
        "from": "'Tahmids Blog' <newsletter@tahmid.org>",
        "transport": "SMTP",
        "options": {
          "host": "<smtp-host>",
          "port": 465,
          "secure": true,
          "auth": {
            "user": "<your-mail-here>",
            "pass": "<your-pass-here>"
          }
        }
      }
    }
```
This pretty much ticks off all the config needs for the big league – production deployment, here we come!
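Quick sanity check before moving on – apply it and make sure the JSON renders the way you expect. A rough sketch, assuming you saved the manifest as ghost-secret.yaml (name it whatever you like):

```bash
# Create the namespace if it doesn't exist yet, then apply the Secret
kubectl create namespace tahmid --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f ghost-secret.yaml

# Peek at the rendered config – the key's dots need escaping in jsonpath
kubectl get secret ghost-conf-prod -n tahmid \
  -o jsonpath='{.data.config\.production\.json}' | base64 -d | head
```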
Persistence
Buckle up! Our app's bringing its own database, so it's got to be a bit clingy – you know, stateful. We need some storage mojo to make sure that if a container decides to play hide and seek, my precious blog posts don't vanish into the digital abyss. But let's not get greedy; 1Gi should cover my life story, memes and all. Time to dive into PersistentVolumes and get my storage game on.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tahmid-ghost-pvc
  namespace: tahmid
  labels:
    app.kubernetes.io/name: tahmid-blog
    app.kubernetes.io/component: ghost-pvc
    app.kubernetes.io/version: "1.0"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Why go for ReadWriteOnce, you ask? Well, it's like making a room reservation – only one lucky node gets read-write privileges. As I mentioned before, I'm going for that solo, small-running-instance vibe. Just me and my data, no third wheels allowed.
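Applying and checking the claim is quick – a rough sketch, assuming a filename of ghost-pvc.yaml. One heads-up: k3s's default local-path provisioner binds volumes lazily, so don't panic if the PVC sits in Pending until a pod actually claims it:

```bash
kubectl apply -f ghost-pvc.yaml

# Pending is normal here with WaitForFirstConsumer-style binding
kubectl get pvc tahmid-ghost-pvc -n tahmid
```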
Application
Alright, here comes the good stuff – deploying the real deal. I'm rolling with a StatefulSet, just 1 replica in the gang. Why? StatefulSets are the rock stars for stateful apps – famous, with a stable identity – even if it's just a solo act. That clingy database? It's right there with the application instance.
Now, let's talk probes – livenessProbe first. It's like a health check for your container: if it's down and out, Kubernetes swoops in and restarts the show. I mean, who's going to keep browsing my blog if the server's stuck in a deadlock?
Then there's the readinessProbe – why bother? Well, we want to make sure the container's not just awake but actually ready to handle requests. It might be doing some post-reboot rituals, and we're not serving traffic until it's all set to roll.
The rest is a piece of cake – set that imagePullPolicy to Always for easy minor version upgrades, mount your Secret, and oh, create a symlink because, well, Ghost likes it that way and it quiets down the startup ritual noise.
Toss in a nodeSelector so it can chill on any available node in the chosen region, which helps with failover. I have my nodes labelled with regions – see the quick labelling sketch below.
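Labelling nodes is a one-liner – here's a sketch, with the node name being a stand-in for whatever kubectl get nodes shows you:

```bash
# Hypothetical node name – label every node you want in the "india" pool
kubectl label node my-node-in-mumbai region=india

# Verify the labels landed
kubectl get nodes -L region
```

And there you have it, the magic code: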
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tahmid-ghost-cms
  namespace: tahmid
  labels:
    app.kubernetes.io/name: tahmid-blog
    app.kubernetes.io/component: ghost-statefulset
    app.kubernetes.io/version: "1.0"
spec:
  serviceName: tahmid-blog-svc
  replicas: 1
  selector:
    matchLabels:
      app: tahmid-ghost-cms
  template:
    metadata:
      labels:
        app: tahmid-ghost-cms
        app.kubernetes.io/name: tahmid-blog
        app.kubernetes.io/component: acs-db
    spec:
      nodeSelector:
        region: "india"
      volumes:
        - name: ghost-content-data
          persistentVolumeClaim:
            claimName: tahmid-ghost-pvc
        - name: ghost-conf
          secret:
            secretName: ghost-conf-prod
            defaultMode: 420
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      containers:
        - name: ghost-cms
          image: ghost:5.75-alpine
          imagePullPolicy: Always
          env:
            - name: NODE_ENV
              value: production
          ports:
            - name: cms-port
              containerPort: 2368
              protocol: TCP
          volumeMounts:
            - name: ghost-conf
              readOnly: true
              mountPath: /var/lib/ghost/config.production.json
              subPath: config.production.json
            - name: ghost-content-data
              mountPath: /var/lib/ghost/content/
          resources:
            requests:
              cpu: "0.1"
              memory: 64Mi
            limits:
              cpu: "0.5"
              memory: 512Mi
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 5
            httpGet:
              path: /ghost/api/admin/site/
              port: cms-port
              httpHeaders:
                - name: X-Forwarded-Proto
                  value: https
                - name: Host
                  value: blog.tahmid.org
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 5
            httpGet:
              path: /ghost/api/admin/site/
              port: cms-port
              httpHeaders:
                - name: X-Forwarded-Proto
                  value: https
                - name: Host
                  value: blog.tahmid.org
```
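Want to see what the probes see? Port-forward into the pod and hit the same endpoint with the same headers – a rough sketch, with the pod name following the StatefulSet's usual <name>-0 ordinal convention:

```bash
# Forward local port 2368 to the Ghost pod (run this in a separate terminal)
kubectl port-forward -n tahmid pod/tahmid-ghost-cms-0 2368:2368

# A healthy pod answers with HTTP 200 and some site JSON
curl -i \
  -H 'X-Forwarded-Proto: https' \
  -H 'Host: blog.tahmid.org' \
  http://localhost:2368/ghost/api/admin/site/
```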
Smooth sailing! The application is up and running, but hold on – there's a twist. It's like a hidden gem, not exposed to the internet, not even within the cozy confines of the cluster. But fear not, we're about to flip the switch on that. Stay tuned for the exposure magic.
Internals
Time to spill the beans to the cluster – "Hey, got this cool app here, might need some load balancing love (not that it really needs it), and, you know, route those incoming requests to the right pod." Enter the Service – it's like the bouncer at the club, making sure everyone gets to the right party.
So, I'll conjure up a Service, work my port mapping magic, let it shine within the cluster with a private IP, and voila! This, my friends, is what I'll throw out to the internet using my trusty Ingress.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: tahmid-blog-svc
  namespace: tahmid
  labels:
    app: tahmid-ghost-cms
    app.kubernetes.io/name: tahmid-blog
    app.kubernetes.io/component: ghost-service
    app.kubernetes.io/version: "1.0"
spec:
  type: ClusterIP
  selector:
    app: tahmid-ghost-cms
    app.kubernetes.io/component: acs-db
  ports:
    - name: cms
      protocol: TCP
      port: 2368
      targetPort: 2368
```
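Label selectors are easy to fat-finger, so it's worth confirming the Service actually found the pod – an empty ENDPOINTS column means the selector and the pod labels don't agree:

```bash
kubectl get endpoints tahmid-blog-svc -n tahmid
```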
Exposing
Get ready for the big reveal! I'm rolling with ingress-nginx – the cream of the crop. It's the maestro that directs external traffic, gracefully juggles the load, manages SSL termination, and flaunts a bunch of annotations for fine-tuning.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tahmid-blog-ingress
  namespace: tahmid
  labels:
    app.kubernetes.io/name: tahmid-blog
    app.kubernetes.io/component: ghost-ingress
    app.kubernetes.io/version: "1.0"
  annotations:
    nginx.ingress.kubernetes.io/limit-connections: "5"
    nginx.ingress.kubernetes.io/limit-rpm: "60"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-production
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/tls-acme: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: blog.tahmid.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tahmid-blog-svc
                port:
                  name: cms
  tls:
    - hosts:
        - blog.tahmid.org
      secretName: blog-tahmid-tls
```
Noticed those limit annotations? It's my way of throwing a little shade at those pesky bots – a touch of spam protection. Oh, and don't forget to set the body size limit using nginx.ingress.kubernetes.io/proxy-body-size if you need more than the measly 1m default – gotta keep things in check, you know? SSL is configured through cert-manager, with cluster issuers. Production means the real deal – not a self-signed one.
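For reference, here's a rough sketch of what such a production ClusterIssuer can look like – the email and the private-key secret name are placeholders, and your setup may differ:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # Let's Encrypt production endpoint – use the staging URL while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email-here>
    privateKeySecretRef:
      name: letsencrypt-production-key   # placeholder secret name
    solvers:
      - http01:
          ingress:
            class: nginx
```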
Watch your step
I built this to dance solo – no partner, no troupe. No clustering, sharding, or any multi-server shenanigans. And since a single replica holds a ReadWriteOnce volume, the old pod has to exit before its replacement can start, so your rollout options are as limited as a one-man show – expect a brief blip during upgrades rather than a flawless, downtime-free performance.
That's a wrap. If you made it this far, kudos to you, you've sailed through like a champ. And if you spot any loopholes in my implementation, go ahead, break into it – could be a good learning experience. Until next time, sleep tight, and may the debug plague stay far, far away.