Deploying Ghost CMS on Kubernetes
Been wanting to start a blog for a while, you know, a sort of documentation for the things I do but tend to forget. Never got around to it because it felt like an extra layer on top of what I'm already up to, plus, I'm not a fan of wrestling with WordPress and all its vulnerabilities. Had two main criteria in mind: pack everything in a single container with minimal resource hogging, including the DB, and have full markdown support via some slick text editor.
Then, I stumbled upon Ghost, and I thought, "Yep, this is it for my blog." But, of course, it came with its own set of challenges. No official docs on how to run it on Kubernetes, and they really frown upon using sqlite in production. Well, I'm not signing up for the database maintenance party. So, I did a bit of a workaround. Got my own Kubernetes cluster with nodes chillin' in 7 regions, running k3s – a lightweight Kubernetes setup. So this blog? It's all about how I got Ghost up and running on k3s – deploy and forget, my friend.
Alright, hold on tight because we've got a to-do list. First up, this blog assumes you know your way around Kubernetes a bit. And let me spill the beans on my k3s server specs – wireguard-native for networking, traefik disabled, so I'm exposing through ingress-nginx (the one maintained by the Kubernetes community), plus cert-manager with staging and production SSL issuers. I won't dive too deep into the nitty-gritty details, but you get the gist. Ready? Let's roll.
Configuration
So, their docs spilled the beans on using a custom config file to show defaults who's boss. Nice move. My plan? Create a Secret, toss it into a volume, and voila – configuration magic. This is also how I hush Sqlite's party noise for a smooth production deployment. Here's the backstage pass to what it should look like:
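Here's a minimal sketch of that Secret – the name ghost-config, the blog URL, and the exact config keys are my placeholders/assumptions, so swap in your own values (Ghost's config reference has the full list):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost-config           # placeholder name, referenced later by the StatefulSet
type: Opaque
stringData:
  config.production.json: |
    {
      "url": "https://blog.example.com",
      "server": { "host": "0.0.0.0", "port": 2368 },
      "database": {
        "client": "sqlite3",
        "connection": { "filename": "/var/lib/ghost/content/data/ghost.db" },
        "useNullAsDefault": true
      },
      "mail": { "transport": "Direct" },
      "logging": { "transports": ["stdout"] }
    }
```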
This pretty much ticks off all the config needs for the big league – production deployment, here we come!
Persistence
Buckle up! Our app's bringing its own database, so it's got to be a bit clingy – you know, stateful. We need some storage mojo to make sure that if a container decides to play hide and seek, my precious blog posts don't vanish into the digital abyss. But let's not get greedy; 1Gi should cover my life story, memes and all.
Time to dive into PersistentVolumes and get my storage game on.
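Here's roughly what that claim looks like – the name is a placeholder, and I'm leaning on k3s's built-in local-path storage class, so tweak storageClassName if your cluster is set up differently:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-content          # placeholder, mounted by the StatefulSet below
spec:
  accessModes:
    - ReadWriteOnce            # a single node gets read-write access
  storageClassName: local-path # k3s's default provisioner; change if yours differs
  resources:
    requests:
      storage: 1Gi
```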
Why go for ReadWriteOnce, you ask? Well, it's like making a room reservation – only one lucky node gets the read-write privileges. As I mentioned before, I'm going for that solo, small-running-instance vibe. Just me and my data, no third wheels allowed.
Application
Alright, here comes the good stuff – deploying the real deal. I'm rolling with a StatefulSet, just 1 replica in the gang. Why? StatefulSets are like the rock stars for stateful apps – famous for their stable identity, even if it's just a solo act. That clingy database? It's right there with the application instance.
Now, let's talk probes – livenessProbe first. It's like having a health check for your container. If it's down and out, Kubernetes will swoop in and restart the show. I mean, who's going to keep browsing my blogs if the server's stuck in a deadlock?
Then there's the readinessProbe – why bother? Well, we want to make sure the container's not just awake but also ready to handle requests. It might be doing some post-reboot rituals, and we're not serving traffic until it's all set to roll.
The rest is a piece of cake – set that imagePullPolicy to Always for easy minor version upgrades, mount your secret, and oh, create a symlink because, well, Ghost likes its config in a specific spot and that keeps the startup ritual quiet.
Toss in a nodeSelector so it can chill on whichever node is available and ensure failover. I have mine labelled with regions. And there you have it, the magic code:
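Treat the following as a sketch rather than gospel: the image tag, probe timings, mount paths, and the symlink trick all assume the official ghost Docker image (which keeps its install under /var/lib/ghost), and the names tie back to the Secret and PVC from earlier:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ghost
spec:
  serviceName: ghost
  replicas: 1                       # solo act, as promised
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      # nodeSelector:
      #   region: eu                # hypothetical region label; match your own node labels
      containers:
        - name: ghost
          image: ghost:5            # assumed official image; pin whatever version you run
          imagePullPolicy: Always   # picks up minor version bumps on restart
          command: ["sh", "-c"]
          args:
            # symlink the mounted config to where Ghost expects it, then start as usual
            - ln -sf /etc/ghost-config/config.production.json /var/lib/ghost/config.production.json && docker-entrypoint.sh node current/index.js
          env:
            - name: NODE_ENV
              value: production
          ports:
            - name: http
              containerPort: 2368
          livenessProbe:            # restart the container if Ghost stops answering
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 30
            periodSeconds: 15
          readinessProbe:           # hold off traffic until Ghost is actually ready
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
          volumeMounts:
            - name: content
              mountPath: /var/lib/ghost/content
            - name: config
              mountPath: /etc/ghost-config
              readOnly: true
      volumes:
        - name: config
          secret:
            secretName: ghost-config
        - name: content
          persistentVolumeClaim:
            claimName: ghost-content
```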
Smooth sailing! The application is up and running, but hold on – there's a twist. It's like a hidden gem, not exposed to the internet, not even within the cozy confines of the cluster. But fear not, we're about to flip the switch on that. Stay tuned for the exposure magic.
Internals
Time to spill the beans to the cluster – "Hey, got this cool app here, might need some load balancing love (not that it really needs it), and, you know, route those incoming requests to the right pod." Enter the Service – it's like the bouncer at the club, making sure everyone gets to the right party.
So, I'll conjure up a Service, work my port mapping magic, let it shine within the cluster with a private IP, and voila! This, my friends, is what I'll throw out to the internet using my trusty Ingress.
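Nothing fancy here – a plain ClusterIP Service pointing at the pod (names match the sketches above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  type: ClusterIP              # cluster-internal only; the Ingress does the exposing
  selector:
    app: ghost
  ports:
    - name: http
      port: 80                 # what the Ingress talks to
      targetPort: 2368         # what Ghost listens on inside the pod
```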
Exposing
Get ready for the big reveal! I'm rolling with ingress-nginx – the cream of the crop. It's the maestro that directs external traffic, gracefully juggles the load, manages SSL termination, and flaunts a bunch of annotations for fine-tuning.
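Here's the shape of mine – hostnames, the issuer name, and the limit values below are stand-ins, so plug in your own:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production  # assumed ClusterIssuer name
    nginx.ingress.kubernetes.io/limit-rps: "10"             # requests per second per client
    nginx.ingress.kubernetes.io/limit-connections: "20"     # concurrent connections per client
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - blog.example.com
      secretName: ghost-tls                                 # cert-manager fills this in
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost
                port:
                  number: 80
```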
Noticed those limit annotations? It's my way of throwing a little shade at those pesky bots – a touch of spam protection. Oh, and don't forget to set the body size limit using nginx.ingress.kubernetes.io/proxy-body-size if you need more – the default is a measly 1M. Gotta keep things in check, you know? The SSL is configured through cert-manager, with cluster issuers. Production means the real deal – not a self-signed one.
Watch your step
I built this to dance solo – no partner, no troupe. No clustering, sharding, or any multi-server shenanigans. This means your rollout options are as limited as a one-man show, making it trickier to ensure a flawless, downtime-free performance.
That's a wrap. If you made it this far, kudos to you, you've sailed through like a champ. And if you spot any loopholes in my implementation, go ahead, break into it – could be a good learning experience. Until next time, sleep tight, and may the debug plague stay far, far away.