Scaling GitHub Actions Runners on Kubernetes
Alright, let's dive into this scaling adventure. So, why bother scaling GitHub Actions runners? Well, let me drop some wisdom: it's all about the money $$. Those GitHub-hosted Linux runners don't come free ($0.008/minute), and when your organization is juggling multiple teams, products, and staging environments, those costs start doing the cha-cha.
But hey, I'm just one guy with my 2,000 free minutes per month, you might wonder, why even bother? Well, call it a casual experiment. Let's get to the nuts and bolts.
Requirements
First off, you'll need a Kubernetes cluster. I'm rolling with k3s, the lightweight Kubernetes cool kid. Got Helm installed on your master node? If not, get it here
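For the impatient, the bootstrap looks roughly like this (a sketch, assuming a fresh Linux box; both install scripts are the official ones, but review anything you pipe into a shell):

```shell
# Install k3s as a single-node cluster (official install script)
curl -sfL https://get.k3s.io | sh -

# Install Helm (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Point Helm/kubectl at the k3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```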
Operator
Alright, let's unravel the operator mystery. An operator? It's like the puppet master of your Kubernetes cluster, keeping everything in check. Imagine this: it's on a constant loop, keeping its eagle eyes on the state of things. If reality (the actual state of objects) doesn't match the dream (the desired state), it takes action.
Now, here's the secret sauce. Operators wield Custom Resource Definitions (CRDs). Think of these as special instructions that give the operator the lowdown about your application. So, it's like having a guidebook that tells the operator how to handle specific tasks. Simple, right? The operator sees, compares, acts, and keeps your cluster running as smooth as butter.
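If you want to picture that loop, here's a toy sketch (purely illustrative, not real controller code; real operators watch the API server instead of polling, and the `app=demo-runner` label is made up):

```shell
# Toy reconcile loop: compare desired state vs. actual state and act on drift
DESIRED=3
while true; do
  ACTUAL=$(kubectl get pods -l app=demo-runner --no-headers 2>/dev/null | wc -l)
  if [ "$ACTUAL" -ne "$DESIRED" ]; then
    echo "Drift detected: want $DESIRED, have $ACTUAL. Reconciling..."
    # ...create or delete pods here until reality matches the dream...
  fi
  sleep 5
done
```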
Configure Operator
GitHub's official one is Actions Runner Controller (ARC). Forget the old chart; GitHub no longer maintains it. We're rolling with the new gha-runner-scale-set-controller. Install it like a boss:
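Something along these lines (the release name `arc` and namespace `arc-system` are my choices; the OCI chart URL is the one GitHub documents):

```shell
helm install arc \
  --namespace arc-system \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```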
Easy, right? That `--create-namespace` flag does the namespace magic.
Configure Runner
Authentication time! Create a GitHub App owned by your organization. Not going into details; GitHub's got you covered here. You'll get:
- App ID
- Installation ID
- A Private Key
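ARC can read those credentials from a Kubernetes secret. A sketch of how that might look (the secret name, namespace, and key file path are my own placeholders; the key names follow the ARC chart's convention):

```shell
kubectl create namespace arc-runners

kubectl create secret generic github-app-secret \
  --namespace=arc-runners \
  --from-literal=github_app_id=<APP_ID> \
  --from-literal=github_app_installation_id=<INSTALLATION_ID> \
  --from-file=github_app_private_key=/path/to/private-key.pem
```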
Got a need for speed? If you're eager to get your workflow job picked up faster than a fresh cup of coffee, just give a nudge to the `minRunners` line. It's like telling your Kubernetes cluster, "Hey, let's hustle!" Uncomment that line, and watch your runners sprint into action. Now, create a runner-conf.yml file:
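Mine looked roughly like this (a sketch: the org URL and credential values are placeholders, and I'm inlining the GitHub App credentials here; you could instead point `githubConfigSecret` at a pre-created secret):

```yaml
githubConfigUrl: "https://github.com/<YOUR_ORG>"
githubConfigSecret:
  github_app_id: "<APP_ID>"
  github_app_installation_id: "<INSTALLATION_ID>"
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
## Uncomment to keep a warm runner waiting for jobs
# minRunners: 1
```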
Caution, Captain! Navigate your way through updating the `INSTALLATION_NAME` value with utmost precision. This name becomes the guiding star for `runs-on` in your workflows. And here's a cybersecurity tip for a smooth sail: create your runner pods in a different namespace than the one housing your operator pods. It's like keeping your ship and treasure on separate islands for that extra layer of security. Safe sailing!
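The install itself might look like this (the Helm release name doubles as the `INSTALLATION_NAME`; `k3s-gha-runners` and the `arc-runners` namespace are my choices):

```shell
INSTALLATION_NAME="k3s-gha-runners"
helm install "${INSTALLATION_NAME}" \
  --namespace arc-runners \
  --create-namespace \
  -f runner-conf.yml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```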
That's it, you're done. Check if you're in the game:
```shell
helm list -A
```
outputs
```
NAME             NAMESPACE    REVISION  UPDATED                                 STATUS    CHART                                  APP VERSION
arc              arc-system   1         2024-01-02 16:15:45.21305972 +0000 UTC  deployed  gha-runner-scale-set-controller-0.8.1  0.8.1
k3s-gha-runners  arc-runners  1         2024-01-02 16:16:37.91947708 +0000 UTC  deployed  gha-runner-scale-set-0.8.1             0.8.1
```
Check the manager pods:

```shell
kubectl get pods -n arc-system
```
outputs
```
NAME                                     READY   STATUS    RESTARTS   AGE
arc-gha-rs-controller-86f76f55cf-xzmjh   1/1     Running   0          2m57s
k3s-gha-runners-754b578d-listener        1/1     Running   0          116s
```
Using Runners
Remember that `<INSTALLATION_NAME>`? Reference it in your workflow:
```yaml
name: Actions Runner Controller Demo
on:
  workflow_dispatch:
jobs:
  Explore-GitHub-Actions:
    # You need to use the INSTALLATION_NAME from the previous step
    # mine's k3s-gha-runners
    runs-on: k3s-gha-runners
    steps:
      - run: echo "🎉 This job uses runner scale set runners!"
```
I ran 3 jobs, and they started in 3 parallel pods.
```
k3s-gha-runners-sklxr-runner-8f8cv   1/1   Running   0   14s
k3s-gha-runners-sklxr-runner-f2qvx   1/1   Running   0   23s
k3s-gha-runners-sklxr-runner-v4k4p   1/1   Running   0   22s
```
Hope you enjoyed this scaling fiesta! 🚀