Unleashing Loki: Taming Logs in Kubernetes

Greetings, code warriors! Brace yourselves for a tech adventure. Today, we're exploring Loki, the log-aggregation maestro crafted to untangle your app's log mayhem. Fasten your seatbelts as we walk through setting up Loki in a K3s self-hosted cluster. It's the next chapter in my kube-prometheus-stack saga. Ready? Let's roll.

Unlike other logging systems, Grafana Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplify operation and significantly lower the cost of Loki.
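To make that split concrete, here's a minimal sketch; the label names and the log line are illustrative, not from a real deployment:

# Indexed: only the stream's label set, Prometheus-style
{namespace="monitoring", app="my-app", pod="my-app-0"}

# Not indexed: the log line itself, compressed into a chunk
level=info ts=2024-01-31T10:00:00Z msg="request served" status=200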

Prepare for Loki's conquest in your Kubernetes realm. You'll need a running Kubernetes cluster, Helm >= 3 (the Kubernetes package manager), and your trusty terminal by your side. Don't forget the coffee; tech's spartan gods insist on it.
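Before summoning anything, a quick sanity check that the prerequisites are in place (assuming kubectl already points at your K3s cluster):

kubectl get nodes      # every node should report Ready
helm version --short   # should print v3.x.x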

Loki Deployment Modes

  1. Simple Scalable Mode
Simple Scalable Loki, image by Grafana Labs

Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets. These targets can be scaled independently, letting you tailor your Loki deployment to your log ingestion and query needs so that infrastructure costs better match how you actually use Loki. The simple scalable deployment mode can scale up to a few TBs of logs per day.

  2. Monolithic Mode
Monolithic Loki, image by Grafana Labs

The simplest mode of operation is the monolithic deployment mode. This mode runs all of Loki’s microservice components inside a single process, as a single binary or Docker image. It is a good fit for up to roughly 100 GB of logs per day.

  3. Microservices Mode
Microservices Loki, image by Grafana Labs

The microservices deployment mode runs each of Loki’s components as a distinct process, so every component can be scaled independently. It is the most flexible option and also the most operationally complex, intended for very large clusters.

Summoning the Log Sleuth:

Steering clear of unnecessary complexities, I'm opting for the monolithic Loki setup, fine-tuned precisely for my needs. Rolling with Loki version 2.8 or higher, I'll stick to the recommended TSDB index and local filesystem storage, giving object storage the subtle nod of avoidance. Because, really, who needs excess baggage? Keep it lean, keep it efficient.

  1. Helm Up: Add the Grafana chart repository:
helm repo add grafana https://grafana.github.io/helm-charts

adding grafana charts to your helm repo
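If the repo was added a while ago, refresh the local chart index so you pull the latest chart version:

helm repo update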

  2. Loki Configuration: Channel your inner Norse god and tweak Loki's config to your liking.
loki:
  auth_enabled: false
  commonConfig:
    replication_factor: 1
  storage:
    type: "filesystem"
  schemaConfig:
    configs:
      - from: 2024-01-31
        store: tsdb
        object_store: filesystem
        schema: v12
        index:
          prefix: loki_index_
          period: 24h
  rulerConfig:
    storage:
      type: "local"
    wal:
      dir: /var/loki/ruler-wal
    rule_path: "/data/rules"
  storage_config:
    tsdb_shipper:
      active_index_directory: /data/tsdb-index
      cache_location: /data/tsdb-cache
      shared_store: filesystem
  query_scheduler:
    # the TSDB index dispatches many more, but each individually smaller, requests.
    # We increase the pending request queue sizes to compensate.
    max_outstanding_requests_per_tenant: 32768

singleBinary:
  replicas: 1
  resources:
    limits:
      cpu: 1
      memory: 6Gi
    requests:
      cpu: 0.5
      memory: 2Gi
  persistence:
    enabled: true
    enableStatefulSetAutoDeletePVC: true
    size: 10Gi
    storageClass: "local-path"
    accessModes:
      - ReadWriteOnce
  extraVolumes:
    - name: data
      emptyDir: {}
  extraVolumeMounts:
    - name: data
      mountPath: /data

ingress:
  enabled: false

gateway:
  enabled: false

test:
  enabled: false

monitoring:
  serviceMonitor:
    enabled: true
    metricsInstance:
      enabled: false

  selfMonitoring:
    enabled: false
    grafanaAgent:
      installOperator: false

  lokiCanary:
    enabled: false

loki-monolith.yml, to be used for loki installation

No need to showcase Loki to the world, so no Ingress. Since it's a monolith, there's no internal request load balancing either; hence, no gateway. The extra /data volume mount keeps Loki from tap-dancing into a CrashLoopBackOff. No expert moves here; just lots of doc reading, multiple rounds of trial and error, and a scenic tour through GitHub issues. This setup is my victory dance. For a deeper dive into configurations, the official chart doc is the treasure map.

  3. Unleash the Sleuth: Deploy Loki with Helm. Specify the absolute path when creating the values file, because precision matters.
helm install loki grafana/loki --namespace monitoring --values path/to/loki-monolith.yml

loki installation using helm
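To confirm the sleuth actually woke up, check the pod and poke Loki's readiness endpoint; the label selector below assumes the chart's standard labels, and the service name and namespace match the install above:

kubectl -n monitoring get pods -l app.kubernetes.io/name=loki
kubectl -n monitoring port-forward svc/loki 3100:3100 &
curl http://localhost:3100/ready   # should eventually report ready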

  4. Let the Logs Flow: Deploy Promtail, Loki's trusty sidekick, on your Kubernetes nodes. It discovers targets, scrapes logs from your apps, attaches labels, and sends them to the Loki instance. Config is a breeze, with a DaemonSet by default. The URL resolves locally since both run in the same namespace, monitoring.
resources:
  limits:
    cpu: 100m
    memory: 512Mi

config:
  clients:
    - url: http://loki:3100/loki/api/v1/push

promtail.yml, to be used as values for promtail helm installation

helm install promtail grafana/promtail --namespace monitoring --values path/to/promtail.yml

promtail installation command
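Because Promtail runs as a DaemonSet, one pod should land on every node. A quick check (again assuming the chart's standard labels):

kubectl -n monitoring get pods -l app.kubernetes.io/name=promtail -o wide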

  5. Grafana is Your Watson: Loki integrates seamlessly with Grafana, giving you a powerful dashboard to visualize and analyze your logs. Write LogQL queries (think Sherlock's deductions) to uncover hidden patterns and troubleshoot issues, as in the sketch below.
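A couple of LogQL starters to paste into Grafana's Explore view; the label values are illustrative, so swap in your own:

# all logs from the monitoring namespace containing "error"
{namespace="monitoring"} |= "error"

# per-container log rate over 5 minutes, handy for spotting noisy pods
sum by (container) (rate({namespace="monitoring"}[5m]))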

In a prior post, I covered setting up Kube-Prometheus-Stack. I'll reuse those values but integrate Loki as a datasource for Grafana.

# previous configs..
grafana:
  # other configs..
  # .........
  additionalDataSources:
    - name: Loki
      type: loki
      access: proxy
      url: http://loki:3100/
      isDefault: false
      jsonData:
        manageAlerts: true
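Applying the change is just a Helm upgrade of the existing stack release. The release name and values path below are placeholders from my earlier setup; adjust them to yours:

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --values path/to/kube-prometheus-stack.yml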

That's a wrap! Now, with Grafana, I can seamlessly query logs using the built-in visual query builder, filtering by namespace, container, or basically any label. Thanks for joining me on this journey!

querying logs of loki instance, saved by loki 😏

Final Words:

me after visualizing how bots are terrorizing me and now banning them

With Loki in your arsenal, your Kubernetes cluster's logs become a treasure trove of insights, not a confusing mess. Remember, with great log power comes great responsibility (don't unleash Ragnarok on your storage!). Now go forth, tame those logs, and uncover the secrets they hold!
