Keep a keen eye: Logging for applications on Kubernetes

20 Aug 2025 - 6 min read

Logging isn't just a good practice – it's an absolute necessity for applications on Kubernetes.

Logging is indispensable for:

  • Troubleshooting and Debugging: When things go wrong (and they inevitably will!), logs are your primary source of information to pinpoint the root cause of failures, performance bottlenecks, and unexpected behavior.
  • Monitoring and Alerting: Analyzing log patterns allows you to identify potential issues proactively. You can set up alerts based on specific error messages or unusual activity, enabling you to respond before problems escalate.
  • Security Analysis: Logs can provide valuable insights into security events, such as unauthorized access attempts or suspicious activity.
  • Auditing and Compliance: In many regulated industries, maintaining detailed logs is a legal requirement for auditing and compliance purposes.
  • Performance Analysis: Logs can help you understand application performance over time, identify areas for optimization, and track the impact of changes.

Kubernetes logging: How it differs from traditional applications

Logging in Kubernetes presents unique challenges compared to traditional, monolithic environments:

  • Ephemeral Nature of Pods: Pods are designed to be transient. They can be created, destroyed, and rescheduled frequently. Relying on logs stored within a pod's filesystem means losing that data when the pod terminates.
  • Distributed Architecture: Applications span multiple nodes and pods, making it difficult to correlate events and trace issues across different components.
  • Orchestration Complexity: Kubernetes introduces an additional layer of orchestration. Logs from the Kubernetes control plane (API server, kubelet, etc.) are also crucial for understanding the overall health of the cluster.

Therefore, a centralized and persistent logging strategy is paramount in Kubernetes.

Architecting Your Logging Strategy in Kubernetes 🏗️

Software teams have adopted several logging architectures for applications running on Kubernetes:

  • Node-Level Logging Agent: This is the most common and recommended approach. A dedicated logging agent (like Fluentd or Fluent Bit) runs on each node as a DaemonSet. It collects logs from all containers on the node and forwards them to a central backend, providing a unified view of logs from the entire cluster (see the first sketch after this list).
  • Sidecar Container: In this pattern, a logging agent runs as a sidecar container within the same pod as your application. The application writes logs to a shared volume, and the sidecar forwards them. This is useful for applications that cannot write directly to stdout and stderr (see the second sketch after this list).
  • Application-Level Logging: Applications can be configured to directly send logs to a logging backend using client libraries. While feasible, this can tightly couple your application to a specific logging system.

The node-level logging agent approach generally offers the best balance of simplicity, efficiency, and comprehensiveness for most use cases.
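To make the node-level pattern concrete, here is a minimal sketch of a Fluent Bit DaemonSet. The namespace, image tag, and mount path are illustrative assumptions; in practice you would usually install Fluent Bit from its official Helm chart with a full pipeline configuration pointing at your backend.

```yaml
# Minimal node-level agent sketch (illustrative, not production-ready).
# The "logging" namespace and image tag are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:3.0
          volumeMounts:
            # Container logs written by the runtime live under /var/log on each node
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

The sidecar pattern, sketched below under similar assumptions (the application image and log path are hypothetical placeholders), shares an emptyDir volume between the app and a lightweight container that streams the log file to its own stdout:

```yaml
# Sidecar sketch: the app logs to a file on a shared volume,
# and the sidecar re-emits that file on stdout.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
    - name: app
      image: my-app:latest            # hypothetical app that logs to a file
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-sidecar
      image: busybox:1.36
      # Stream the app's log file so the node-level machinery captures it
      args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}
```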

Best practices for keeping your Kubernetes logs in check ✅

To make the most of your Kubernetes logging setup, consider these best practices:

  1. Log to stdout and stderr: Design your applications to write logs to standard output and standard error. The container runtime captures these streams and writes them to files on the node, where a logging agent can pick them up.
  2. Embrace Structured Logging: Instead of plain text, format your logs in a structured format like JSON. This makes it significantly easier to parse, filter, and analyze logs. Include relevant context like timestamps, log levels, and application-specific identifiers (see the sketch after this list).
  3. Enrich Logs with Kubernetes Metadata: Ensure your logging solution automatically adds Kubernetes metadata (pod name, namespace, node name, labels) to your logs. This context is invaluable for tracing issues within the cluster.
  4. Centralize Your Logs: Always forward your logs to a dedicated, persistent logging backend. Don't rely on local storage on nodes or within pods.
  5. Implement Log Rotation and Retention Policies: Configure your logging system to rotate logs to prevent disk exhaustion and define appropriate retention policies based on your compliance and analysis needs.
  6. Secure Your Logs: Implement appropriate access controls and encryption to protect sensitive log data.
  7. Set Up Meaningful Alerts: Configure alerts based on key log patterns to proactively detect and respond to errors, performance degradations, and security threats.
  8. Visualize Your Logs: Utilize dashboards and visualization tools to gain insights from your log data and identify trends.
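As an illustration of practices 1 and 2, here is a minimal Python sketch that emits JSON-structured logs to stdout using only the standard library. The field names are arbitrary choices, and real services often reach for a library such as structlog or python-json-logger instead:

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Write to stdout so the container runtime captures the stream
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

log = logging.getLogger("checkout")
log.info("order placed")       # -> {"ts": "...", "level": "INFO", ...}
log.error("payment declined")
```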

Tools of the Trade: Kubernetes Logging Solutions 🛠️

A rich ecosystem of tools and solutions exists for Kubernetes logging:

  • The EFK Stack (Elasticsearch, Fluentd/Fluent Bit, Kibana): A popular open-source stack providing robust log aggregation, storage, search, and visualization capabilities. Fluent Bit is often preferred for its lightweight nature in Kubernetes environments.
  • Cloud-Native Logging Services: Major cloud providers offer managed logging services like Google Cloud Logging, Amazon CloudWatch Logs, and Azure Monitor Logs, which integrate seamlessly with their respective Kubernetes offerings.
  • Loki and Alloy (Grafana Labs): Loki is a horizontally scalable, highly available log aggregation system inspired by Prometheus. Alloy is Grafana's collector agent that gathers logs and ships them to Loki. (Alloy is the successor to Promtail, which is now deprecated.)
  • Splunk: A powerful commercial platform offering comprehensive log management and analytics.
  • Datadog: Another popular commercial monitoring and logging platform with excellent Kubernetes integration.

Choosing the right tool depends on your specific requirements, technical expertise, budget, and existing infrastructure.


Effective logging is not an optional extra in Kubernetes; it's a fundamental pillar for building and operating resilient applications.

As an aspiring DevOps engineer, you should be skilled in setting up logging for applications running on Kubernetes. To build these skills, there's no better way than setting up a home lab.

Here's what you need to do:

  1. Create a dummy web app in Node.js, Python, or any preferred language that writes a steady stream of logs (see the sketch after this list)
  2. Create a Kubernetes cluster
  3. Deploy your dummy app
  4. Set up a logging system with Grafana Alloy and Loki (one of the simplest open-source logging stacks to run)
  5. Collect and analyze the logs from your dummy app
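For step 1, a minimal stand-in could be a small Python loop that streams dummy JSON log lines to stdout; the event names, levels, and one-second interval here are arbitrary choices. Package it in a container image and deploy it with a Deployment.

```python
# log_generator.py - a stand-in "web app" that streams dummy JSON logs.
import json
import random
import time
from datetime import datetime, timezone

LEVELS = ["INFO", "INFO", "INFO", "WARNING", "ERROR"]  # weighted toward INFO
EVENTS = ["user_login", "page_view", "cart_add", "checkout", "timeout"]

while True:
    print(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "level": random.choice(LEVELS),
        "event": random.choice(EVENTS),
        "latency_ms": round(random.uniform(5, 500), 1),
    }), flush=True)  # flush so each line reaches stdout immediately
    time.sleep(1)
```

For step 5, once Alloy is shipping these lines to Loki, a LogQL query along these lines (the label names depend on how you configure Alloy) would surface just the error events in Grafana:

```logql
{app="log-generator"} | json | level="ERROR"
```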

You can do this right on your laptop or on cloud provider infrastructure.

Though we describe it in five simple steps, there's a lot going on in this home lab. Be prepared to spend a few days, or even a full week, to complete it.

But the experience you gain and the skills you develop will be invaluable.

Read the next post in your Inbox