
Learn how to get started with Kubernetes, including how to monitor and manage your clusters, view your Kubernetes logs, and improve your Kubernetes security.

Kubernetes is first and foremost an orchestration engine, with well-defined interfaces that allow for a wide variety of plugins and integrations, making it the industry-leading platform in the battle to run the world’s workloads. From machine learning to the applications a restaurant needs, Kubernetes has proven it can run things.

All these workloads, and the Kubernetes platform itself, produce output that most often takes the form of logs. Kubernetes has some very limited capabilities to view – and in some cases collect – its internal logs and the logs generated by the individual workloads it runs, most of which live in ephemeral containers. This article will cover how Kubernetes logging is structured, how to use its native functionality, and how to use a third-party logging engine to really enhance what can be done with logs generated within a Kubernetes environment.

Basic Logging Architecture and Node-Level Logging

The most basic form of logging in Kubernetes is the output individual containers write to stdout and stderr. The output for the currently running container instance can be accessed via the kubectl logs command.
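A few common invocations illustrate what is available out of the box (the pod and container names below are placeholders, not from a real cluster):

```shell
# Print the stdout/stderr of the current container instance in a pod
kubectl logs my-pod

# For multi-container pods, name the container explicitly
kubectl logs my-pod -c my-container

# Stream new log entries as they arrive
kubectl logs -f my-pod

# After a crash or restart, show the previous instance's logs
kubectl logs my-pod --previous
```

These commands run against a live cluster, so the output depends entirely on what the workload writes to stdout and stderr.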

The next level up of logging in the Kubernetes world is called node-level logging. This breaks down into two components: the actual log files being stored, and the Kubernetes side, which allows the logs to be viewed remotely and, under certain circumstances, removed.

The actual files are created by the container runtime engine – like Docker or containerd – and contain the output from stdout and stderr. There is a file for every running container on the host, and these are what Kubernetes reads when kubectl logs is run. Kubernetes is configured to know where to find these log files, and how to read them, through the appropriate log driver, which is specific to the container runtime.
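The sketch below simulates, in a temporary directory, the on-disk layout a containerd-based node typically uses: per-pod log files under a pods/ tree, with a flat containers/ directory of symlinks pointing back into it. The names, paths, and log line are illustrative; real nodes use /var/log/pods and /var/log/containers.

```shell
# Build a throwaway directory that mimics the node's log layout
root=$(mktemp -d)
poddir="$root/pods/default_my-pod_1234/app"
mkdir -p "$poddir" "$root/containers"

# containerd writes one CRI-format line per entry:
# <timestamp> <stream> <tag> <message>
echo '2024-01-01T00:00:00.000000000Z stdout F hello from the app' > "$poddir/0.log"

# The containers/ directory holds symlinks back into pods/
ln -s "$poddir/0.log" "$root/containers/my-pod_default_app.log"

# Following the symlink yields the same content kubectl logs would return
cat "$root/containers/my-pod_default_app.log"
```

The symlink layer is what lets node-level log collectors watch a single flat directory instead of crawling the per-pod tree.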

Log Rotation

Kubernetes has some log rotation capabilities, but they are limited to when a pod is evicted or restarted. When a pod is evicted, all its logs are removed by kubelet. When a pod is restarted, kubelet keeps the current logs and the most recent version of the logs from before the restart; any older logs are removed. This is helpful, but it does nothing to keep the logs from long-running pods under control.

For any live environment with a constant stream of new log entries, the fact that disk space is not infinite becomes very real the first time an application crashes because no space is available. To mitigate this, it is best practice to implement log rotation on each node that takes into account both the number of pods that could potentially run on the node and the disk space available to support logging.

While Kubernetes itself cannot handle scheduled log rotation, there are many tools available that can. One of the more popular is logrotate, and like most other tools in the space, it can rotate based on time (such as once a day), the size of the file, or a combination of both. Using size as one of the parameters makes it possible to do capacity planning to ensure there is adequate disk space to handle the number of pods that could potentially run on any given node.
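As a sketch of such a policy, the snippet below writes an illustrative logrotate configuration to a temporary file. On a real node this would live under /etc/logrotate.d/, and the glob pattern would need to match where your container runtime actually writes its files; the size and retention values here are assumptions to adapt to your own capacity planning.

```shell
# Write an example rotation policy to a temp file for inspection
conf=$(mktemp)
cat > "$conf" <<'EOF'
/var/log/pods/*/*/*.log {
    # rotate daily, or sooner if a file grows past 100 MB
    daily
    maxsize 100M
    # keep five rotated files per log, compressed
    rotate 5
    compress
    # a pod's logs may disappear between runs; don't treat that as an error
    missingok
    # truncate in place so the runtime keeps writing to the same file handle
    copytruncate
}
EOF
cat "$conf"
```

Combining daily with maxsize gives rotation on whichever trigger fires first, which is what makes the size-based capacity planning described above workable.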

System Components

There are two types of system components within Kubernetes: those that run as part of the OS, and those that run as containers managed by kubelet. As kubelet and the container runtime run as part of the operating system, their logs are consumed using the standard OS logging frameworks. As most modern Linux operating systems use systemd, all the logs are available via journalctl. On non-systemd Linux distributions these processes create “.log” files in the /var/log directory.
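On a systemd-based node, inspecting these OS-level components looks something like the following (unit names can vary by distribution and by which container runtime is installed):

```shell
# View the kubelet's logs from the systemd journal
journalctl -u kubelet

# Narrow to recent entries, and follow new ones as they arrive
journalctl -u kubelet --since "1 hour ago" -f

# The container runtime is a separate unit with its own logs
journalctl -u containerd
```

These commands must run on the node itself (or over SSH), which is one reason centralized log collection becomes attractive as clusters grow.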

The second type of system components, those that run as containers – like the scheduler, API server, and cloud-controller-manager – have their logs managed by the same mechanisms as any other container on any host in that Kubernetes cluster.