
September 25, 2019 By Katie Lane

Why Traditional Kubernetes Monitoring Solutions Fail

Kubernetes differs from traditional infrastructure in several key ways that push the limits of traditional application monitoring. Because of its distributed, ephemeral nature, most existing solutions fail to provide the visibility we expect, resulting in longer resolution times. Looking at these potential pitfalls can help guide us as we take a fresh look at Kubernetes management and monitoring.

Infrastructure focused

Traditional monitoring solutions look at applications from a hardware- or server-centric perspective. This made sense for legacy environments, where the underlying infrastructure often stayed the same for months or even years, but that is no longer the case.

Pods, nodes, and even entire clusters can all be destroyed and rebuilt with ease. Effectively monitoring what is running in Kubernetes means monitoring at the application level, focusing on the Service and Deployment abstractions. Understanding what is happening from a service and deployment perspective is critical to understanding the overall health of your application and, by extension, the customer experience. Monitoring solutions should align with the way Kubernetes is organized rather than trying to fit Kubernetes into legacy models.
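
As a rough illustration of what monitoring at the Deployment abstraction can look like, here is a minimal sketch that assumes the official Kubernetes Python client (the kubernetes package). It is meant as an example only, not as any particular product's implementation.

```python
# Minimal sketch: report health at the Deployment abstraction instead of
# watching individual pods or nodes, which may come and go at any time.
# Assumes the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 1  # spec.replicas defaults to 1 when unset
    available = dep.status.available_replicas or 0
    state = "OK" if available >= desired else "DEGRADED"
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
          f"{available}/{desired} replicas available ({state})")
```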

Fragmented visibility

Most solutions provide visibility into only a piece of the Kubernetes environment. Admins are forced to navigate between separate tools for logs, metrics, events, and security threats to build a real-time picture of application health.

Lack of correlation

Furthermore, it is not only the tools that are fragmented but also the data. It is nearly impossible to connect the dots between metrics from a node and logs from a pod running on that node.

This is because the metadata tagging of the data being collected is not consistent. A metric might be tagged with the pod and cluster it was collected from, while a log might be labeled using a different naming convention. The metadata enrichment process must be streamlined and centralized to achieve consistent tagging and, therefore, correlation.
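
To make the tagging problem concrete, the hypothetical snippet below shows a metric and a log record that describe the same pod but use different field names, and how normalizing both onto one shared schema is what makes correlation possible. All field names and values here are illustrative, not taken from any specific tool.

```python
# Hypothetical records: the same pod, tagged differently by the metrics
# pipeline and the logging pipeline, so they cannot be joined as-is.
metric = {"value": 0.92, "pod": "checkout-7d9f", "k8s_cluster": "prod-us-east"}
log = {"message": "upstream timeout", "pod_name": "checkout-7d9f", "cluster": "prod-us-east"}

# Map each pipeline's field names onto a single canonical schema.
CANONICAL_KEYS = {
    "pod": "pod", "pod_name": "pod",
    "k8s_cluster": "cluster", "cluster": "cluster",
}

def normalize(record: dict) -> dict:
    """Keep only known identity fields, renamed to the canonical schema."""
    return {CANONICAL_KEYS[k]: v for k, v in record.items() if k in CANONICAL_KEYS}

# After normalization both records share the same keys and values, so a
# simple equality check (or a join in a query layer) ties the metric to the log.
assert normalize(metric) == normalize(log)
```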

In traditional solutions, log and event collection and enrichment happen separately from metric collection and enrichment, inhibiting the ability to correlate data during troubleshooting.

Security vulnerabilities

Unfortunately, security visibility is often a low priority for teams running Kubernetes, and existing toolsets rarely capture any sort of security events for Kubernetes. Due to the lack of end-to-end visibility into Kubernetes environments, the risk of undetected security threats is a real issue. Kubernetes also makes it challenging to identify vulnerabilities in images at runtime, enforce security policies, and detect and remediate threats.

That said, end users won’t care about the difficulties involved when their data is compromised. It is essential to take a more DevSecOps-style approach in Kubernetes environments, one that incorporates security considerations into the CI/CD lifecycle and elevates security visibility to the same level of importance as operational visibility.
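
As one small, concrete starting point for the runtime image problem mentioned above, the sketch below (again assuming the official Kubernetes Python client) inventories the container images currently running in a cluster, which is the list a vulnerability scanner would need as input. It is an illustration only, not a complete security solution.

```python
# Minimal sketch: list the images currently running so they can be handed
# to an external vulnerability scanner (the scanner itself is out of scope).
# Assumes the official Kubernetes Python client (pip install kubernetes).
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

images = Counter()
for pod in core.list_pod_for_all_namespaces().items:
    for status in pod.status.container_statuses or []:
        images[status.image] += 1

for image, count in images.most_common():
    print(f"{count:3d} running container(s) using {image}")
```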

