Crash Course to Monitoring Kubernetes: The K8s Anatomy (Part 1)

Kubernetes Is Winning

It’s no secret that Kubernetes is the leader among container orchestration platforms. Since its 2014 release, the project has taken the world by storm and become one of the biggest open source success stories. Hosted by the Cloud Native Computing Foundation, Kubernetes has the backing and support of top enterprise companies including Amazon Web Services (AWS), Google, and Microsoft. With 37,000+ stars on GitHub, 66,000+ commits, and 1,600+ contributors, Kubernetes continues to eat the container orchestration world.

Container technologies like Docker have primarily fueled the massive paradigm shift from monolithic applications to distributed microservices architectures. However, that shift brings significant overhead in managing, deploying, and scaling containers. To be successful, you need a container orchestration tool, which is exactly where Kubernetes comes in. Kubernetes removes much of this overhead by providing powerful abstractions for managing the lifecycle of your containerized applications. Those benefits, however, come with a few challenges of their own, particularly around monitoring and troubleshooting.

This three-part series takes you through the challenges of monitoring and troubleshooting Kubernetes and the containerized applications it runs, and shows how Sumo Logic can help. First, we will take a journey deep into the Kubernetes architecture, building a solid understanding of all the components involved. Second, we will dive into what is critical to monitor and how Sumo Logic collects this data. In the final post of the series, we will use the data we have gathered to gain full visibility into everything happening in Kubernetes.

Anatomy of Kubernetes

I like to think of Kubernetes as a vehicle. A vehicle gets you to the places you want to go, and Kubernetes gets you to a place where you have much better control of your containers. Just like a car, Kubernetes consists of multiple layers of components working together to get you there.

Let’s dive into some of those individual components, or “car parts” if you will…

Kubernetes Abstractions

Kubernetes provides multiple abstractions that play an integral role in the orchestration of your containers. These abstractions are like the components of a car that you use when driving, such as the radio, the speedometer, or the gas pedal.

  • Pods are the lowest level of compute in Kubernetes. A pod consists of one or more co-located and co-scheduled containers. They share the same network namespace, allowing them to communicate over localhost, and they can share storage volumes.
  • ReplicaSets ensure that the specified number of pod replicas are running. Should a pod crash, the ReplicaSet controller in the Controller Manager will replace it.
  • Deployments give you declarative management of your ReplicaSets. You describe your desired state, and the Deployment handles creating and updating ReplicaSets to reach it (a minimal example follows this list).
  • DaemonSets ensure that all desired Nodes are running a copy of a Pod. When a new Node joins the cluster, the DaemonSet ensures a copy of that Pod is running there.
  • Services provide reliable communication between pods. Since pods can come and go and their IP addresses change with them, Services solve this problem by giving you a stable, consistent endpoint for reaching the pods behind them.
  • Namespaces are virtual clusters which are backed by the physical cluster. They provide a way to group logical components and define the resources that grouping can use.
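To make these abstractions a little more concrete, here is a minimal sketch using the official Kubernetes Python client that shows how a Deployment ties Pods and ReplicaSets together: you declare a pod template and a replica count, and Kubernetes creates a ReplicaSet to keep that many pods running. The deployment name, image, labels, and namespace below are illustrative placeholders, not anything prescribed by Kubernetes itself.

```python
# Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
# The deployment name, image, labels, and namespace are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. ~/.kube/config
apps = client.AppsV1Api()

# Pod template: one nginx container listening on port 80
container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)

# Deployment: declare three replicas; Kubernetes creates a ReplicaSet to enforce it
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Deleting one of the resulting pods is a quick way to see the ReplicaSet controller at work: it immediately schedules a replacement to restore the declared replica count.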

Infrastructure Layer

At an infrastructure level, a Kubernetes cluster is made up of two types of components: the Master and the Nodes. These may be virtual or physical machines, as Kubernetes can run anywhere. The Master is the brains of the operation, overseeing the Nodes, which are the collective resources the Master can use to run your containers. Think of the entire vehicle as a Kubernetes cluster: the Master is the engine that powers the vehicle, and the Nodes are the wheels that let the engine move it forward.
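You can see this split in your own cluster with a quick sketch like the one below, again using the Python client. Note that the role label key is an assumption that varies by Kubernetes version and distribution (node-role.kubernetes.io/master in older clusters, node-role.kubernetes.io/control-plane in newer ones), so the sketch checks for both.

```python
# Sketch: list the machines in the cluster and flag which ones carry a master role label.
# The label key varies by Kubernetes version/distribution, so both common keys are checked.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    is_master = any(
        key in labels
        for key in ("node-role.kubernetes.io/master", "node-role.kubernetes.io/control-plane")
    )
    role = "master" if is_master else "worker"
    print(f"{node.metadata.name}: {role}")
```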

Node Components

Just as the tires of a vehicle are made up of multiple parts, so are the Nodes of a Kubernetes cluster. These components work together to give the Kubernetes Control Plane a resource it can use to schedule your containers.

  • Kubelet is an agent running on every node in the cluster that ensures the containers are running and is the mechanism by which the Master communicates with the Node.
  • Kube Proxy is the component that enables the Kubernetes Service abstraction, providing reliable and consistent communication between pods.
  • Container Runtime Software is responsible for running containers. While Docker is the most prevalent, there are several other supported runtimes such as rkt and runc (see the sketch after this list).
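Each of these components reports information that the kubelet publishes in the Node's status. As an illustrative sketch (field values will vary from cluster to cluster), you can read the kubelet, kube-proxy, and container runtime versions straight from the API:

```python
# Sketch: inspect what the kubelet reports about each Node's components.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    # The "Ready" condition tells you whether the kubelet considers the Node healthy
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(
        f"{node.metadata.name}: ready={ready} "
        f"kubelet={info.kubelet_version} "
        f"kube-proxy={info.kube_proxy_version} "
        f"runtime={info.container_runtime_version}"
    )
```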

Kubernetes Control Plane

The Master runs all the components that make up the Kubernetes Control Plane, and those components work together to orchestrate your containers. Just as an engine has many pieces that work together to power your vehicle, so does the Kubernetes Control Plane: each piece is responsible for a specific area that powers Kubernetes.

  • API Server is the component that exposes the Kubernetes API, the front end of the Kubernetes cluster through which all Control Plane components and users interact.
  • Etcd is a consistent and highly-available key-value store where all Kubernetes cluster data resides.
  • Scheduler watches for new pods that need to be scheduled and identifies which Node they should run on.
  • Controller Manager runs multiple controllers responsible for pod replication, endpoint creation, reacting when a Node goes down, and many other tasks. Controllers are control loops that continuously compare the cluster’s actual state with the desired state and take action to reconcile the two (a quick health-check sketch follows this list).
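One simple way to spot-check several of these components at once is the componentstatuses endpoint exposed by the API Server. Treat the sketch below as illustrative only: the componentstatuses API has been deprecated in newer Kubernetes releases, so it is a rough health check rather than a definitive one.

```python
# Sketch: ask the API Server for the health of the scheduler, controller manager, and etcd.
# The componentstatuses API is deprecated in newer releases; this is illustrative only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for cs in v1.list_component_status().items:
    for cond in cs.conditions:
        if cond.type == "Healthy":
            print(f"{cs.metadata.name}: healthy={cond.status} ({cond.message})")
```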

Putting these pieces together, we can start to see the overall architecture of Kubernetes.

Monitoring Kubernetes

Now that we know what’s under the hood of Kubernetes, we can see it is a complex machine with many components operating together. This machine gives you the ability to manage, deploy, and scale your containerized applications with ease. You might even be using a managed service such as Amazon EKS or Google GKE to provide a highly available, secure, managed control plane, letting you focus on your containerized applications. In either scenario, the ability to monitor and troubleshoot your applications — and potentially the Kubernetes Control Plane — is critical to delivering reliable, fault-tolerant distributed applications running in containers.
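Even before bringing in a full analytics platform, you can get a feel for the kinds of signals that matter by pulling a couple of basic health indicators from the API. The sketch below is a hand-rolled illustration (not how Sumo Logic collects data): it counts pods that are not in the Running or Succeeded phase and prints recent Warning events.

```python
# Sketch: two quick health signals -- unhealthy pod phases and recent Warning events.
# This is an illustration only, not a monitoring platform's collection method.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

unhealthy = [
    p for p in v1.list_pod_for_all_namespaces(watch=False).items
    if p.status.phase not in ("Running", "Succeeded")
]
print(f"Pods not Running/Succeeded: {len(unhealthy)}")

for e in v1.list_event_for_all_namespaces().items:
    if e.type == "Warning":
        obj = e.involved_object
        print(f"[{e.reason}] {obj.kind}/{obj.name}: {e.message}")
```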

So how do we monitor all of these pieces? In our next post in this series, we are going to go through each component and identify what is critical to monitor, and how you can use Sumo Logic’s machine data analytics platform to stay on top of what’s happening in your Kubernetes clusters.
