Understanding K8s and Its Value for DevOps
Kubernetes is an open-source container management system developed by Google and made available to the public in June 2014. Its goal is to make it easier for developers to deploy and manage complex distributed systems built on Linux containers. It was designed by Google engineers with long experience writing applications that run in clusters.
Kubernetes—or K8s, as it is commonly called—was the third container cluster manager developed at Google, following the internal-only Borg and Omega. Like Omega, K8s is built around an improved scheduling architecture and a shared persistent store. It differs from Omega in that the store is not exposed directly to trusted control-plane components; instead, all components read and modify cluster state through a REST API, which processes operations much like any other RESTful web service.
In 2015 the Linux Foundation and Google joined forces to form the Cloud Native Computing Foundation (CNCF), and Kubernetes was donated to it as a seed technology. A stable release of K8s was launched in December 2017.
What Can You Do With Kubernetes?
Kubernetes allows companies to harness more computing power when running software applications. It automates the deployment, scheduling, and operation of application containers on clusters of machines—from a handful of nodes to thousands—in private, cloud, or hybrid environments. It also allows developers to create a “container-centric” environment, deploying container images directly on Kubernetes or integrating with a continuous integration (CI) system.
As a platform, K8s can be combined with other technologies for added functionality and does not limit the types of applications or services it supports. Some container-based Platform-as-a-Service (PaaS) systems run on Kubernetes. K8s differs from these PaaS systems in that it is not all-inclusive: it does not provide middleware, deploy source code, build applications, or offer a click-to-deploy marketplace.
The Value of Kubernetes and Container Services
Kubernetes and container services enable software to run reliably when moved from one computing environment to another, regardless of differences between those environments. They allow application developers and IT administrators to run multiple application containers on a common, shared operating system (OS) across clusters of servers, called nodes.
Application containers are isolated from each other, but they share the OS kernel, and the shared parts of the host operating system are read-only to each container. In this way, all components of an application are decoupled from the underlying host infrastructure, which makes deploying and scaling across different cloud and OS environments easier.
Containers are more lightweight—megabytes as opposed to gigabytes—and use fewer resources than virtual machines (VMs). A container typically consists of an application together with its dependencies: libraries, binaries, and configuration files. A VM contains the runtime environment plus its own full operating system, making it more cumbersome and less portable.
A Kubernetes orchestration platform is virtualization at the OS level. It provides a virtual platform for applications to run on, with cluster resources requested and managed through a REST API. It supports a microservices architecture built from portable executable images that package software together with all of its dependencies.
In the past, heavy, non-portable applications were the standard. Now, with automated container systems like Kubernetes, applications can be built with a single OS instance supporting multiple containers across different computing environments—regardless of platform. As an example, Google runs billions of containers weekly.
There are a few Kubernetes-specific terms that are useful to know when starting out with K8s:
Kubernetes API – the RESTful interface through which the state of the cluster is stored, read, and modified; it can be accessed directly or with tools.
Kubectl – the command-line tool for running commands against a Kubernetes cluster.
Kubelet – an agent that runs on each node and uses PodSpecs to ensure the containers they describe are running and healthy.
Image – the files that make up the application that runs inside a container.
Pod – the smallest deployable unit: one or more containers that run together on a cluster and share storage and a network identity.
Cluster – a master (control plane) plus multiple worker machines, called nodes, that run applications in containers.
Node – a worker machine with the services needed to run pods, managed by the master components.
Minikube – a tool that runs a single-node cluster inside a VM on a local computer.
Controller – a control loop that drives the observed state of the cluster toward the desired state.
DaemonSet – ensures that nodes run a copy of a pod; as nodes are added to the cluster, pods are added to them.
A glossary with more terms and their definitions can be found in the Kubernetes Standardized Glossary.
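To make several of these terms concrete, here is a minimal sketch of a Pod manifest. The names, labels, and the nginx image are illustrative assumptions, not taken from the article:

```yaml
# pod.yaml — a minimal Pod running a single container.
# The pod name, label, and image (nginx:1.25) are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25      # the Image: files that make up the application
    ports:
    - containerPort: 80    # port the container listens on
```

Applied with `kubectl apply -f pod.yaml`, the manifest is submitted to the Kubernetes API; the kubelet on the node chosen by the scheduler then pulls the image and keeps the container running according to this PodSpec.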
Getting Started with Kubernetes
The Kubernetes container management system allows enterprises to create an automated, virtual, microservices application platform. By using container services, organizations can build, deploy, and horizontally scale lightweight applications across multiple types of server hosts, cloud environments, and other infrastructure more efficiently.
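As a sketch of what horizontal scaling looks like in practice, a Deployment manifest like the following asks Kubernetes to keep three identical pod replicas running. The names, labels, and image are illustrative assumptions:

```yaml
# deployment.yaml — a Deployment that maintains three pod replicas.
# The deployment name, labels, and image are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:                # template for the pods the controller creates
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

A controller continuously compares the observed number of pods with the desired `replicas` count and creates or deletes pods to close the gap, so scaling out or in is a matter of changing one field—for example with `kubectl scale deployment web-deployment --replicas=5`.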
To get started using Kubernetes with Google Container Engine (GKE), check out our step-by-step guide to creating clusters in GKE.