
December 13, 2019 By Sumo Logic

How to View Logs in Kubectl

Kubernetes has become the de facto solution for container orchestration. While it has, in some ways, simplified the management and deployment of your distributed applications and services, it has also introduced new levels of complexity. When maintaining a Kubernetes cluster, one must be mindful of all the different abstractions in its ecosystem and how the various pieces and layers interact with each other in order to avoid failed deployments, resource exhaustion, and application crashes.

Read more: Kubernetes vs Docker

Kubectl cheat sheet

When it comes to troubleshooting your Kubernetes cluster and the applications running on it, understanding and using logs is a must. Like most systems, Kubernetes maintains thorough logs of activity in your cluster and applications, which you can leverage to narrow down the root causes of any failures.

Kubectl cheat sheet: What are Kubernetes logs?

Logs in Kubernetes can give you insight into resources such as nodes, pods, containers, deployments, and replica sets. This insight allows you to observe the interactions between those resources and see the effects that one action has on another. Generally, logs in the Kubernetes ecosystem can be divided into the cluster level (logs emitted by components such as the kubelet, the API server, and the scheduler) and the application level (logs generated by pods and containers).

Kubectl cheat sheet: How to view Kubernetes logs?

The built-in way to view logs on your Kubernetes cluster is with kubectl. This, however, may not always meet your business needs or suit more sophisticated application setups. In this article, we will look into the inner workings of kubectl, learn how to view Kubernetes logs with kubectl, explore the pros and cons of this approach, and look at alternative solutions.

What is Kubectl?

Kubectl defined: Kubectl (pronounced “cube CTL”, “kube control”, “cube cuttle”, ...) is a robust command line interface that runs commands against the Kubernetes cluster and controls the cluster manager. Since the command line interface (CLI) is essentially a wrapper around the Kubernetes API, you can do everything directly with the API instead of using the CLI, if it suits your purposes.

Another interesting concept to note is that Kubernetes is designed to be a declarative, resource-based system. This means that there is a centralized state of resources maintained internally, against which you can perform CRUD operations. By manipulating these resources with the API, you control Kubernetes. To further illustrate how central the API is to the Kubernetes system, all the components except for the API server and etcd use the same API in order to read and write to the resources in etcd, the storage system.

How do you get Kubectl pod logs?

To get pod logs, run kubectl logs followed by the pod name. Adding the -p (short for --previous) flag tells kubectl to return the logs for the previous instance of the container in the pod, which lets you see lines that were emitted by a container that has since been terminated or restarted.
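
For example (the pod name here is illustrative):

    # Logs from the current container in the pod
    kubectl logs my-pod

    # Logs from the previous, terminated instance of the container
    kubectl logs my-pod -p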

How do you use tail in Kubectl logs?

The --tail flag takes the number of lines you want and returns the last N lines of logs from the pod. If you want to see more logs, increase the --tail value.
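
For instance (the pod name is illustrative):

    # Print only the last 100 log lines from the pod
    kubectl logs my-pod --tail 100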

Kubectl cheat sheet: How it works

Every time that you run a command with kubectl, it builds an HTTP REST API request under the hood, sends the request to the Kubernetes API server, and then retrieves the result and displays it on your terminal. In fact, if you want to execute any Kubernetes operation, you can simply make an HTTP request to its corresponding API endpoint.
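
You can see this for yourself in two ways, both using standard kubectl features:

    # Verbosity level 8 prints the HTTP requests and responses kubectl makes
    kubectl get pods -v=8

    # Alternatively, open an authenticated proxy to the API server and query it directly
    kubectl proxy --port=8001 &
    curl http://localhost:8001/api/v1/namespaces/default/pods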

For more details on the latest version of the Kubernetes API, see the official Kubernetes API reference.

Kubectl knows where the Kubernetes API server is based on your configuration file, which can be found in $HOME/.kube/config.

Let’s look at an example configuration file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/user/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/user/.minikube/client.crt
    client-key: /home/user/.minikube/client.key

As you can observe in the config file, the address of the API server endpoint is located next to the server field. This information tells kubectl how to connect to the cluster. Also included in this file are the credentials used to communicate with the API server, so you can effectively use this same file on a different machine to communicate with the same cluster.

In Kubernetes terminology, files that contain configuration information on how to connect to a cluster are referred to as kubeconfig files. Kubectl will automatically look for a config file in $HOME/.kube, but you can pass a different config file by using the --kubeconfig flag or by setting the KUBECONFIG environment variable. A single kubeconfig file can also hold information for multiple clusters.
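
For example (the file path is illustrative):

    # Point a single command at an alternate kubeconfig file
    kubectl get pods --kubeconfig=/path/to/other-config

    # Or set it for the whole shell session
    export KUBECONFIG=/path/to/other-config
    kubectl get pods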

Kubectl logs command cheat sheet

To run kubectl commands, you would follow this convention:

    kubectl [command] [TYPE] [NAME] [flags]

To use the kubectl logs command, you would pass either a pod name or a type/name. 

A caveat to note is that if you pass a deployment or a replica set, the logs command will get the logs for the first pod, and, by default, only logs for the first container in that pod will be shown. For example, to live tail the logs for the etcd container in the etcd-minikube pod in the kube-system namespace, starting from the most recent line, you would run:

    kubectl logs etcd-minikube -c etcd -n kube-system --tail 1 --follow 

The output of all kubectl commands is in plain text format by default, but you can customize this with the --output flag. For example, to get information on the services in the default namespace in JSON format, you would run:

kubectl get services -n default -o json

Example output:

{    "apiVersion": "v1",    "items": [      {        "apiVersion": "v1",        "kind": "Service",        "metadata": {          "creationTimestamp": "2019-11-06T14:23:09Z",          "labels": {            "component": "apiserver",            "provider": "kubernetes"          },          "name": "kubernetes",          "namespace": "default",          "resourceVersion": "150",          "selfLink": "/api/v1/namespaces/default/services/kubernetes",          "uid": "43a18e08-7523-4e5b-bb4d-871725afde3a"        },        "spec": {          "clusterIP": "10.96.0.1",          "ports": [            {              "name": "https",              "port": 443,              "protocol": "TCP",              "targetPort": 8443            }          ],          "sessionAffinity": "None",          "type": "ClusterIP"        },        "status": {          "loadBalancer": {}        }      }    ],    "kind": "List",    "metadata": {      "resourceVersion": "",      "selfLink": ""    }}

Kubectl command cheat sheet

Here is a cheat sheet of some common kubectl commands; usage examples follow the list:

kubectl get

  • used to view and find resources
  • can output JSON, YAML, or custom-formatted text

kubectl describe

  • retrieve extra information about a resource
  • needs a resource type and (optionally) a resource name

kubectl create

  • create a resource from a file or standard input

kubectl delete

  • can delete resources across multiple namespaces

kubectl label

  • can add/remove/update labels across multiple namespaces

kubectl logs

  • view container logs for debugging

kubectl exec

  • execute a command in a running container
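
A few usage examples covering the commands above (resource, namespace, and file names are illustrative):

    # View resources, with optional output formatting
    kubectl get pods --all-namespaces -o wide

    # Show detailed information about a resource
    kubectl describe pod my-pod

    # Create resources from a manifest file
    kubectl create -f deployment.yaml

    # Delete a resource in a specific namespace
    kubectl delete pod my-pod -n my-namespace

    # Add or update a label on a resource
    kubectl label pod my-pod environment=staging

    # View container logs
    kubectl logs my-pod -c my-container

    # Execute a command in a running container
    kubectl exec -it my-pod -- /bin/sh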

The Limitations of Kubectl Logging Solutions

Logs in a distributed, containerized environment are voluminous and can be overwhelming. While kubectl is great for basic interactions with your cluster, and viewing logs with kubectl suffices for ad-hoc troubleshooting, it has a lot of limitations as the size or complexity of your cluster grows. The biggest limitation is in live tailing and streaming multiple logs and in obtaining a comprehensive overview of live streams for multiple pods. Let's explore these limitations by looking into selectors and a third-party solution.

Selectors and Kubectl pods

Selectors are a core grouping mechanism in Kubernetes that you can use to select pods. When you use "kubectl run", it will automatically apply a label with the name of your deployment on all the objects it creates. So if you execute "kubectl run hello-world", the label "run=hello-world" will be applied, which you can use with the --selector flag. For example, to view the last line of the logs for all the pods matching this selector, you would run:

kubectl logs --selector=run=hello-world --tail 1
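
Putting the two steps together (the image name is illustrative; note that on the kubectl versions contemporary with this article, kubectl run creates a deployment):

    # Create the workload; kubectl applies the label run=hello-world
    kubectl run hello-world --image=nginx

    # Fetch the last log line from every pod carrying that label
    kubectl logs --selector=run=hello-world --tail 1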

This is all very useful, but if you try to use the --follow flag with this command, you will encounter an error. This is because --follow streams the logs from the API server: you open one connection to the API server per pod, and each of those opens a connection to the corresponding kubelet in order to stream the logs continuously. This does not scale well, since it translates to a lot of inbound and outbound connections to the API server; therefore, it was a design decision to limit the number of concurrent connections. So you can either stream the logs of one pod, or select a group of pods at the same time without streaming.
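
As a sketch of what this looks like in practice (the exact error text and pod count vary by kubectl version and cluster):

    # Following logs across many pods fails once the selector matches
    # more pods than the allowed number of concurrent streams
    kubectl logs --selector=run=hello-world --follow
    # error: you are attempting to follow 8 log streams, but maximum allowed
    # concurrency is 5, use --max-log-requests to increase the limit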

Kubectl pod problems and solutions

Other shortcomings of this approach are that logs from different pods are mixed together, which prevents you from knowing which log line came from which pod; logs from newly added pods are not shown; and log streaming comes to a halt when pods get restarted or replaced.

Stern is an open-source tool that can help solve part of this problem by allowing you to tail multiple pods on your cluster and multiple containers on each pod. It achieves this by connecting to the Kubernetes API, getting a list of pods, and then streaming the logs of all these pods by opening multiple connections. However, on large clusters, the impact and stress on the Kubernetes API can be noticeable, which is one reason this functionality lives in an external tool rather than in kubectl itself. Stern has other limitations as well: when a node goes down, its logs are no longer available, since they exist only on that node; and you cannot do a global search across all of your logs, only dump or stream them.
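
As a quick illustration of Stern's usage (the pod name pattern is illustrative; Stern matches pod names by regular expression):

    # Tail the last 20 lines, then stream, from every pod whose name
    # matches "hello-world", across all of their containers
    stern hello-world --namespace default --tail 20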

This is where centralized logging plays an important role. You can send all of your logs to a centralized location, where they can also be indexed. You can view everything in one dashboard and retain access to your logs even if your cluster goes down. You can also search across all of them. Let us look at a more comprehensive log collection and analytics solution.


How to Integrate Kubectl Logs and Sumo Logic

Sumo Logic is a cloud-based data analytics company that offers services for logs and metrics management, taking care of the collection, management and analysis of enterprise log data. With its cloud-based tools, you can leverage the generated data from your distributed applications to gain valuable real-time insights, at scale.

With Sumo Logic’s compelling Kubernetes logging solution, you can gain a comprehensive and centralized view of your kubectl logs across multiple clusters in real-time. You can also filter for events and set up alerts.

Sending Logs from Kubernetes to Sumo Logic

Sending logs from your Kubernetes cluster and applications to Sumo Logic is fairly simple. You can have a look at the sumologic-kubernetes-collection repository, which contains all the required resources to collect data from Kubernetes clusters into Sumo Logic.

In order to collect logs, events, metrics, and security data from Kubernetes clusters, the Sumo Logic system leverages several open-source tools. It uses Fluentd and Fluent Bit to collect, process, and aggregate logs from different sources. For collecting metrics and security data, it runs Prometheus and Falco, respectively.

The collected data passes through a central Fluentd pipeline so that it can be enriched with metadata – such as container, pod, node, cluster, service, namespace, and deployment – before being sent to Sumo Logic.

To get started, log into your Sumo Logic account or create one on https://www.sumologic.com. Then you will need to create an access ID and an access key. These credentials need to be supplied in order to register new collectors or use the Sumo Logic API. For more information, read https://help.sumologic.com/Manage/Security/Access-Keys.

The quickest way to deploy all the tools and components necessary to start collecting data and forwarding it to Sumo Logic is with Helm. Luckily, Helm charts are provided in the sumologic-kubernetes-collection repository to help you achieve this.

In your Sumo Logic user interface under the Settings tab, add these fields to your Fields table schema so that your logs will be tagged with the relevant metadata: cluster, container, deployment, host, namespace, node, pod, service. For more details on fields, have a look at https://help.sumologic.com/Manage/Fields.

The Helm chart installation requires three parameters to be overridden: sumologic.endpoint, sumologic.accessId, and sumologic.accessKey. For sumologic.endpoint, refer to https://help.sumologic.com/APIs/General-API-Information/Sumo-Logic-Endpoints-and-Firewall-Security to obtain the appropriate API endpoint.

Assuming that you have Helm installed on the client side and Tiller is running on your Kubernetes cluster, add the sumologic private repo:

helm repo add sumologic https://sumologic.github.io/su...

Install the chart with release name collection and namespace sumologic:

helm install sumologic/sumologic \
  --name collection \
  --namespace sumologic \
  --set sumologic.endpoint=<SUMO_ENDPOINT> \
  --set sumologic.accessId=<SUMO_ACCESS_ID> \
  --set sumologic.accessKey=<SUMO_ACCESS_KEY> \
  --set prometheus-operator.prometheus.prometheusSpec.externalLabels.cluster="<my-cluster-name>" \
  --set sumologic.clusterName="<my-cluster-name>"

You should see a lot of information about the deployment:

NAME: collection
LAST DEPLOYED: Mon Nov 4 11:19:52 2019
NAMESPACE: sumologic
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME                                         AGE
collection-falco                             2s
collection-kube-state-metrics                2s
psp-collection-prometheus-node-exporter      2s
...
NOTES:
Thank you for installing sumologic.

A Collector with the name kubernetes-<TIMESTAMP> has been created in your Sumo Logic account.

Check the release status by running:
  kubectl --namespace sumologic get pods -l "release=collection"

To check the status of the pods and make sure they are running, type:

kubectl get pods --namespace sumologic

When you go back to your Sumo Logic account and click on the Collection tab, you should see the new Kubernetes collector displayed.

How to View Kubectl Logs with Sumo Logic

Monitoring, filtering, searching, and troubleshooting becomes easier when you have dashboards configured. Sumo Logic provides a lot of visibility into your Kubernetes clusters and applications via its Kubernetes App. Paired with the Kubernetes Control Plane App, it has never been easier to monitor the state and health of your entire Kubernetes ecosystem.

With metrics, logs, events, and security data across clusters combined and enhanced with consistent metadata, Sumo Logic is able to give you a uniform and comprehensive overview of your Kubernetes clusters via predefined dashboards. The dashboards will update in real-time and correspond to any changes happening to the state of your Kubernetes clusters.

Sumo Logic offers multiple apps that you can install to your account to access the numerous predefined dashboards. The Kubernetes App provides dashboards that give you visibility into the application logs of the worker nodes of your clusters. It should be used together with the Kubernetes Control Plane App, which provides dashboards that give you visibility into your control plane nodes (including the API server and the storage backend).

For vendor-specific dashboards, Sumo Logic offers the AKS, EKS, and GKE Control Plane Apps, which give you visibility into the control plane of your managed Kubernetes clusters.

To install any of these Apps, go to the App Catalog, search for “Kubernetes”, select the app and add it to the library.

Kubectl Best Practices to Manage Logs

When managing Kubernetes logs at scale, there are a few things to keep in mind. Kubernetes monitors the stdout and stderr of all pods, so if your applications are not sending logs to the standard output and error streams, Kubernetes won't be able to collect them, and most likely, neither will your third-party logging solution. Automate the collection and shipping of your logs to a separate, centralized location. This will enable you to cross-examine information more easily, prevent you from losing valuable log data, and give other members of your organization easier access to the logs.
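
As a minimal illustration of the stdout point (the pod name, image, and message are illustrative):

    # A container that writes to stdout; Kubernetes captures the output automatically
    kubectl run logger --image=busybox --restart=Never -- sh -c 'echo "hello from stdout"'

    # The line is then retrievable through the normal log machinery
    kubectl logs logger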

Note: Kubectl tail logs

Be sure to monitor and live tail logs across all system layers and components. Having insight into all the events emitted from your entire stack is vital to obtaining a well-rounded view of how your system is performing and what the end user is experiencing. Moreover, due to the dynamic nature of containers and pods, they can be really difficult to keep track of and filter for unless you explicitly tag them with consistent labels. Thus, it is good to make use of features like selectors and metadata enrichment in Sumo Logic.

Lastly, leverage an external log management tool for its live-tail debugging, search, and filtering capabilities. For example, Sumo Logic Live Tail gives you the ability to tail log events originating from Sources configured on Installed Collectors. There is also a Live Tail CLI that allows you to start and stop live tail sessions from the command line. To drill down into the data, you can filter by keyword or highlight keywords that appear in the live tail session. You can use its centralized logs and saved live tail searches to gain insight and evaluate key trends across your entire system. Harness the value of the aggregated data to improve and optimize your system and make better business decisions.

Conclusion: Kubectl Logs

All in all, Kubernetes logs are full of useful information about the health of your cluster and applications. "Kubectl logs" is fine during your first steps with Kubernetes, but it quickly shows its limits. A tool like Stern is great on small, development clusters, but inadequate for production. As you scale up, it makes sense to move to an enterprise log management solution that makes it easier to monitor, filter, and troubleshoot problems efficiently.
