Collecting Metrics
This document covers multiple use cases related to scraping custom application metrics exposed in Prometheus format.
There are three major sections:
- Scraping metrics. Describes how to send your application metrics to Sumo Logic.
- Metrics modifications. Describes how to filter metrics and rename both metrics and metric metadata.
- Kubernetes metrics. Describes the metrics we collect from the Kubernetes components.
Scraping metrics
This section describes how to scrape metrics from your applications. The following scenarios are covered:
- Application metrics are exposed (one endpoint scenario)
- Application metrics are exposed (multiple endpoints scenario)
- Application metrics are not exposed
Application metrics are exposed (one endpoint scenario)
If there is only one endpoint in the Pod you want to scrape metrics from, you can use annotations. Add the following annotations to your Pod definition:
```yaml
# ...
annotations:
  prometheus.io/port: "<port name or number>" # Port which metrics should be scraped from
  prometheus.io/scrape: "true" # Set if metrics should be scraped from this Pod
  prometheus.io/path: "/metrics" # Path which metrics should be scraped from
```
If you add more than one annotation with the same name, only the last one will be used.
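For example, a hypothetical Pod exposing metrics on port 3000 at the default /metrics path could be annotated as follows (the Pod name, container name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app # hypothetical Pod name
  annotations:
    prometheus.io/scrape: "true"   # enable scraping for this Pod
    prometheus.io/port: "3000"     # scrape from container port 3000
    prometheus.io/path: "/metrics" # default path, shown for completeness
spec:
  containers:
    - name: my-app
      image: my-app:latest # hypothetical image
      ports:
        - containerPort: 3000
```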
Application metrics are exposed (multiple endpoints scenario)
Use sumologic.metrics.additionalServiceMonitors instead of kube-prometheus-stack.prometheus.additionalServiceMonitors. They behave identically and can even be used in tandem, but the latter works only when Prometheus is enabled, which was deprecated in v4 and removed in v5 of the Chart, and it does not work with the OpenTelemetry metrics collector, which is the default starting from v4.
If you want to scrape metrics from multiple endpoints in a single Pod, you need a Service that points to the Pod and also to configure sumologic.metrics.additionalServiceMonitors in your user-values.yaml:
```yaml
sumologic:
  metrics:
    additionalServiceMonitors:
      - name: <service monitor name>
        endpoints:
          - port: "<port name or number>"
            path: <metrics path>
        namespaceSelector:
          matchNames:
            - <namespace>
        selector:
          matchLabels:
            <identifying label 1>: <value of identifying label 1>
            <identifying label 2>: <value of identifying label 2>
```
For advanced ServiceMonitor configuration, see the Prometheus Operator documentation.
Example
Let's consider a Pod that exposes the following metrics:
my_metric_cpu
my_metric_memory
on the following endpoints:
:3000/metrics
:3001/custom-endpoint
The Pod's definition looks like the following:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-custom-app
  name: my-custom-app-56fdc95c9c-r5pvc
  namespace: my-custom-app-namespace
  # ...
spec:
  containers:
    - ports:
        - containerPort: 3000
          protocol: TCP
        - containerPort: 3001
          protocol: TCP
# ...
```
There is also a Service which exposes the Pod's ports:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-custom-app-service
  name: my-custom-app-service
  namespace: my-custom-app-namespace
spec:
  ports:
    - name: "some-port"
      port: 3000
      protocol: TCP
      targetPort: 3000
    - name: "another-port"
      port: 3001
      protocol: TCP
      targetPort: 3001
  selector:
    app: my-custom-app
```
To scrape metrics from the above objects, apply the following configuration to user-values.yaml:
```yaml
sumologic:
  metrics:
    additionalServiceMonitors:
      - name: my-custom-app-service-monitor
        endpoints:
          - port: some-port
            path: /metrics
          - port: another-port
            path: /custom-endpoint
        namespaceSelector:
          matchNames:
            - my-custom-app-namespace
        selector:
          matchLabels:
            app: my-custom-app-service
```
Application metrics are not exposed
If you want to scrape metrics from an application that does not expose a Prometheus endpoint, you can use the telegraf operator. It will scrape metrics according to the configuration and expose them on port 9273 so that the OpenTelemetry metrics collector can scrape them.
For example, to expose metrics from the Nginx Pod, you can use the following annotations:
```yaml
annotations:
  telegraf.influxdata.com/inputs: |+
    [[inputs.nginx]]
      urls = ["http://localhost/nginx_status"]
  telegraf.influxdata.com/class: sumologic-prometheus
  telegraf.influxdata.com/limits-cpu: "750m"
```
The sumologic-prometheus class defines how the telegraf operator exposes metrics: they are exposed in Prometheus format on port 9273 at the /metrics path.
If you apply annotations to a Pod that's owned by another object, such as a DaemonSet, they won't take effect. In such a case, the annotation should be added to the Pod specification in the DaemonSet template.
After a restart, the Pod should have an additional telegraf container.
To scrape and forward exposed metrics to Sumo Logic, follow one of the following scenarios:
- Application metrics are exposed (one endpoint scenario)
- Application metrics are exposed (multiple endpoints scenario)
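For instance, combining this with the one-endpoint scenario, the telegraf-instrumented Pod could additionally carry the prometheus.io annotations pointing at the telegraf sidecar (a sketch; the port and path follow from the sumologic-prometheus class described above):

```yaml
annotations:
  # Telegraf configuration as shown above
  telegraf.influxdata.com/inputs: |+
    [[inputs.nginx]]
      urls = ["http://localhost/nginx_status"]
  telegraf.influxdata.com/class: sumologic-prometheus
  # Scrape the metrics exposed by the telegraf sidecar
  prometheus.io/scrape: "true"
  prometheus.io/port: "9273"
  prometheus.io/path: "/metrics"
```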
Metrics modifications
This section covers the following metrics modifications:
Filtering metrics
See the doc about filtering data.
Default attributes
By default, the following attributes should be available:
| Attribute name | Description |
|---|---|
| _collector | Sumo Logic collector name |
| _origin | Sumo Logic origin metadata ("kubernetes") |
| _sourceCategory | Sumo Logic source category |
| _sourceHost | Sumo Logic source host |
| _sourceName | Sumo Logic source name |
| cluster | Cluster Name |
| endpoint | Metrics endpoint |
| instance | Pod instance |
| job | Prometheus job name |
| k8s.container.name | Kubernetes Container name |
| k8s.deployment.name | Kubernetes Deployment name |
| k8s.namespace.name | Kubernetes Namespace name |
| k8s.node.name | Kubernetes Node name |
| k8s.pod.name | Kubernetes Pod name |
| k8s.pod.pod_name | Kubernetes Pod name |
| k8s.replicaset.name | Kubernetes Replicaset name |
| k8s.service.name | Kubernetes Service name |
| k8s.statefulset.name | Kubernetes Statefulset name |
| pod_labels_<label_name> | Kubernetes Pod label. Every label is a different attribute |
| prometheus_service | OpenTelemetry Service name |
Before ingestion into Sumo Logic, attributes are renamed according to the sumologicschemaprocessor documentation.
Renaming metrics
To rename metrics, you can use the transformprocessor. Look at the following snippet:
```yaml
sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: metric
                statements:
                  ## Renames <old_name> to <new_name>
                  - set(name, "<new_name>") where name == "<old_name>"
```
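As a concrete sketch, assuming your application emits a metric named my_metric_cpu that you want to expose as my_app_cpu_usage (both names are illustrative):

```yaml
sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: metric
                statements:
                  ## Renames my_metric_cpu (hypothetical) to my_app_cpu_usage
                  - set(name, "my_app_cpu_usage") where name == "my_metric_cpu"
```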
Adding or renaming metadata
To add or rename metadata, you can use the transformprocessor. Look at the following snippet:
```yaml
sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: resource
                statements:
                  ## adds <new_name> metadata, copying the value of <old_name>
                  - set(attributes["<new_name>"], attributes["<old_name>"])
                  ## adds <new_static_name> metadata with a static value
                  - set(attributes["<new_static_name>"], "<static_value>")
                  ## removes <old_name> metadata
                  - delete_key(attributes, "<old_name>")
```
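For example, to copy the k8s.pod.name attribute into a new attribute and add a static attribute (the target attribute names and the static value are illustrative):

```yaml
sumologic:
  metrics:
    otelcol:
      extraProcessors:
        - transform/1:
            metric_statements:
              - context: resource
                statements:
                  ## copies k8s.pod.name into a new "pod" attribute (hypothetical name)
                  - set(attributes["pod"], attributes["k8s.pod.name"])
                  ## adds a static "environment" attribute (hypothetical name and value)
                  - set(attributes["environment"], "production")
```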
See Default attributes for more information about attributes.
Investigation
If you do not see your metrics in Sumo Logic, ensure that you have followed the steps outlined in this document.
Kubernetes metrics
By default, we collect selected metrics from the following Kubernetes components:
- Kube API Server, configured with kube-prometheus-stack.kubeApiServer.serviceMonitor
- Kubelet, configured with kube-prometheus-stack.kubelet.serviceMonitor
- Kube Controller Manager, configured with kube-prometheus-stack.kubeControllerManager.serviceMonitor
- CoreDNS, configured with kube-prometheus-stack.coreDns.serviceMonitor
- Kube EtcD, configured with kube-prometheus-stack.kubeEtcd.serviceMonitor
- Kube Scheduler, configured with kube-prometheus-stack.kubeScheduler.serviceMonitor
- Kube State Metrics, configured with kube-prometheus-stack.kube-state-metrics.prometheus.monitor
- Prometheus Node Exporter, configured with kube-prometheus-stack.prometheus-node-exporter.prometheus.monitor