Log Monitoring Classic
DESK Log Monitoring supports collecting logs from Kubernetes container orchestration systems via OneAgent.
As an alternative to OneAgent-based log collection, you can stream logs to DESK via the logs ingest API with an integration such as Fluent Bit, Fluentd, Logstash, or DESK Collector.
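As a sketch of what a custom integration might do, the following builds a request that ships a small batch of log records as JSON. The environment URL, token, and the /api/v2/logs/ingest path are assumptions modeled on common logs ingest APIs; verify the exact endpoint and auth scheme in your API reference.

```python
import json
import urllib.request

# Hypothetical environment URL and access token; replace with your own.
ENV_URL = "https://example.env"
API_TOKEN = "example-token"

def build_log_ingest_request(records):
    """Build a POST request that ships a batch of log records as JSON.

    The endpoint path is an assumption for illustration only.
    """
    body = json.dumps(records).encode("utf-8")
    return urllib.request.Request(
        url=f"{ENV_URL}/api/v2/logs/ingest",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Api-Token {API_TOKEN}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )

record = {
    "content": "Readiness probe failed",
    "log.source": "kubelet",
    "k8s.namespace.name": "kube-system",
}
req = build_log_ingest_request([record])
# urllib.request.urlopen(req) would send the batch; omitted here.
```

A log shipper such as Fluent Bit performs the same kind of POST for you, batching records from the node.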
DESK Log Monitoring supports various Kubernetes-based container platforms, such as upstream Kubernetes and Red Hat OpenShift, using containerd or CRI-O as the container runtime.
Docker isn't compliant with CRI, the Container Runtime Interface. For this reason, Kubernetes setups that use Docker as the container runtime are only partially supported. Kubernetes deprecated Docker as a container runtime in v1.20.
For more details regarding supported versions of Kubernetes, check DESK support lifecycle for Kubernetes and Red Hat OpenShift Full-Stack Monitoring.
OneAgent autodiscovers logs written by containerized applications to their stdout/stderr streams. Kubernetes saves these log streams to files on the Kubernetes node. OneAgent autodiscovers these log files and reports the container logs under the Container Output log source.
Logs written directly to the pod's filesystem are not discovered by OneAgent. In this case, use a log shipper integration such as Fluent Bit.
The OneAgent Log module decorates the ingested logs with the following Kubernetes metadata: k8s.cluster.name, k8s.cluster.uid, k8s.namespace.name, k8s.workload.name, k8s.workload.kind, dt.entity.kubernetes_cluster, k8s.pod.name, k8s.pod.uid, k8s.container.name, dt.entity.kubernetes_node. This metadata maps the logs to the entity model of Kubernetes clusters, namespaces, workloads, and pods.
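For illustration, an enriched log record carrying this metadata might look like the following; the attribute names are the ones listed above, while every value is invented for this example.

```python
# Hypothetical enriched log record; all values are illustrative.
enriched_record = {
    "content": "Started container manager",
    "k8s.cluster.name": "demo-cluster",
    "k8s.cluster.uid": "a1b2c3d4",
    "k8s.namespace.name": "kube-system",
    "k8s.workload.name": "kube-proxy",
    "k8s.workload.kind": "daemonset",
    "k8s.pod.name": "kube-proxy-x7x9z",
    "k8s.pod.uid": "e5f6a7b8",
    "k8s.container.name": "kube-proxy",
    "dt.entity.kubernetes_cluster": "KUBERNETES_CLUSTER-0000000000000001",
    "dt.entity.kubernetes_node": "KUBERNETES_NODE-0000000000000002",
}

# The k8s.* attributes are what map the record to the cluster, namespace,
# workload, and pod entities in the Kubernetes entity model.
```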
Also, any pod annotations starting with the metadata.desk.com/ prefix are added to the log records.
You can control the ingestion of Kubernetes logs with log ingest rules in DESK. You can configure these rules at the Kubernetes cluster level to allow cluster-specific log ingestion. The rules use matchers on Kubernetes metadata and other common log entry attributes to determine which logs are ingested. Standard OneAgent log processing features, including sensitive data masking, timestamp configuration, and automatic enrichment of log records, are also available and enabled here.
Use the following recommended matching attributes when configuring log ingestion from Kubernetes.
Ingesting logs from Kubernetes requires defining log ingest rules. The configuration is based on a hierarchy of rules that use matchers for Kubernetes and other common log entry attributes. These rules determine which log files, among those detected by OneAgent, are ingested.
Log ingest rules can be defined at the environment scope, as well as at the host or host group scope. The matching hierarchy is as follows:
See Configuration scopes for the four scopes of the configuration hierarchy.
Explore the following use cases for log ingestion from Kubernetes environments using DESK. By configuring log ingestion with different matchers, you can control which logs are captured in the system. The use cases below offer guidance on configuring DESK to capture logs based on your specific monitoring needs, whether you need logs from a particular namespace, from a particular container, or based on other criteria.
Go to Settings and select Log Monitoring > Log ingest rules.
Select Add rule and provide the name for your configuration in the Rule name field.
Make sure that the Include in storage toggle is turned on, so that logs matching this configuration are stored in DESK.
Select Add condition.
From the Matcher attribute dropdown, select K8s namespace name.
Select the namespace from the dropdown inside the Value field, and select Add matcher.
Select Save changes.
You can now analyze the logs in the log viewer or notebooks after filtering by the appropriate namespace. You can also find the logs in context in the Kubernetes application by selecting the Logs tab.
Go to Settings and select Log Monitoring > Log ingest rules.
Select Add rule and provide the name for your configuration in the Rule name field.
Make sure that the Include in storage toggle is turned on, so that logs matching this configuration are stored in DESK.
Select Add condition.
From the Matcher attribute dropdown, select K8s namespace name.
Select the namespace from the dropdown inside the Value field, and select Add matcher.
Add a new matcher; this time, select K8s container name and enter the container name in the Value field. You can add multiple container names in this configuration step.
Select Save changes.
You can now analyze the logs in the log viewer or notebooks after filtering by the appropriate namespace and container. You can also find the logs in context in the Kubernetes application by selecting the Logs tab.
On the Log ingest rules screen, arrange the configured rules to prioritize the excluded namespaces rule at the top and the rule including all namespaces at the bottom.
You can use the Settings API to manage your log ingest rules:
To check the current schema version for log ingest rules, list all available schemas and look for the builtin:logmonitoring.log-storage-settings schema identifier.
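One way to locate that identifier is to filter the list-schemas response client-side. The response shape and version values below are assumptions for illustration; check them against the actual API response.

```python
# Parsed JSON from the list-schemas endpoint (shape and version values
# are assumed for illustration).
schemas_response = {
    "items": [
        {"schemaId": "builtin:logmonitoring.log-storage-settings",
         "latestSchemaVersion": "1.0.0"},
        {"schemaId": "builtin:logmonitoring.sensitive-data-masking-settings",
         "latestSchemaVersion": "2.0.0"},
    ],
}

# Pick out the log ingest rules schema entry by its identifier.
matches = [
    item for item in schemas_response["items"]
    if item["schemaId"] == "builtin:logmonitoring.log-storage-settings"
]
log_ingest_schema = matches[0]
```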
Log ingest rule objects can be configured for the following scopes:
To create a log ingest rule using the API:
Create an access token with the Write settings (settings.write) and Read settings (settings.read) scopes.
Use the GET a schema endpoint to learn the JSON format required to post your configuration. The log ingest rules schema identifier (schemaId) is builtin:logmonitoring.log-storage-settings. Here is an example JSON payload with the log ingest rules:
{
  "items": [
    {
      "objectId": "vu9U3hXa3q0AAAABACpidWlsdGluOmxvZ21vbml0b3JpbmcubG9nLXN0b3JhZ2Utc2V0dGluZ3MABEhPU1QAEEFEMDVFRDZGQUUxNjQ2MjMAJDZkZGU3YzY5LTMzZjEtMzNiZC05ZTAwLWZlNDFmMjUxNzUzY77vVN4V2t6t",
      "value": {
        "enabled": true,
        "config-item-title": "Send kube-system logs",
        "send-to-storage": true,
        "matchers": [
          {
            "attribute": "k8s.container.name",
            "operator": "MATCHES",
            "values": [
              "kubedns",
              "kube-proxy"
            ]
          },
          {
            "attribute": "k8s.namespace.name",
            "operator": "MATCHES",
            "values": [
              "kube-system"
            ]
          }
        ]
      }
    }
  ],
  "totalCount": 1,
  "pageSize": 100
}
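A minimal sketch of submitting such a rule programmatically might look like the following. The environment URL, token, and the /api/v2/settings/objects path are assumptions based on typical settings APIs; confirm the exact endpoint and required token scopes in the API reference.

```python
import json
import urllib.request

def build_settings_request(env_url, api_token, objects):
    """Build a POST request for a settings-objects endpoint (path assumed)."""
    body = json.dumps(objects).encode("utf-8")
    return urllib.request.Request(
        url=f"{env_url}/api/v2/settings/objects",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Api-Token {api_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )

# The rule payload mirrors the schema shown above.
rule = [{
    "schemaId": "builtin:logmonitoring.log-storage-settings",
    "scope": "tenant",
    "value": {
        "enabled": True,
        "config-item-title": "All logs from kube-system namespace",
        "send-to-storage": True,
        "matchers": [{
            "attribute": "k8s.namespace.name",
            "operator": "MATCHES",
            "values": ["kube-system"],
        }],
    },
}]

req = build_settings_request("https://example.env", "example-token", rule)
# urllib.request.urlopen(req) would submit the rule; omitted here.
```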
The examples that follow show the results of various combinations of rules and matchers.
This task requires setting one rule with one matcher.
[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "tenant",
  "value": {
    "enabled": true,
    "config-item-title": "All logs from kube-system namespace",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
}]
This task requires setting one rule with three matchers.
[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "tenant",
  "value": {
    "enabled": true,
    "config-item-title": "Error logs from kube-proxy and kube-dns containers",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      },
      {
        "attribute": "k8s.container.name",
        "operator": "MATCHES",
        "values": [
          "kubedns",
          "kube-proxy"
        ]
      },
      {
        "attribute": "log.content",
        "operator": "MATCHES",
        "values": [
          "*ERROR*"
        ]
      }
    ]
  }
}]
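As a sketch of how a multi-matcher rule evaluates, the following assumes that matchers within a rule combine with AND, the values within a single matcher combine with OR, and MATCHES supports `*` wildcards; these semantics are an assumption for illustration.

```python
from fnmatch import fnmatch

def matcher_applies(matcher, record):
    """True if the record's attribute matches any listed value (OR)."""
    attr_value = str(record.get(matcher["attribute"], ""))
    return any(fnmatch(attr_value, pattern) for pattern in matcher["values"])

def rule_applies(matchers, record):
    """True only if every matcher applies (AND across matchers)."""
    return all(matcher_applies(m, record) for m in matchers)

# The three matchers from the rule above.
matchers = [
    {"attribute": "k8s.namespace.name", "operator": "MATCHES",
     "values": ["kube-system"]},
    {"attribute": "k8s.container.name", "operator": "MATCHES",
     "values": ["kubedns", "kube-proxy"]},
    {"attribute": "log.content", "operator": "MATCHES",
     "values": ["*ERROR*"]},
]

error_log = {
    "k8s.namespace.name": "kube-system",
    "k8s.container.name": "kube-proxy",
    "log.content": "ERROR failed to sync iptables rules",
}
info_log = dict(error_log, **{"log.content": "INFO synced rules"})
```

Under these assumptions, `error_log` is ingested while `info_log` is not, because the log.content matcher fails on the INFO line.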
This task requires setting two rules.
[{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "HOST_GROUP-1D91E46493049D07",
  "value": {
    "enabled": true,
    "config-item-title": "Exclude logs from kube-system namespace",
    "send-to-storage": false,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "kube-system"
        ]
      }
    ]
  }
},{
  "schemaId": "builtin:logmonitoring.log-storage-settings",
  "scope": "HOST_GROUP-1D91E46493049D07",
  "value": {
    "enabled": true,
    "config-item-title": "All Kubernetes logs",
    "send-to-storage": true,
    "matchers": [
      {
        "attribute": "k8s.namespace.name",
        "operator": "MATCHES",
        "values": [
          "*"
        ]
      }
    ]
  }
}]
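To see why the ordering of the two rules above matters, here is a sketch of first-match evaluation, assuming rules are checked top to bottom and the first rule whose matcher applies decides whether the record is stored.

```python
from fnmatch import fnmatch

# Simplified versions of the two rules above, in priority order:
# the exclusion rule first, the catch-all rule second.
rules = [
    {"send_to_storage": False, "namespace_patterns": ["kube-system"]},
    {"send_to_storage": True, "namespace_patterns": ["*"]},
]

def should_store(namespace):
    """Return the decision of the first matching rule (assumed first-match
    semantics); drop the record if no rule matches."""
    for rule in rules:
        if any(fnmatch(namespace, p) for p in rule["namespace_patterns"]):
            return rule["send_to_storage"]
    return False
```

With the exclusion rule first, kube-system logs are dropped and everything else is stored; reversing the order would make the catch-all rule swallow kube-system logs as well.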
The requirements for autodiscovery and ingestion of Kubernetes logs are the following:
No, OneAgent doesn't offer such functionality yet, although it is planned for future releases.
For more ingest-related FAQs, see the Log ingest rules page.