There are several ways to collect logs in Kubernetes. For Sitecore applications running in a Kubernetes cluster, the official Kubernetes specifications and the Sitecore images support two approaches to expose logs outside of the containers: using the default stdout container logs, or using a persistent volume to share the log files.
The first approach relies on the fact that Sitecore images have been instrumented with LogMonitor, a Microsoft container tool responsible for monitoring log sources inside the container and piping a formatted output to the container stdout. Log sources can include Windows system event logs, IIS logs and Sitecore application logs. The collected logs are all piped together, and filtering and analyzing them can be challenging because LogMonitor doesn't enrich the log entries with a source property. The challenge becomes even bigger when the Sitecore application generates multiline logs, for example when an error is logged together with its stack trace: each line of a multiline record is collected as a separate entry, and those entries can end up scattered in the piped stdout output, mingled with logs from other sources (IIS logs, for example).
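For reference, LogMonitor reads its sources from a JSON configuration file baked into the image. A minimal sketch of such a configuration could look like the following; the channels, directories and filters are illustrative assumptions, not the exact values shipped in the Sitecore images:

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "System", "level": "Error" }
        ]
      },
      {
        "type": "File",
        "directory": "C:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      },
      {
        "type": "File",
        "directory": "C:\\inetpub\\wwwroot\\App_Data\\logs",
        "filter": "log*.txt",
        "includeSubdirectories": false
      }
    ]
  }
}
```

Note that nothing in this configuration tags an entry with the source it came from: all three sources end up interleaved in the same stdout stream.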
The second approach consists of storing logs in a persistent volume, so that they can be made available outside the containers, in an Azure Files storage resource. This configuration was recently introduced with the Sitecore 10.2 Kubernetes specifications, and it covers Sitecore application logs only. The same configuration could be applied to other file sources inside the containers, such as IIS logs. This approach helps keep logs from different sources separated outside the containers.
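In practice, the wiring boils down to a PersistentVolumeClaim backed by the azurefile storage class, mounted over the logs folder of the Sitecore role. The following is a minimal sketch with illustrative names and paths, not the exact manifests shipped with the Sitecore 10.2 specifications:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cm-log-claim              # illustrative name
spec:
  accessModes:
    - ReadWriteMany               # Azure Files supports shared read/write access
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
# Fragment of the CM role deployment: the claim is mounted over the logs folder
spec:
  template:
    spec:
      containers:
        - name: sitecore-xm1-cm
          volumeMounts:
            - name: logs
              mountPath: 'C:\inetpub\wwwroot\App_Data\logs'   # assumed default Sitecore logs path
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: cm-log-claim
```

Because each claim targets a specific folder, every file source can get its own volume and stay separated from the others outside the cluster.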
Once the logs are available outside the containers, different solutions can be implemented to collect them in the desired log analytics platform. In Azure Kubernetes Service, the simplest solution is to rely on the out-of-the-box capabilities of Container Insights, which automatically collects the stdout and stderr container logs in the associated Azure Log Analytics workspace through its containerized monitoring agents. Alternative solutions that collect and forward logs outside Azure to other analytics platforms (such as Elasticsearch) are more complex. They usually require a log forwarder application (such as FluentBit), which can be set up either as a DaemonSet, with a pod on each cluster node collecting and forwarding stdout logs, or as a Deployment that reads logs from a persistent volume and forwards them.
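As an example of the DaemonSet option, here is a minimal sketch of a FluentBit DaemonSet tailing container logs from the node file system. It assumes Linux nodes with the default kubelet log paths, and it omits the ConfigMap with the input/output pipeline and the RBAC objects that a real deployment needs:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.9    # pin a specific version in practice
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true              # the forwarder only reads the node logs
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                # where the kubelet writes container logs
```

On Windows node pools, which host the Sitecore containers, the log paths and host mounts differ, so treat this as a Linux-node sketch only; this is one more reason why these alternatives are more complex than Container Insights.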
In the next sections, I am going to describe how to enhance the collection of stdout logs from AKS cluster containers to avoid fragmented multiline log records, and how to query them in Azure Monitor, targeting each source separately.