
February 19, 2016 By Michael Floyd

Sumo Logic’s Christian Beedgen Speaks on Docker Logging and Monitoring

Support for Docker logging has evolved over the past two years, and the improvements made from Docker 1.6 to today have greatly simplified both the process and the options for logging. However, DevOps teams are still challenged with monitoring, tracking and troubleshooting issues in a context where each container emits its own logging data. Machine data can come from numerous sources, and containers may not agree on a common method. Once log data has been acquired, assembling meaningful real-time metrics such as the condition of your host environment, the number of running containers, CPU usage, memory consumption and network performance can be arduous. And if a logging method fails, even temporarily, that data is lost.

Sumo Logic’s co-founder and CTO, Christian Beedgen, presented his vision for comprehensive container monitoring and logging to the 250+ developers who attended the Docker team’s first Meetup at Docker HQ in San Francisco this past Tuesday.

Docker Logging

When it comes to logging in Docker, the recommended path for developers has been to have the container write to its standard output and let Docker collect that output. You then configure Docker either to store it in files or to send it to syslog. Another option is to write to a directory, the typical /var/log arrangement with plain log files, and then share that directory with another container.

In practice, when you start the first container, you indicate that /var/log will be a “volume,” essentially a special directory, that can then be shared with another container. Then you can run tail -f in a separate container to inspect those logs. Running tail by itself isn’t terribly exciting, but it becomes much more meaningful if you want to run a log collector that takes those logs and ships them somewhere. The reason is that you shouldn’t have to synchronize between application and logging containers (for example, where the logging system needs Java or Node.js because it ships logs that way). The application and logging containers should not have to agree on specific dependencies, and risk breaking each other’s code.
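The volume-sharing pattern described above can be sketched with the Docker CLI. The image name `myorg/app` and the file name `app.log` are placeholders for illustration, not from the talk:

```shell
# Start the application container, declaring /var/log as a volume.
# "myorg/app" is a hypothetical image that writes plain files to /var/log.
docker run -d --name app -v /var/log myorg/app

# Run a second container that mounts the same volume and tails the logs.
# A real log collector would ship these files somewhere instead of printing them.
docker run --rm --volumes-from app ubuntu tail -f /var/log/app.log
```

Because the second container brings its own runtime, the application container never needs to know how the logs are shipped.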

But as Christian showed, this isn’t the only way to log in Docker. Christian began the presentation by reminding developers of the 12-Factor App, a methodology for building SaaS applications that recommends limiting each container to one process as a best practice, with each process running unbuffered and sending data to stdout. He then introduced the numerous options for container logging from the pre-Docker 1.6 days forward, quickly enumerating them and noting that some were better than others. You could:

  1. Log Directly from an Application
  2. Install a File Collector in the Container
  3. Install a File Collector as a Container
  4. Install a Syslog Collector as a Container
  5. Use Host Syslog for Local Syslog
  6. Use a Syslog Container for Local Syslog
  7. Log to Stdout and Use a File Collector
  8. Log to Stdout and Use Logspout
  9. Collect from the Docker File System (Not Recommended)
  10. Inject a Collector via Docker Exec
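Option 8, for example, pairs stdout logging with Logspout, which attaches to the Docker daemon through its Unix socket and routes every container’s output to a remote endpoint. A minimal sketch, where the syslog address is a placeholder:

```shell
# Logspout reads stdout/stderr from all containers via the Docker socket
# and forwards them to a remote syslog endpoint.
# "logs.example.com:514" is a placeholder address.
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog://logs.example.com:514
```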

Logging Drivers in Docker Engine

Christian also talked about logging drivers, which he believes have been a very large step forward in the last 12 months. He stepped through the incremental logging enhancements made to Docker from 1.6 to today. Docker 1.6 added three logging drivers: json-file (the default backing docker logs), syslog, and a null driver that discards output. The driver interface was meant to be the smallest subset of functionality that logging drivers need to implement. Stdout and stderr would still be the source of logging for containers, but Docker takes the raw streams from the containers and creates discrete messages, delimited by writes, that are then sent to the logging drivers. Version 1.7 added the ability to pass parameters to drivers, and in Docker 1.9 tags were made available to other drivers. Importantly, Docker 1.10 allows syslog to run encrypted, thus allowing companies like Sumo Logic to send data securely to the cloud.
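The enhancements he walked through map onto per-container CLI flags. A hedged sketch, assuming a reachable syslog endpoint (the addresses are placeholders):

```shell
# Docker 1.6+: select a logging driver per container.
docker run -d --log-driver=syslog nginx

# Docker 1.7+: pass parameters to the driver, e.g. the syslog address.
# Docker 1.9+: tag log lines with container metadata via --log-opt tag.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=tcp://logs.example.com:514 \
  --log-opt tag="{{.Name}}" \
  nginx

# Docker 1.10+: encrypt the syslog transport with TLS.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=tcp+tls://logs.example.com:6514 \
  nginx
```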

He noted recent proposals for a Google Cloud Logging driver and a TCP/UDP/Unix domain socket driver. “As part of the Docker engine, you need to go through the engine commit protocol. This is good, because there’s a lot of review stability. But it is also suboptimal because it is not really modular, and it adds more and more dependencies on third party libraries.” So he poses the question of whether this should be decoupled.

In fact, others have suggested the drivers be external plugins, similar to how volumes and networks work. Plugins would allow developers to write custom drivers for their specific infrastructure, and it would enable third-party developers to build drivers without having to get them merged upstream and wait for the next Docker release.

A Comprehensive Approach for Monitoring and Logging

As Christian stated, “you can’t live on logs alone.” To get real value from machine-generated data, you need to look at what he calls “comprehensive monitoring.” There are five requirements to enable comprehensive monitoring:

  • Events
  • Configurations
  • Logs
  • Stats
  • Host and daemon logs

For events, you can send each event as a JSON message, making JSON the logging format for events: you enumerate all running containers, then start listening to the event stream, collecting each running container and each start event. For configurations, you call the inspect API and send the result as JSON as well. “Now you have a record,” he said. “Now we have all the configurations in the logs, and we can quickly search for them when we troubleshoot.” For logs, you simply call the logs API to open a stream and send each log as, well, a log.
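Those three steps correspond directly to the Docker CLI and the underlying remote API. A sketch, where `app` is a hypothetical container name:

```shell
# Events: enumerate running containers, then follow the event stream.
docker ps --quiet
docker events

# Configurations: inspect a container and keep the JSON as a searchable record.
docker inspect --format '{{json .Config}}' app

# Logs: open a stream and forward each line.
docker logs -f app
```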

Similarly, for statistics, you call the stats API to open a stream for each running container and each start event, and send each received JSON message as a log. “Now we have monitoring,” says Christian. “For host and daemon logs, you can include a collector in host images or run a collector as a container. This is what Sumo Logic is already doing, thanks to the API.”
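The stats stream is reachable both from the CLI and from the remote API over the daemon’s Unix socket. A sketch, again with `app` as a hypothetical container name:

```shell
# CLI: a one-shot stats snapshot for all running containers.
docker stats --no-stream

# Remote API: open the per-container stats stream; each line is a JSON
# message that can be shipped as a log. Requires curl 7.40+ for --unix-socket.
curl --no-buffer --unix-socket /var/run/docker.sock \
  http://localhost/containers/app/stats
```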

Docker Logging Analyzer Dashboard


Perhaps it is a testament to the popularity of Docker, but even the Docker team seemed surprised by the huge turnout for this first meetup at HQ. As a proud sponsor of the meetup, we at Sumo Logic look forward to the new features in Docker 1.10 aimed at enhancing container security, including temporary file systems, seccomp profiles, user namespaces, and content-addressable images. If you’re interested in learning more about Docker logging and monitoring, you can download Christian’s Docker presentation on Slideshare.
