How Log Analysis Has Evolved

April 2, 2018 By Chris Tozzi

One" class="redactor-autoparser-object">https://www.sumologic.com/blog... way of assessing just how much IT trends have changed over the past several decades is to look at how log analysis tools and processes have evolved.

I know: Studying the history of log analysis may not seem quite as exciting as evaluating how virtualization technology has evolved from primitive functionality on CP-40 systems roughly fifty years ago to Docker containers today, for example. Nor is it as fun as tracing how far desktop environments have come since the days of Windows 3.1.

However, like virtualization and interface design, log analysis is something that IT admins (or SREs, or DevOps professionals, or whatever you’d like to call them) have been doing for decades. As such, tracing the history of log analysis is a useful way of understanding how one core task within software administration and development has changed, along with technology and mindsets, over an extended period.

This article discusses log analysis tool sets and strategies at various points in time, and explains how past log analysis tools and practices differ from those currently in use.

Log Analysis in the Early Days of Unix

Unix, the operating system that originated with Bell Labs in 1969, laid the architectural foundation for a number of operating systems that are widely used today, including but not limited to Linux.

Not surprisingly, if you look around the CLI environment of an up-to-date Linux or other Unix-like system, you’ll notice that many of the most basic tools installed there help you search through or transform text: grep, head, tail, and sed, to name a few.

While these tools are useful for a variety of text manipulation tasks, for early Unix system admins, they formed the basis of the tool set available for performing log analysis.

In other words, Unix in its early days didn’t come with tools for aggregating log files from multiple sources or automatically converting log files from one format to another. It offered no tools for monitoring logging data in real time and sending alerts to admins. And it certainly didn’t have user-friendly GUIs for visualizing data derived from log analysis.

Instead, admins used basic text manipulation and search tools to make sense of log files on an as-needed basis. This was the best they could manage, given the primitive nature of software at the time.
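To make that workflow concrete, here is a minimal sketch of this style of manual analysis. The log paths and search strings are illustrative examples rather than anything from the original tooling; any syslog-style text log would work the same way.

    # Count occurrences of a pattern (e.g., failed SSH logins) in a log:
    grep -c "Failed password" /var/log/auth.log

    # Skim the first few lines of a log to see where it starts:
    head -n 5 /var/log/syslog

    # Watch new entries arrive in real time:
    tail -f /var/log/syslog

    # Squeeze runs of spaces with sed, then show the 20 most recent error lines:
    grep -i "error" /var/log/syslog | sed 's/  */ /g' | tail -n 20

Every step is ad hoc: the admin decides what to search for, runs the command by hand, and interprets the output manually, which is exactly the as-needed workflow described above.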

Log Analysis in the Days of Waterfall

Fast-forward a couple of decades (to the 1990s and 2000s) and you’re in the age of waterfall software delivery.

Log analysis had become more complicated by this time. There were more logs to analyze. Operating systems kept separate logs for tasks like boot-up and system events. Applications often kept their own logs, too, though not necessarily in a centralized location. In addition, software deployments were starting to become distributed, increasing the importance of remote log aggregation.

These demands led to a new generation of log analysis tools, such as syslog-ng and rsyslog, which debuted in 1998 and 2004, respectively. Both offered a crucial new capability: collecting logs over the network.
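As a sketch of what that capability looked like in practice, a single rule in a classic rsyslog configuration can forward every message to a central collector. The hostname below is a placeholder, and 514 is the conventional syslog port; this is an illustrative fragment, not a complete configuration.

    # Hypothetical /etc/rsyslog.conf fragment (placeholder hostname).
    # Forward all facilities and priorities over UDP (single @):
    *.*  @logcollector.example.com:514
    # ...or, more reliably, over TCP (double @@):
    # *.*  @@logcollector.example.com:514

With a rule like this on each host, logs from across the network land in one place, where they can be searched as a combined stream instead of machine by machine.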

At the same time, a host of proprietary log analysis tools arose in the Windows world, many of them dedicated to specific tasks. One example was BootHawk (which appears to still be around), which supported analysis of boot and log data in order to improve startup and login times on Windows systems.

Tools like these gave admins easier visibility into log data, but they did not remove the need to perform manual log analysis. That was not a big deal at the time because this was, again, the age of waterfall software delivery. It didn’t matter if it took a while to respond to a software problem by sorting manually through log data, because users were accustomed to waiting years between software updates. Continuous delivery and user-first software development were not yet common practices.

Log Analysis and DevOps

Today, that has changed. The world of waterfall software delivery has given way to DevOps. New log analysis techniques have arisen to accommodate it.

In DevOps, automation is everything. For that reason, manual log analysis no longer works: it undercuts continuous delivery, continuous feedback, and continuous visibility.

Similarly, separately analyzing log data from multiple sources is inefficient. DevOps champions breaking down the silos that prevent smooth workflows, and although most people may not think of disparate log files as a type of silo, they essentially fit the bill. If you have to establish a separate log analysis workflow for each type of log you want to analyze, or for each host, you end up with a very siloed workflow.

That is why, in the DevOps world, log aggregation tools have become essential. Modern log aggregation tools collect log data from multiple locations, as well as multiple log types, and allow admins to study the data through a single pane of glass.

In short, in the world of DevOps, IT teams no longer deploy a half-dozen different tools to collect and analyze logs. Today’s log analysis tool sets allow one tool to meet multiple log analysis needs.

Conclusion

Log analysis tools and practices have come far over the past several decades. The changes have mirrored the evolution of log data itself, which has grown from a primitive and basic form of information that could be analyzed manually to one that needs to be interpreted on a massive scale, and in real time, in order to meet DevOps demands for continuous visibility.

Chris Tozzi

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.
