March 23, 2012 By Kumar Saurabh

What the heck is LogReduce?

As anybody who has worked with log data will tell you, one of the major problems is the sheer volume of the data—and the horsepower required to crunch it. And even if you can process it, you’re faced with a second problem: how to make sense of it all. While there’s been progress on both fronts in the past ten years, the tools and techniques haven’t kept up with the explosion in data volume.

You can spend hours looking into logs and still only understand a tiny fraction of them. It’s become such an overwhelming task that IT has generally given up on looking at logs proactively. And on the occasions when they do, it’s because something bad has happened, which means they’re in reactive mode, forced to dive CSI-style into log forensics in the hope of finding the answer.

Luckily, log data is heavily repetitive. Some products have the explicit requirement that a message needs to have a well-known structure or it can’t be admitted into the system. That is a very high ask, most notably when it comes to application logs. Forcing everyone to log in a structured format has not gained traction because often the information you want to convey is multi-dimensional. Structure is geared towards machines; we humans think and log in unstructured ways.

LogReduce is our way of automatically putting structure on unstructured data. After all, before you can analyze your logs you need to put some structure on them, usually by extracting interesting fields, and often also by doing group-by-style aggregation on those fields. Putting this structure manually on millions of lines of logs can be a herculean effort.
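To make that concrete, here is a minimal sketch of the kind of work this involves: extract named fields from raw lines with a regular expression, then run a group-by-style count on one of those fields. The log format and field names below are made up purely for illustration.

```python
import re
from collections import Counter

# Hypothetical access-log lines; the format and field names are
# illustrative only, not a real schema.
lines = [
    "10.0.0.5 GET /login 200",
    "10.0.0.9 GET /login 500",
    "10.0.0.5 POST /checkout 200",
]

# Step 1: put structure on the raw text by extracting named fields.
pattern = re.compile(r"(?P<ip>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)")

# Step 2: group-by-style aggregation on one of those fields.
by_status = Counter(m.group("status") for m in map(pattern.match, lines) if m)
print(by_status)  # Counter({'200': 2, '500': 1})
```

Simple enough for three lines, but writing and maintaining that regex for every log format you ingest is exactly the manual effort that doesn’t scale.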

Our LogReduce engine is able to reverse-engineer the inherent patterns in the log data. Sometimes a single line of logging code can generate a million lines of logs. The engine can automatically boil all of them down to the one pattern that resembles the line of code that generated them, that one particular “printf” statement. Think of it as semantics-preserving compression—boiling down an ocean of logs to its core structure. Using this engine, you can very quickly get a 100,000-foot summary of all of your log data, and then drill down into the most relevant or interesting data.
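As a rough illustration of the idea (a toy sketch, not the actual LogReduce engine), you can collapse tokens that look variable—numbers, IPs, hex IDs—into wildcards and count how many lines fall into each resulting pattern:

```python
import re
from collections import defaultdict

def signature(line):
    """Collapse variable-looking tokens into a wildcard, so all lines
    emitted by the same "printf" statement share one signature."""
    return " ".join(
        "*" if re.fullmatch(r"[\d.:/-]+|0x[0-9a-fA-F]+", tok) else tok
        for tok in line.split()
    )

def reduce_logs(lines):
    """Count lines per signature; most frequent patterns first."""
    clusters = defaultdict(int)
    for line in lines:
        clusters[signature(line)] += 1
    return sorted(clusters.items(), key=lambda kv: -kv[1])

logs = [
    "user 1001 logged in from 10.0.0.5",
    "user 2002 logged in from 10.0.0.9",
    "disk /dev/sda1 at 91% capacity",
]
for pattern, count in reduce_logs(logs):
    print(f"{count:>4}  {pattern}")
#    2  user * logged in from *
#    1  disk /dev/sda1 at 91% capacity
```

Two login lines collapse into one pattern with a count of two: the summary view. The real work, of course, is in recognizing what counts as “variable” without a hand-written regex.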

Take it for a spin yourself: try our demo, or sign up for a trial. Feed it some of your logs and see how well it works on your data.
