09.22.2014 | Posted by Ben Newton, Senior Product Manager
Two things still amaze me about the San Francisco Bay Area, two years after moving here from the East Coast – the blindingly blue, cloudless skies, and the fog. It is hard to describe how beautiful it is to drive up the spine of the San Francisco Peninsula on northbound I-280 as the fog rolls over the Santa Cruz Mountains. You can see the fog pouring slowly over the peaks, and watch the highway in front of you disappear into the white, fuzzy nothingness of its inexorable progress down the valley. There is always some part of me that wonders what will happen to my car as I pass into the fog. But then I look at my GPS, remember that I have driven this road hundreds of times, and assure myself that my house does still exist in there – somewhere.
Now, contrast that experience with learning to drive in the Blue Ridge Mountains of North Carolina. Here’s the background: it’s only my second time behind the wheel, and my mom takes me on a crazy stretch of road called the Viaduct. Basically, imagine a road hanging off the side of a mountain, with a sheer mountainside on one side and a whole lot of nothing on the other. Now, imagine that road covered in pea-soup fog with 10 ft of visibility, and a line of half a dozen cars being led by a terrified teenager with white-knuckled hands on the wheel of a minivan, hoping he won’t careen off the side of the road to a premature death. A completely different experience.
So, what’s the difference between those two experiences? Well, 20 years of driving, and GPS, for starters. I don’t worry about driving into the thick fog on my way home because I have done it before: I know exactly where I am and how fast I am going, and I am confident that I can avoid obstacles. That knowledge, insight, and experience make all the difference between an awe-inspiring journey and a gut-wrenching nail-biter. This is really not that different from running a state-of-the-art application. Just as I need GPS and experience to brave the fog going home, the difference between confidently innovating and delighting your customers, versus living in constant fear of the next disaster, is driven by both technology and culture. Here are some ways I would flesh out the analogy:
GPS for DevOps
An app team without visibility into their metrics and errors is a team that will never do world-class operations. Machine Data Analytics provides the means to gather the telemetry data and then provide that insight in real-time. This empowers App Ops and DevOps teams to move more quickly and innovate.
Fog Lights for Avoiding Obstacles
You can’t avoid obstacles if you can’t see them in time. You need the right real-time analytics to quickly detect issues and avoid them before they wreck your operations.
Experience Brings Confidence
If you have driven the road before, it always increases confidence and speed. Signature-based anomaly detection works the same way: the time senior engineers put in to classify previous events gives the entire team the confidence to classify and debug new issues.
So, as you drive your Application Operations and DevOps teams to push your application to the cutting edge of performance, remember that driving confidently into the DevOps fog is only possible with the right kind of visibility.
05.01.2013 | Posted by Ben Newton, Senior Product Manager
Of all of the new tools spawned by the DevOps movement, I find Etsy’s open-source tool, statsd, the most interesting. The enterprise software market is being shaken to its foundation, and statsd is one of the tools providing the vibrations. Instead of relying on the more generic metrics provided by application performance management (APM) vendors, Etsy and others like them are delivering highly specific, highly relevant metrics directly from their code with statsd. With just a few lines of code, developers can measure any part of their application they choose, in the way they choose. This is very similar to the freedom that developers gain with a proper log analysis tool – they can dump any data they want to a log and analyze it later. Freed from the issue of storage, and from the mechanics of log analysis, they can focus on using the data to enhance performance management, troubleshooting, business intelligence, etc.
For current users of statsd, the question might be – why would I want to put this in Sumo Logic, as opposed to using a tool like Graphite for dashboard purposes? First of all, Sumo Logic provides analytics that supplement basic statsd metrics very well. For example, if you are watching your error count skyrocket and your user performance plummet, your next step will usually be to look for specific application errors and do root cause analysis, which is a perfect use case for Sumo Logic. Secondly, there is a lot of value in having both the statsd and Sumo Logic metrics in a “single pane of glass”, where performance metrics can be viewed alongside more complex analytics. Finally, for current users of Sumo Logic, statsd is a simple way to push application performance data straight into Sumo Logic, without filling up log files or worrying about data volumes.
Background for Statsd
First, a little background on statsd. The project started at Flickr and was expanded at Etsy. This is fitting, since John Allspaw and his team helped kick-start the DevOps movement at Flickr before coming over to Etsy. From a technical perspective, statsd is, in their own words:
A network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services.
So, statsd modules forward clear-text metrics over UDP. StatsD supports a few different types of metrics, as well as analytics, but for the sake of simplicity I will only cover two areas here: counting and timing. The counting metric sends the metric name, the amount to increment/decrement, and optionally the sampling interval – for example, site.logins:1|c or, with a 10% sample rate, site.logins:1|c|@0.1.
The timing metric looks very similar, with a metric name and a value in milliseconds – for example, web.time:250|ms.
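To make the wire format concrete, here is a minimal Python sketch of how such packets could be built and sent over UDP (illustrative only – the metric names, host, and port are placeholders, and the post below uses Perl instead):

```python
import socket

def statsd_packet(name, value, metric_type, sample_rate=None):
    """Build a plain-text statsd packet, e.g. 'site.logins:1|c'."""
    packet = f"{name}:{value}|{metric_type}"
    if sample_rate is not None:
        packet += f"|@{sample_rate}"  # optional sampling interval
    return packet

def send_metric(sock, host, port, packet):
    # statsd is fire-and-forget: one UDP datagram per metric
    sock.sendto(packet.encode("ascii"), (host, port))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
counter = statsd_packet("site.logins", 1, "c", sample_rate=0.1)  # counting
timer   = statsd_packet("web.time", 250, "ms")                   # timing
# send_metric(sock, "localhost", 514, counter)  # uncomment to actually send
```

Because the payload is just text over UDP, any listener that accepts plain text on that port can receive it – which is exactly what makes the syslog trick below work.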
Generating the Metrics
To generate the data, I created a simple Perl script using the statsd Perl module Net::Statsd. I then created a Syslog Source on a Linux Collector over the standard port of 514. The Sumo Logic Syslog Source, essentially a listener for text over UDP, can receive the statsd messages just fine. One caveat, though – since the statsd messages do not include a timestamp, Sumo Logic will assign the ingest time as the timestamp. This means it is essential that you set the time zone setting correctly. I tested this with thousands of events, and there were no issues. To make some interesting, and relevant, metrics, I added extra logic to my Perl script to create some patterns with the rand() function and some math:
use Net::Statsd;

# Configure where to send events
# (this is where your statsd daemon is listening)
$Net::Statsd::HOST = 'localhost';
$Net::Statsd::PORT = 514;  # Sumo Logic Syslog Source port (Net::Statsd defaults to 8125)

# Initial values
my $basepercent = 0.50;
my $webTime   = 50;
my $appTime   = 100;
my $dbTime    = 150;
my $basecount = 5;

# Infinite loop: emit timing metrics and a burst of counter increments,
# using rand() and some math to create interesting patterns
while (1) {
    $basepercent = ($basepercent + (rand(100) + 50) / 100) / 2;
    $webTime = $basepercent * ($webTime + 50  + rand(750))  / 2;
    $appTime = $basepercent * ($appTime + 100 + rand(1000)) / 2;
    $dbTime  = $basepercent * ($dbTime  + 150 + rand(1200)) / 2;

    Net::Statsd::timing('web.time', int($webTime));
    Net::Statsd::timing('app.time', int($appTime));
    Net::Statsd::timing('db.time',  int($dbTime));

    # Send a variable number of counter increments
    $basecount = $basepercent * ($basecount + rand(5)) / 2;
    my $k = 0;
    while ($k < $basecount) {
        Net::Statsd::increment('site.logins');
        $k++;
    }
    sleep(5 + rand(10));
}
Making sense of the Metrics
Once the metrics were successfully being ingested into Sumo Logic, I needed to create some useful searches and Dashboard Monitors. With the statsd counter function, I simply wanted to extract the data, drop it into 1m buckets, and sum up the number of increments to the counter over each minute. The key-value structure of a statsd message can be easily parsed with our keyvalue operator. Basically, I just told Sumo Logic to look for a lower case key name with “.” in it [a-z\.]+ and a numerical value \d+. I only searched for “site.logins”, but you could use the statement to look for any number of different counters in the same dashboard.
| keyvalue regex "([a-z\.]+?):(\d+?)\|c" "site.logins" as logins
| timeslice by 1m
| sum(logins) by _timeslice
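For readers who want to see exactly what that keyvalue extraction pulls out, here is a rough Python equivalent using the same regex (illustrative only – the real parsing happens inside Sumo Logic):

```python
import re

# Same pattern as in the query: a lower-case dotted key and a numeric value,
# matching counter packets such as "site.logins:12|c"
COUNTER_RE = re.compile(r"([a-z\.]+?):(\d+?)\|c")

def extract_logins(line):
    """Return the increment for the site.logins counter, or None."""
    match = COUNTER_RE.search(line)
    if match and match.group(1) == "site.logins":
        return int(match.group(2))
    return None

print(extract_logins("site.logins:12|c"))  # 12
print(extract_logins("web.time:250|ms"))   # None – not a counter packet
```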
With the timing metrics, an average over each minute seems most relevant (though other functions like max, min, or standard deviation could be useful here). I pulled out all three timings together by looking for keys that look like *.time – (?<tier>[a-z]+).time. Since I named my metrics web.time, app.time, and db.time, I was able to put each of the “tier” metrics on the same graph.
_sourceCategory=*statsd* AND time
| parse regex "(?<tier>[a-z]+).time:(?<test_time>\d+)\|ms"
| timeslice by 1m
| avg(test_time) by _timeslice, tier
| transpose row _timeslice column tier
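Conceptually, the timeslice-and-average that this query performs looks something like the following toy Python sketch (not how Sumo Logic actually executes it – this just shows the grouping logic for a single time slice):

```python
import re
from collections import defaultdict

# Python's named-group syntax is (?P<name>...) rather than Sumo's (?<name>...)
TIMING_RE = re.compile(r"(?P<tier>[a-z]+)\.time:(?P<test_time>\d+)\|ms")

def avg_by_tier(lines):
    """Group timing packets by tier and average their values."""
    sums, counts = defaultdict(int), defaultdict(int)
    for line in lines:
        m = TIMING_RE.search(line)
        if m:
            sums[m.group("tier")] += int(m.group("test_time"))
            counts[m.group("tier")] += 1
    return {tier: sums[tier] / counts[tier] for tier in sums}

sample = ["web.time:100|ms", "web.time:200|ms", "app.time:300|ms"]
print(avg_by_tier(sample))  # {'web': 150.0, 'app': 300.0}
```

The transpose operator in the query then simply pivots these per-tier averages into columns so each tier becomes its own line on the graph.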
As I ran each of these searches, I clicked the “Add to Dashboard” button on the far right to add them to a newly created StatsD dashboard. I included a screenshot below (the tier metrics are on the left, and the counter is on the right):
You can see from this example how easy it is to analyze data in the statsd format. Once the data is in Sumo Logic, the sky is the limit to what you can do with it. There are other metrics and backend functions that Sumo Logic could support over the long term, but this simple integration provides the majority of the functionality needed. Let us know what you think, and sign up for a free account to try it out yourself.
03.28.2013 | Posted by Ben Newton, Senior Product Manager
Do It Faster, Makes Us Stronger
More Than Ever Hour After
Our Work Is Never Over
Daft Punk – “Harder, Better, Faster, Stronger”
When trying to explain the essence of DevOps to colleagues last week, I found myself unwittingly quoting the kings of electronica, the French duo Daft Punk (and Kanye West, who sampled the song in “Stronger”). So often, I find the “spirit” of DevOps being reduced to mere automation, the takeover of Ops by Dev (or vice versa), or other over-simplifications. This is natural for any new, potentially over-hyped trend. But how do we capture the DevOps “essence” – programmable architecture, agile development, and lean methodology – in a few words? It seems like those short lyrics really do sum up the flexible, agile, constantly improving ideal of a DevOps “team”, and the continuous improvement aspects of lean and agile methodology.
So, what does this have to do with machine data analytics and Sumo Logic? Part of the DevOps revolution is a deep and wrenching re-evaluation of the state of IT operations tools. As the pace of technological change and the ferocity of competition keep increasing for any company daring to make money on the Internet (which is almost everybody at this point), IT departments face a difficult problem. Do they try to adapt the process-heavy, top-down approaches exemplified by ITIL, or do they embrace the state of constant change that is DevOps? In the DevOps model, the explosion of creativity that comes from unleashing your development and operations teams to innovate quickly overwhelms traditional, static tools. More fundamentally, the continuous improvement model of agile development and DevOps is only as good as the metrics used to measure success. So, the most successful DevOps teams are incredibly data hungry. And this is where machine data analytics, and Sumo Logic in particular, really comes into its own, and is fundamentally in tune with the DevOps approach.
1. Let the data speak for itself
Unlike the management tools of the past, Sumo Logic makes only basic assumptions about the data being consumed (time stamped, text-based, etc.). The important patterns are determined by the data itself, and not by pre-judging what patterns are relevant, and which are not. This means that as the application rapidly changes, Sumo Logic can detect new patterns – both good and ill – that would escape the inflexible tools of the past.
2. Continuous reinterpretation
Sumo Logic never tries to force the machine data into tired old buckets that are forever out of date. The data is stored raw so that it can continually be reinterpreted and re-parsed to reveal new meaning. Fast moving DevOps teams can’t wait for the stodgy software vendor to change their code or send their consultant onsite. They need it now.
3. Any metric you want, any time you want it
The power of the new DevOps approach to management is that the people that know the app the best, the developers, are producing the metrics needed to keep the app humming. This seems obvious in retrospect, yet very few performance management vendors support this kind of flexibility. It is much easier for developers to throw more data at Sumo Logic by outputting more data to the logs than to integrate with management tools. The extra insight that this detailed, highly specific data can provide into your customers’ experience and the operation of your applications is truly groundbreaking.
4. Set the data free
Free-flow of data is the new norm, and mash-ups provide the most useful metrics. Specifically, pulling business data from outside of the machine data context allows you to put it in the proper perspective. We do this extensively at Sumo Logic with our own APIs, and it allows us to view our customers as more than nameless organization ID numbers. DevOps is driven by the need to keep customers happy.
5. Develop DevOps applications, not DevOps tools
The IT Software industry has fundamentally failed its customers. In general, IT software is badly written, buggy, hard to use, costly to maintain, and inflexible. Is it any wonder that the top DevOps shops overwhelmingly use open source tools and write much of the logic themselves?! Sumo Logic allows DevOps teams the flexibility and access to get the data they need when they need it, without forcing them into a paradigm that has no relevance for them. And why should DevOps teams even be managing the tools they use? It is no longer acceptable to spend months with vendor consultants, and then maintain extra staff and hardware to run a tool. DevOps teams should be able to do what they are good at – developing, releasing, and operating their apps, while the vendors should take the burden of tool management off their shoulders.
The IT industry is changing fast, and DevOps teams need tools that can keep up with the pace – and make their job easier, not more difficult. Sumo Logic is excited to be in the forefront of that trend. Sign up for Sumo Logic Free and prove it out for yourself.
03.19.2013 | Posted by Ben Newton, Senior Product Manager
As with any new, innovative feature in a product, it is one thing to say it is helpful for customers – it is quite another to see it in action in the wild. Case in point, I had a great discussion with a customer about using LogReduce™ in their environment. LogReduce is a groundbreaking tool for uncovering the unknown in machine data, and sifting through the inevitable noise in the sea of log data our customers put in Sumo Logic. The customer in question had some great use cases for LogReduce that I would like to share.
With massive amounts of log data flowing through modern data centers, it is very difficult to get a bird’s-eye view of what is happening. More importantly, the kind of summary that provides actionable data about the day’s events is elusive at best. In our customer example, they have been using LogReduce to provide exactly that type of daily, high-level overview of the previous day’s log data. How does it work? Instead of using obvious characteristics to group log data, like the source (e.g. Windows Events) or host (e.g. server01 in data center A), LogReduce uses “fuzzy logic” to look for patterns across all of your machine data at once – letting the data itself dictate the summary. Log data with the same patterns, or signatures, are grouped together – meaning that new patterns in the data will immediately stand out, and the noise will be condensed to a manageable level.
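To make the idea of signatures concrete, a toy version of signature-based grouping might mask out the variable parts of each message and group on what remains. This Python sketch is only an illustration of the concept – LogReduce’s actual fuzzy-logic algorithm is far more sophisticated, and the sample log lines are invented:

```python
import re
from collections import Counter

def signature(message):
    """Collapse hex tokens and numbers into wildcards to form a signature."""
    sig = re.sub(r"\b0x[0-9a-f]+\b", "*", message)
    sig = re.sub(r"\d+", "*", sig)
    return sig

logs = [
    "Connection from 10.1.2.3 timed out after 30s",
    "Connection from 10.9.8.7 timed out after 45s",
    "Disk /dev/sda1 at 91% capacity",
]
counts = Counter(signature(line) for line in logs)
# The two connection errors collapse into one signature with count 2,
# so a brand-new message shape would immediately stand out as a new group.
```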
Our customer is also able to supply context to the LogReduce results – adjusting and extending signatures, and adjusting relevance as necessary. In particular, by adjusting the signatures that LogReduce finds, the customer is able to “teach” LogReduce to provide the best results in the most relevant way. This allows them to separate out the critical errors, while still acknowledging the background noise of known messages. The end result is a daily summary that is more relevant because of the user-supplied business context, while remaining flexible enough to find important new patterns.
Discovering the Unknown
And finding those new patterns is the essence of Big Data analytics. A machine data analytics tool should be able to find unknown patterns, not simply reinforce the well-known ones. In this use case, our customer already has alerting established for known, critical errors. The LogReduce summary provides a way to identify, and proactively address, new, unknown errors. In particular, by using LogReduce’s baseline and compare functionality, Sumo Logic customers can establish a known state for their log data and then easily identify anomalies by comparing the current state to the known, baselined state.
In summary, LogReduce brings the essence of Big Machine Data analytics to our customers – reducing the constant noise of today’s data center while finding the needles in the proverbial haystack. This is good news for customers who want to leverage the true value of their machine data without the huge investments in time and expertise required in the past.
01.28.2013 | Posted by Ben Newton, Senior Product Manager
We make hundreds of decisions every day, mostly small ones that are just part of life’s ebb and flow. And then there are the big decisions that don’t merely create ripples in the flow of your life – they redirect it entirely. The massive, life-defining decisions like marriage and children; the career-defining decisions like choosing your first job after college. I’ve had my share of career-defining decisions – leaving a physics graduate program to chase after the dot-com craze, leaving consulting for sales engineering, etc. The thing about this latest decision is that it combines both. I am joining Sumo Logic, leaving behind a safe job in marketing, and moving to Silicon Valley – away from my friends, family, and community. So, why did I do it?
Now is the time for Start-Ups in Enterprise Software.
Consumer start-ups get all the press, but enterprise start-ups are where the real action is. The rash of consolidations over the last five years or so has created an innovation gap that companies like Sumo Logic are primed to exploit. The perfect storm of cloud computing, SaaS, Big Data, and DevOps/agile is forcing customers to look outside their comfort zones to find the solutions they need. Sumo Logic brings together all of that innovation in a way that was too good not to be a part of.
The Enterprise SaaS Revolution is Inevitable.
The SaaS business model, combined with Agile development practices, is completely changing the ways companies buy enterprise software. Gartner sees companies replacing legacy software with SaaS more than ever. The antiquated term-licenses of on-premise software with its massive up-front costs, double digit maintenance charges, and “true-ups” seem positively barbaric by comparison to the flexibility of SaaS. And crucially for me, Sumo Logic is also one of the few true SaaS companies that is delving into the final frontier of the previously untouchable data center.
Big Data is the “Killer App” for the Cloud.
“Big Data” analytics, using highly parallelized architectures like Hadoop or Cassandra, is one of the first innovations in enterprise IT to truly be “born in the cloud”. These new approaches were built to solve problems that just didn’t exist ten, or even five, years ago. The Big Data aspect of Sumo Logic is exciting to me. I am convinced that we are only scratching the surface of what is possible with Sumo Logic’s technology, and I want to be there on the bleeding edge with them.
Management Teams Matter.
When it really comes down to it, I joined Sumo Logic because I have first-hand knowledge of the skills that Sumo Logic’s management team brings to the table. I have complete confidence in Vance Loiselle’s leadership as CEO, and Sumo Logic has an unbeatable combination of know-how and get-it-done people. And clearly some of the top venture capital firms in the world agree with me. This is a winning team, and I like to win!
Silicon Valley is still Nirvana for Geeks and the best place for Start-Ups.
Other cities are catching up, but Silicon Valley is still the best place to start a tech company. The combination of brainpower, money, and critical mass is just hard to beat. On a personal level, I have resisted the siren call of the San Francisco Bay Area for too long. I am strangely excited to be in a place where I can wear my glasses as a badge of honor, and discuss my love for gadgets and science fiction without shame. Luckily for me, I am blessed with a wife who has embraced my geek needs and supports me wholeheartedly (and a 21-month-old who doesn’t care either way).
So, here’s to a great adventure with the Sumo Logic team, to a new life in Silicon Valley, and to living on the edge of innovation.
P.S. If you want to see what I am so excited about, get a Sumo Logic Free account and check it out.