

The New Era of Security – yeah, it’s that serious!

02.23.2014 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

Security is a tricky thing and it means different things to different people.   It is truly in the eye of the beholder.  There is the checkbox kind, there is the “real” kind, there is the checkbox kind that holds up, and there is the “real” kind that is circumvented, and so on.  Don’t kid yourself: the “absolute” kind does not exist. 

I want to talk about security solutions based on log data.  This is the kind of security that kicks in after the perimeter security (firewalls), intrusion detection (IDS/IPS), vulnerability scanners, and dozens of other security technologies have done their thing.  It ties all of these technologies together, correlates their events, reduces false positives and enables forensic investigation.  Sometimes this technology is called Log Management and/or Security Information and Event Management (SIEM).  I used to build these technologies years ago, but it seems like decades ago. 

SIEM

A typical SIEM product is a hulking appliance: sharp edges, screaming colors – the kind of design that instills confidence and says “Don’t come close, I WILL SHRED YOU! GRRRRRRRRRR”.

Ahhhh, SIEM. Makes you feel safe, doesn’t it?  It should not.  I proclaim this at the risk of being yet another one of those guys who wants to rag on SIEM, but I built one, and beat many, so I feel I’ve got some ragging rights.  So, what’s wrong with SIEM?  Where does it fall apart?

SIEM does not scale

It is hard enough to capture a terabyte of daily logs (40,000 Events Per Second, 3 Billion Events per Day) and store them.  It is a couple of orders of magnitude harder to run correlation in real time and alert when something bad happens.  SIEM tools are extraordinarily difficult to run at scales above 100GB of data per day.  This is because they are designed to scale by adding more CPU, memory, and fast spindles to the same box.  The exponential growth of data in the two decades since those SIEM tools were designed has outpaced the ability to add CPU, memory, and fast spindles into the box.

Result: Data growth outpaces capacity → Data dropped from collection → Significant data dropped from correlation → Gap in analysis → Serious gap in security
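
To put those numbers in perspective, here is a quick back-of-the-envelope check (the ~300-byte average event size is my assumption, not a measured figure):

```python
# Back-of-the-envelope: what does 1 TB/day of logs look like as an event rate?
# Assumes ~300 bytes per event -- a rough average for syslog-style records.
BYTES_PER_DAY = 1e12          # 1 TB of raw log data per day
AVG_EVENT_SIZE = 300          # bytes, assumed
SECONDS_PER_DAY = 86_400

events_per_day = BYTES_PER_DAY / AVG_EVENT_SIZE
events_per_second = events_per_day / SECONDS_PER_DAY

print(f"{events_per_day:,.0f} events/day")     # ~3.3 billion events/day
print(f"{events_per_second:,.0f} events/sec")  # ~39,000 EPS
```

Every one of those events has to be parsed, correlated, and indexed as it arrives – on a single box, that math stops working long before a terabyte a day.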

SIEM normalization can’t keep pace

SIEM tools depend on normalization (shoehorning) of all data into one common schema so that you can write queries across all events.  That worked fifteen years ago when sources were few.  These days sources and infrastructure types are expanding like never before.  One enterprise might have multiple vendors and versions of network gear, many versions of operating systems, open source technologies, workloads running in infrastructure as a service (IaaS), and many custom written applications.  Writing normalizers to keep pace with changing log formats is not possible.

Result: Too many data types and versions → Falling behind on adding new sources → Reduced source support → Gaps in analysis → Serious gaps in security
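
To make the shoehorning concrete, here is a minimal sketch of what normalization looks like in practice (the log formats and field names are invented for illustration). Every new vendor, product, and firmware version needs its own hand-written parser before its events can land in the common schema:

```python
import re

# Each source needs its own normalizer to fit the common schema -- and every
# vendor, product, and version logs differently. Formats below are invented.
NORMALIZERS = {
    "vendor_a_fw": re.compile(
        r"(?P<ts>\S+) DENY src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+)"
    ),
    "vendor_b_fw": re.compile(
        r"(?P<ts>\S+) blocked (?P<src_ip>\S+) -> (?P<dst_ip>\S+)"
    ),
    # vendor_c shipped new firmware last week; its events are silently
    # dropped until someone writes, tests, and ships a new parser.
}

def normalize(source: str, line: str) -> dict | None:
    pattern = NORMALIZERS.get(source)
    match = pattern.match(line) if pattern else None
    return match.groupdict() if match else None  # None = lost to analysis

print(normalize("vendor_a_fw", "2014-02-23T10:00:01Z DENY src=10.0.0.5 dst=8.8.8.8"))
print(normalize("vendor_c_fw", "2014-02-23T10:00:02Z drop 10.0.0.5 8.8.8.8"))  # None
```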

SIEM is rules-only

This is a tough one.  Rules are useful, even required, but not sufficient.  Rules only catch the things you express in them, the things you know to look for.  To be secure, you must be ahead of new threats.  A million monkeys writing rules in real time: not possible.

Result: Your rules are stale → You hire a million monkeys → Monkeys eat all your bananas → You analyze only a subset of relevant events → Serious gap in security
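
For a concrete sense of what such a rule looks like, here is a toy sketch of a classic correlation rule (the threshold, window, and event fields are illustrative, not from any particular SIEM). It catches exactly the brute-force pattern it encodes, and nothing else:

```python
from collections import defaultdict

# A classic correlation rule: alert on N failed logins from one IP within a
# time window. It catches exactly this pattern -- and only this pattern.
THRESHOLD, WINDOW_SECS = 5, 60
failures = defaultdict(list)  # src_ip -> recent failure timestamps

def on_event(event: dict) -> None:
    if event.get("type") != "auth_failure":
        return  # every event the rule doesn't mention is invisible to it
    ip, ts = event["src_ip"], event["ts"]
    failures[ip] = [t for t in failures[ip] if ts - t < WINDOW_SECS] + [ts]
    if len(failures[ip]) >= THRESHOLD:
        print(f"ALERT: {len(failures[ip])} failed logins from {ip} in {WINDOW_SECS}s")

for i in range(6):
    on_event({"type": "auth_failure", "src_ip": "10.1.2.3", "ts": 100 + i})
```

A novel attack that doesn’t trip this exact pattern sails right past it.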

SIEM is too complex


It is way too hard to run these things.  I’ve had too many meetings and discussions with my former customers on how to keep the damned things running and too few meetings on how to get value out of the fancy features we provided.  In reality, most customers use only 20% of the features because the rest is not reachable.  It is like putting your best tools on a shelf just out of reach: you can see them, you could do oh so much with them, but you can’t actually use them.

Result: You spend a lot of money → Your team spends a lot of time running SIEM → They don’t succeed in leveraging the cool capabilities → Value is low → Gaps in analysis → Serious gaps in security

So, what is an honest, forward-looking security professional who does not want to duct tape together a solution to do?  What you need is what we just started: Sumo Logic Enterprise Security Analytics.  No, it is not absolute security, and it is not checkbox security, but it is more real security because it:

Scales

Processes terabytes of your data per day in real time, evaluates rules regardless of data volume, and does not restrict what you collect or analyze.  Furthermore, there is no SIEM-style normalization: just add data, a pinch of savvy, a tablespoon of massively parallel compute, and voila.

Result: you add all relevant data → you analyze it all → you get better security 

Simple

It is SaaS, there are no appliances, there are no servers, there is no storage, there is just a browser connected to an elastic cloud.

Result: you don’t have to spend time on running it → you spend time on using it → you get more value → better analysis → better security

Machine Learning

Rules, check.  What about that other unknown stuff?  Answer: a machine that learns from data.  It detects patterns without human input.  It then figures out baselines and normal behavior across sources.  In real time it compares new data to the baseline and notifies you when things are sideways.  Even if “things” are things you’ve NEVER even thought about and NOBODY in the universe has EVER written a single rule to detect.  Sumo Logic detects those too.

Result: Skynet … nah, benevolent overlord, nah, not yet anyway.   New stuff happens → machines go to work → machines notify you → you provide feedback → machines learn and get smarter → bad things are detected → better security
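
Conceptually, the simplest version of this looks like the toy sketch below. To be clear, this is a generic illustration of baseline-and-deviate detection, not Sumo Logic’s actual algorithm:

```python
import statistics

# Toy baseline-and-deviate detection -- NOT Sumo Logic's actual algorithm.
# Learn normal behavior from history, then flag values that fall far outside
# it. No human wrote a rule describing the anomaly in advance.
baseline = [120, 131, 118, 125, 122, 129, 124, 119]  # e.g. errors/min, historical
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    return abs(value - mean) / stdev > threshold

print(is_anomalous(126))  # False: within normal variation
print(is_anomalous(480))  # True: flagged even though no rule describes it
```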

Read more: Sumo Logic Enterprise Security Analytics


Sumo Logic Application for AWS CloudTrail

11.13.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

Cloud is opaque

One of the biggest adoption barriers for SaaS, PaaS, and IaaS is the opaqueness of, and lack of visibility into, changes and activities that affect cloud infrastructure.  When running on-premise infrastructure, you have the ability to audit activity; for example, you can easily tell who is starting and stopping VMs in virtualization clusters, see who is creating and deleting users, and watch who is making firewall configuration changes. This lack of visibility has been one of the main roadblocks to cloud adoption, even though the benefits have been compelling enough for many enterprises to adopt the Cloud anyway.

This information is critical to securing infrastructure, applications, and data. It’s critical to proving and maintaining compliance, critical to understanding utilization and cost, and finally, it’s critical for maintaining excellence in operations.

Not all Clouds are opaque any longer

Today, the world’s biggest cloud provider, Amazon Web Services (AWS), announced a new product that, in combination with Sumo Logic, changes the game for cloud infrastructure audit visibility.  AWS CloudTrail is the raw log data feed that will tell you exactly who is doing what, on which sets of infrastructure, at what time, from which IP addresses, and more.  Sumo Logic is integrated with AWS CloudTrail, collects this audit data in real time, and enables SOC- and NOC-style visibility and analytics.

Here are a few examples of what AWS CloudTrail data contains:

  • Network ACL changes.

  • Creation and deletion of network interfaces.

  • Authorized ingress/egress across network segments and ports.

  • Changes to privileges, passwords and user profiles.

  • Deletion and creation of security groups.

  • Starting and terminating instances.

  • And much more.
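
Each CloudTrail record is a JSON document. Here is an abbreviated example of roughly what one looks like, shown as a Python dict with fields trimmed and values invented for illustration:

```python
# An abbreviated CloudTrail record (real records carry more fields).
# This one captures who opened a security group port, from where, and when.
sample_record = {
    "eventTime": "2013-11-13T17:22:09Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "AuthorizeSecurityGroupIngress",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "198.51.100.7",          # illustrative
    "userIdentity": {
        "type": "IAMUser",
        "userName": "alice",                    # hypothetical user
        "accountId": "123456789012",
    },
    "requestParameters": {"groupId": "sg-12345678", "fromPort": 22, "toPort": 22},
}
```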

Sumo Logic Application for AWS CloudTrail

Cloud data comes to life with our Sumo Logic Application for AWS CloudTrail, helping our customers across security and compliance, operational visibility, and cost containment. The Sumo Logic Application for AWS CloudTrail delivers:


  • Seamless integration with AWS CloudTrail data feed.

  • SOC-style, real-time Dashboards to monitor access and activity.

  • Forensic analysis to understand the “who, what, when, where, and how” of events and logs.

  • Alerts when important activities and events occur.

  • Correlation of AWS CloudTrail data with other security data sets, such as intrusion detection system data, operating system events, application data, and more.

This integration delivers improved security posture and better compliance with internal and external regulations that protect your brand.  It also improves operational analytics that can improve SLAs and customer satisfaction.  Finally, it provides deep visibility into the utilization of AWS resources that can help improve efficiency and reduce cost.

The integration is simple: AWS CloudTrail deposits data into your S3 account in near-real time, and Sumo Logic collects it with an S3 Source as soon as it is deposited.  Sumo Logic also provides a set of pre-built Dashboards and searches to analyze the CloudTrail data.
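
For the curious, the collection pattern looks roughly like the sketch below if you were to poll the bucket yourself (the bucket name and prefix are hypothetical; the Sumo Logic S3 Source automates all of this for you):

```python
import gzip, json
import boto3

# CloudTrail drops gzipped JSON objects into S3, each with a top-level
# "Records" array. This sketch polls the bucket and prints a few fields;
# the bucket and prefix are hypothetical.
s3 = boto3.client("s3")
BUCKET, PREFIX = "my-cloudtrail-bucket", "AWSLogs/123456789012/CloudTrail/"

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        for record in json.loads(gzip.decompress(body))["Records"]:
            print(record["eventTime"], record["eventName"], record.get("sourceIPAddress"))
```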

To learn more, see the product page: http://www.sumologic.com/applications/aws-cloudtrail/ and read the documentation: https://support.sumologic.com/entries/30216746-Sumo-Logic-for-Amazon-CloudTrail-App.


Akamai and Sumo Logic integrate for real-time application insights!

10.09.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I’m very pleased to announce our strategic alliance with Akamai. Our integrated solution delivers a unified view of application availability, performance, security, and business analytics based on application log data.  Customers who rely on Akamai’s globally distributed infrastructure can now get a real-time feed of all logs generated by Akamai’s infrastructure into their Sumo Logic account, in order to integrate and cross-analyze them with their internally generated application data sets!

What problems does the integrated solution solve?

To date, there have been two machine data sets generated by applications that leverage Akamai:

1. Application logs at the origin data centers, which application owners can usually access.

2. Logs generated by Akamai as an application is distributed globally. Application owners typically have zero or limited access to these logs.

Both of these data sets provide important metrics and insights for delivering highly available, secure applications, as well as a detailed view of business results. Until today, there was no way to get these data sets into a single tool for real-time analysis, causing the following issues:

  • No single view of performance. Origin performance could be monitored, but that alone provides little confidence that the app is performant for end users.
  • Difficult to understand user interaction. Without data on how real users interact with an application, it was difficult to gauge what content was served, how the app performed for those users, and whether performance had any impact on conversions.
  • Issues impacting customer experience remained hidden. The root cause of end-user issues caused at the origin remained hidden, impacting customer experience for long periods of time.
  • Web App Firewall (WAF) security information not readily available. Security teams could not detect and respond to attacks in real time and take defensive actions to minimize exposure.

The solution!


Akamai Cloud Monitor and Sumo Logic provide an integrated approach to solving these problems. Sumo Logic has developed an application specifically crafted for customers to extract insights from their Akamai data, which is sent to Sumo Logic in real time.  The solution has been deployed by joint customers (at terabyte scale) to address the following use cases:

  • Real-time analytics about user behavior.  Combine Akamai real-user monitoring data and internal data sets to gain granular insights into user behavior. For example, learn how users behave across different device types, geographies, or even how Akamai quality of service impacts user behavior and business results.

  • Security information management and forensics. Security incidents and attacks on an application can be investigated by deep-diving into sessions, IP addresses, and individual URLs that attackers are attempting to exploit and breach.

  • Application performance management from edge to origin. Quickly determine if an application’s performance issue is caused by your origin or by Akamai’s infrastructure, and which regions, user agents, or devices are impacted.

  • Application release and quality management. Receive an alert as soon as Akamai detects that one or more origins have an elevated number of 4xx or 5xx errors that may be caused by a new code push, a configuration change, or another issue within your origin application infrastructure.

  • Impact of quality of service and operational excellence. Correlate how quality of service impacts conversions or other business metrics to optimize performance and drive better results.
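
As a concrete illustration of the edge-to-origin idea (with invented field names, identifiers, and timings), correlating the two log sets by request ID makes it easy to see which side is adding latency:

```python
# Hypothetical illustration: join Akamai edge logs with origin logs by
# request ID to see whether latency comes from the origin or the edge/network.
edge_logs = [
    {"req_id": "a1", "edge_ms": 540, "status": 200},
    {"req_id": "a2", "edge_ms": 2900, "status": 200},
]
origin_logs = [
    {"req_id": "a1", "origin_ms": 120},
    {"req_id": "a2", "origin_ms": 2750},
]

origin_by_id = {log["req_id"]: log for log in origin_logs}
for edge in edge_logs:
    origin_ms = origin_by_id.get(edge["req_id"], {}).get("origin_ms", 0)
    edge_ms = edge["edge_ms"] - origin_ms          # time not spent at the origin
    slow_side = "origin" if origin_ms > edge_ms else "edge/network"
    print(f"{edge['req_id']}: total {edge['edge_ms']}ms, slower side: {slow_side}")
```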

I could go on, but I’m sure you have plenty of ideas of your own.

Join us for a free trial here – as always, there is nothing to install, nothing to manage, nothing to run – we do it all for you.  You can also read our announcement here or read more about the Sumo Logic application for Akamai here.  Take a look at the Akamai press release here.


Sumo Logic Anomaly Detection is now in Beta!

09.10.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

What is “anomaly detection”?

Here is how the peeps on the interweb and Wikipedia define it: anomaly detection (also known as outlier detection) is the search for events which do not conform to an expected pattern. The detected patterns are called anomalies and often translate to critical and actionable insights that, depending on the application domain, are referred to as outliers, changes, deviations, surprises, intrusions, etc.

The domain: Machine Data

Machine data (most frequently referred to as log data) is generated by applications, servers, infrastructure, mobile devices, web servers, and more.  It is the data generated by machines in order to communicate to humans or other machines exactly what they are doing (e.g. activity), what the status of that activity is (e.g. errors, security issues, performance), and the results of their activity (e.g. business metrics).

The problem of unknown unknowns

Most problems with analyzing machine data orbit around the fact that existing operational analytics technologies enable users to find only those things they know to look for.  I repeat, only things they KNOW they need to look for.  Nothing in these technologies helps users proactively discover events they don’t anticipate getting, events that have not occurred before, events that may have occurred before but are not understood, or complex events that are not easy or even possible to encode into queries and searches.  

Our infrastructure and applications are desperately, and constantly, trying to tell us what’s going on through the massive real-time stream of data they relentlessly throw our way.  And instead of listening, we ask a limited set of questions from some playbook. This is as effective as a patient seeking advice about massive chest pain from a doctor who, instead of listening, runs through a checklist containing skin rash, fever, and runny nose, and then sends the patient home with a clean bill of health.

This is not a good place to be; these previously unknown events hurt us by repeatedly causing downtime, performance degradations, poor user experience, security breaches, compliance violations, and more.  Existing monitoring tools would be sufficient if we lived in static, three-system environments where we could enumerate all possible failure conditions and attack vectors.  But we don’t.


We operate in environments where we have thousands of sources across servers, networks, and applications, and the amount of data they generate is growing exponentially.  They come from a variety of vendors, run a variety of versions, are geographically distributed, and, on top of that, are constantly updated, upgraded, and replaced.  How can we then rely on hard-coded rules, queries, and known-condition tools to ensure our applications and infrastructure are healthy and secure?  We can’t – it is a fairy tale.

We believe that three major things are required in order to solve the problem of unknown unknowns at a multi-terabyte scale:

  1. Cloud: enables elastic compute at the massive scale needed to analyze this scale of data in real time across all vectors

  2. Big Data technologies: enable a holistic approach to analyzing all data without being bound to schemas, volumes, or batch analytics

  3. Machine learning engine: advanced algorithms that analyze and learn from data as well as from humans in order to get smarter over time

Sumo Logic Real-Time Anomaly Detection

Today we announced Beta access to our Anomaly Detection engine, an engine that uses thousands of machines in the cloud to continuously analyze ALL of your data in real time and proactively detect important changes and events in your infrastructure.  It does this without requiring users to configure or tune the engine, write queries or rules, set thresholds, or write and apply data parsers.  As it detects changes and events, it bubbles them up to users for investigation, to add knowledge, classify events, and apply relevance and severity.  It is in fact this combination of a powerful machine learning algorithm and human expert knowledge that is the real power of our Anomaly Detection engine.
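
In spirit, that loop looks something like the simplified sketch below – a cartoon of the detect-then-classify cycle, not our production engine:

```python
# Cartoon of the detect-then-classify loop -- not the production engine.
# The machine surfaces event types it has never seen; human classifications
# feed back in, turning unknown events into known ones.
known_signatures = {"connection timeout": "benign", "cache miss": "benign"}

def signature(event: str) -> str:
    # Stand-in for real log clustering: strip digits so similar lines group.
    return "".join(ch for ch in event.lower() if not ch.isdigit()).strip()

def process(event: str, human_verdict: str | None = None) -> None:
    sig = signature(event)
    if sig in known_signatures:
        return  # known event: handled per its existing classification
    print(f"Surfaced new event type: {sig!r}")
    if human_verdict:                     # expert feedback makes it "known"
        known_signatures[sig] = human_verdict

process("connection timeout 4312")                        # known, stays quiet
process("unexpected root login from host 7", "critical")  # surfaced, then learned
```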

So, in essence, Sumo Logic Anomaly Detection continuously turns unknown events into known events.  And that’s what we want: to make events known, because we know how to handle and what to do with known events.  We can alert on them, we can create playbooks and remediation steps, we can prevent them, we can anticipate their impact, and, at least in some cases, we can make them someone else’s problem.

 

In conclusion

Sumo Logic Anomaly Detection has been more than three years in the making.  During that time, it has had the energy of the whole company and our backers behind it.  Sumo Logic was founded with the belief that this capability is transformational in the face of exponential data growth and infrastructure sprawl.  We developed an architecture and adopted a business model that enable us to implement an analytics engine that can solve the most complex problems of the Big Data decade.

We look forward to learning from the experience of our Beta customers, and soon from all of our customers, about how to continue to grow this game-changing capability.  Read more here and join us.


Pardon me, have you got data about machine data?

01.31.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I’m glad you asked – I just might.  In fact, we started collecting data about machine data some 9 months ago when we participated in the AWS Big Data conference in Boston.  Since then we have continued collecting the same data at a variety of industry shows and conferences such as VMworld, AWS re:Invent, Velocity, Gluecon, Cloud Slam, Defrag, DataWeek, and others.

The original survey was printed on my home printer, 4 surveys per page, then inexpertly cut with the kitchen scissors the night before the conference – startup style, oh yeah!  The new versions made it onto a shiny new iPad as an iOS app.  The improved method, Apple cachet, and a wider reach gave us more than 300 data points and, incidentally, cost us more than 300 Sumo Logic T-shirts, which we were more than happy to give up in exchange for data.  (By the way, if you want one, come to one of our events – the next one is the Strata Conference.)

As a data junkie, I’ve been slicing and dicing the responses and thought that the end of our fiscal year could be the right moment to revisit the data and reflect on my first blog post about this data set.

Here is what we asked:

  • Which business problems do you solve by using machine data?
  • Which tools do you use to analyze machine data in order to solve those business problems?
  • What issues do you experience solving those problems with the chosen tools?

The survey was partially designed to help us better understand Sumo Logic’s segment of the IT Operations Management or IT Management markets as defined by Gartner, Forrester, and other analysts.  I think the sample set is relatively representative: responders come from shows with varied audiences, such as developers at Velocity and GlueCon, data center operators at VMworld, and folks investigating a move to the cloud at AWS re:Invent and Cloud Slam.  Answers were actually pretty consistent across the different “cohorts”.  We have a statistically significant number of responses, and, finally, the responders were not our customers or direct prospects.  So let’s dive in and see what we’ve got, starting at the top:

Which business problems do you solve by using logs and other machine data?

  • Applications management, monitoring, and troubleshooting (46%)
  • IT operations management, monitoring, and troubleshooting (33%)
  • Security management, monitoring, and alerting (21%)

Does anything in there surprise you?  I guess it depends on your point of reference.  Let me compare it to the overall “IT Management” or “IT Operations Management” market.  The consensus (if such a thing exists) on size by segment is:

  • IT Infrastructure (servers, networks, etc.) is up to 50-60% of the total market
  • Application (internal, external, etc.) is just north of 30-40%
  • Security is around 10%

Source: Sumo Logic analysis of aggregated data from various industry analysts who cover IT Management space.

There are a few things that could explain why our subsegment leans so much more toward Applications than IT infrastructure.

  • (hypothesis #1) analysts measure total product sold to derive market size, which might not reflect the effort people apply to these use cases.
  • (hypothesis #2) there is more shelfware in IT Infrastructure, which overstates effort there.
  • (hypothesis #3) there are more home-grown solutions in Application management, which understates effort there.
  • (hypothesis #4) our data is an indicator or a result of a shift in the market (e.g., as enterprises shift toward IaaS, they spend less time managing IT Infrastructure and more time on their core competency, their applications).
  • (obnoxious hypothesis #5) intuitively, it’s the software, stupid – nobody buys hardware because they love it; it exists to run software (applications), and we care more about applications, and that’s why it is so.

OK, OK, let’s check the data to see which hypotheses our narrow response set can help test or validate.  I don’t think our data can help us validate hypothesis #1 or hypothesis #2.  I’ll try to come up with additional survey questions that will help test these two hypotheses in the future.

Hypothesis #3, on the other hand, might be partially testable.  If we compare responses from users of commercial tools vs. users of home-grown tools, we are left with the following:

There is not a significant difference between responders who use commercial tools and responders who use home-grown tools.  Hypothesis #3 explains only a couple of percentage points of difference.

Hypothesis #4 – I think we can use a proxy to test it.  Let’s assume that responders from VMworld are focused on the internal data center and the private cloud; in that case, they would not rely as much on IaaS providers for IT Infrastructure operations.  On the other hand, let’s assume that AWS and other cloud conference attendees are more likely to rely on IaaS for IT Infrastructure operations.  Data, please:

Interesting – this seems to explain some of the shift between security and infrastructure, but not applications.  So, we’re left with:

  • hypothesis #1 – spend vs. reported effort is skewed – perhaps
  • hypothesis #2 – there is more shelfware in IT infrastructure – unlikely
  • obnoxious hypothesis #5 – it’s the software, stupid – getting warmer

That should do it for one blog post.  I’ve barely scratched the surface by stopping with the responses to the first question.  I will work to see if I can test the outstanding hypotheses and, if successful, will write about the findings.  I will also follow up with another post looking at the rest of the data.  I welcome your comments and thoughts.

While you’re at it, try Sumo Logic for free.


Real-time Enterprise Dashboards, Really

11.14.2012 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

Today we shipped a highly anticipated new capability with a novel approach – novel not only for Sumo Logic, but also within our space: Real-time Enterprise Dashboards.  Dashboard technologies have been around for many years, but not all dashboard technologies are created equal.  Most existing technologies either leverage precomputed summary data sets or recompute the entire data set every time a dashboard is viewed.  As such, they suffer from long load times, stale information, and an inability to handle the data volume.

Our customers faced a specific challenge: how to take terabytes of machine data per day, crunch it, transform it into information, and render that information in a way that supports making business and IT decisions in real time.  Now they can.

When machine data is used to troubleshoot and monitor today’s production applications or infrastructure, data volume is the enemy.  Large farms of Apache or IIS servers, SaaS and other applications, or data center infrastructure like VMware farms, Cisco networking gear, or Linux and Microsoft Windows server farms generate volumes of data that obey Moore’s Law: the data volume doubles every two years.  It only makes sense that the volume of machine data would follow Moore’s Law – if machine computing capacity doubles, those machines do twice the work, and as a result they generate twice the amount of machine data describing that work.
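
The compounding is easy to underestimate; here is a quick sketch with an illustrative starting volume:

```python
# If log volume doubles every two years, a modest 50 GB/day today compounds
# fast. The starting volume is illustrative.
volume_gb_per_day = 50
for year in range(0, 11, 2):
    print(f"year {year:>2}: {volume_gb_per_day * 2 ** (year / 2):,.0f} GB/day")
# year 0: 50 ... year 10: 1,600 GB/day -- a 32x increase in a decade
```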

This exponential growth has put existing dashboarding technologies under an insurmountable strain.  Some of us here at Sumo Logic built previous-generation dashboards in our past lives.  From that experience we realized that an entirely new approach was required to enable real-time monitoring and dashboarding, and that realization drove the development of a new architecture.

First, we adopted the cloud computing paradigm.  That turned a data center into an API with lim(capacity) = ∞, enabling us to spin up and spin down additional capacity truly on demand with a single API call.  Then we built our Streaming Query Engine, which leverages that capacity in an elastic manner: it continuously takes data off the wire and computes results before the data ever hits its permanent resting place.  This “one-time” computing is more efficient and less costly than traditional recompute methods.  When you view a Sumo Logic Dashboard, you simply attach to the existing state, which is continuously computed by our Streaming Query Engine in the background.  What you get is the freshest data available, instantly, enabling real-time visibility into your infrastructure and applications.  And they are beautiful to boot.
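
The difference between recompute-on-view and streaming computation is easiest to see in miniature. Here is a minimal sketch of the streaming approach (a toy, not our actual engine):

```python
# Stream-style "one-time" computing in miniature: the aggregate is updated
# once per event as it arrives, so viewing a dashboard just reads current
# state instead of re-scanning all of the raw data.
class StreamingAvg:
    def __init__(self) -> None:
        self.count, self.total = 0, 0.0

    def update(self, value: float) -> None:  # called as data comes off the wire
        self.count += 1
        self.total += value

    def view(self) -> float:                 # dashboard load: O(1), always fresh
        return self.total / self.count if self.count else 0.0

latency = StreamingAvg()
for ms in (120, 95, 310, 88):                # events streaming in
    latency.update(ms)
print(f"avg latency: {latency.view():.1f} ms")  # attach to the existing state
```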

Try it for yourself.


Securing the Enterprise Cloud – SOC 2 Compliance

10.16.2012 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

In our earlier post, Cloudy Compliance Part 1, we discussed general standards, regulations, and some basic compliance concepts. In Part 2, we further explored the relevance of current standards and regulations, including brief explanations of the American Institute of Certified Public Accountants (AICPA) and its Service Organization Control (SOC) reports.

Today we officially announced the successful completion of our SOC 2 Type 1 examination. Based on the Trust Services Principles and Criteria, SOC 2 relates to enterprise-grade assurance, management, and confidentiality capabilities.  It’s a significant validation for Sumo Logic, and further proof of the enterprise readiness of our cloud-based log management and analytics service.

What the announcement means to you
As part of the SOC 2 examination, Sumo Logic was evaluated on controls covering the confidentiality and integrity of customers’ log data and other machine data in the following three key areas:

  • Security – The system is protected against unauthorized access (both physical and logical).
  • Availability – The system is available for operation and use as committed or agreed.
  • Confidentiality – Information designated as confidential is protected as committed or agreed.

… Continue Reading


Sumo Logic at AWS Big Data Boston

05.29.2012 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I recently represented Sumo Logic at the AWS Big Data conference in Boston.  It was a great show, very well-attended.  Sumo Logic was one of the few vendors invited to participate.

During the conference I conducted a survey of the attendees to try to understand how this emerging early-adopter segment of IT professionals manages log data for their infrastructure and applications.

Common characteristics of attendees surveyed:

  • They run their apps and infrastructure in the cloud
  • They deal with large data sets
  • They came to learn how to better exploit/leverage big data and cloud technologies

What I asked:

  • Do you use logs to help you in your daily work, and if so, how?
  • What types of tools do you use for log analysis and management?
  • What are the specific pain points associated with your log management solutions?

The findings were interesting.  Taking each one in turn:  

No major surprises here.  Enterprises buy IaaS in order to run applications, either for burst capacity or because they believe it’s the wave of the future.  The fact that someone else manages the infrastructure does not mean you no longer have to manage and monitor your applications, operating systems, and virtual machines.


A bit of a surprise here.  In my previous analysis, some 45% of enterprises used homegrown solutions, but in this segment it’s 70% – a big difference with the big data and cloud crowd.  A possible explanation is that existing commercial solutions are not easy to deploy and run in the cloud and don’t scale to handle big data.  So, the solution = build it yourself.  Hmm.

Yes, yes, I know, it adds up to more than 100%.  That’s because the question was stated as “select as many as apply” and many respondents have more than one problem.  So, nothing terribly interesting in there.  But let me dig a bit deeper into the issues associated with homegrown vs. commercial tools.

 

This makes a bit more sense.  For the home-grown tools, it looks like complexity is the biggest pain, which makes sense: assembling huge systems to support big volumes of log data is more difficult than many people anticipate.  Hadoop and other similar solutions are not optimized to simply and easily deliver answers.  This then leads to the next pain point: if it is not easy to use, then you don’t use it = it does not deliver enough value.

The responses on commercial solutions make sense as well.  Today’s commercial products are expensive and hard to operate.  On top of the sticker price, you have to spend precious employee time performing frequent software upgrades and implementing “duct tape” scaling.  If you don’t have the expertise internally, you buy it from vendors’ professional services at beaucoup $$$$$.  You have to get your own compute and storage, which grow as your data volume grows.  So, commercial “run yourself” solutions = very high CAPEX (upfront capital expenditures) and OPEX (ongoing operational expenditures).  In the end (as the second pain point highlights), commercial solutions are also complex to operate and hard to use, requiring highly skilled and hard-to-find personnel.

Pretty bleak – what now?
At Sumo Logic, we think we have a solution.  The pain points associated with home-grown and commercial solutions that were architected in the last decade are exactly what we set out to solve. We started this company after building, selling and supporting the previous generation of log management and analysis solutions.  We’ve incorporated our collective experience and customer feedback into Sumo Logic.

Built for the cloud
The Sumo Logic solution is fundamentally different from anything else out there.  It is built for big data and is “cloud native”.  All of the complexities associated with deploying, managing, upgrading, and scaling are gone – we do all that for you.  Our customers get a simple-to-use web application, and we do all the rest.

Elastic scalability
Our architecture is true cloud, not a “cloud-washed” adaptation of on-premise single-instance software that is trying to pass itself off as cloud.  Each of our services is separate and can be scaled independently.  It takes us minutes to triple the capacity of our system.

Insights beyond your wildest dreams
Because of our architecture, we are able to build analytics at scale.  Our LogReduce™ and Push Analytics™ uncover things that you didn’t even know you should be paying attention to.  The whole value proposition is turned on its head – instead of having to do all the work yourself, our algorithms do the work for you while you guide them to get better over time.

Come try it out and see for yourself: https://www.sumologic.com/free-trial/


Sumo Logic at RSA: Showcasing data security in cloud-based log management

03.16.2012 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I’m proud to announce that Sumo Logic was one of the top 10 finalists at the 2012 Innovation Sandbox at last week’s RSA Conference.  While I’ve been to the RSA Conference many times, this was my first time at the Innovation Sandbox. This year’s conference showcased three important themes in log analysis today: Big Data volumes, data privacy, and the need for better analytics.

… Continue Reading
