

The Three Questions Customers Invariably Ask Us

10.08.2014 | Posted by Sanjay Sarathy, CMO

For almost all DevOps, App Ops, and Security teams, success hinges on finding that needle in the haystack, the indicator of a cause or an unseen effect, and finding it quickly. Our central mission is to enable the success of these teams via rapid analysis of their machine data. While researching and evaluating Sumo Logic, customers invariably ask us three questions:

  • How long will it take to get value from Sumo Logic?
  • Everyone provides analytics – what’s different about yours?
  • How secure is my data in the cloud?

Let’s address each of these questions.

Time to Value

A key benefit we deliver revolves around speed and simplicity: no hardware, storage, or deployment overhead. Beyond the fact that we’re SaaS, however, the true value lies in how quickly we can turn data into actionable information.


First, our cloud-based service integrates quickly into any environment (on-premises, cloud, hybrid) that generates machine data. Because we’re data source agnostic, our service can quickly correlate logs across various systems, leading to new and relevant analyses.  For example, one of our engineers has written a post on how we use Sumo Logic internally to track what’s happening with Amazon SES messages and how others can very quickly set this up as well.  

Second, value is generated by how quickly you uncover insights. A Vice President of IT at a financial services firm that now uses Sumo Logic shared with us that incidents that used to take him two hours to discover and fix now take him 10 minutes. Why? Because the machine learning that underpins our LogReduce pattern recognition engine surfaces the critical issues his team can investigate and remediate, without the need to write any rules.

Analytics Unleashed

Sumo Logic was founded on the idea that powerful analytics are critical to making machine data a corporate resource to be valued rather than ignored. Our analytics engine combines the best of machine learning, real-time processing, and pre-built applications to provide rapid value.  

Fuze recently implemented Sumo Logic to gain visibility into its technical infrastructure. With specific insights in hand, the team can now address incidents and infrastructure improvements much more quickly, reporting a 40% savings in management time and a 5x improvement in “signal-to-noise” ratio.  A critical reason why InsideView chose Sumo Logic was the availability of our applications for AWS Elastic Load Balancing and AWS CloudTrail, which helped them monitor their AWS infrastructure and get immediate value from our service.

Security In the Cloud 

Customers are understandably curious about the security processes, policies, and infrastructure that mitigate the risks of sending their data to a third-party vendor.  Given that our founding roots are in security and that our entire operating model is to securely deliver data insights at scale, we have a deep appreciation for the natural concerns prospects might have.

We’ve crafted a detailed White Paper that outlines how we secure our service, but here are a few noteworthy highlights.

  • Data encryption: we encrypt log data both in motion and at rest, and each customer’s unique keys are rotated daily
  • Certifications:  we’ve spent significant resources on our current attestations and certifications (e.g., HIPAA, SOC 2 Type 2 and others) and are actively adding to this list
  • Security processes: included in this bucket are centrally managed FIPS-140 two-factor authentication devices, biometric controls, whitelists for users, ports, and addresses, and more

Our CISO has discussed the broader principles of managing security in the cloud in an on-demand webinar, and of course you can always start investigating our service via Sumo Logic Free to see for yourself how we answer these three questions.

Cloud Log Management for Control Freaks

10.02.2014 | Posted by Bright Fulton

The following is a guest post from Bright Fulton, Director of Engineering Operations at Swipely.

Like other teams that value their time and focus, Swipely Engineering strongly prefers partnering with third party infrastructure, platform, and monitoring services. We don’t, however, like to be externally blocked while debugging an issue or asking a new question of our data. Is giving up control the price of convenience? It shouldn’t be. The best services do the heavy lifting for you while preserving flexibility. The key lies in how you interface with the service: stay in control of data ingest and code extensibility.

A great example of this principle is Swipely’s log management architecture. We’ve been happily using Sumo Logic for years. They have an awesome product and are responsive to their customers. That’s a strong foundation, but because logging is such a vital function, we retain essential controls while taking advantage of all the power that Sumo Logic provides.


Get the benefits

Infrastructure services have flipped our notion of stability: instead of being comforted by long uptime, we now see it as a liability. Instances start, do work for an hour, terminate. But where do the logs go? One key benefit of a well integrated log management solution is centralization: stream log data off transient systems and into a centralized service.

Once stored and indexed, we want to be able to ask questions of our logs, to react to them. Quick answers come from ad-hoc searches:

  • How many times did we see this exception yesterday?

  • Show me everything related to this request ID.

Next, we define scheduled reports to catch issues earlier and shift toward a strategic view of our event data.

  • Alert me if we didn’t process a heartbeat job last hour.

  • Send me a weekly report of which instance types have the worst clock skew.

Good cloud log management solutions make this centralization, searching, and reporting easy.


Control the data

It’s possible to get these benefits without sacrificing control of the data by keeping the ingest path simple: push data through a single transport agent and keep your own copy. Swipely’s logging architecture collects with rsyslog and processes with Logstash before forwarding everything to both S3 and Sumo Logic.

Swipely’s Logging Architecture

Put all your events in one agent and watch that agent.

You likely have several services that you want to push time-series data to: logs, metrics, alerts. Solving each concern independently could leave you with multiple long-running agent processes that you need to install, configure, and keep running on every system. Each of those agents solves similar problems: encryption, authorization, batching, local buffering, back-off, updates. Each comes with its own idiosyncrasies and dependencies. That’s a lot of complexity to manage on every instance.

The lowest common denominator of these time-series event domains is the log. Simplify by standardizing on one log forwarding agent in your base image. Use something reliable, widely deployed, and open source. Swipely uses rsyslog, but more important than which one you choose is that there is just one.

Tee time

It seems an obvious point, but control freaks shouldn’t need to export their data from third parties. Instead of forwarding straight to the external service, send logs to an aggregation server first. Swipely uses Logstash to receive the many rsyslog streams. In addition to addressing vendor integrations in one place, this point of centralization allows you to:

  • Tee your event stream. Different downstream services have different strengths. Swipely sends all logs to both Sumo Logic for search and reporting and to S3 for retention and batch jobs.

  • Apply real-time policies. Since Logstash sees every log almost immediately, it’s a great place to enforce invariants, augment events, and make routing decisions. For example, logs that come in without required fields are flagged (or dropped). We add classification tags based on source and content patterns. Metrics are sent to a metric service. Critical events are pushed to an SNS topic. (A toy sketch of this policy step follows below.)
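
To make the policy idea concrete, here is a toy sketch, in plain Ruby rather than our actual Logstash configuration, of what such an aggregation-point policy step might do. The field names, tags, and routing targets are all hypothetical.

REQUIRED_FIELDS = %w[timestamp host program message]

# Toy policy step: take a parsed log event (a Hash), tag it, and decide
# where it should be routed. Names and targets are illustrative only.
def apply_policies(event)
  event['tags'] ||= []
  routes = [:s3, :sumo_logic]                   # tee everything by default

  # Enforce invariants: flag events that arrive without required fields.
  missing = REQUIRED_FIELDS.reject { |field| event.key?(field) }
  event['tags'] << 'missing_fields' unless missing.empty?

  # Augment: classify by content patterns.
  event['tags'] << 'exception' if event['message'].to_s =~ /Exception|Traceback/

  # Route: metrics go to a metric service, critical events to an SNS topic.
  routes << :metrics if event['program'] == 'statsd'
  routes << :sns     if event['tags'].include?('critical')

  [event, routes]
end

event, routes = apply_policies(
  'timestamp' => '2014-10-02T12:00:00Z', 'host' => 'web-1',
  'program'   => 'app', 'message' => 'NullPointerException in worker'
)
puts routes.inspect           # => [:s3, :sumo_logic]
puts event['tags'].inspect    # => ["exception"]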


Control the code

The output is as important as the input. Now that you’re pushing all your logs to a log management service and interacting happily through search and reports, extend the service by making use of indexes and aggregation operators from your own code.

Wrap the API

Good log management services have good APIs, and Sumo Logic has several. The Search Job API is particularly powerful, giving access to streaming results in the same way we’re used to in their search UI.

Swipely created the sumo-search gem to take advantage of the Search Job API. We use it to enable arbitrary actions on the results of a search.
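
To show what that looks like in practice, here is a minimal sketch of the create/poll/fetch cycle the gem wraps, written against what we understand the v1 Search Job endpoints to be; the host, credentials, query, and time range are placeholders, and the API pins each job to a session, so the cookie returned on creation is sent back with every subsequent request.

require 'net/http'
require 'json'
require 'uri'

# Create the search job. Credentials and query are placeholders.
jobs = URI('https://api.sumologic.com/api/v1/search/jobs')
http = Net::HTTP.new(jobs.host, jobs.port)
http.use_ssl = true

create = Net::HTTP::Post.new(jobs.path,
  'Content-Type' => 'application/json', 'Accept' => 'application/json')
create.basic_auth('ACCESS_ID', 'ACCESS_KEY')
create.body = {
  query:    '_sourceCategory=prod error | count by _sourceHost',
  from:     '2014-10-01T00:00:00',
  to:       '2014-10-02T00:00:00',
  timeZone: 'UTC'
}.to_json
created = http.request(create)
job_id  = JSON.parse(created.body)['id']
cookie  = created['Set-Cookie']   # the API pins each job to a session

# Poll until the job has gathered all of its results.
state = ''
until state == 'DONE GATHERING RESULTS'
  sleep 3
  poll = Net::HTTP::Get.new("#{jobs.path}/#{job_id}",
    'Accept' => 'application/json', 'Cookie' => cookie)
  poll.basic_auth('ACCESS_ID', 'ACCESS_KEY')
  state = JSON.parse(http.request(poll).body)['state']
end

# Page through the messages and act on them however we like.
fetch = Net::HTTP::Get.new("#{jobs.path}/#{job_id}/messages?offset=0&limit=100",
  'Accept' => 'application/json', 'Cookie' => cookie)
fetch.basic_auth('ACCESS_ID', 'ACCESS_KEY')
JSON.parse(http.request(fetch).body)['messages'].each do |msg|
  puts msg['map']['_raw']
end

Wrapping this cycle once is what lets a shell pipeline or a Ruby script treat a saved search like a function call.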

Custom alerts and dashboards

Bringing searches into the comfort of the Unix shell is part of the appeal of a tool like this, but bringing them into code is even more compelling. For example, Swipely uses sumo-search in a periodic job to send alerts that are more actionable than raw search query results: we can select the most pertinent parts of the message and link in information from other sources.

Engineers at Swipely start weekly tactical meetings by reporting trailing-seven-day metrics. For example: features shipped, slowest requests, error rates, analytics pipeline durations. These indicators help guide and prioritize discussion. Although many of these metrics come from different sources, we like to see them together on one dashboard. With sumo-search and the Search Job API, we can turn any number from a log query into a dashboard widget in a couple of lines of Ruby.
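
A sketch of such a job, with the caveat that the exact sumo-search interface below (method and parameter names) is assumed rather than confirmed; check the gem’s README for the real calls.

require 'date'
require 'sumo'   # the sumo-search gem; the interface below is an assumption

# Hypothetical weekly-metrics job: turn one log query into one number for a
# dashboard widget covering the trailing seven days.
result = Sumo.search(
  query: '_sourceCategory=prod status_code=5* | count',
  from:  (Date.today - 7).iso8601,
  to:    Date.today.iso8601
)
puts "5xx responses, trailing 7 days: #{result.records.first}"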


Giving up control is not the price of SaaS convenience. Sumo Logic does the heavy lifting of log management for Swipely and provides an interface that allows us to stay flexible. We control data on the way in by preferring open source tools in the early stages of our log pipeline and saving everything we send to S3. We preserve our ability to extend functionality by making their powerful search API easy to use from both shell and Ruby.

We’d appreciate feedback (@swipelyeng) on our logging architecture. Also, we’re not really control freaks and would love pull requests and suggestions on sumo-search!


Debugging Amazon SES message delivery using Sumo Logic

10.02.2014 | Posted by Vivek Kaushal


We at Sumo Logic use Amazon SES (Simple Email Service) to send thousands of emails every day for things such as search results, alerts, and account notifications. We need to monitor SES to ensure timely delivery and know when emails bounce.

Amazon SES provides notifications about the status of each email via Amazon SNS (Simple Notification Service), and Amazon SNS can deliver these notifications to any HTTP endpoint. We ingest these messages using Sumo Logic’s HTTP Source.

Using these logs, we have identified problems such as a scheduled search that kept sending results to an invalid email address, and a Microsoft Office 365 outage, discovered when a customer reported never receiving the sign-up email.


Here’s a step by step guide on how to send your Amazon SES notifications to Sumo Logic.

1. Set Up Collector. The first step is to set up a hosted collector in Sumo Logic that can receive logs via an HTTP endpoint. While setting up the hosted collector, we recommend providing an informative source category name, like “aws-ses”.

2. Add HTTP Source. After adding a hosted collector, you need to add an HTTP Source. Once an HTTP Source is added, it generates a URL that will be used to receive notifications from SNS. The URL looks like https://collectors.sumologic.com/receiver/v1/http/ABCDEFGHIJK.
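
Before wiring up SNS, it is worth sanity-checking the Source: any HTTP POST body sent to that URL should show up as a log message in search. A minimal sketch in Ruby, using the example URL above (substitute your own):

require 'net/http'
require 'uri'

# POST a test message to the HTTP Source; the URL is the example from step 2.
uri = URI('https://collectors.sumologic.com/receiver/v1/http/ABCDEFGHIJK')
res = Net::HTTP.post(uri, 'test message for the aws-ses source')
puts res.code   # expect 200 once the Source is live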

3. Create SNS Topic. In order to send notifications from SES to SNS, we need to create an SNS topic from the SNS console. We use “SES-Notifications” as the name of the topic in our example.

4. Create SNS Subscription. SNS allows you to send a notification to multiple HTTP Endpoints by creating multiple subscriptions within a topic. In this step we will create one subscription for the SES-Notifications topic created in step 3 and send notifications to the HTTP endpoint generated in step 2.

5. Confirm Subscription. After a subscription is created, Amazon SNS will send a subscription confirmation message to the endpoint. This subscription confirmation notification can be found in Sumo Logic by searching for: _sourceCategory=<name of the sourceCategory provided in step 1>

For example: _sourceCategory=aws-ses 

Copy the confirmation link from the logs and paste it into your browser to confirm the subscription.

6. Send SES notifications to SNS. Finally, configure SES to send notifications to SNS. Go to the SES console and select the verified senders option on the left-hand side. In the list of verified email addresses, select the email address for which you want to configure the logs.

On that page, expand the notifications section, click Edit Notifications, and select the SNS topic you created in step 3.


7. Switch message format to raw (Optional). SES sends notifications to SNS in JSON format, and anything sent through SNS is in turn wrapped in a JSON envelope by default. The result is nested JSON that is nearly unreadable. To avoid this, we highly recommend configuring SNS to use the raw message delivery option.
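
To see why raw delivery matters, here is an illustrative sketch of the double parse that the default format forces on you; the topic ARN and payload are made up, but the envelope fields are the standard SNS ones.

require 'json'

# Without raw message delivery, the SES payload arrives as an escaped JSON
# string inside the SNS envelope's "Message" field, so it must be parsed twice.
envelope = <<~'JSON'
  {"Type": "Notification",
   "TopicArn": "arn:aws:sns:us-east-1:123456789012:SES-Notifications",
   "Message": "{\"notificationType\":\"Bounce\",\"mail\":{\"destination\":[\"user@example.com\"]}}"}
JSON

notification = JSON.parse(JSON.parse(envelope)['Message'])
puts notification['notificationType']   # => "Bounce"

# With raw message delivery enabled, the POST body is just the inner JSON,
# and a single JSON.parse suffices.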

Before setting raw message format

After setting raw message format


We used the JSON operator to parse the messages, as shown in the queries below:

1. Retrieve general information out of messages
_sourceCategory=aws-ses | json "notificationType", "mail", "mail.destination", "mail.destination[0]", "bounce", "bounce.bounceType", "bounce.bounceSubType", "bounce.bouncedRecipients[0]" nodrop

2. Identify most frequently bounced recipients
_sourceCategory=aws-ses AND !"notificationType\":\"Delivery" | json "notificationType", "mail.destination[0]" as type, destination nodrop | count by destination | sort by _count


Machine Data Analytics, Down Under

08.20.2014 | Posted by Sanjay Sarathy, CMO

Not often have I spent two weeks of August in a “winter” climate, but it was a great opportunity to spend some time with our new team in Australia, visit prospects, customers, and partners, and attend a couple of Amazon Web Services Summits to boot.

Here are some straight-off-the-plane observations.

A Local “Data Center” Presence Matters:  We now have production instances in Sydney, Dublin, and the United States.  In conversations with Australian enterprises and government entities, the fact that we have both a local team and a local production instance carried a great deal of weight in determining whether we were a good match for their needs.  This was true whether their use case centered on supporting their security initiatives or enabling their DevOps teams to release applications to market faster.  You can now select where your data resides when you sign up for Sumo Logic Free.

Australia is Ready For the Cloud:  From the smallest startup to extremely large mining companies, everyone was interested in how we could support their cloud initiatives.  The AWS Summits were packed, and the conversations we had revolved not just around machine data analytics but around what we could do to support their evolving infrastructure strategies.  The fact that we have apps for Amazon S3, CloudFront, CloudTrail, and ELB made the conversations even more productive, and we’ve seen significant interest in our special trial for AWS customers.

We’re A Natural Fit for Managed Service Providers:  As a multi-tenant service born in the Cloud, we have a slew of advantages for MSPs and MSSPs looking to embed proactive analytics into their service offerings, as our work with The Herjavec Group and Medidata shows.  We’ve had success with multiple partners in the US, and the many discussions we had in Australia indicate that there’s a very interesting partner opportunity there as well.

Analytics and Time to Insights:  In my conversations with dozens of people at the two summits and in 1-1 meetings, two trends immediately stood out.  First, while people remain extremely interested in how they can take advantage of real-time dashboards and alerts, one of their bigger concerns typically revolved around how quickly they could get to that point.  “I don’t have time to do a lot of infrastructure management” was the common refrain, and we certainly empathize with that thought.  The second is a reflection on how we sometimes take our pattern recognition technology, a.k.a. LogReduce, for granted.  Having shown it to quite a few people at the booth, the reaction on their faces never gets old, especially once they see the order of magnitude by which we reduce the time it takes to find something interesting in their machine data.

At the end of the day, this is a people business.  We have a great team in Australia and look forward to publicizing their many successes over the coming quarters.



AWS Elastic Load Balancing – New Visibility Into Your AWS Load Balancers

03.06.2014 | Posted by Ariel Smoliar, Senior Product Manager

After the successful launch of the Sumo Logic Application for AWS CloudTrail last November, and with numerous customers now using that application, we were really excited to work again on a new logging service from AWS, this time providing analytics around the log files generated by AWS load balancers.

Our integration with AWS CloudTrail targets use cases relevant to security, usage, and operations. With our new application for AWS Elastic Load Balancing, we provide our customers with dashboards that deliver real-time insights into operational data. You can also address additional use cases of your own by parsing the log entries and charting the results with our visualization tools.

Insights from ELB Log Data

Sumo Logic runs natively on the AWS infrastructure and uses AWS load balancers, so we had plenty of raw data to work with during development. You will find 12 fields in the ELB logs covering the entire request/response lifecycle. By adding the request, backend, and response processing times, we can compute the total time (latency) from when the load balancer started reading the request headers to when it started sending the response headers to the client. The Latency Analysis dashboard presents a granular analysis per domain, client IP, and backend instance (EC2).
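
As a concrete illustration of that arithmetic, here is a sketch over one synthetic entry in the classic ELB access log format (timestamp, ELB name, client and backend addresses, the three processing times, the two status codes, byte counts, and the request):

# Sum the three timing fields (request, backend, response processing time)
# to get the total latency for one synthetic ELB access log line.
line = '2014-03-05T23:59:59.123456Z my-elb 203.0.113.7:54321 10.0.1.5:80 ' \
       '0.000045 0.152000 0.000038 200 200 0 5120 "GET https://example.com:443/ HTTP/1.1"'

fields = line.split(' ')
request_t, backend_t, response_t = fields[4].to_f, fields[5].to_f, fields[6].to_f
puts "total latency: #{(request_t + backend_t + response_t).round(6)}s"   # => 0.152083s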

The application also analyzes status codes from both the ELB and the backend instances. Note that the total counts for ELB and instance status codes will usually match, unless there are issues such as no backend response or a request rejected by the client. Additionally, for ELBs configured with a TCP listener (layer 4) rather than HTTP, the TCP requests are still logged; in this case you will see that the URL has three dashes and there are no values for the HTTP status codes.

Alerting Frequency

Often during my discussions with Sumo Logic users, the topic of scheduled searches and alerting comes up. Based on our work with ELB logs, there is no single threshold we can recommend that covers every use case: the right threshold depends on the application; for example, tiny beacon requests and huge file downloads produce very different latencies. Sumo Logic gives you the flexibility to set a threshold in a scheduled search, or simply to change the color in the graph based on the value range for monitoring purposes.

Visualization

I want to talk a little bit about machine data visualization. While skiing last week in Steamboat, Colorado, I kept thinking about how the beautiful Rocky Mountain landscape relates to the somewhat more mundane world of load balancer data visualization. Here is what we did to present the load balancer data in a more compelling way.


You can slice and dice the data using our Transpose operator, as we did in the Latency by Load Balancer monitor, but I would like to focus on a different feature built by our UI team and share how we used it in this application. This feature combines the number of requests, the total request size, and the client IP address, and integrates these data elements into the Total Requests and Data Volume monitor.

We first used this visualization approach in our Nginx app (Traffic Volume and Bytes Served monitor). We received very positive feedback and decided it made sense to incorporate this approach into this application as well.

Combining three fields in a single view gives you a faster overview of your environment and also provides the ability to drill down and investigate any activity.


It reminds one of that Rocky Mountain landscape, right? :-)

To get this same visualization, click on the gear icon in the Search screen and choose the Change Series option. 


For each data series, you can choose how you would like to represent the data. We used a Column Chart for the total requests and a Line Chart for the received and sent data.


I find it beautiful and useful. I hope you plan to use this visualization approach in your dashboards, and please let us know if any help is required.

One more thing…

Please stay tuned and check our posts next week… we can’t wait to share with you where we’re going next in the world of Sumo Logic Applications.


Sumo Logic Deployment Infrastructure and Practices

01.08.2014 | Posted by Manish Khettry

Introduction

Here at Sumo Logic, we run a log management service that ingests and indexes many terabytes of data a day; our customers then use our service to query and analyze all of this data. Powering this service are a dozen or more separate programs (which I will call assemblies from now on) running in the cloud and communicating with one another. For instance, the Receiver assembly is responsible for accepting log lines from collectors running on our customers’ host machines, while the Index assembly creates text indices for the massive amount of data constantly being fed into the system by the Receivers.

We deploy to our production system multiple times each week, while our engineering teams are constantly building new features, fixing bugs, improving performance, and, last but not least, working on infrastructure improvements to help in the care and wellbeing of this complex big-data system. How do we do it? This blog post tries to explain our (semi-)continuous deployment system.

Running through hoops

In any continuous deployment system, you need multiple hoops that your software must pass through before you deploy it for your users. At Sumo Logic, we have four well-defined tiers with clear deployment criteria for each. A tier is an instance of the entire Sumo Logic service, with all the assemblies running in concert and all the monitoring infrastructure (health checks, internal administrative tools, auto-remediation scripts, etc.) watching over it.

Night

This is the first step in the sequence our software goes through. Originally intended as a nightly deploy, Night now automatically receives the latest clean build of each assembly on our master branch several times a day. A clean build means that all the unit tests for the assemblies pass. In our complex system, however, it is the interaction between assemblies that can break functionality. To test this, we have a number of integration tests running against Night regularly. Any failure in these integration tests is an early warning that something is broken. We also have a dedicated person troubleshooting problems with Night, whose responsibility is, at the very least, to identify and file bugs for problems.

Stage

We cut a release branch once a week and use Stage to test this branch, much as we use Night to keep master healthy. The same set of integration tests that runs against Night also runs against Stage, and the goal is to stabilize the branch in preparation for a deployment to production. Our QA team does ad-hoc testing and runs its manual test suites against Stage.

Long

Right before Production is the Long tier, which we consider almost as important as Production itself. The interaction between Long and Production is well described in this webinar given by our founders. Logs from Long are fed to Production and vice versa, so Long is used to monitor and troubleshoot problems with Production.

Deployments to Long are done manually a few days before a scheduled deployment to Production, from a build that has passed all automated unit tests as well as the integration tests on Stage. While the deployment is manually triggered, the actual process of upgrading and restarting the entire system is about as close to one button click as you can get (or one command on the CLI)!

Production

After Long has soaked for a few days, we manually deploy the software running on Long to Production, the last hoop our software has to jump through. We aim for a full deployment every week and will often do smaller upgrades between full deploys.

Being Production, this deployment is closely watched and there are a fair number of safeguards built into the process. Most notably, we have two dedicated engineers who manage this deployment, with one acting as an observer. We also have a tele-conference with screen sharing that anyone can join and observe the deploy process.

Social Practices

Closely associated with the software infrastructure are the social aspects of keeping this system running. These are:

Ownership

We have well-defined ownership of these tiers within engineering and DevOps, with roles that rotate weekly. An engineer designated Primary is responsible for Long and Production. Similarly, we have a designated Jenkins Cop role to keep our continuous integration system, Night, and Stage healthy.

Group decision making and notifications

We have a short standup every day before lunch, which everyone in engineering attends. The Primary and the Jenkins Cop update the team on any problems or issues with these tiers from the previous day.

In addition to the physical meeting, we use Campfire to discuss ongoing problems and notify others of changes to any of these tiers. If someone wants to change a configuration property on Night to test a new feature, they update everyone else on Campfire. Everyone (not just the Primary or the Jenkins Cop) stays in the loop about these tiers and can jump in to troubleshoot problems.

Automate almost everything. A checklist for the rest.

There are certain things that are done or triggered manually. In cases where humans operate something (a deploy to Long or Production for instance), we have a checklist for engineers to follow. For more on checklists, I refer you to an excellent book, The Checklist Manifesto.

Conclusion

This system has been in place since Sumo Logic went live and has served us well. It bears mentioning that the key to all of this is automation, uniformity, and well-delineated responsibilities. For example, spinning up a complete system takes just a couple of commands in our deployment shell. Any deployment (even a personal one for development) comes up with everything pre-installed and running, including health checks, monitoring dashboards, and auto-remediation scripts. Identifying and fixing a problem on Production is no different from doing so on Night. In almost every way (except for the sizing, and for waking up the Jenkins Cop in the middle of the night), these are identical tiers!

While automation is key, it doesn’t change the fact that it is people who run the system and keep it healthy. A deployment to Production can be stressful, more so for the Primary than for anyone else, and having a well-defined checklist can take away some of that stress.

Any system like this needs constant improvement, and since we are not sitting idle, there are dozens of features, big and small, that need to be worked on. Two big ones are:

  • Red-Green deployments, where new releases are rolled out to a small set of instances first and, once we are confident they work, pushed to the rest of the fleet.

  • More frequent deployments of smaller parts of the system. Smaller, more frequent deployments are less risky.

In other words, there is a lot of work to do. Come join us at Sumo Logic!


Sumo Logic Application for AWS CloudTrail

11.13.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

Cloud is opaque

One of the biggest adoption barriers for SaaS, PaaS, and IaaS is the opaqueness of, and lack of visibility into, the changes and activities that affect cloud infrastructure.  While running on-premise infrastructure, you have the ability to audit activity; for example, you can easily tell who is starting and stopping VMs in virtualization clusters, see who is creating and deleting users, and watch who is making firewall configuration changes. This lack of visibility has been one of the main roadblocks to adoption, even though the benefits have been compelling enough for many enterprises to adopt the Cloud anyway.

This information is critical to securing infrastructure, applications, and data. It’s critical to proving and maintaining compliance, critical to understanding utilization and cost, and finally, it’s critical for maintaining excellence in operations.

Not all Clouds are opaque any longer

Today, the world’s biggest cloud provider, Amazon Web Services (AWS), announced a new product that, in combination with Sumo Logic, changes the game for cloud infrastructure audit visibility.  AWS CloudTrail is the raw log data feed that will tell you exactly who is doing what, on which sets of infrastructure, at what time, from which IP addresses, and more.  Sumo Logic is integrated with AWS CloudTrail: we collect this audit data in real time and enable SOC- and NOC-style visibility and analytics.

Here are a few examples of what AWS CloudTrail data contains:

  • Network ACL changes.

  • Creation and deletion of network interfaces.

  • Authorized Ingress/Egress across network segments and ports.

  • Changes to privileges, passwords and user profiles.

  • Deletion and creation of security groups.

  • Starting and terminating instances.

  • And much more.

Sumo Logic Application for AWS CloudTrail

Cloud data comes to life with our Sumo Logic Application for AWS CloudTrail, helping our customers across security and compliance, operational visibility, and cost containment. The Sumo Logic Application for AWS CloudTrail delivers:


  • Seamless integration with AWS CloudTrail data feed.

  • SOC-style, real-time dashboards to monitor access and activity.

  • Forensic analysis to understand the “who, what, when, where, and how” of  events and logs.

  • Alerts when important activities and events occur.

  • Correlation of AWS CloudTrail data with other security data sets, such as intrusion detection system data, operating system events, application data, and more.

This integration delivers improved security posture and better compliance with the internal and external regulations that protect your brand.  It also enables operational analytics that can improve SLAs and customer satisfaction.  Finally, it provides deep visibility into the utilization of AWS resources, which can help improve efficiency and reduce cost.

The integration is simple: AWS CloudTrail deposits data into your S3 account in near real time, and Sumo Logic collects it with an S3 Source as soon as it arrives.  Sumo Logic also provides a set of pre-built Dashboards and searches to analyze the CloudTrail data.

To learn more, see http://www.sumologic.com/applications/aws-cloudtrail/ and read the documentation: https://support.sumologic.com/entries/30216746-Sumo-Logic-for-Amazon-CloudTrail-App.


Universal Collection of Machine Data

04.18.2013 | Posted by Sanjay Sarathy, CMO

Customers love flexibility, especially if that flexibility drives additional business value.  In that vein, today we announced an expansion of our log data collection capabilities with hosted HTTPS and Amazon S3 collectors that eliminate the need for any local software installation.  There may be a variety of reasons why you don’t want or can’t have local collectors; for example, not having access to the underlying infrastructure, as often happens in Infrastructure-as-a-Service (IaaS) environments, or simply not feeling like deploying any local software into your current infrastructure. Defining these hosted collectors is now baked into the set-up process, whether you’re using Sumo Logic Free or our Enterprise product.


With these new capabilities, companies can now unify how they collect and analyze log data generated from private clouds, public clouds, and their on-premise infrastructure.  They can then apply our unique analytics capabilities like LogReduce to generate insight across every relevant application and operational tier.

With companies increasingly moving towards the Cloud to power different parts of their business, it’s imperative that they have the necessary means to troubleshoot and monitor their diverse infrastructure.  Sumo Logic provides that flexibility.


Pardon me, have you got data about machine data?

01.31.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I’m glad you asked; I just might.  In fact, we started collecting data about machine data some nine months ago when we participated in the AWS Big Data conference in Boston.  Since then we have continued collecting the same data at a variety of industry shows and conferences such as VMworld, AWS re:Invent, Velocity, Gluecon, Cloud Slam, Defrag, DataWeek, and others.

The original survey was printed on my home printer, four surveys per page, then inexpertly cut with the kitchen scissors the night before the conference – startup style, oh yeah!  The new version made it onto a shiny new iPad as an iOS app.  The improved method, Apple cachet, and a wider reach gave us more than 300 data points and, incidentally, cost us more than 300 Sumo Logic T-shirts, which we were more than happy to give up in exchange for data.  (By the way, if you want one, come to one of our events; the next one coming up is the Strata Conference.)

As a data junkie, I’ve been slicing and dicing the responses and thought that the end of our fiscal year could be the right moment to revisit the data set and reflect on my first blog post about it.

Here is what we asked:

  • Which business problems do you solve by using machine data?
  • Which tools do you use to analyze machine data in order to solve those business problems?
  • What issues do you experience solving those problems with the chosen tools?

The survey was partially designed to help us better understand Sumo Logic’s segment of the IT Operations Management or IT Management markets as defined by Gartner, Forrester, and other analysts.  I think that the sample set is relatively representative: responders came from shows with varied audiences, such as developers at Velocity and GlueCon, data center operators at VMworld, and folks investigating a move to the cloud at AWS re:Invent and Cloud Slam.  Answers were actually pretty consistent across the different “cohorts”.  We have a statistically significant number of responses, and finally, the responders were not our customers or direct prospects.  So let’s dive in, starting at the top:

Which business problems do you solve by using logs and other machine data?

  • Applications management, monitoring, and troubleshooting (46%)
  • IT operations management, monitoring, and troubleshooting (33%)
  • Security management, monitoring, and alerting (21%)

Does anything in there surprise you?  I guess it depends on your point of reference.  Let me compare it to the overall “IT Management” or “IT Operations Management” market.  The consensus (if such a thing exists) on size by segment is:

  • IT Infrastructure (servers, networks, etc) is up to 50-60% of the total market
  • Application (internal, external, etc.) is just north of 30-40%
  • Security is around 10%

Source: Sumo Logic analysis of aggregated data from various industry analysts who cover the IT Management space.

There are a few things that could explain why our subsegment leans so much more toward Applications than IT Infrastructure:

  • (hypothesis #1) analysts measure total product sold to derive market size, which might not match the effort people actually apply to these use cases.
  • (hypothesis #2) there is more shelfware in IT Infrastructure, so spend overstates effort there.
  • (hypothesis #3) there are more home-grown solutions in Application management, so spend understates effort there.
  • (hypothesis #4) our data is an indicator, or a result, of a shift in the market (e.g., as enterprises shift toward IaaS, they spend less time managing IT Infrastructure and more on their core competency, their applications).
  • (obnoxious hypothesis #5) intuitively, it’s the software, stupid – nobody buys hardware because they love it; it exists to run software (applications), we care more about applications, and that’s why it is so.

OK, ok, let’s check the data to see which hypotheses our narrow response set can help test or validate.  I don’t think our data can help us validate hypothesis #1 or hypothesis #2; I’ll try to come up with additional survey questions that will, in the future, help test those two.

Hypothesis #3, on the other hand, might be partially testable.  If we compare responses from users of commercial tools vs. users of home-grown tools, we are left with the following:

There is not a significant difference between responders who use commercial tools and responders who use home-grown tools; hypothesis #3 explains only a couple of percentage points of difference.

Hypothesis #4 – I think we can use a proxy to test it.  Let’s assume that responders from VMworld are focused on the internal data center and the private cloud; in that case, they would not rely as much on IaaS providers for IT Infrastructure operations.  On the other hand, let’s assume that attendees of AWS and other cloud conferences are more likely to rely on IaaS for IT Infrastructure operations.  Data, please:

Interesting: this seems to explain some of the shift between security and infrastructure, but not applications.  So we’re left with:

  • hypothesis #1 – spend vs. reported effort is skewed – perhaps
  • hypothesis #2 – there is more shelfware in IT infrastructure – unlikely
  • obnoxious hypothesis #5 – it’s the software stupid – getting warmer

That should do it for one blog post.  I’ve barely scratched the surface by stopping with the responses to the first question.  I will work to see if I can test the outstanding hypotheses and, if successful, will write about the findings.  I will also follow up with another post looking at the rest of the data.  I welcome your comments and thoughts.

While you’re at it, try Sumo Logic for free.


Why I Joined Sumo Logic and Moved to Silicon Valley

01.28.2013 | Posted by Ben Newton, Senior Product Manager

Entering Startup Land

We make hundreds of decisions every day, mostly small ones, that are just part of life’s ebb and flow. And then there are the big decisions that don’t merely create ripples in the flow of your life - they redirect it entirely. The massive, life-defining decisions like marriage and children; the career-defining decisions like choosing your first job after college. I’ve had my share of career-defining decisions – leaving a physics graduate program to chase after the dot com craze, leaving consulting for sales engineering, etc. The thing about this latest decision is that it combines both. I am joining Sumo Logic, leaving behind a safe job in marketing, and moving to Silicon Valley – away from my friends, family, and community. So, why did I do it? 


Now is the time for Start-Ups in Enterprise Software. 

Consumer start-ups get all the press, but enterprise startups are where the real action is. The rash of consolidations over the last five years or so has created an innovation gap that companies like Sumo Logic are primed to exploit.  The perfect storm of cloud computing, SaaS, Big Data, and DevOps/Agile is forcing customers to look outside their comfort zones for the solutions they need. Sumo Logic brings together all of that innovation in a way that is too good not to be a part of.

The Enterprise SaaS Revolution is Inevitable.

The SaaS business model, combined with Agile development practices, is completely changing the way companies buy enterprise software. Gartner sees companies replacing legacy software with SaaS more than ever. The antiquated term licenses of on-premise software, with their massive up-front costs, double-digit maintenance charges, and “true-ups”, seem positively barbaric compared to the flexibility of SaaS. And crucially for me, Sumo Logic is also one of the few true SaaS companies delving into the final frontier of the previously untouchable data center.

Big Data is the “Killer App” for the Cloud.
“Big Data” analytics, using highly parallelized architectures like Hadoop or Cassandra, is one of the first innovations in enterprise IT to be truly “born in the cloud”. These new approaches were built to solve problems that just didn’t exist ten, or even five, years ago. The Big Data aspect of Sumo Logic is exciting to me. I am convinced that we are only scratching the surface of what is possible with Sumo Logic’s technology, and I want to be there on the bleeding edge with them.

Management Teams Matter.
When it really comes down to it, I joined Sumo Logic because I have first-hand knowledge of the skills that Sumo Logic’s management team brings to the table. I have complete confidence in Vance Loiselle’s leadership as CEO, and Sumo Logic has an unbeatable combination of know-how and get-it-done people. And clearly some of the top venture capital firms in the world agree with me. This is a winning team, and I like to win!

Silicon Valley is still Nirvana for Geeks and the best place for Start-Ups.
Other cities are catching up, but Silicon Valley is still the best place to start a tech company. The combination of brainpower, money, and critical mass is just hard to beat. On a personal level, I have resisted the siren call of the San Francisco Bay Area for too long. I am strangely excited to be in a place where I can wear my glasses as a badge of honor and discuss my love for gadgets and science fiction without shame. Luckily for me, I am blessed with a wife who has embraced my geek needs and supports me wholeheartedly (and a 21-month-old who doesn’t care either way).

So, here’s to a great adventure with the Sumo Logic team, to a new life in Silicon Valley, and to living on the edge of innovation. 

P.S.  If you want to see what I am so excited about, get a Sumo Logic Free account and check it out. 
