

Machine Data Analytics, Down Under

08.20.2014 | Posted by Sanjay Sarathy, CMO

Not often have I spent two weeks in August in a “winter” climate, but it was a great opportunity to spend some time with our new team in Australia, visit with prospects, customers and partners, and attend a couple of Amazon Web Services Summits to boot.

Here are some straight-off-the-plane observations.

A Local “Data Center” Presence Matters:  We now have production instances in Sydney, Dublin and the United States.  In conversations with Australian enterprises and government entities, the fact that we have both a local team and a local production instance carried a great deal of weight in determining whether we were a good match for their needs.  This was true whether their use case centered on supporting security initiatives or on enabling DevOps teams to release applications to market faster.  You can now select where your data resides when you sign up for Sumo Logic Free.

Australia is Ready For the Cloud:  From the smallest startup to extremely large mining companies, everyone was interested in how we could support their cloud initiatives.  The AWS Summits were packed, and the conversations we had revolved not just around machine data analytics but also around what we could do to support their evolving infrastructure strategies.  The fact that we have apps for Amazon S3, CloudFront, CloudTrail and ELB made the conversations even more productive, and we’ve seen significant interest in our special trial for AWS customers.

We’re A Natural Fit for Managed Service Providers:  As a multi-tenant service born in the Cloud, we have a slew of advantages for MSPs and MSSPs looking to embed proactive analytics into their service offerings, as our work with The Herjavec Group and Medidata shows.  We’ve had success with multiple partners in the US, and the many discussions we had in Australia indicate that there’s a very interesting partner opportunity there as well.

Analytics and Time to Insights:  In my conversations with dozens of people at the two summits and in 1-1 meetings, two trends immediately stood out.  First, while people remain extremely interested in how they can take advantage of real-time dashboards and alerts, one of their bigger concerns typically revolved around how quickly they could get to that point.  “I don’t have time to do a lot of infrastructure management” was the common refrain, and we certainly empathize with that thought.  The second is a reflection on how we sometimes take for granted our pattern recognition technology, aka LogReduce.  Having shown it to quite a few people at the booth, the reaction on their faces never gets old, especially once they see the order of magnitude by which we reduce the time it takes to find something interesting in their machine data.

At the end of the day, this is a people business.  We have a great team in Australia and look forward to publicizing their many successes over the coming quarters.


Ariel Smoliar, Senior Product Manager

AWS Elastic Load Balancing – New Visibility Into Your AWS Load Balancers

03.06.2014 | Posted by Ariel Smoliar, Senior Product Manager

After the successful launch of the Sumo Logic Application for AWS CloudTrail last November and with numerous customers now using this application, we were really excited to work again on a new logging service from AWS, this time providing analytics around log files generated by the AWS Load Balancers.

Our integration with AWS CloudTrail targets use cases relevant to security, usage and operations. With our new application for AWS Elastic Load Balancing, we provide our customers with dashboards that deliver real-time insights into operational data. You can also add use cases based on your own requirements by parsing the log entries and visualizing the data with our visualization tools.

Insights from ELB Log Data

Sumo Logic runs natively on the AWS infrastructure and uses AWS load balancers, so we had plenty of raw data to work with during the development of the content. You will find 12 fields in the ELB logs covering the entire request/response lifecycle. By adding the request, backend and response processing times, we can highlight the total time (latency) from when the load balancer started reading the request headers to when the load balancer started sending the response headers to the client. The Latency Analysis dashboard presents a granular analysis per domain, client IP and backend instance (EC2).
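To make the latency math concrete, here is a minimal Scala sketch, assuming the 12-field classic ELB access log layout (the class and field names are ours, purely for illustration):

// Illustrative model of one ELB access log entry (12 fields).
case class ElbEntry(timestamp: String, elb: String, clientIp: String,
                    backendIp: String, requestTime: Double,
                    backendTime: Double, responseTime: Double,
                    elbStatus: String, backendStatus: String,
                    receivedBytes: Long, sentBytes: Long, request: String) {
  // Total latency: from reading the request headers to sending the
  // response headers, as described above.
  def totalLatency: Double = requestTime + backendTime + responseTime
}

object ElbEntry {
  // The quoted request is the 12th field and contains spaces, so split
  // with a limit of 12 to keep it intact.
  def parse(line: String): ElbEntry = {
    val f = line.split(" ", 12)
    ElbEntry(f(0), f(1), f(2), f(3), f(4).toDouble, f(5).toDouble,
             f(6).toDouble, f(7), f(8), f(9).toLong, f(10).toLong, f(11))
  }
}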

The application also provides analysis of the status codes reported by the ELB and by the backend instances. Please note that the total counts for the two will generally match, unless there are issues such as no backend response or a client-rejected request. Additionally, for ELBs configured with a TCP listener (layer 4) rather than HTTP, the TCP requests are still logged; in this case you will see that the URL is three dashes and there are no values for the HTTP status codes.
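Building on the sketch above, spotting both of these cases programmatically is straightforward (again, all names are illustrative):

// Entries where the ELB and backend status codes disagree often point
// to issues such as a missing backend response.
def statusMismatches(entries: Seq[ElbEntry]): Seq[ElbEntry] =
  entries.filter(e => e.elbStatus != e.backendStatus)

// TCP-listener (layer 4) entries log the request as "- - -" and carry
// no meaningful HTTP status codes.
def tcpListenerEntries(entries: Seq[ElbEntry]): Seq[ElbEntry] =
  entries.filter(_.request.contains("- - -"))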

Alerting Frequency

Often during my discussions with Sumo Logic users, the topic of scheduled searches and alerting comes up. Based on our work with ELB logs, there is no single threshold we can recommend that covers every use case. The threshold should be based on the application – e.g., tiny beacon requests and downloads of huge files cause very different latencies. Sumo Logic gives you the flexibility to set a threshold in the scheduled search, or simply to change the color in the graph based on the value range for monitoring purposes.
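As a small illustration of the idea, again building on the ElbEntry sketch above (the 2-second threshold is an arbitrary example, not a recommendation):

// Flag entries whose total latency exceeds an application-specific
// threshold; what counts as "too slow" depends entirely on your workload.
def slowRequests(entries: Seq[ElbEntry], thresholdSeconds: Double): Seq[ElbEntry] =
  entries.filter(_.totalLatency > thresholdSeconds)

val slow = slowRequests(entries, thresholdSeconds = 2.0) // entries: parsed ELB log lines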

Visualization

I want to talk a little bit about machine data visualization. While skiing last week in Steamboat, Colorado, I kept thinking about how the beautiful Rocky Mountain landscape relates to the somewhat more mundane world of load balancer data visualization. So here is what we did to present the load balancer data in a more compelling way:


You can slice and dice the data using our Transpose operator, as we did in the Latency by Load Balancer monitor, but I would like to focus on a different feature that was built by our UI team and share how we used it in this application. This feature takes the number of requests, the total request size and the client IP address, and integrates these data elements into the Total Requests and Data Volume monitor.

We first used this visualization approach in our Nginx app (Traffic Volume and Bytes Served monitor). We received very positive feedback and decided it made sense to incorporate this approach into this application as well.

Combining three fields in a single view gives you a faster overview of your environment and also provides you with the ability to drill down and investigate any activity.


It reminds one of the landscape above, right? :-)

To get this same visualization, click on the gear icon in the Search screen and choose the Change Series option. 


For each data series, you can choose how you would like to represent the data. We used Column Chart for the total requests and Line Chart for the received and sent data. 


I find it beautiful and useful. I hope you plan to use this visualization approach in your dashboards, and please let us know if any help is required.

One more thing…

Please stay tuned and check our posts next week… we can’t wait to share with you where we’re going next in the world of Sumo Logic Applications.


Sumo Logic Deployment Infrastructure and Practices

01.08.2014 | Posted by Manish Khettry

Introduction

Here at Sumo Logic, we run a log management service that ingests and indexes many terabytes of data a day; our customers then use our service to query and analyze all of this data. Powering this service are a dozen or more separate programs (which I will call assemblies from now on) running in the cloud and communicating with one another. For instance, the Receiver assembly is responsible for accepting log lines from collectors running on our customers’ host machines, while the Index assembly creates text indices for the massive amount of data constantly being pumped into our system by the Receivers.
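As a toy illustration of the assembly idea (the trait and class below are ours, not the actual Sumo Logic codebase), each assembly is an independent long-running program with a uniform lifecycle, which is what lets deployment and monitoring treat them all the same way:

// Illustrative only: a common lifecycle for independently deployed programs.
trait Assembly {
  def name: String
  def start(): Unit
  def stop(): Unit
  def healthy: Boolean
}

class Receiver extends Assembly {
  val name = "receiver"
  def start(): Unit = { /* accept log lines from customer collectors */ }
  def stop(): Unit  = { /* drain in-flight data, then shut down */ }
  def healthy = true // in reality, backed by the health checks described below
}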

We deploy to our production system multiple times each week, while our engineering teams are constantly building new features, fixing bugs, improving performance, and, last but not least, working on infrastructure improvements to help in the care and wellbeing of this complex big-data system. How do we do it? This blog post tries to explain our (semi)-continuous deployment system.

Running through hoops

In any continuous deployment system, you need multiple hoops that your software must pass through before you deploy it for your users. At Sumo Logic, we have four well-defined tiers with clear deployment criteria for each.  A tier is an instance of the entire Sumo Logic service where all the assemblies run in concert, along with all the monitoring infrastructure (health checks, internal administrative tools, auto-remediation scripts, etc.) watching over it.

Night

This is the first step in the sequence our software goes through. Night was originally intended as a nightly deploy; we now automatically deploy the latest clean builds of each assembly on our master branch several times every day. A clean build means that all the unit tests for the assemblies pass. In our complex system, however, it is the interaction between assemblies that can break functionality. To test this, we have a number of integration tests running against Night regularly. Any failure in these integration tests is an early warning that something is broken. We also have a dedicated person troubleshooting problems with Night, whose responsibility it is, at the very least, to identify and file bugs for problems.

Stage

We cut a release branch once a week and use Stage to test this branch, much as we use Night to keep master healthy. The same set of integration tests that runs against Night also runs against Stage, and the goal is to stabilize the branch in readiness for a deployment to production. Our QA team does ad-hoc testing and runs their manual test suites against Stage.

Long

Right before production is the Long tier. We consider it almost as important as our Production tier. The interaction between Long and Production is well described in this webinar given by our founders. Logs from Long are fed to Production and vice versa, so Long is used to monitor and troubleshoot problems with Production.

Deployments to Long are done manually a few days before a scheduled deployment to Production, from a build that has passed all automated unit tests as well as the integration tests on Stage. While the deployment is manually triggered, the actual process of upgrading and restarting the entire system is about as close to a one-button click as you can get (or one command on the CLI)!

Production

After Long has soaked for a few days, we manually deploy the software running on Long to Production – the last hoop our software has to jump through. We aim for a full deployment every week, and oftentimes do smaller upgrades of our software between full deploys.

Being Production, this deployment is closely watched and there are a fair number of safeguards built into the process. Most notably, we have two dedicated engineers who manage this deployment, with one acting as an observer. We also have a tele-conference with screen sharing that anyone can join and observe the deploy process.

Social Practices

Closely associated with the software infrastructure are the social aspects of keeping this system running. These are:

Ownership

We have well-defined ownership of these tiers within engineering and DevOps, rotating weekly. An engineer is designated Primary and is responsible for Long and Production. Similarly, we have a designated Jenkins Cop role to keep our continuous integration system, Night, and Stage healthy.

Group decision making and notifications

We have a short standup every day before lunch, which everyone in engineering attends. The Primary and Jenkins Cop update the team on any problems or issues with these tiers from the previous day.

In addition to the physical meeting, we use Campfire to discuss ongoing problems and to notify others of changes to any of these tiers. If someone wants to change a configuration property on Night to test a new feature, they update everyone else on Campfire. Everyone (and not just the Primary or Jenkins Cop) is in the loop about these tiers and can jump in to troubleshoot problems.

Automate almost everything. A checklist for the rest.

There are certain things that are done or triggered manually. In cases where humans operate something (a deploy to Long or Production for instance), we have a checklist for engineers to follow. For more on checklists, I refer you to an excellent book, The Checklist Manifesto.

Conclusion

This system has been in place since Sumo Logic went live and has served us well. It bears mentioning that the key to all of this is automation, uniformity, and well-delineated responsibilities. For example, spinning up a complete system takes just a couple of commands in our deployment shell. Also, any deployment (even a personal one for development) comes up with everything pre-installed and running, including health checks, monitoring dashboards and auto-remediation scripts. Identifying and fixing a problem on Production is no different from doing so on Night. In almost every way (except for the sizing, and for waking up the Jenkins Cop in the middle of the night), these are identical tiers!

While automation is key, it doesn’t change the fact that it is people who run the system and keep things healthy. A deployment to production can be stressful, more so for the Primary than anyone else, and having a well-defined checklist can take away some of that stress.

Any system like this needs constant improvements and since we are not sitting idle, there are dozens of features, big and small that need to be worked on. Two big ones are:

  • Red-Green deployments, where new releases are rolled out to a small set of instances and, once we are confident they work, are pushed to the rest of the fleet (a toy sketch follows below).

  • More frequent deployments of smaller parts of the system. Smaller more frequent deployments are less risky.
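As a rough sketch of the red-green idea from the first bullet (entirely illustrative – the function names and the canary size are made up, not how our deployment shell actually works):

// Push a new build to a small canary set first; only roll out to the
// rest of the fleet if every canary instance reports healthy.
def rollOut(instances: Seq[String], build: String,
            deploy: (String, String) => Unit,
            healthy: String => Boolean): Boolean = {
  val (canary, rest) = instances.splitAt(math.max(1, instances.size / 10))
  canary.foreach(deploy(_, build))
  if (canary.forall(healthy)) {
    rest.foreach(deploy(_, build))
    true
  } else {
    false // Stop here; the rest of the fleet stays on the old build.
  }
}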

In other words, there is a lot of work to do. Come join us at Sumo Logic!


Sumo Logic Application for AWS CloudTrail

11.13.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

Cloud is opaque

One of the biggest adoption barriers for SaaS, PaaS, and IaaS is the opaqueness of, and lack of visibility into, changes and activities that affect cloud infrastructure.  When running on-premise infrastructure, you have the ability to audit activity; for example, you can easily tell who is starting and stopping VMs in virtualization clusters, see who is creating and deleting users, and watch who is making firewall configuration changes. This lack of visibility has been one of the main roadblocks to cloud adoption, even though the benefits have been compelling enough for many enterprises to adopt the Cloud anyway.

This information is critical to securing infrastructure, applications, and data. It’s critical to proving and maintaining compliance, critical to understanding utilization and cost, and finally, it’s critical for maintaining excellence in operations.

Not all Clouds are opaque any longer

Today, the world’s biggest cloud provider, Amazon Web Services (AWS), announced a new product that, in combination with Sumo Logic, changes the game for cloud infrastructure audit visibility.  AWS CloudTrail is a raw log data feed that tells you exactly who is doing what, on which sets of infrastructure, at what time, from which IP addresses, and more.  Sumo Logic is integrated with AWS CloudTrail, collects this audit data in real time, and enables SOC- and NOC-style visibility and analytics.

Here are a few examples of what AWS CloudTrail data contains (a sketch of how such events might be modeled follows the list):

  • Network ACL changes.

  • Creation and deletion of network interfaces.

  • Authorized Ingress/Egress across network segments and ports.

  • Changes to privileges, passwords and user profiles.

  • Deletion and creation of security groups.

  • Starting and terminating instances.

  • And much more.
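As a hypothetical illustration of how such events might be modeled and watched (the case class below is a simplified subset of a real CloudTrail record, and the watch list holds example AWS API action names):

// A simplified subset of the fields in a CloudTrail record.
case class CloudTrailEvent(eventTime: String, eventName: String,
                           sourceIPAddress: String, userName: String)

// Surface a few security-sensitive actions for alerting.
val watchList = Set("DeleteSecurityGroup", "TerminateInstances",
                    "AuthorizeSecurityGroupIngress")

def alertable(events: Seq[CloudTrailEvent]): Seq[CloudTrailEvent] =
  events.filter(e => watchList(e.eventName))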

Sumo Logic Application for AWS CloudTrail

Cloud data comes to life with our Sumo Logic Application for AWS CloudTrail, helping our customers across security and compliance, operational visibility, and cost containment. Sumo Logic Application for AWS CloudTrail delivers:


  • Seamless integration with AWS CloudTrail data feed.

  • SOC-style, real-time Dashboards in order to monitor access and activity.

  • Forensic analysis to understand the “who, what, when, where, and how” of events and logs.

  • Alerts when important activities and events occur.

  • Correlation of AWS CloudTrail data with other security data sets, such as intrusion detection system data, operating system events, application data, and more.

This integration delivers improved security posture and better compliance with internal and external regulations that protect your brand.  It also delivers operational analytics that can improve SLAs and customer satisfaction.  Finally, it provides deep visibility into the utilization of AWS resources that can help improve efficiency and reduce cost.

The integration is simple: AWS CloudTrail deposits data in near-real time into your S3 account, and Sumo Logic collects it as soon as it is deposited, using an S3 Source.  Sumo Logic also provides a set of pre-built Dashboards and searches to analyze the CloudTrail data.
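For a sense of what the S3 side looks like, here is a hypothetical sketch using the AWS SDK for Java from Scala (the bucket name and prefix are placeholders; CloudTrail writes gzipped JSON files under a dated prefix):

import scala.collection.JavaConversions._
import com.amazonaws.services.s3.AmazonS3Client

val s3 = new AmazonS3Client() // Uses credentials from the environment.

// List the CloudTrail log files currently in the bucket (first page only);
// an S3 Source does the equivalent continuously, ingesting each new file.
val keys = s3.listObjects("my-cloudtrail-bucket", "AWSLogs/")
             .getObjectSummaries.map(_.getKey)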

To learn more, see http://www.sumologic.com/applications/aws-cloudtrail/ and read the documentation: https://support.sumologic.com/entries/30216746-Sumo-Logic-for-Amazon-CloudTrail-App.


Universal Collection of Machine Data

04.18.2013 | Posted by Sanjay Sarathy, CMO

Customers love flexibility, especially if that flexibility drives additional business value.  In that vein, today we announced an expansion of our log data collection capabilities with our hosted HTTPS and Amazon S3 collectors, which eliminate the need for any local software installation.  There may be a variety of reasons why you don’t want, or can’t have, local collectors – for example, not having access to the underlying infrastructure, as often happens with Infrastructure-as-a-Service (IaaS) environments.  Or you simply don’t feel like deploying any local software into your current infrastructure. Defining these hosted collectors is now baked into the set-up process, whether you’re using Sumo Logic Free or our Enterprise product.
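As a rough sketch of what sending a log line to a hosted HTTPS collector could look like (the endpoint URL below is a placeholder – Sumo Logic issues the real one when you create the collector):

import java.net.{HttpURLConnection, URL}

// POST a log line to a hosted collector endpoint over HTTPS.
def send(endpoint: String, line: String): Int = {
  val conn = new URL(endpoint).openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setDoOutput(true)
  val out = conn.getOutputStream
  try out.write((line + "\n").getBytes("UTF-8")) finally out.close()
  conn.getResponseCode // 200 indicates the line was accepted.
}

// Hypothetical usage with a placeholder URL:
send("https://collectors.example.com/receiver/v1/http/XYZ", "my first hosted log line")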


With these new capabilities, companies can now unify how they collect and analyze log data generated from private clouds, public clouds, and their on-premise infrastructure.  They can then apply our unique analytics capabilities like LogReduce to generate insight across every relevant application and operational tier.

With companies increasingly moving towards the Cloud to power different parts of their business, it’s imperative that they have the necessary means to troubleshoot and monitor their diverse infrastructure.  Sumo Logic provides that flexibility.


Pardon me, have you got data about machine data?

01.31.2013 | Posted by Bruno Kurtic, Founding Vice President of Product and Strategy

I’m glad you asked, I just might.  In fact, we started collecting data about machine data some nine months ago, when we participated in the AWS Big Data conference in Boston.  Since then we have continued collecting the same data at a variety of industry shows and conferences, such as VMworld, AWS re:Invent, Velocity, Gluecon, Cloud Slam, Defrag, DataWeek, and others.

The original survey was printed on my home printer, four surveys per page, then inexpertly cut with the kitchen scissors the night before the conference – startup style, oh yeah!  The new version made it onto a shiny new iPad as an iOS app.  The improved method, Apple cachet, and a wider reach gave us more than 300 data points and, incidentally, cost us more than 300 Sumo Logic T-shirts, which we were more than happy to give up in exchange for data.  (By the way, if you want one, come to one of our events – the next one coming up is the Strata Conference.)

As a data junkie, I’ve been slicing and dicing the responses and thought that the end of our fiscal year would be the right moment to revisit the data and reflect on my first blog post about this data set.

Here is what we asked:

  • Which business problems do you solve by using machine data?
  • Which tools do you use to analyze machine data in order to solve those business problems?
  • What issues do you experience solving those problems with the chosen tools?

The survey was partially designed to help us better understand Sumo Logic’s segment of the IT Operations Management and IT Management markets as defined by Gartner, Forrester, and other analysts.  I think the sample set is reasonably representative: responders come from shows with varied audiences, such as developers at Velocity and GlueCon, data center operators at VMworld, and folks investigating a move to the cloud at AWS re:Invent and Cloud Slam.  Answers were actually pretty consistent across the different “cohorts”.  We have a statistically significant number of responses, and finally, they were not our customers or direct prospects.  So let’s dive in and see what we’ve got, starting at the top:

Which business problems do you solve by using logs and other machine data?

  • Applications management, monitoring, and troubleshooting (46%)
  • IT operations management, monitoring, and troubleshooting (33%)
  • Security management, monitoring, and alerting (21%)

Does anything in there surprise you?  I guess it depends on your point of reference.  Let me compare it to the overall “IT Management” or “IT Operations Management” market.  The consensus (if such a thing exists) on size by segment is:

  • IT Infrastructure (servers, networks, etc.) is 50-60% of the total market
  • Application (internal, external, etc.) is just north of 30-40%
  • Security is around 10%

Source: Sumo Logic analysis of aggregated data from various industry analysts who cover IT Management space.

There are a few things that could explain why our subsegment leans so much more toward Applications than toward IT Infrastructure.

  • (hypothesis #1) Analysts measure total product sold to derive market size, which might not match the effort people actually apply to these use cases.
  • (hypothesis #2) There is more shelfware in IT Infrastructure, so measured spend overstates actual effort there.
  • (hypothesis #3) There are more home-grown solutions in Application management, so measured spend understates actual effort there.
  • (hypothesis #4) Our data is an indicator, or a result, of a shift in the market (e.g., as enterprises shift toward IaaS, they spend less time managing IT Infrastructure and more on their core competency, their applications).
  • (obnoxious hypothesis #5) Intuitively, it’s the software, stupid – nobody buys hardware because they love it; it exists to run software (applications), we care more about applications, and that’s why it is so.

OK, ok, let’s check the data to see which hypotheses our narrow response set can help test or validate.  I don’t think our data can help us validate hypothesis #1 or hypothesis #2.  I’ll try to come up with additional survey questions that will help test these two hypotheses in the future.

Hypothesis #3, on the other hand, might be partially testable.  If we compare responses from users of commercial tools with those from users of home-grown tools, we are left with the following:

There is not a significant difference between responders who use commercial tools and responders who use home-grown tools.  Hypothesis #3 explains only a couple of percentage points of difference.

Hypothesis #4 – I think we can use a proxy to test it.  Let’s assume that responders from VMworld are focused on the internal data center and the private cloud; in that case, they would not rely as much on IaaS providers for IT Infrastructure operations.  On the other hand, let’s assume that attendees of AWS and other cloud conferences are more likely to rely on IaaS for IT Infrastructure operations.  Data please:

Interesting – this seems to explain some of the shift between security and infrastructure, but not applications.  So we’re left with:

  • hypothesis #1 – spend vs. reported effort is skewed – perhaps
  • hypothesis #2 – there is more shelfware in IT infrastructure – unlikely
  • obnoxious hypothesis #5 – it’s the software stupid – getting warmer

That should do it for one blog post.  I’ve barely scratched the surface by stopping at the responses to the first question.  I will work to see if I can test the outstanding hypotheses and, if successful, will write about the findings.  I will also follow up with another post looking at the rest of the data.  I welcome your comments and thoughts.

While you’re at it, try Sumo Logic for free.


Why I joined Sumo Logic and Moved to Silicon Valley

01.28.2013 | Posted by Ben Newton, Corporate Sales Engineering Manager

Entering StartUP

We make hundreds of decisions every day, mostly small ones, that are just part of life’s ebb and flow. And then there are the big decisions that don’t merely create ripples in the flow of your life – they redirect it entirely. The massive, life-defining decisions like marriage and children; the career-defining decisions like choosing your first job after college. I’ve had my share of career-defining decisions – leaving a physics graduate program to chase after the dot-com craze, leaving consulting for sales engineering, etc. The thing about this latest decision is that it combines both. I am joining Sumo Logic, leaving behind a safe job in marketing, and moving to Silicon Valley – away from my friends, family, and community. So, why did I do it?


Now is the time for Start-Ups in Enterprise Software. 

Consumer start-ups get all the press, but enterprise startups are where the real action is. The rash of consolidations in the last five years or so has created an innovation gap that companies like Sumo Logic are primed to exploit.  The perfect storm of cloud computing, SaaS, Big Data, and DevOps/Agile is forcing customers to look outside their comfort zones to find the solutions they need. Sumo Logic brings together all of that innovation in a way that is too good not to be a part of.

The Enterprise SaaS Revolution is Inevitable.

The SaaS business model, combined with Agile development practices, is completely changing the way companies buy enterprise software. Gartner sees companies replacing legacy software with SaaS more than ever. The antiquated term licenses of on-premise software, with their massive up-front costs, double-digit maintenance charges, and “true-ups”, seem positively barbaric compared to the flexibility of SaaS. And crucially for me, Sumo Logic is also one of the few true SaaS companies delving into the final frontier of the previously untouchable data center.

Big Data is the “Killer App” for the Cloud.

“Big Data” analytics, using highly parallelized architectures like Hadoop or Cassandra, is one of the first innovations in enterprise IT to be truly “born in the cloud”. These new approaches were built to solve problems that just didn’t exist ten, or even five, years ago. The Big Data aspect of Sumo Logic is exciting to me. I am convinced that we are only scratching the surface of what is possible with Sumo Logic’s technology, and I want to be there on the bleeding edge with them.

Management Teams Matter.

When it really comes down to it, I joined Sumo Logic because I have first-hand knowledge of the skills that Sumo Logic’s management team brings to the table. I have complete confidence in Vance Loiselle’s leadership as CEO, and Sumo Logic has an unbeatable combination of know-how and get-it-done people. And clearly some of the top venture capital firms in the world agree with me. This is a winning team, and I like to win!

Silicon Valley is still Nirvana for Geeks and the best place for Start-Ups.

Other cities are catching up, but Silicon Valley is still the best place to start a tech company. The combination of brainpower, money, and critical mass is just hard to beat. On a personal level, I have resisted the siren call of the San Francisco Bay Area for too long. I am strangely excited to be in a place where I can wear my glasses as a badge of honor and discuss my love for gadgets and science fiction without shame. Luckily for me, I am blessed with a wife who has embraced my geek needs and supports me wholeheartedly (and a 21-month-old who doesn’t care either way).

So, here’s to a great adventure with the Sumo Logic team, to a new life in Silicon Valley, and to living on the edge of innovation. 

P.S.  If you want to see what I am so excited about, get a Sumo Logic Free account and check it out. 

AWS re:Invent – The future is now

12.05.2012 | Posted by Stefan Zier, Chief Architect

This past week, several of us had the pleasure of attending Amazon Web Service’s inaugural re:Invent conference in Las Vegas. In the weeks leading up to the conference, it wasn’t fully obvious to me just how big this show was going to be (not for lack of information, but mostly because I was focused on other things).

When I picked up my badge in the Venetian and walked through the enormous Expo hall, it struck me: IaaS, the cloud, is no longer a bleeding-edge technology used by a few daring early adopters. The economics and flexibility it affords are too big to be ignored – by anyone.

Attending sessions and talking to customers showed that, more than ever before, application architectures are distributed. The four design principles outlined in Werner Vogels’ excellent keynote on day 2 of the conference – Werner calls them “The Commandments of 21st Century Architectures” – made it obvious that the cloud requires people to build their applications in fundamentally different ways from traditional on-premise applications.

While at the conference, I spent quite some time at the Sumo Logic booth, explaining and demoing our solutions to customers. Most of them run distributed systems in AWS, and it never took more than 2 minutes for them to realize why their lives would be much easier with a log management solution — having a central tool to collect and quickly analyze logs from a large distributed app is essential to troubleshooting, monitoring and optimizing an app.

Once I started to explain how our product is architected and related it to the architecture principles Werner outlined, most people understood why Sumo Logic’s product can scale and perform the way it does, unlike some other “cloud-washed” solutions on the market.

In addition to having a highly relevant set of attendees for us, the conference also was a great place to find other vendors who exist in the same ecosystem – we’ve begun several good conversations about integrations and partnerships.

The response from people we talked to was overwhelmingly positive. During the conference, we’ve seen a big increase in sign-ups for our Sumo Logic Free product. We will definitely be back next year.

Pragmatic AWS: 3 Tips to enhance the AWS SDK with Scala

07.12.2012 | Posted by Stefan Zier, Chief Architect

At Sumo Logic, most backend code is written in Scala. Scala is a newer JVM (Java Virtual Machine) language created in 2001 by Martin Odersky, who also co-founded Typesafe, our Greylock sister company. Over the past two years at Sumo Logic, we’ve found Scala to be a great way to use the AWS SDK for Java. In this post, I’ll explain some use cases.

1. Tags as fields on AWS model objects

Accessing AWS resource tags can be tedious in Java. For example, to get the value of the “Cluster” tag on a given instance, something like this is usually needed: 

   String deployment = null;
   for (Tag tag : instance.getTags()) {
     if (tag.getKey().equals("Cluster")) {
       deployment = tag.getValue();
     }
   }

While this isn’t horrible, it certainly doesn’t make code easy to read. Of course, one could turn this into a utility method to improve readability. The set of tags used by an application is usually known and small in number. For this reason, we found it useful to expose tags with an implicit wrapper around the EC2 SDK’s Instance, Volume, etc. classes. With a little Scala magic, the above code can now be written as:

val deployment = instance.cluster

Here is what it takes to make this magic work:

import scala.collection.JavaConversions._ // For .find on the Java tag list.
import com.amazonaws.services.ec2.model.Instance

object RichAmazonEC2 {
 implicit def wrapInstance(i: Instance) = new RichEC2Instance(i)
}

class RichEC2Instance(instance: Instance) {
 private def getTagValue(tag: String): String =
   instance.getTags.find(_.getKey == tag).map(_.getValue).getOrElse(null)

 def cluster = getTagValue("Cluster")
}

Whenever this functionality is desired, one just has to import RichAmazonEC2._

2. Work with lists of resources

Scala 2.8.0 included a very powerful new set of collections libraries, which are very useful when manipulating lists of AWS resources. Since the AWS SDK uses Java collections, to make this work one needs to import scala.collection.JavaConversions._, which transparently “converts” (implicitly wraps) the Java collections. Here are a few examples to showcase why this is powerful:

Printing a sorted list of instances, by name:
ec2.describeInstances().           // Get list of instances.
 getReservations.
 map(_.getInstances).
 flatten.                          // Translate reservations to instances.
 sortBy(_.sortName).               // Sort the list (sortName comes from an
                                   // implicit wrapper, like cluster above).
 map(i => "%-25s (%s)".format(i.name, i.getInstanceId)). // Create string.
 foreach(println(_))               // Print the string.

Grouping a list of instances in a deployment by cluster (returns a Map from cluster name to list of instances in the cluster):
ec2.describeInstances().           // Get list of instances.
 getReservations.
 flatMap(_.getInstances).          // Translate reservations to instances.
 filter(_.deployment == "prod").   // Filter the list to the prod deployment.
 groupBy(_.cluster)                // Group by the cluster.

You get the idea – this makes it trivial to build very rich interactions with EC2 resources.

3. Add pagination logic to the AWS SDK

When we first started using AWS, we had a utility class to provide some commonly repeated functionality, such as pagination for S3 buckets and retry logic for calls. Instead of embedding functionality in a separate utility class, implicits allow you to pretend that the functionality you want exists in the AWS SDK. Here is an example that extends the AmazonS3 class to allow listing all objects in a bucket: 

import scala.collection.JavaConversions._ // For iterating the Java object list.
import com.amazonaws.services.s3.AmazonS3
import com.amazonaws.services.s3.model.{ListObjectsRequest, ObjectListing, S3ObjectSummary}

object RichAmazonS3 {
 implicit def wrapAmazonS3(s3: AmazonS3) = new RichAmazonS3(s3)
}

class RichAmazonS3(s3: AmazonS3) {
 def listAllObjects(bucket: String, cadence: Int = 100): Seq[S3ObjectSummary] = {

   var result = List[S3ObjectSummary]()

   def addObjects(objects: ObjectListing) = result ++= objects.getObjectSummaries

   var objects = s3.listObjects(new ListObjectsRequest().withMaxKeys(cadence).withBucketName(bucket))
   addObjects(objects)

   while (objects.isTruncated) {
     objects = s3.listNextBatchOfObjects(objects)
     addObjects(objects)
   }

   result
 }
}

To use this:

val objects = s3.listAllObjects("mybucket")

There is, of course, a risk of running out of memory given a large enough number of object summaries, but in many use cases this is not a big concern.

Summary

Scala enables programmers to implement expressive, rich interactions with AWS, and it greatly improves readability and developer productivity when using the AWS SDK. It’s been an essential tool in helping us succeed with AWS.

Security-Gain without Security-Pain

06.21.2012 | Posted by Stefan Zier, Chief Architect

As Joan mentioned, we use SaaS products a lot here at Sumo Logic. On an average day, I log into sites on the internet tens or even hundreds of times, supplying a username and password each time. The proliferation of usernames and passwords creates three issues:

  • Password hygiene. In this day and age, it is reckless to reuse passwords across sites. At the same time, it is impossible to remember arbitrarily many unique passwords.
  • Strength. With rainbow tables and the tumbling cost of compute power, passwords need to be increasingly long and complex.
  • Efficiency. I shouldn’t have to spend half my day logging into sites.

What we need are tools that:

  • Encourage you to use different passwords everywhere.
  • Are secure, ideally using two factors of authentication.
  • Require the least number of keystrokes or mouse actions to get past login screens.

Here are a few tools we use and like.

1Password

1Password is the password manager most of us use. It stands out from many other password managers in several ways:

  • It is a native Mac application with excellent system integration; there is also a version for Windows.
  • Support for iOS and Android.
  • Well-implemented sync via Dropbox, including for iOS.
  • Plugins for the 3 major browsers (Safari, Chrome, Firefox).
  • Keyboard compatible.

One of the major benefits of 1Password is that it’s designed to stay out of your way. To log into a site:

  • Without 1Password, I enter the URL in the address bar, navigate to the login form. Then, I enter my login, then my password. A lot of typing.
  • With 1Password, I enter 1p into the address bar, start typing the site’s name to select it from the list, and hit enter. Then, I watch 1Password log me into the site.

Properly used, 1Password can be regarded as a one-and-a-half-factor authentication solution. There’s a great discussion on the AgileBits blog. We’ll share some power user tips on 1Password in the near future.

IronKeys

IronKeys are cool toys. They’re USB sticks with “spook-grade” crypto and self-destruct capabilities. We issue every developer an IronKey for the storage of all key files, such as SSH private keys and AWS credential files. Aside from being geek-chic, IronKeys offer two benefits:

  • The key files are only exposed while the IronKey is plugged in and mounted. Not when people are at Starbucks browsing the web.
  • If an IronKey is ever lost, we can remotely detonate it. The minute it gets plugged into a USB port, the software on the IronKey phones home and gets a self-destruct signal. This requires an internet connection, but we’ve configured our IronKeys not to unlock without one.

OATH (Google Apps and AWS)

Retired Hardware MFA Tokens

Google’s Two-Step Verification and Amazon Web Services’ MFA both use the OATH open architecture, not to be confused with OAuth. OATH is a software replacement for traditional hardware-based two-factor authentication tokens.
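For the curious, here is a minimal sketch of the time-based one-time password (TOTP) math behind OATH-style tokens (per RFC 6238; real authenticator apps add Base32 secret provisioning, clock-skew windows, and more):

import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Derive a 6-digit code from a shared secret and the current 30-second window.
def totp(secret: Array[Byte], timeStep: Long = 30, digits: Int = 6): String = {
  val counter = System.currentTimeMillis() / 1000 / timeStep
  val msg = java.nio.ByteBuffer.allocate(8).putLong(counter).array()
  val mac = Mac.getInstance("HmacSHA1")
  mac.init(new SecretKeySpec(secret, "HmacSHA1"))
  val hash = mac.doFinal(msg)
  val offset = hash.last & 0x0f // Dynamic truncation (RFC 4226).
  val binary = ((hash(offset) & 0x7f) << 24) |
               ((hash(offset + 1) & 0xff) << 16) |
               ((hash(offset + 2) & 0xff) << 8) |
               (hash(offset + 3) & 0xff)
  ("%0" + digits + "d").format(binary % math.pow(10, digits).toInt)
}

The server and the phone share the secret once at setup; after that, only the changing 6-digit code ever crosses the keyboard.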

Google offers open-source client applications for iOS and Android that serve as the second factor of authentication. This reduces clutter, since you don’t need to carry any hardware tokens. Having the phone be your token also makes it more likely that you’ll have your token with you most of the time.

Google has also taken several steps to remove friction:

  • To set up your phone, you simply scan a QR code from the screen.
  • After the first two-factor authentication with your phone, you can check a box to “Remember me for 30 days”. The browser cookie then serves as your second factor of authentication.

AWS initially supported only classical hardware MFA tokens. To make matters worse, one MFA token couldn’t be shared across multiple AWS accounts. More recently, they’ve added support for OATH as well; in fact, the same Google Authenticator apps also work for AWS.

Wrapping up

Traditional two-factor authentication approaches based on hardware tokens are painful to use. OATH, 1Password and IronKeys strengthen security without adding too much pain to people’s lives.
