Graham Watts

Graham Watts is an AWS Certified Solutions Architect and Sales Engineer at Sumo Logic.

Posts by Graham Watts

Blog

Sumo Logic For Support and Customer Success Teams

*Authored by Kevin Keech, Director of Support at Sumo Logic, and Graham Watts, Senior Solutions Engineer at Sumo Logic

Many Sumo Logic customers ask, "How can I use Sumo Logic for support and customer success teams?" If you need a better customer experience to stay ahead of the competition, Sumo Logic can help. In this post, I will describe why and how support and customer success teams use Sumo Logic, and summarize the key features to use in support and customer success use cases.

Why Use Sumo Logic For Support and Customer Success Teams?

Improved Customer Experience
- Catch deviations and performance degradation before your customers report them. Using dashboards and scheduled alerts, your CS and Support teams can be notified of any service-impacting issues and can then reach out and provide solutions before your customers may ever know they have a problem.
- This helps your customers avoid frustrations with your service, which in the past may have led them to look into competitive offerings.
- Improve your Net Promoter Score (NPS) and Service Level Agreements (SLAs).
- Alert team members to reach out to a frustrated customer before they go to a competitor's website or log out.

Efficiency and Cost Savings – Process More Tickets, Faster
- Sumo Logic customers report a 2-3x or greater increase in the number of support tickets each team member can handle.
- Direct access to your data eliminates the need for your Support team to request access and wait for engineering resources to grant it. This leads to a higher level of customer satisfaction, and allows you to reallocate engineering time to innovate and enhance your product offerings.
- Your support reps can perform real-time analysis of issues as they occur, locate the root of a problem, and get your customers solutions more quickly. Customers report that using LogReduce cuts troubleshooting time down from hours or days to minutes.
- As your teams and products grow, team members can process more tickets instead of needing to hire more staff.

Security
- Eliminate the need to log directly into servers to look at logs – you can Live Tail your logs right in Sumo Logic or via a CLI.
- Use Role-Based Access Control to allow teams to view only the data they need.

How to Use Sumo Logic For Support and Customer Success Teams

Key features that enable your Support Team, Customer Success Team, or another technical team while troubleshooting are:

Search Templates
- See here for a video tutorial of Search Templates.
- Form-based search experience – no need for employees to learn a query language.
- Users type in human-friendly, easy-to-remember values like "Company Name" and Sumo will look up and inject complex IDs, like "Customer ID" or some other UUID, into the query, shown to the right:

LogReduce
- Reduce tens or hundreds of thousands of log messages into a few patterns with the click of a button. This reduces the time it takes to identify the root cause of an issue from hours or days to minutes.
- In the example below, a bad certificate and related tracebacks are exposed with LogReduce.

Dashboards
- Dashboard filters – auto-populating dashboard filters for easy troubleshooting.
- TimeCompare – Is now 'normal' compared to historical trends? The example below shows production errors or exceptions today, overlaid with the last 7 days of production errors or exceptions:

Blog

Optimizing Cloud Visibility and Security with Amazon GuardDuty and Sumo Logic

Blog

Packer and Sumo Logic - Build Monitoring Into Your Images

Whether you're new to automating your image builds with Packer, new to Sumo Logic, or just new to integrating Packer and Sumo Logic, this post guides you through creating an image with Sumo Logic baked in. We'll use AWS as our cloud provider, and show how to create custom machine images in one command that allow you to centralize metrics and logs from applications, OSs, and other workloads on your machines.

Overview

When baking a Sumo Logic collector into any machine image, you'll need to follow three main steps:

1. First, create your sources.json file, and add it to the machine. This file specifies what logs and metrics you'd like to collect. It's usually stored at /etc/sources.json, although you can store it anywhere and point to it.
2. Next, download, rename, and make the collector file executable. Collector downloads for various operating systems and Sumo Logic deployments can be found here. An example command might look like:
sudo wget 'https://collectors.us2.sumologic.com/rest/download/linux/64' -O SumoCollector.sh && sudo chmod +x SumoCollector.sh
3. Finally, run the install script and skip registration.
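A minimal sources.json of the kind described in step 1 might look like the sketch below. The source names, paths, and categories here are illustrative assumptions, not taken from the original template:

```json
{
  "api.version": "v1",
  "sources": [
    {
      "sourceType": "LocalFile",
      "name": "linux-messages",
      "pathExpression": "/var/log/messages",
      "category": "linux/messages"
    },
    {
      "sourceType": "SystemStats",
      "name": "host-metrics",
      "interval": 60000,
      "category": "hostmetrics"
    }
  ]
}
```

Each entry in the sources array becomes a collector source; the category value is what you later query with _sourceCategory in the Sumo Logic UI.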
The most important part here is to use the -VskipRegistration=true flag so that the collector doesn't register to the temporary machine you are using to build the image. Other important flags include:
- -q – Run the script in quiet mode
- -Vephemeral=true – This tells Sumo Logic to auto-remove old collectors that are no longer alive, usually applicable for autoscaling use cases where VMs are ephemeral
- -Vsources=/etc/sources.json – Point to the local path of your sources.json file
- -Vsumo.accessid=<id> -Vsumo.accesskey=<key> – This is your Sumo Logic access key pair

See all installation options here. An example command might look like:
sudo ./SumoCollector.sh -q -VskipRegistration=true -Vephemeral=true -Vsources=/etc/sources.json -Vsumo.accessid=<id> -Vsumo.accesskey=<key>

Packer and Sumo Logic - Provisioners

Packer Provisioners allow you to communicate with third-party software to automate whatever tasks you need to build your image. Some examples of what you'd use provisioners for are:
- installing packages
- patching the kernel
- creating users
- downloading application code

In this example, we'll use the Packer Shell Provisioner, which provisions your machine image via shell scripts.
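Putting the commands above into a shell provisioner, the provisioners section of a Packer template might look like this sketch. The sources.json URL placeholder and variable names (sumo_access_id, sumo_access_key) are illustrative; the real template is in the download linked below:

```json
"provisioners": [
  {
    "type": "shell",
    "inline": [
      "sudo wget '<your-sources-json-url>' -O /etc/sources.json",
      "sudo wget 'https://collectors.us2.sumologic.com/rest/download/linux/64' -O SumoCollector.sh",
      "sudo chmod +x SumoCollector.sh",
      "sudo ./SumoCollector.sh -q -VskipRegistration=true -Vephemeral=true -Vsources=/etc/sources.json -Vsumo.accessid={{user `sumo_access_id`}} -Vsumo.accesskey={{user `sumo_access_key`}}"
    ]
  }
]
```

The {{user `...`}} syntax pulls values from Packer user variables, which is how the access key pair can be passed in on the command line rather than hard-coded in the template.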
The basic steps that Packer will execute are:
1. Start up an EC2 instance in your AWS account
2. Download your sources.json file locally, which describes the logs and metrics you'd like to collect
3. Download the Sumo Logic collector agent
4. Run the collector setup script to configure the collector, while skipping registration (this creates a user.properties config file locally)
5. Create the AMI and shut down the EC2 instance
6. Print out the Amazon Machine Image ID (AMI ID) for your image with Sumo baked in

Instructions: Packer and Sumo Logic Build

Before You Begin

To ensure Packer can access your AWS account resources, make sure you have an AWS authentication method to allow Packer to control AWS resources:
- Option 1: User key pair
- Option 2: Set up the AWS CLI or SDKs in your local environment

I have chosen option 2 here, so my Packer build command will not need AWS access key pair information. After setting up your local AWS authentication method, create a Sumo Logic free trial here if you don't already have an account. Then, generate a Sumo Logic key pair inside your Sumo Logic account. Copy this key down, as the secret key will only be shown once.

Step 1 - Get Your Files

After downloading Packer, download the packer_sumo_template.json and the packer_variables.json files, and place all 3 in the same directory.

Step 2 - Customize Variables and Test Your Template

Use the command ./packer validate packer_sumo_template.json to validate your packer template.
This template automatically finds the latest Amazon Linux image in whatever region you use, based on the source_ami_filter in the builders object:

"source_ami_filter": {
  "filters": {
    "virtualization-type": "hvm",
    "name": "amzn-ami-hvm-????.??.?.x86_64-gp2",
    "root-device-type": "ebs"
  },
  "owners": ["amazon"],
  "most_recent": true
}

- Customize the Region in the packer_variables.json file to the AWS Region you want to build your image in
- You can also change the Sumo collector download URL if you are in a different deployment
- The sources.json file URL can be updated to point to your own sources.json file, or you can update the template to use the Packer File Provisioner to upload your sources.json file, and any other files

Step 3 - Build Your Image

Use the command ./packer build -var-file=packer_variables.json -var 'sumo_access_id=<sumo_id>' -var 'sumo_access_key=<sumo_key>' packer_sumo_template.json to build your image. You should see the build start and finish like this: Image Build Start Image Build Finish Done!

Now that you've integrated Packer and Sumo Logic, you can navigate to the AMI section of the EC2 AWS console and find the image for use in Autoscaling Launch Configurations, or just launch the image manually.

Now What? View Streaming Logs and Metrics!

Install the Sumo Logic Applications for Linux and Host Metrics to get pre-built monitoring for your EC2 instance.

What Else Can Sumo Logic Do?

Sumo Logic collects AWS CloudWatch metrics, CloudTrail audit data, and much more. Sumo Logic also offers integrated Threat Intelligence powered by CrowdStrike, so that you can identify threats in your cloud infrastructure in real time. See below for more documentation:
- AWS CloudTrail
- AWS CloudWatch Metrics
- Integrated Threat Intelligence

What's Next?

In part 3 of this series (will be linked here when published), I'll cover how to deploy an Autoscaling Group behind a load balancer in AWS.
We will integrate the Sumo Logic collector into each EC2 instance in the fleet, and also log the load balancer access logs to an S3 bucket, then scan that bucket with a Sumo Logic S3 source. If you have any questions or comments, please reach out via my LinkedIn profile, or via our Sumo Logic public Slack Channel: slack.sumologic.com (@grahamwatts-sumologic). Thanks for reading!

AWS

September 29, 2017

Blog

Terraform and Sumo Logic - Build Monitoring into your Cloud Infrastructure

Are you using Terraform and looking for a way to easily monitor your cloud infrastructure? Whether you're new to Terraform, or you control all of your cloud infrastructure through Terraform, this post provides a few examples of how to integrate Sumo Logic's monitoring platform into Terraform-scripted cloud infrastructure.

*This article discusses how to integrate the Sumo Logic collector agent with your EC2 resources. To manage a hosted Sumo Logic collection (S3 sources, HTTPS sources, etc.), check out the Sumo Logic Terraform Provider here or read the blog.

Collect Logs and Metrics from your Terraform Infrastructure

Sumo Logic's ability to Unify your Logs and Metrics can be built into your Terraform code in a few different ways. This post will show how to use a simple user data file to bootstrap an EC2 instance with the Sumo Logic collector agent. After the instance starts up, monitor local log files and overlay these events with system metrics using Sumo Logic's Host Metrics functionality. AWS CloudWatch Metrics and Graphite-formatted metrics can be collected and analyzed as well.

Sumo Logic integrates with Terraform to enable version control of your cloud infrastructure and monitoring the same way you version and improve your software.

AWS EC2 Instance with Sumo Logic Built-In

Before we begin, if you are new to Terraform, I recommend Terraform: Up and Running. This guide originated as a blog, and was expanded to a helpful book by Yevgeniy Brikman.

What We'll Make

In this first example, we'll apply the Terraform code in my GitHub repo to launch a Linux AMI in a configurable AWS Region, with a configurable Sumo Logic deployment. The resources will be created in your default VPC and will include:
- One t2.micro EC2 instance
- One AWS Security Group
- A Sumo Logic collector agent and sources.json file

The Approach - User Data vs. Terraform Provisioner vs. Packer

In this example, we'll be using a user data template file to bootstrap our EC2 instance.
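A minimal sketch of how a user data file can be wired into an EC2 resource is shown below. The resource and variable names other than var.Sumo_Logic_Access_ID and var.Sumo_Logic_Access_Key (which the repo's vars.tf defines) are illustrative assumptions; the full working code is in the linked repo:

```hcl
# Render user_data.sh, injecting the Sumo Logic key pair
# so the instance can register its collector on boot
data "template_file" "user_data" {
  template = file("${path.module}/user_data.sh")
  vars = {
    sumo_access_id  = var.Sumo_Logic_Access_ID
    sumo_access_key = var.Sumo_Logic_Access_Key
  }
}

resource "aws_instance" "collector_host" {
  ami           = var.ami_id   # looked up per-region in vars.tf
  instance_type = "t2.micro"
  user_data     = data.template_file.user_data.rendered
}
```

Because the script is passed as user data, AWS runs it once at first boot, which is when the collector is downloaded and registered.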
Terraform also offers Provisioners, which run scripts at the time of creation or destruction of an instance. HashiCorp offers Packer to build machine images, but I have selected to use user data in this example for a few reasons:
- User Data is viewable in the AWS console
- Simplicity - my next post will cover an example that uses Packer rather than user data, although user data can be included in an autoscaling group's launch configuration
- For more details, see the Stack Overflow discussion here
- If you want to build Sumo Logic collectors into your images with Packer, see my blog with instructions here

The sources.json file will be copied to the instance upon startup, along with the Sumo Logic collector. The sources.json file instructs Sumo Logic to collect various types of logs and metrics from the EC2 instance:
- Linux OS Logs (Audit logs, Messages logs, Secure logs)
- Host Metrics (CPU, Memory, TCP, Network, Disk)
- Cron logs
- Any application log you need

A Note on Security

This example relies on wget to bootstrap the instance with the Sumo Logic collector and sources.json file, so ports 80 and 443 are open to the world. In my next post, we'll use Packer to build the image, so these ports can be closed. We'll do this by deleting them in the Security Group resource of our main.tf file.

Tutorial - Apply Terraform and Monitor Logs and Metrics Instantly

Prerequisites

First, you'll need a few things:
- Terraform - see the Terraform docs here for setup instructions
- A Sumo Logic account - Get a free one here
- Access to an AWS account with AmazonEC2FullAccess permissions - If you don't have access you can sign up for the free tier here
- An AWS authentication method to allow Terraform to control AWS resources:
  - Option 1: User key pair
  - Option 2: Set up the AWS CLI or SDKs in your local environment

Instructions

1. First, copy this repo (Example 1. Collector on Linux EC2) somewhere locally. You'll need all 3 files: main.tf, vars.tf, and user_data.sh. main.tf will use user_data.sh to bootstrap your EC2, and will also use vars.tf to perform lookups based on a Linux AMI map, a Sumo Logic collector endpoint map, and some other variables.

2. Then, test out Terraform by opening your shell and running:
/path/to/terraform plan
You can safely enter any string, like 'test', for the var.Sumo_Logic_Access_ID and var.Sumo_Logic_Access_Key inputs while you are testing with the plan command. After Terraform runs the plan command, you should see "Plan: 2 to add, 0 to change, 0 to destroy." if your environment is configured correctly.

3. Next, run Terraform and create your EC2 instance, using the terraform apply command. There are some configurable variables built in. For example, the default AWS Region that this EC2 will be launched into is us-east-1, but you can pass in another region like this:
/path/to/terraform apply -var region=us-west-2
If your Sumo Logic deployment is in another Region, like DUB or SYD, you can run the command like this:
/path/to/terraform apply -var Sumo_Logic_Region=SYD

4. Then, Terraform will interactively ask you for your Sumo Logic Access Key pair, because there is no default value specified in the vars.tf file. Get your Sumo Logic Access Keys from your Sumo Logic account and enter them when Terraform prompts you:
- First, navigate to the Sumo Logic Web Application, click your name in the left nav, and open the Preferences page
- Next, click the blue + icon near My Access Keys to create a key pair
- See the official Sumo Logic documentation here for more info
You will see this success message after Terraform creates your EC2 instance and Security Group: "Apply complete! Resources: 2 added, 0 changed, 0 destroyed."

5. Now you're done! After about 3-4 minutes, check under Manage Data > Collection in the Sumo Logic UI. You should see your new collector running and scanning the sources we specified in the sources.json (Linux OS logs, Cron log, and Host Metrics).

Cleanup

Make sure to delete your resources using the Terraform destroy command. You can enter any string when you are prompted for the Sumo Logic key pair information. The -Vephemeral=true flag in our Sumo Logic user data configuration command instructs Sumo Logic to automatically clean out old collectors that are no longer alive.
/path/to/terraform destroy

Now What? View Streaming Logs and Metrics!

Install the Sumo Logic Applications for Linux and Host Metrics to get pre-built monitoring for your EC2 instance.

What Else Can Sumo Logic Do?

Sumo Logic collects AWS CloudWatch metrics, CloudTrail audit data, and much more. Sumo Logic also offers integrated Threat Intelligence powered by CrowdStrike, so that you can identify threats in your cloud infrastructure in real time. See below for more documentation:
- AWS CloudTrail
- AWS CloudWatch Metrics
- Integrated Threat Intelligence

What's Next?

In part 2 of this post, I'll cover how to deploy an Autoscaling Group behind a load balancer in AWS. We will integrate the Sumo Logic collector into each EC2 instance in the fleet, and also log the load balancer access logs to an S3 bucket, then scan that bucket with a Sumo Logic S3 source.

Thanks for reading!

Blog

An Introduction to the AWS Application Load Balancer

Blog

CloudFormation and Sumo Logic - Build Monitoring into your Stack

Curious about Infrastructure as Code (IaC)? Whether you're new to AWS CloudFormation, or you control all of your cloud infrastructure through CloudFormation templates, this post demonstrates how to integrate Sumo Logic's monitoring platform into an AWS CloudFormation stack.

Collect Logs and Metrics from your Stack

Sumo Logic's ability to Unify your Logs and Metrics can be built into your CloudFormation Templates. Collect operating system logs, web server logs, application logs, and other logs from an EC2 instance. Additionally, Host Metrics, AWS CloudWatch Metrics, and Graphite-formatted metrics can be collected and analyzed. With CloudFormation and Sumo Logic, you can achieve version control of your AWS infrastructure and your monitoring platform the same way you version and improve your software.

CloudFormation Wordpress Stack with Sumo Logic Built-In

Building off of the resources Adrian Cantrill provided in his Advanced CloudFormation course via A Cloud Guru, we will launch a test Wordpress stack with the following components:
- Linux EC2 instance - you choose the size!
- RDS instance - again, with a configurable size
- S3 bucket

The Linux EC2 instance is bootstrapped with the following to create a LAMP stack:
- Apache
- MySQL
- PHP
- MySQL-PHP Libraries

We also install Wordpress, and the latest version of the Sumo Logic Linux collector agent. Using the cfn-init script in our template, we rely on the file key of AWS::CloudFormation::Init metadata to install a sources.json file on the instance.
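The relevant portion of that metadata might look like the sketch below. The single Apache source shown is illustrative (the real template installs several sources), though the test/apache category matches the one searched later in this post:

```json
"Metadata": {
  "AWS::CloudFormation::Init": {
    "config": {
      "files": {
        "/etc/sources.json": {
          "content": { "Fn::Join": ["", [
            "{\"api.version\":\"v1\",\"sources\":[",
            "{\"sourceType\":\"LocalFile\",",
            "\"name\":\"apache-access\",",
            "\"pathExpression\":\"/var/log/httpd/access_log\",",
            "\"category\":\"test/apache\"}",
            "]}"
          ]]},
          "mode": "000644",
          "owner": "root",
          "group": "root"
        }
      }
    }
  }
}
```

When cfn-init runs on the instance, it writes every entry under the files key to disk, so the collector installed later in the bootstrap can pick up /etc/sources.json immediately.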
This file instructs Sumo Logic to collect various types of logs and metrics from the EC2 instance:
- Linux OS Logs (Audit logs, Messages logs, Secure logs)
- Host Metrics (CPU, Memory, TCP, Network, Disk)
- Apache Access logs
- cfn-init logs

Tutorial - Launch a CloudFormation Stack and Monitor Logs and Metrics Instantly

First, you'll need a few things:
- A Sumo Logic account - Get a free one here
- Access to an AWS account - If you don't have access you can sign up for the free tier here
- A local EC2 Key Pair - if you don't have one you can create one like this

After you have access to your Sumo Logic account and an AWS account, navigate to an unused Region if you have one. This will give you a more isolated sandbox to test in, so that we can more clearly see what our CloudFormation template creates. Make sure you have an EC2 key pair in that Region; you'll need to add this to the template.

*Leveraging pseudo parameters, the template is portable, meaning it can be launched in any Region.

- First, log into AWS, navigate to CloudFormation, and choose 'Create New Stack'
- Then, download the example CloudFormation template from GitHub here
- Next, on line 87, in the EC2 Resources section, make sure to edit the value of the "KeyName" field to whatever your EC2 key is named for your current Region. *Make sure the Region you choose to launch the stack in has an EC2 Key Pair, and that you update line 87 with your key's name. If you forget to do this, your stack will fail to launch!
- Select 'Choose File' and upload the template you just downloaded and edited, then click Next
- Title your stack
- Log into Sumo Logic, click your email username in the top-right, then Preferences, then '+' to create a Sumo Logic Access key pair
- Enter the Sumo Logic key pair into the stack details page. You can also select an EC2 and RDS instance size, and enter a test string that we can navigate to later when checking that we can communicate with the instance.
Click 'Next', name/tag your stack if you'd like, then click 'Next' again, then select 'Create' to launch your stack!

Now What? View Streaming Logs and Metrics!

You've now launched your stack. In about 10-15 minutes, we can visit our Wordpress server to verify everything is working. We can also search our Apache logs and see any visitors (probably just us) that are interacting with the instance. Follow these steps to explore your new stack, and your Sumo Logic analytics:
- View the CloudFormation Events log. You should see four CREATE_COMPLETE statuses like so:
- Check your Sumo Logic account to see the collector and sources that have been automatically provisioned for you:

What's Next?

Sumo Logic collects AWS CloudWatch metrics, S3 Audit logs, and much more. Below is more information on the integrations for AWS RDS Metrics and also S3 Audit Logs:
- Amazon RDS Metrics
- Amazon S3 Audit

Explore your logs!
- Try visiting your web server by navigating to your EC2 instance's public IP address. This template uses the default security group of your Region's VPC, so you'll need to temporarily allow inbound HTTP traffic from either your IP, or anywhere (your IP is recommended).
- To do this, navigate to the EC2 console and select the Linux machine launched via the CloudFormation Template. Then, scroll down to the Security Group and click 'default' as shown below. Edit the inbound rules to allow HTTP traffic in, either from your IP or anywhere.
- After you've allowed inbound HTTP traffic, navigate in your browser to <your-public-ip>/wordpress (something like 54.149.214.198/wordpress) and you'll see your new Wordpress front end:
- You can also test the string we entered during setup by navigating to <your-public-ip>/index2.html
- Search your Sumo Logic account with _sourceCategory=test/apache and view your visits to your new Wordpress web server in the logs
- Finally, check out the metrics on your instance by installing the Host Metrics App:

Cleanup

Make sure to delete your stack as shown below, and to remove inbound HTTP rules on your default Security Group.

Blog

AWS Well Architected Framework - Security Pillar

Blog

AWS Best Practices - How to Achieve a Well Architected Framework