Are you using Terraform and looking for an easy way to monitor your cloud infrastructure? Whether you’re new to Terraform or you control all of your cloud infrastructure through it, this post provides a few examples of how to integrate Sumo Logic’s monitoring platform into Terraform-scripted cloud infrastructure.
Collect Logs and Metrics from your Terraform Infrastructure
Sumo Logic’s ability to unify your logs and metrics can be built into your Terraform code in a few different ways. This post shows how to use a simple user data file to bootstrap an EC2 instance with the Sumo Logic collector agent. After the instance starts up, you can monitor local log files and overlay those events with system metrics using Sumo Logic’s Host Metrics functionality.
AWS CloudWatch Metrics and Graphite formatted metrics can be collected and analyzed as well.
Sumo Logic integrates with Terraform so you can version-control your cloud infrastructure and its monitoring the same way you version and improve your software.
AWS EC2 Instance with Sumo Logic Built-In
What We’ll Make
In this first example, we’ll apply the Terraform code in my GitHub repo to launch a Linux AMI in a configurable AWS Region, with a configurable Sumo Logic deployment. The resources will be created in your default VPC and will include:
- One t2.micro EC2 instance
- One AWS Security Group
- A Sumo Logic collector agent and sources.json file
The Approach – User Data vs. Terraform Provisioner vs. Packer
In this example, we’ll be using a user data template file to bootstrap our EC2 instance. Terraform also offers Provisioners, which run scripts at the time of creation or destruction of an instance, and HashiCorp offers Packer to build machine images. I have chosen to use user data in this example for a few reasons:
- User Data is viewable in the AWS console
- Simplicity – my next post will cover an example that uses Packer rather than user data (although user data can also be included in an Auto Scaling group’s launch configuration)
- For more details, see the Stack Overflow discussion here
- If you want to build Sumo Logic collectors into your images with Packer, see my blog with instructions here
The sources.json file will be copied to the instance upon startup, along with the Sumo Logic collector. The sources.json file instructs Sumo Logic to collect various types of logs and metrics from the EC2 instance:
- Linux OS Logs (Audit logs, Messages logs, Secure logs)
- Host Metrics (CPU, Memory, TCP, Network, Disk)
- Cron logs
- Any application log you need
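The exact sources.json lives in the repo, but as a rough sketch of the format, a minimal file that tails one Linux log and reports host metrics might look like the following. The source names, paths, and categories here are illustrative placeholders, not the repo’s exact values:

```json
{
  "api.version": "v1",
  "sources": [
    {
      "sourceType": "LocalFile",
      "name": "linux_secure_log",
      "pathExpression": "/var/log/secure*",
      "category": "linux/secure"
    },
    {
      "sourceType": "SystemStats",
      "name": "host_metrics",
      "interval": 60000,
      "category": "hostmetrics"
    }
  ]
}
```

The collector reads this file at startup and registers each entry as a source, so adding “any application log you need” is just another `LocalFile` entry with its own `pathExpression`.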
A Note on Security
This example relies on wget to bootstrap the instance with the Sumo Logic collector and sources.json file, so ports 80 and 443 are open to the world. In my next post, we’ll use Packer to build the image, so these ports can be closed. We’ll do this by deleting them in the Security Group resource of our main.tf file.
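As a hedged sketch (the resource name and rule layout here are illustrative, not necessarily the repo’s exact main.tf), the Security Group might look like this, with the two ingress blocks being the ones you would delete once Packer bakes the collector into the image:

```hcl
resource "aws_security_group" "sumo_example" {
  name        = "sumo-collector-sg"
  description = "Security group for the Sumo Logic collector example"

  # Ports 80 and 443 are open so the instance can be bootstrapped
  # with wget at launch. Remove these blocks once the collector is
  # baked into the AMI with Packer.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```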
Tutorial – Apply Terraform and Monitor Logs and Metrics Instantly
First, you’ll need a few things:
- Terraform – see the Terraform docs here for setup instructions
- A Sumo Logic account – Get a free one here
- Access to an AWS account with AmazonEC2FullAccess permissions – If you don’t have access you can sign up for the free tier here
- An AWS authentication method to allow Terraform to control AWS resources
- Option 1: User key pair
- Option 2: Set up the AWS CLI or SDKs in your local environment
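Either authentication option surfaces in the provider block of your Terraform code. A minimal sketch (assuming Terraform 0.12+ syntax; variable names are illustrative):

```hcl
provider "aws" {
  region = var.region

  # Option 1: pass a user key pair explicitly. Avoid hard-coding
  # secrets in shared code; use -var or *.tfvars files instead.
  # access_key = var.aws_access_key
  # secret_key = var.aws_secret_key

  # Option 2: omit credentials here entirely. Terraform will pick up
  # the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment
  # variables, or the shared credentials file written by `aws configure`.
}
```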
1. First, clone this repo (Example 1. Collector on Linux EC2) to your local machine.
- You’ll need all 3 files: main.tf, vars.tf, and user_data.sh
- main.tf will use user_data.sh to bootstrap your EC2
- main.tf will also use vars.tf to perform lookups based on a Linux AMI map, a Sumo Logic collector endpoint map, and some other variables
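To make the wiring concrete, here is a hypothetical sketch of how main.tf can combine the AMI map from vars.tf with user_data.sh (using Terraform 0.12+ `templatefile`; the repo itself may use the older `template_file` data source, and the AMI IDs and variable names below are placeholders):

```hcl
# Map of region -> Linux AMI ID, normally declared in vars.tf.
variable "region" {
  default = "us-east-1"
}

variable "linux_amis" {
  type = map(string)
  default = {
    us-east-1 = "ami-00000000" # placeholder IDs
    us-west-2 = "ami-11111111"
  }
}

resource "aws_instance" "sumo_example" {
  # Look up the right AMI for the chosen region.
  ami           = var.linux_amis[var.region]
  instance_type = "t2.micro"

  # Render user_data.sh, injecting the Sumo Logic credentials so the
  # collector can register itself on first boot. The two Sumo_Logic_*
  # variables are assumed to be declared in vars.tf.
  user_data = templatefile("${path.module}/user_data.sh", {
    sumo_access_id  = var.Sumo_Logic_Access_ID
    sumo_access_key = var.Sumo_Logic_Access_Key
  })
}
```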
2. Then, test out Terraform by opening your shell and running the `terraform plan` command.
You can safely enter any string, like ‘test’, for the var.Sumo_Logic_Access_ID and var.Sumo_Logic_Access_Key inputs while you are testing with the plan command.
- After Terraform runs the plan command, you should see “Plan: 2 to add, 0 to change, 0 to destroy.” if your environment is configured correctly.
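A session might look roughly like this (the prompts depend on your vars.tf, and the plan summary line comes from the two resources in this example):

```
$ terraform plan
var.Sumo_Logic_Access_ID
  Enter a value: test

var.Sumo_Logic_Access_Key
  Enter a value: test

...
Plan: 2 to add, 0 to change, 0 to destroy.
```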
3. Next, run Terraform and create your EC2 instance, using the terraform apply command
- There are some configurable variables built in
- For example, the default AWS Region that this EC2 will be launched into is us-east-1, but you can pass in another region like this: `path/to/terraform/terraform apply -var region=us-west-2`
- If your Sumo Logic Deployment is in another Region, like DUB or SYD, you can run the command like this: `path/to/terraform/terraform apply -var Sumo_Logic_Region=SYD`
4. Then, Terraform will interactively ask you for your Sumo Logic Access Key pair, because no default value is specified in the vars.tf file
- Get your Sumo Logic Access Keys from your Sumo Logic account and enter them when Terraform prompts you
- First, navigate to the Sumo Logic Web Application, click your name in the left nav, and open the Preferences page
- Next, click the blue + icon near My Access Keys to create a key pair
- See the official Sumo Logic documentation here for more info
- You will see this success message after Terraform creates your EC2 instance and Security Group: “Apply complete! Resources: 2 added, 0 changed, 0 destroyed.”
5. Now you’re done!
- After about 3-4 minutes, check under Manage Data > Collection in the Sumo Logic UI
- You should see your new collector running and scanning the sources we specified in sources.json (Linux OS logs, Cron log, and Host Metrics)
Make sure to delete your resources using the `terraform destroy` command. You can enter any string when you are prompted for the Sumo Logic key pair information. The -Vephemeral=true flag in our Sumo Logic user data configuration command instructs Sumo Logic to automatically clean out old collectors that are no longer alive.
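Teardown looks roughly like this (the summary line follows from the two resources created in this example; output wording may vary by Terraform version):

```
$ terraform destroy
var.Sumo_Logic_Access_ID
  Enter a value: anything

...
  Enter a value: yes

Destroy complete! Resources: 2 destroyed.
```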
Now What? View Streaming Logs and Metrics!
What Else Can Sumo Logic Do?
Sumo Logic collects AWS CloudWatch metrics, CloudTrail audit data, and much more. Sumo Logic also offers integrated Threat Intelligence powered by CrowdStrike, so that you can identify threats in your cloud infrastructure in real time. See the Sumo Logic documentation for more details.
In part 2 of this post, I’ll cover how to deploy an Autoscaling Group behind a load balancer in AWS. We will integrate the Sumo Logic collector into each EC2 instance in the fleet, and also log the load balancer access logs to an S3 bucket, then scan that bucket with a Sumo Logic S3 source.
Thanks for reading!
Graham Watts is an AWS Certified Solutions Architect and Sales Engineer at Sumo Logic