
June 3, 2019 By Sumo Logic

AWS Log Management Best Practices

How to get the most out of your Amazon Web Services logs.

In today’s Amazon Web Services (AWS) native or hybrid environments, torrents of data are moving in and around your network at all times. To truly understand what’s going on in your AWS environment, you’ll need a strong, well-defined system for ingesting, analyzing, and reacting to log data, and that is no easy task under today’s strict compliance and security standards.

But with the right approach and big data partners, the challenges of mining all this data can be transformed into golden opportunities to improve user performance and reduce costs. Below is a look at four best practices for complete control of your AWS log data.

1) Know Your Logging Responsibilities

Before attempting to tackle a log management approach, it’s crucial to understand your role in log data management. Amazon secures its infrastructure and provides complete logging for activity in its cloud environment; you, however, are responsible for security within your private cloud.

This means that though AWS protects against intruders and other threats, you are responsible for the movement and credentials of users you allow into your environment. Be sure to analyze your native or hybrid cloud structure and identify the applications and data handoff points that can (and eventually do) lead to vulnerabilities.

2) Secure Your Logging Environment

Protect yourself and keep your log data clean by incorporating the following routines into your log security practices:

Restrictive access permissions. Grant users minimal access, limited to the resources essential for their transactions. Frequently audit and update Access Control Lists (ACLs).
Multi-factor user authentication. To ensure that an intruder can’t slip through a single security gap, require users to pass multiple authentication checks using a combination of passwords, security questions, and/or biometric interfaces. This also lets you track logs for authentication failures and identify vulnerabilities (a sketch of such an audit follows this list).
Update security certificates. The latest requirements from the PCI Security Standards Council call for migrating from early secure socket layer (SSL) and transport layer security (TLS) certificates to more recent, secure versions. Many logged security breaches stem from weaknesses in this compliance area.
Audit your own AWS logs. The PCI Security Standards Council also stipulates annual audits: internal reviews plus at least one audit per year by an approved third-party security firm. These test runs, and the practice of finding key data needles in the information haystack, will prepare you for any audit eventuality and give your teams experience handling critical logging issues.
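
To make the multi-factor authentication check concrete, here’s a minimal Python sketch using boto3. The scope is illustrative, not an official audit tool: it simply flags IAM users who have a console password but no MFA device attached.

```python
# Minimal sketch (illustrative, not an official tool): flag IAM users
# that have console passwords but no MFA device. Assumes boto3 is
# installed and AWS credentials with IAM read permissions are configured.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        # Users without a login profile have no console password at all.
        try:
            iam.get_login_profile(UserName=name)
        except iam.exceptions.NoSuchEntityException:
            continue
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"Console user without MFA: {name}")
```

Run on a schedule, a check like this surfaces exactly the kind of authentication gap your logs should be tracking.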

By taking these steps early in your log management approach you’ll ensure that you’re logging the data you need and using it to keep things secure.

Constantly monitor your AWS environment to learn the most about vulnerabilities and about ways to trim your IT budget. Amazon provides a variety of great tools for keeping a close eye on the internal workings of your environment.

3) In AWS, ABW: Always Be Watching

Amazon CloudWatch monitors your AWS resources and the applications you run on them. Collect and track metrics, monitor log files, and deploy automated responses to common (or flag-raising) events in your environment.
AWS CloudTrail gathers all pertinent information about API calls within your AWS environment, revealing the caller’s identity, IP address, call requests, and other data. CloudTrail logs contain information that will be critical for audits and intrusion response (see the sketch after this list).
Amazon Inspector is a great automated tool that probes your AWS environment for vulnerabilities, then provides a complete log report along with the most common fixes and improvements for better security. Identify your security problems and enforce standards, all with the automated ease of Inspector.
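
As an illustration of how accessible CloudTrail data is, here’s a minimal Python sketch using boto3’s LookupEvents API; the event name and one-day window are example choices, not recommendations.

```python
# Illustrative sketch: pull recent console sign-in events from CloudTrail
# via the LookupEvents API. Assumes boto3 and configured AWS credentials.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
)

for event in response["Events"]:
    # Each record carries the caller identity, timestamp, and full request.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```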

Familiarity with these and other key AWS data sources and apps gives you a head start in developing comprehensive logging practices. But Sumo Logic, the industry leader in machine learning and analytics, provides the tools and technology to see and act on critical log data.

4) Gain full-stack visibility with Sumo Logic

The innovation behind Sumo Logic’s log analytics transforms the raw data gathered through AWS services and tools into insight you can actually see, rather than the dense wall of text a standard AWS data log provides.

Sumo Logic ingests all of this raw AWS data and renders it into interactive visualizations that show you exactly what’s happening in your network in real time, so important events don’t get buried under a deluge of other data.

The Sumo Logic platform unifies diverse logs and metrics, using its advanced machine learning technology to provide full-stack visibility for real-time application monitoring and root-cause analysis. No other big data partner integrates so seamlessly and completely with the AWS universe or provides logging power that can transform your operations.

Amazon’s global AWS infrastructure gives today’s IT leaders unparalleled power and scalability. But that power brings complex challenges: organizations that can’t keep up drown in the logging data they need to gather, analyze, and act on. Add Sumo Logic’s continuous machine learning, root-cause analysis capabilities, and crystal-clear visualizations of what’s happening in your environment, and all of that data becomes not just accessible but manageable.

AWS Log Parsers: Get Started in Sumo Logic

AWS has a number of services that push logs to S3, where they can be analyzed later; Elastic Load Balancing, CloudTrail, and CloudFront are among them. These logs arrive in predictable formats (CloudTrail, for example, delivers JSON), so Sumo Logic provides a number of effective AWS log parsers.
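
To see what those raw files look like before any parser touches them, here’s a minimal Python sketch using boto3. The bucket name is a placeholder for your own trail’s delivery bucket, and the sketch assumes at least one log object exists under the prefix.

```python
# Minimal sketch: pull one CloudTrail log file out of S3 by hand.
# "my-cloudtrail-bucket" is a hypothetical name; replace it with your
# trail's delivery bucket. CloudTrail writes gzipped JSON files whose
# top-level "Records" array holds one entry per API call.
import gzip
import json

import boto3

s3 = boto3.client("s3")
bucket = "my-cloudtrail-bucket"   # placeholder bucket name
prefix = "AWSLogs/"               # CloudTrail's default key prefix

# Grab the first log object under the prefix and decode it.
key = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)["Contents"][0]["Key"]
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

for record in json.loads(gzip.decompress(body))["Records"]:
    print(record["eventTime"], record["eventName"], record["eventSource"])
```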

First, we can use our free-text parser to pull out any field that may be present. While this is flexible and good for small selections, there are other options better suited to AWS.

Second, we could use our automatic JSON parser to extract the fields. This pulls out all the included fields, but the parsing step must be included in every query you run.

Finally, for the simplest and most permanent option, we could use our Field Extraction Templates for the various AWS services. These give us the easiest route to capturing key data from our AWS logs without writing any parsers by hand: the extracted fields are stored with each log line, so there’s no need to re-parse the message over and over.
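
Sumo Logic’s parsers run inside its query language, but a rough Python analogy (not Sumo Logic’s implementation) shows the trade-off between the first two approaches, a hand-written free-text pattern versus automatic JSON extraction:

```python
# Rough Python analogy of the two query-time approaches above,
# run against a simplified CloudTrail-style log line.
import json
import re

raw = '{"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.10", "awsRegion": "us-east-1"}'

# Free-text parsing: a hand-written pattern grabs one field from any format.
match = re.search(r'"eventName":\s*"([^"]+)"', raw)
print(match.group(1) if match else "no match")

# Automatic JSON parsing: every field comes out at once, but this work
# repeats on every query. Field extraction instead stores the result
# with the log line at ingest time, so queries skip the parse entirely.
fields = json.loads(raw)
print(fields["eventName"], fields["sourceIPAddress"])
```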
