Blog

DevSecOps and log analysis: improving application security

Blog

AWS Lambda in Java 8: examples and instructions

Blog

How Australia's Privacy Legislation Amendment impacts cybersecurity

Blog

11 unique insights into SLOs and reliability management

Blog

Defragging database security in a fragmented cloud world

Blog

2022 Sumo Logic blog highlights, curated just for you

Blog

Learn about the meaning and value of cloud-native from experts at Atchison Technology, Qumu, Microsoft, and Techstrong Group

Blog

How female leaders find their path to career success

Blog

No-code vs. low-code and near-no-code security automation

Blog

Kubernetes DevSecOps vulnerabilities and best practices

Blog

How to improve your microservices architecture security

Blog

What is database security?

Blog

Too many tools? Best practices for planning and implementing a successful IT tool consolidation strategy

Blog

Fusing career paths with interests and passion

Blog

New AWS services? No problem! How Sumo Logic is evolving to meet your AWS observability needs

Blog

Prepare your IT systems for Black Friday with best practices and strategies from Ulta Beauty

Blog

Detection notes: In-memory Office application token theft

Blog

How to design a microservices architecture with Docker containers

Blog

How to take DevSecOps to the next level: A conversation with SecOps and DevOps leaders from NielsenIQ, ARA Security and Techstrong Group

Blog

How to build your DevOps team with Agile culture

Blog

Why Sumo Logic is betting its future on OpenTelemetry

Blog

Communicating the value of Sumo Logic in EMEA

Blog

Get AWS Lambda data at your fingertips faster with the new Telemetry API

Blog

How to decide on self-hosted vs managed Apache Airflow

Blog

10 things you should know about using AWS S3

Blog

Insights from Dolby and AWS CISOs on the challenges and opportunities in orchestrating the defense of modern applications

Blog

If and how to return to the office: Data-driven decision making

Blog

Building the case for Sumo Logic in the German market

Blog

How to track AWS costs with the AWS Cost Explorer app for Sumo Logic

Blog

2022 Gartner Magic Quadrant for SIEM: Sumo Logic positioned as a Visionary for the second year in a row

Blog

Open source documentation will improve collaboration

Blog

Datadog alternatives for cloud security and application monitoring

Blog

Understand the dependency between applications and infrastructure with Sumo Logic

Blog

Beat the challenges of supply chain vulnerability

Blog

Digital experiences — Our second nature

Blog

Find threats: Cloud credential theft on Windows endpoints

Blog

How Sumo Logic helps you comply with the CERT-In Directions 2022

Blog

FedRAMP: The journey to cloud secure operations

Blog

New capabilities: Sumo Logic expands Real User Monitoring (RUM)

Blog

How to drive better decision-making with reliability management

Blog

How to get maximum value from Service Level Objectives (SLOs)

Blog

Improve your application monitoring by reducing overhead of managing and updating alert rules

Blog

SOAR Market Guide 2022: What does the Gartner research say?

Blog

Simplify infrastructure and reduce costs with VPC Flow Logs ingest via Amazon Kinesis Data Firehose into Sumo Logic

Blog

How Sumo Logic is enhancing team pride, connection, insight and elevation

Blog

Five reasons to attend Illuminate 2022

Blog

Eight best practices for a successful cloud migration strategy

Blog

Why this employee believes diverse backgrounds make for better team collaboration

Blog

DevOps automation: Best practices and benefits

Blog

Discover the business impact of digital customer experience from E-Commerce and DevOps leaders

Blog

How a happy Sumo Logic customer became an employee

Blog

The Sumo Logic East Coast Tour Stops in Boston for AWS re:Inforce

Blog

SRE Pulse survey: Get the latest insights on the evolving role and employee impact

Blog

Monitorama 2022: the good, the bad and the beautiful (Part 2)

Blog

Get better visibility into DevOps performance in one place with Atlassian integrations

Blog

Use new Cloud SIEM Entity Groups to make threat response more efficient

Blog

Monitorama 2022: the good, the bad and the beautiful (Part 1)

Blog

How to gain Kubernetes visibility in just a few clicks

Blog

Learn how application monitoring helps lay the foundation for operational success

Blog

How one employee found his voice in a global organization

Blog

SIEM vs SOAR: Evaluating security tools for the modern SOC

Blog

Deconstructing AIOps: Is it even real?

Blog

How to increase allyship within the LGBTQIA+ community

Blog

Follina - CVE-2022-30190

Blog

Sumo Logic named a challenger in 2022 Gartner Magic Quadrant for APM and Observability

Blog

Why end-to-end visibility is critical to secure your apps in a serverless world

Blog

Sumo Logic expands Cloud SIEM security coverage for Microsoft Azure

Blog

AAPI month helps to understand and dispel Asian stereotypes at work

Blog

Join the Sumo Logic Security Team at RSA Conference 2022

Blog

Best practices to collect, customize and centralize Node.js logs

Blog

Former Navy serviceman now trains customers to be successful with Sumo Logic

Blog

Is your penetration testing weak? Catch hackers at your backdoor with Sumo Logic

Blog

How Sumo SREs manage and monitor SLOs as Code with OpenSLO

Blog

How an HR leader aligns business and people strategy to make a difference

Blog

Are we sure that SOAR is at a crossroads?

Blog

How SAP built a Dojo Community of Practice to support a cultural shift to DevOps

Blog

Unlocking self-service monitoring with the Sensu Integration Catalog

Blog

Sumo Logic celebrates Earth Day 2022 with Planeteer-led Earth Week

Blog

Weaponizing paranoia: developing a threat detection strategy

Blog

Why you need both SIEM and SOAR to improve SOC efficiencies and increase effectiveness

Blog

What it means to be ‘in it’ with our customers every single day

Blog

How to get started with OpenTelemetry auto-instrumentation for Java

Blog

Mind your Single Sign-On (SSO) logs

Blog

Okta evolving situation: Am I impacted?

Blog

Take the very first State of SRE Survey from DevOps Institute

Blog

Sumo Logic all-in with AWS

Blog

How to monitor RabbitMQ logs and metrics with Sumo Logic

Blog

Five women leaders share advice to empower the next generation of women in STEM

Blog

Want to improve collaboration and reduce incident response time? Try Cloud SOAR War Room

Blog

How to monitor ActiveMQ logs and metrics

Blog

Ship software faster by removing bottlenecks and keep work flowing

Blog

Overwhelmed: why SOAR solutions are a game changer

Blog

Minimize downtime, and improve performance for Verizon 5G Edge applications with Sumo Logic

Blog

How to monitor Amazon Kinesis

Blog

SRE: How the role is evolving

Blog

Cloud-native SOAR and SIEM solutions pave the road to the modern SOC

Blog

Adopt user analytics to accelerate security investigations

Blog

Make the most of your observability data with the Data Volume app

Blog

Monitoring AWS Spot instances using Sumo Logic

Blog

Monitoring your AWS environment for vulnerabilities and threat detection

Blog

Accelerating software delivery through observability at two very different organizations

Blog

Database monitoring with Sumo Logic and OpenTelemetry-powered distributed tracing

Blog

How teams are breaking down data silos to improve software delivery

Blog

Host and process metrics - monitoring beyond apps

Blog

Log4Shell CVE-2021-44228

Blog

Accelerate security operations today and tomorrow with automation and AI

Blog

User experience is a focus of Sumo Logic Observability innovations

Blog

Announcing new Sumo Logic AWS security Quick Start integrations

Blog

How Cloud SOAR helps teams boost security during cloud migration

Blog

How to streamline Windows monitoring for better security

Blog

Sumo Logic extends monitoring for AWS Fargate powered by AWS Graviton2 processors

Blog

How using Cloud SIEM dashboards and metrics for daily standups improves SOC efficiency

Blog

Extend your DevOps analysis to CircleCI and GitLab data

Blog

Why and how to monitor AWS EKS

Blog

How Sumo Logic monitors unit economics to improve cloud cost-efficiency

Blog

An open letter to Sumo Logic enthusiasts

Blog

The role of APM and distributed tracing in observability

Blog

Three Cloud SIEM innovations that improve team collaboration, tailor SOC workflows, and encourage customization

Blog

Top six Amazon S3 metrics to monitor

Blog

OpenTelemetry: the future of Sumo Logic Observability

Blog

Sumo Logic recognized as a leader in the GigaOm Radar Report for Security Orchestration, Automation, and Response (SOAR)

Blog

Illuminate 2021 - embracing open standards for big picture observability

Blog

How Cloud SOAR mitigates the cybersecurity skill gap problem in modern SOCs

Blog

Analyzing human layer risks with Tessian

Blog

Supply chain security, compliance, and privacy for cloud-native ecosystems

Blog

Sumo Logic extends monitoring for AWS Lambda functions powered by AWS Graviton2 processors

Blog

Introducing Sensu Plus

Blog

Troubleshooting outages at 3 AM with Alert Response

Blog

XDR, what is it? Does everyone agree? What is real impact vs. hype?

Blog

Extending observability to app infrastructure

Blog

Learn how to modernize security operations at Illuminate 21

Blog

Building a cloud-native SOC: fantasy or reality?

Blog

5 reasons to attend Illuminate

Blog

Securing critical infrastructure

Blog

5 reasons why security automation won't replace skilled security professionals

Blog

Supervised active intelligence - the next level of security automation

Blog

How to increase & justify your cybersecurity budget

Blog

Uncovering the power of Cloud SOAR’s Open Integration Framework

Blog

Integrating MITRE ATT&CK with Cloud SOAR to optimize SecOps and incident response

Blog

How to improve MTTD and MTTR with SOAR

Blog

SOAR doesn't replace humans - It makes them more efficient

Blog

All you need to know about HAProxy log format

Blog

Announcing New York State Department of Financial Services Attestation

Blog

How to implement cybersecurity automation in SecOps with SOAR (7 simple steps)

Blog

How the cloud-native journey is changing the CISO’s role

Blog

Monitoring Cassandra vs Redis vs MongoDB

Blog

How to use Cloud SOAR's search query bar to optimize workflow processes

Blog

Sumo Logic brings full coverage to modern IT and SecOps workflows with ServiceNow

Blog

Ready, set, SOAR! The road to next-gen SOC with SOAR security

Blog

5 trends shaping the cybersecurity landscape in 2021

Blog

Cost of cyber attacks vs. cost of cybersecurity in 2021

Blog

Why proactive threat hunting will be a necessity in 2021

Blog

Ransomware attacks 2.0: How to protect your data with SOAR

Blog

The state of SOAR: What to expect in 2021

Blog

Monitoring HAProxy logs and metrics with Sumo Logic

Blog

Modernizing SOC and security

Blog

Sumo Logic Red Hat Marketplace Operator

Blog

Disrupt your SOC or be disrupted

Blog

How SMART are your security program KPIs?

Blog

Flexible Incident Response playbooks for any situation

Blog

Global Confidence: Using crowdsourcing and machine learning to scale your SOC resources

Blog

Our vision for Cloud SOAR and the future

Blog

Sumo Logic completes full stack observability with Real User Monitoring capabilities

Blog

Announcing new Cloud Security Monitoring & Analytics apps to surface the most relevant security insights from AWS GuardDuty, WAF, and Security Hub data

Blog

Deep dive into Security Orchestration, Automation and Response (SOAR)

Blog

How to monitor NGINX deployments with Sumo Logic

Blog

How to use Kubernetes to deploy Postgres

Blog

Modern security ops with Zscaler and Sumo Logic

Blog

How to troubleshoot Apache Cassandra performance using metrics and logs in debugging

Blog

Building a modern SOC

Blog

Hunting for threats in multi-cloud and hybrid cloud environments

Blog

How to monitor Redis logs and metrics

Blog

Legacy vs. modern cloud SOAR-powered SOC

Blog

Queryless vs. query-less. Faster insights and better observer experience with span analytics

Blog

How to monitor Cassandra database clusters

Blog

Analyzing Office 365 GCC data with Sumo Logic

Blog

Optimize value of Cloudtrail logs with infrequent tier

Blog

Monitoring Apache Kafka clusters with Sumo Logic

Blog

Accelerate hybrid threat protection using Sumo Logic Cloud SIEM powered by AWS

Blog

Sumo Logic named a Visionary in the 2021 Gartner Magic Quadrant for SIEM for the first time

Blog

5 important DevOps monitoring metrics

Blog

The role of threat hunting in modern security

Blog

Using pre-built Monitors to proactively monitor your application infrastructure

Blog

Threat hunting with Cloud SIEM

Blog

Introducing new cloud security monitoring & analytics apps

Blog

CMMC compliance made easy with Sumo Logic

Blog

How to monitor application logs

Blog

Ensure cloud security with these key metrics

Blog

Introducing Sensu

Blog

Introducing Sumo Logic Cloud SIEM powered by AWS

Blog

Sumo Logic + DFLabs: Cloud SIEM combined with SOAR automates threat detection and incident response

Blog

Looking to disrupt your legacy SOC? Attend The Modern SOC Summit to find out how!

Blog

Distributed tracing vs. application monitoring

Blog

What is threat intelligence?

Blog

Detecting users crawling the MITRE ATT&CK stages in your AWS environment

Blog

Cloud SIEM accelerates modernizing security operations across Asia Pacific

Blog

Using Telegraf to collect infrastructure performance metrics

Blog

Accelerate incident resolution by benchmarks-enriched on-call contexts

Blog

Tail your logs with Tailing Sidecar Operator

Blog

Extend AWS observability beyond CloudWatch

Blog

Explore NGINX usage, performance, and transactions to increase customer experience

Blog

Sumo Logic joins AWS to accelerate Amazon CloudWatch Metrics collection

Blog

Why Prometheus isn’t enough to monitor complex environments

Blog

Microservices vs. serverless architecture

Blog

Sumo Logic extends support for OpenTelemetry to AWS Lambda

Blog

Sumo Logic extends its APM to browser

Blog

Sumo Logic to accelerate modernization of security operations with proposed acquisition of DFLabs

Blog

Efficiently monitor the state of Redis database clusters

Blog

Sumo Logic continues to expand public sector footprint

Blog

Forrester TEI study: Sumo Logic’s Cloud SIEM delivers 166 percent ROI over 3 years and a payback of less than 3 months

Blog

Service map & dashboards provide insight into health and dependencies of microservice architecture

Blog

Observability vs. monitoring: what's the difference?

Blog

Analyze your tracing data any way you want with Sumo search query language

Blog

Analyze JMX to better assess the health of your Java applications

Blog

How the COVID-19 pandemic has changed IT & security

Blog

Daemons in Cloud SOAR: proactively enhancing SecOps

Blog

How to dynamically auto-steer your traffic to multi-CDN or multiple data-centers

Blog

Sumo Logic achieves FedRAMP-Moderate authorization

Blog

Automating the potential workflows with Sumo Logic APIs

Blog

Case Study: Genesys’ journey to the cloud and DevOps excellence

Blog

Building autocomplete with ANTLR and CodeMirror

Blog

Code42 launches a new app in the Sumo Logic open source partner ecosystem

Blog

Best practices to monitor Cloudflare performance

Blog

Dark theme is here

Blog

SEGA Europe and Sumo Logic: integrating security across clouds

Blog

How to monitor Amazon DynamoDB performance

Blog

Embracing open source data collection

Blog

Improve your security posture by focusing on velocity, visibility, and vectors

Blog

Automate your SIEM with Sumo Logic in 7 clicks

Blog

How Clorox leverages Cloud SIEM across security operations, threat hunting, and IT Ops

Blog

How to monitor Amazon Aurora RDS logs and metrics

Blog

Everywhere in One Place: OpenTelemetry and Observability in Sumo Logic

Blog

Recommendations for monitoring SolarWinds supply chain attack with Sumo Logic Cloud SIEM

Blog

Automatic correlation of FireEye red team tool countermeasure detections

Blog

Application Performance Management for Microservices with Sumo Logic

Blog

How to Monitor Amazon Redshift

Blog

Building your modern cloud SIEM

Blog

Pondering Dogs and Observability

Blog

Monitoring Microsoft SQL Best Practices

Blog

Onboard your tracing data to Sumo Logic even faster with AWS Distro for OpenTelemetry (now in preview)

Blog

Monitor AWS Lambda functions created from container images

Blog

Sumo Logic partners with AWS to monitor Amazon EKS Distro

Blog

6 Signals that you need SOAR [Infographic]

Blog

How Sumo Logic’s Cloud SIEM Uses MITRE ATT&CK to Develop Content

Blog

Insights from the 5th annual Continuous Intelligence Report

Blog

Full VPC traffic visibility with AWS Network Firewall and Sumo Logic

Blog

How to Monitor Akamai Logs

Blog

Data security a major concern in healthcare: How to prevent data breaches with SOAR

Blog

The Dramatic Intersection of AI, Data and Modern Life

Blog

Creepy or Unjust: The State of Data in the U.S.

Blog

How to Monitor MongoDB Logs

Blog

Automated Tech Perpetuates the Digital Poorhouse

Blog

IoT cybersecurity in healthcare: How it can be improved with SOAR?

Blog

SOC vs. CSIRT - understanding the difference

Blog

SOAR guide #3: How to maximize your SOAR investment

Blog

How to analyze IIS logs for better monitoring

Blog

Introducing the Sumo Logic Observability suite with distributed tracing - a cornerstone of cloud-native APM

Blog

Security automation vs. security orchestration - what's the difference?

Blog

Illuminate 2020 Keynote: The What, Where and Why of Issues that Affect Reliable Customer Experiences

Blog

National Cyber Security Awareness month 2020 - The importance of SOAR

Blog

Modern App Reliability with Sumo Logic Observability

Blog

Building better software faster - the key to successful digital transformation

Blog

A New Framework for Modern Security

Blog

Building Better Apps: The Open Source Partner Ecosystem

Blog

PostgreSQL vs MySQL

Blog

Gartner’s 2020 SOAR Market Guide in a nutshell

Blog

Kubernetes Dashboard

Blog

NGINX Log Analyzer

Blog

SOAR guide #2: Taking security operations to the next level

Blog

Leveraging logs to better secure cloud-native applications

Blog

How security automation and orchestration helps you work smarter and improve Incident Response

Blog

Logging and Monitoring Kubernetes

Blog

Get Started with Kubernetes

Blog

Using Data to Find the Mysterious Centrist Voter

Blog

How Goibibo uses Sumo Logic to get log analytics at cloud scale

Blog

SOAR guide: The fundamentals of Security Orchestration, Automation and Response

Blog

Configuring the OpenTelemetry Collector

Blog

4 ways to distinguish a top SOAR platform

Blog

Can We Rely on Data to Predict the Outcome of the 2020 Election?

Blog

Integrating lessons learned into Incident Response

Blog

Kubernetes vs. Docker: What Does it Really Mean?

Blog

6 key steps to building a modern SOC

Blog

9 key components of incident and forensics management

Blog

How Cloud SOAR helps higher education institutions prevent cyber attacks

Blog

Why measuring SOC-cess matters - Using metrics to enhance your security program

Blog

Emerging issues in cybersecurity for higher education institutions

Blog

Top 5 Reasons to Attend Illuminate Virtual 2020

Blog

5 common Security Orchestration, Automation and Response (SOAR) use cases

Blog

SOAR to the sky: Discover the power of next-gen progressive automation

Blog

Simplifying log management with logging as a service

Blog

Detecting Windows Persistence

Blog

5 reasons why SOAR is a must-have technology for every high-functioning MSSP

Blog

AWS Observability: Designed specifically for AWS environments

Blog

Observability: The Intelligence Economy has arrived

Blog

How to Use the New Sumo Logic Terraform Provider for Hosted Collectors

Blog

Sumo Logic Achieves FedRAMP-Moderate “In Process”

Blog

Five critical components of SOAR technology

Blog

Distributed tracing analysis backend that fits your needs

Blog

Deploying AWS Microservices

Blog

Gartner SOAR Magic Quadrant: The best of Cloud SOAR is yet to come

Blog

Sumo Logic and ZeroFOX Join Forces to Improve Visibility and Protect your Public Attack Surface

Blog

3 core pillars of a SOAR Solution

Blog

Announcing new Sumo Logic dashboards

Blog

Rethinking Modern SOC Workflow

Blog

Gartner analysis: Why SOAR is the technology for the future

Blog

Reduce AWS bills with aws-nuke

Blog

What Data Types to Prioritize in Your SIEM

Blog

Cloud SIEM: Getting More Out of Your Threat Intelligence - 3 Use Cases for IOCs

Blog

5 Ways SOAR improves collaboration within a SOC team

Blog

Building a Security Practice Powered by Cloud SIEM

Blog

The automation hype is real for SOC teams: unpacking the Dimensional Research “2020 State of SecOps and Automation” report

Blog

Distributed Tracing & Logging - Better Together

Blog

Defense in depth: DoublePulsar

Blog

How SOAR improves the performance of a SOC team

Blog

Improving Application Quality through Log Analysis

Blog

Domain Hijacking Impersonation Campaigns

Blog

Continuous Intelligence for Atlassian tools and the DevSecOps Lifecycle (Part 2)

Blog

The Path of an Outlaw, a Shellbot Campaign

Blog

The power of new-age playbooks in Incident Response

Blog

Gaining Visibility Into Edge Computing with Kubernetes & Better Monitoring

Blog

Why cloud-native SIEM is vital to closing the security skills gap

Blog

How SOAR improves Standard Operating Procedures (SOP)

Blog

Standard Operating Procedures as big piece of the cyber Incident Response puzzle

Blog

The value of a stolen account. A look at credential stuffing attacks.

Blog

The Difference Between IaaS, PaaS, and SaaS

Blog

Artificial intelligence and machine learning in cybersecurity

Blog

SOAR takes over where detection starts: Understanding the role of SOAR in Standard Operating Procedures

Blog

A Million Dollar Knob: S3 Object Lifecycle Optimization

Blog

The difference between playbooks and runbooks in Incident Response

Blog

The 7 Essential Metrics for Amazon EC2 Monitoring

Blog

Continuous Intelligence for Atlassian tools and the DevSecOps Lifecycle (Part 1)

Blog

Monitoring MySQL Performance Metrics

Blog

SOAR trends in 2020: What does the future look like for SOAR?

Blog

MySQL Log File Location

Blog

Independent Survey Reveals: Continuous Intelligence Demand Grows as Organizations Shift to Real-time Business

Blog

Service Mesh Comparison: Istio vs. Linkerd

Blog

Profiling "VIP Accounts" Part 2

Blog

The importance of evidence preservation in incident response

Blog

7 Key DevOps Principles

Blog

How to Build a DevOps Pipeline

Blog

Utilizing Cloud SOAR to manage IT and OT and strengthen the cybersecurity posture

Blog

NoSQL-based stacks exposed to the Internet

Blog

Spam In the Browser

Blog

Adopting Distributed Tracing: Finding the Right Path

Blog

Profiling “VIP Accounts” Part 1

Blog

Best Practices for Logging in AWS Lambda

Blog

How SOAR improves EDR in SOC processes

Blog

Sumo Logic and NIST team up to secure energy sector IoT

Blog

AWS Lambda Monitoring - what to keep an eye on with serverless

Blog

Remote Admin Tools (RATs): The Swiss Army Knives of Cybercrime

Blog

The cost of cybersecurity solutions vs. the cost of cyber attacks

Blog

How SOAR helps PSPs effectively comply with PSD2 regulations

Blog

The New Opportunity

Blog

How to scale Prometheus monitoring

Blog

Limitless analytics for all your data, at a price that fits your budget

Blog

“Fiel-ding Good” - Three great ways to enrich AWS logs in Sumo Logic

Blog

Triage fraudulent transactions with Cloud SOAR

Blog

5 questions to ask before investing in a SOAR solution

Blog

Sumo Logic Recognized as Data Analytics Solution of the Year Showcasing the Power of Continuous Intelligence

Blog

How SOAR helps protect remote workers from cyber threats

Blog

Best Practices for Data Tagging, Data Classification & Data Enrichment

Blog

COVID-19 crisis management guide for business leaders

Blog

PowerShell and ‘Fileless Attacks’

Blog

Monitoring with Prometheus vs Grafana: Understanding the Difference

Blog

Ensure a secure and reliable Zoom video conferencing service

Blog

How to Monitor Amazon ECS

Blog

Addressing the lack of qualified cybersecurity professionals - What can we do about it?

Blog

Top 5 security challenges with Zoom video conferencing

Blog

COVID-19 Guide for Security Professionals

Blog

4 core functions of a Security Orchestration, Automation and Response (SOAR) solution

Blog

Where will SOAR go in the next 5 years?

Blog

Love In The Time Of Coronavirus

Blog

Sumo Logic Announces Continuous Intelligence for Atlassian Tools

Blog

How SOAR can foster efficient SecOps in modern SOCs

Blog

Alcide kAudit Integrates with Sumo Logic

Blog

Work from home better with secure and reliable enterprise service

Blog

FedRAMP Joint Authorization Board (JAB) Prioritizes Sumo Logic for P-ATO

Blog

How to manage cyber fraud with SOAR

Blog

Best Practices for CSOs to Navigate Today’s Uncertain World

Blog

The top 5 challenges faced by Security Operations Centers

Blog

How does Sumo Logic’s Cloud SOAR compare to other SOAR solutions?

Blog

Amazon VPC Traffic Mirroring

Blog

Automation in cybersecurity: Benefit or a threat?

Blog

What is Amazon ECS?

Blog

CASB vs Cloud SIEM for SaaS Security

Blog

SOAR for Success: How to properly measure KPIs for security operations

Blog

In A Fast Changing World, Peer Benchmarks Are A GPS

Blog

What is SOAR? A comprehensive guide on how SOAR emerged in the cybersecurity world

Blog

A Healthy Outlook on Security From RSA Conference 2020

Blog

5 key benefits of a SOAR solution for MSSPs

Blog

Securing IaaS, PaaS, and SaaS in 2020 with a Cloud SIEM

Blog

A New Integration between Sumo Logic and ARIA Cybersecurity Solutions

Blog

Pre-RSA Twitter Poll: 3 Interesting Observations on SOC, SIEM and Cloud

Blog

How to implement Incident Response automation the right way

Blog

SIEM Yara Rules

Blog

How to Secure Office365 with Cloud SIEM

Blog

How We Understand Monitoring

Blog

Securing your SaaS apps in 2020: 3 pillars you can’t neglect

Blog

How to Monitor EKS Logs

Blog

The total business impact of Sumo Logic Cloud SIEM

Blog

How Data Analytics Support the CDM Program

Blog

Tracking Systems Metrics with collectd

Blog

Understanding the Apache Access Log: View, Locate and Analyze

Blog

AWS offers 175 services now. Should you be adopting many of them now?

Blog

Can You Tell Debug Data and BI Data Apart?

Blog

What is Amazon Elastic Kubernetes Service (EKS)?

Blog

Top 5 Cybersecurity Predictions for 2020

Blog

The Ultimate Guide to Windows Event Logging

Blog

How to View Logs in Kubectl

Blog

All The Logs For All The Intelligence

Blog

Vagrant vs. Docker: Which Is Better for Software Development?

Blog

NGINX vs Apache

Blog

Sumo Logic and Amazon Web Services Continue to Help Businesses Thrive in the Cloud Era

Blog

The New Sumo Logic AWS Security Quick Start

Blog

New Sumo Logic Apps with support for AWS Hierarchies

Blog

Announcing Sumo Logic Archive Intelligence Service now in Beta

Blog

Monitor Cloud Run for Anthos with Sumo Logic

Blog

How to Monitor Redshift Logs with Sumo Logic

Blog

AWS S3 Monitoring with Sumo Logic

Blog

Top 10 SIEM Best Practices

Blog

Multi-Cloud is Finally Here!

Blog

Data Privacy Is Our Birthright - national cybersecurity month

Blog

Context is Everything - How SPS Commerce uses context to embrace complexity

Blog

What is AWS S3

Blog

How Informatica Confidently Migrates to Kubernetes with Sumo Logic

Blog

How Doximity solved their high operational overhead of their Elastic stack with Sumo Logic

Blog

5 business reasons why every CIO should consider Kubernetes

Blog

How to Monitor Amazon Redshift

Blog

5 Tips for Preventing Ransomware Attacks

Blog

We Live in an Intelligence Economy - Illuminate 2019 recap

Blog

Cloud Scale Correlation and Investigation with Cloud SIEM

Blog

Service Levels––I Want To Buy A Vowel

Blog

Serverless Computing for Dummies: AWS vs. Azure vs. GCP

Blog

How to Secure Kubernetes Using Cloud SIEM?

Blog

Serverless Computing Security Tips

Blog

10 Modern SIEM Use Cases

Blog

Challenges of Monitoring and Troubleshooting in Kubernetes Environments

Blog

More Innovations from Sumo Logic that Harnesses the Power of Continuous Intelligence for Modern Enterprises

Blog

Monitoring Slack workspaces with the Sumo Logic app for Slack

Blog

A 360 degree view of the performance, health and security of MongoDB Atlas

Blog

Monitor your Google Anthos clusters with the Sumo Logic Istio app 

Blog

Sumo Logic’s World Class Partner and Channel Ecosystem Experiences Triple Digit Growth

Blog

6 Observations from the 2019 CI Report: State of Modern Applications and DevSecOps In The Cloud

Blog

What is PCI DSS compliance?

Blog

Objectives-Driven Observability

Blog

Peering Inside the Container: How to Work with Docker Logs

Blog

Security Strategies for Mitigating IoT Botnet Threats

Blog

How to Read, Search, and Analyze AWS CloudTrail Logs

Blog

Serverless vs. Containers: What’s the Same, What’s Different?

Blog

How to Monitor Syslog Data with Sumo Logic

Blog

Know Your Logs: IIS vs. Apache vs. NGINX Logs

Blog

Multi-Cloud Security Myths

Blog

What is Amazon Redshift?

Blog

See You in September at Illuminate!

Blog

Sumo Logic adds Netskope to its Security and Compliance Arsenal

Blog

How to SIEMplify through Cloud SIEM

Blog

Illuminate 2019 Stellar Speaker Line-up Will Help Attendees See Business and the World Differently Through Data Analytics

Blog

How to Monitor Fastly CDN Logs with Sumo Logic

Blog

How to Monitor NGINX Logs with Sumo Logic

Blog

To SIEM or not to SIEM?

Blog

Cloud Security: What It Is and Why It’s Different

Blog

How to Monitor Fastly Performance

Blog

Gartner is fully in the cloud. Are you?

Blog

How to monitor NGINX logs

Blog

Why you need to secure your AWS infrastructure and workloads?

Blog

What is AWS CloudTrail?

Blog

6 steps to secure your workflows in AWS

Blog

Machine Data is Business Intelligence for Digital Companies

Blog

Launching the AWS security threats benchmark

Blog

3 key takeaways on Cloud SIEM from Gartner Security & Risk Management Conference 2019

Blog

Sumo Logic provides real-time visibility, investigation and response of G Suite Alerts

Blog

What is NGINX?

Blog

What is Fastly CDN?

Blog

Industry Analysts Recognizing Cloud Analytics Brings Wave of Disruption to the SIEM Market

Blog

Now FedRAMP Ready, Sumo Logic Empowers Public Organizations

Blog

The Super Bowl of the Cloud

Blog

The Cloud SIEM market is validated by Microsoft, Google, and AWS

Blog

Clearing the Air: What Is Cloud Native?

Blog

Key Metrics to Baseline Cloud Migration

Blog

Typing a useReducer React hook in TypeScript

Blog

What is IoT Security?

Blog

Recycling is for Cardboard, not Analytics Tools

Blog

How to Monitor Apache Web Server Performance

Blog

IIS Logs Location

Blog

Software visibility is the key to innovation

Blog

What is Apache? In-Depth Overview of Apache Web Server

Blog

The Why Behind Modern Architectures

Blog

Control Your Data Flow with Ingest Budgets

Blog

From SRE to QE - Full Visibility for the Modern Application in both Production and Development

Blog

Sumo Logic Cert Jams Come to Japan

Blog

Best Practices with AWS GuardDuty for Security and Compliance

Blog

People-driven Documentation

Blog

Improve Alert Visibility and Monitoring with Sumo Logic and Opsgenie

Blog

What is AWS GuardDuty?

Blog

Platforms All The Way Up & Down

Blog

What is Serverless Architecture?

Blog

How Sumo Logic Maps DevOps Topologies

Blog

Endpoint Security Analytics with Sumo Logic and Carbon Black

Blog

RSAC 19 Partner Cam: Sumo Logic & PagerDuty Deliver Seamless SecOps

Blog

AWS 101: An Overview of Amazon Web Services

Blog

Building Cross-platform Mobile Apps

The way we all experience and interact with apps, devices, and data is changing dramatically. End users demand apps that are responsive, stable, and offer the same user experience no matter which platform they are using. To build these well, many developers consider creating cross-platform apps. Although building a separate native app per platform is the preferred approach for mass-market consumer apps, there are still plenty of situations where it makes more sense to go cross-platform. In this post I'll look at the most popular strategies a developer faces when building a mobile app, and some tools that help you build well.

Mobile Web Apps

This is probably the easiest way onto a mobile device. Mobile web apps are hosted on a remote server and built with the same technologies as desktop web apps: HTML5, JavaScript and CSS. The primary difference is that they are accessed through the mobile device's built-in web browser, which usually requires applying responsive web design principles so that the user experience is not degraded by the limited screen size. The cost of applying responsive design principles to a web site can be a significant fraction of the cost of developing a mobile app.

Native Mobile Apps

Native apps are developed using the device's out-of-the-box SDK. This is a huge advantage because you have full access to the device's APIs, features, and inter-app integration. However, it also means you need to learn Java to build apps for Android, Objective-C for iOS, and C# for Windows phones. Whether you are a single developer or part of a multi-team organization, learning to code in multiple languages is costly and time-consuming. And most of the time, not every feature will be available on every platform.

Cross-Platform Mobile Apps

Cross-platform apps have a reputation of not being competitive with native apps, but we continue to see more and more world-class apps using this strategy. Developers only have to maintain a single code base for all platforms. They can reuse the same components across platforms, and most importantly, they can still access native APIs via native modules. Below are some tools that support building cross-platform apps.

PhoneGap

Owned by Adobe, PhoneGap is a free resource that is handy for packaging HTML5, CSS and JavaScript code into a mobile app. Once the app is ready, the community helps review it, and all major platforms are supported, including BlackBerry.

Xamarin.Forms

With a free starter option, Xamarin.Forms is a great tool for C# developers to build a cross-platform app while keeping access to each native platform's API. A wide store of components helps you reach your goal faster. Xamarin has created a robust cross-platform mobile development platform that has been adopted by big names like Microsoft, Foursquare, IBM, and Dow Jones.

Unity 3D

This tool is mainly focused on building games, and it is especially useful when graphics are the most important detail. It goes beyond simple translation: after developing your code in UnityScript or C#, you can export your games to 17 different platforms, including iOS, Android, Windows, Web, PlayStation, Xbox, Wii and Linux.

When it comes to building an app, cross-platform or not, views and thoughts always differ. My preference is cross-platform for one main reason: it is less time-consuming, which is critical because I can then focus on adding new features to the app, or on building another one.

About the Author

Mohamed Hasni is a Software Engineer focusing on end-to-end web and mobile development and delivery. He has deep experience in building line-of-business applications for large-scale enterprise deployments.

Blog

Sumo Logic Expands into Japan to Support Growing Cloud Adoption

In October of last year, I joined Sumo Logic to lead sales and go-to-market functions with the goal of successfully launching our newly established Japan region in Tokyo. The launch was well received by our customers, partners, prospects and peers in the Japanese market, and everyone walked away from the event optimistic about the future and hungry for more. It certainly was an exciting time, not only for the company but for me personally, and as I reflect on the past few months here, I wanted to share a little bit about why the company's launch in Japan came at a very strategic and opportune time, as well as why Sumo Logic is a great market fit.

Market Opportunity

In terms of overall IT spend and market size, Japan remains the second largest enterprise technology market in the world behind the U.S. A large part of that is service spending versus traditional hardware and software. For years, Japan had been a half step or more behind in cloud technology innovation and adoption, but that has since changed. Now, Japan is experiencing a tsunami of cloud adoption, with major influence from Amazon Web Services (AWS), which has aggressively invested in building data centers in Japan over the past several years. The fact that AWS began heavily investing in the Japanese technology market was a strong indication to us at Sumo Logic that, as we continue to expand our global footprint, the time was finally right to capitalize on this market opportunity.

Sumo Logic Opportunity

Market opportunity aside, the nature of our SaaS machine data analytics platform and the services we provide across operations, security and the business is a perfect fit for the needs of innovating Japanese enterprises. I've been here in Tokyo for over 30 years, so I feel confident that it was our moment to shine in Japan. From a sales perspective, we're very successful with a land-and-expand approach where we start with only a small subset of the business and then gradually grow to other areas such as operations, security and the business, as we continue to deliver great customer experiences that demonstrate long-term value and impact. That level of trust building and attentiveness we provide to our global customer base is very typical of how Japanese enterprises like to conduct business. In other words, the core business model and approach of Sumo Logic are immediately applicable to the Japanese market. Anyone with experience in global IT will understand the simple but powerful meaning of this; Sumo Logic's native affinity with Japan is an enormous anomaly. And Japan can be a very unforgiving market. It's not a place where you want to come with a half-baked product or a smoke-and-mirrors approach. Solid products, solutions and hard work are, on the other hand, highly respected.

Vertical Focus

As I mentioned above, the Japan market is mostly enterprise, which is a sweet spot for Sumo Logic, and there is also a heavy concentration of automotive and internet of things (IoT) companies here. In fact, four of the world's largest automotive companies are headquartered in Japan, and their emerging autonomous driving needs align directly with the real-time monitoring, troubleshooting and security analytics capabilities that are crucial for modern innovations around connected cars and IoT, both of which generate massive amounts of data. Customers like Samsung SmartThings, Sharp and Panasonic leverage our platform for DevSecOps teams that want to visualize, build, run and secure that data. The connected car today has become less about the engine and more about the driver experience, which is 100 percent internet-enabled.

Japan is also one of the two major cryptocurrency exchange centers in the world, which is why financial services, especially fintech, bitcoin and cryptocurrency companies, are another focus vertical for Sumo Logic Japan. Our DevSecOps approach and cloud-native multi-tenant platform provide mission-critical operations and security analytics capabilities for crypto companies. Most of these financial services companies are struggling to stay on top of increasingly stringent regulatory and data requirements, and one of the biggest use cases for these industries is compliance monitoring. Japan is regulatory purgatory, so our customers look to us to help automate parts of their compliance checks and security audits.

Strong Partner Ecosystem

Having a strong partner ecosystem was another very important piece of our overall go-to-market strategy in Japan. We were very fortunate to have forged an early partnership with AWS Japan that led to an introduction to one of their premium consulting partners, Classmethod, the first regional partnership with the AWS APN. The partnership is already starting to help Japanese customers maximize their investment in the Sumo Logic platform by providing locally guided deployment, support and storage in AWS. In addition, Sumo Logic provides the backbone for Classmethod's AWS infrastructure, delivering the continuous intelligence needed to serve and expand their portfolio of customers. Going forward, we'll continue to grow our partner ecosystem with the addition of service providers, telecoms, MSPs and MSSPs for security to meet our customers' evolving needs.

Trusted Advisor

At the end of the day, our mission is to help our customers continue to innovate and to provide support in the areas where they most need it: economically visualizing data across their modern application stacks and cloud infrastructures. We're in a position to help all kinds of Japanese customers across varying industries modernize their architectures. Japanese customers know that they need to move to the cloud and continue to adopt modern technologies. We're in the business of empowering our customers to focus on their core competencies while they leave the data component to us. By centralizing all of this disparate data into one platform, they can better understand their business operations, be more strategic and focus on growing their business. We've gone beyond selling a service to becoming both "data steward" and trusted data advisor for our customers. Japanese business is famous for its organic partnering model (think of supply chain management, and so on), and Sumo Logic's core strategy of pioneering machine data stewardship is a natural extension of this to meet the rapidly evolving needs of the digital economy in Japan. Now that we have a local presence with a ground office and support team, we can deliver a better and more comprehensive experience to new and existing customers, like Gree and OGIS-RI, and we look forward to continued growth and success in this important global market.

Additional Resources

Read the press release for more on Sumo Logic's expansion into Japan
Download the white paper on cloud migration
Download the 'State of Modern Apps & DevSecOps in the Cloud' report


Blog

Recapping the Top 3 Talks on Futuristic Machine Learning at Scale By the Bay 2018

As discussed in our previous post, we recently had the opportunity to present some interesting challenges and proposed directions for data science and machine learning (ML) at the 2018 Scale By the Bay conference. While the excellent talks and panels at the conference were too numerous to cover here, I wanted to briefly summarize three talks that I found to represent some really interesting (to me) directions for ML on the Java virtual machine (JVM).

Talk 1: High-performance Functional Bayesian Inference in Scala
By Avi Bryant (Stripe) | Full Video Available Here

Probabilistic programming lies at the intersection of machine learning and programming languages, where the user directly defines a probabilistic model of their data. This formal representation has the advantage of neatly separating conceptual model specification from the mechanics of inference and estimation, with the intention that this separation will make modeling more accessible to subject matter experts while allowing researchers and engineers to focus on optimizing the underlying infrastructure. Rainier is an open-source library in Scala that allows the user to define their model and do inference in terms of monadic APIs over distributions and random variables (a minimal illustrative sketch of this style appears at the end of this post). Some key design decisions are that Rainier is "pure JVM" (i.e., no FFI) for ease of deployment, and that the library targets single-machine (i.e., not distributed) use cases but achieves high performance via the nifty technical trick of inlining training data directly into dynamically generated JVM bytecode using ASM.

Talk 2: Structured Deep Learning with Probabilistic Neural Programs
By Jayant Krishnamurthy (Semantic Machines) | Full Video Available Here

Machine learning examples and tutorials often focus on relatively simple output spaces: Is an email spam or not? Binary outputs: Yes/No, 1/0, +/-, … What is the expected sale price of a home? Numerical outputs: $1M, $2M, $5M, … (this is the Bay Area, after all!) However, what happens when we want our model to output a more richly structured object? Say that we want to convert a natural language description of an arithmetic formula into a formal binary tree representation that can then be evaluated; for example, "three times four minus one" would map to the binary expression tree "(- (* 3 4) 1)". The associated combinatorial explosion in the size of the output space makes "brute-force" enumeration and scoring infeasible. The key idea of this approach is to define the model outputs in terms of a probabilistic program (which allows us to concisely define structured outputs), but with the probability distributions of the random variables in that program parameterized by neural networks (which are very expressive and can be efficiently trained). This talk consisted mostly of live-coding, using an open-source Scala implementation which provides a monadic API for a function from neural network weights to a probability distribution over outputs.

Talk 3: Towards Typesafe Deep Learning in Scala
By Tongfei Chen (Johns Hopkins University) | Full Video Available Here

For a variety of reasons, the most popular deep learning libraries such as TensorFlow and PyTorch are primarily oriented around the Python programming language. Code using these libraries consists primarily of various transformation or processing steps applied to n-dimensional arrays (ndarrays). It can be easy to accidentally introduce bugs by confusing which of the n axes you intended to aggregate over, mismatching the dimensionalities of two ndarrays you are combining, and so on. These errors occur at run time and can be painful to debug. This talk proposes a collection of techniques for catching these issues at compile time via type safety in Scala, and walks through an example implementation in an open-source library. The mechanics of the approach are largely based on typelevel programming constructs and ideas from the shapeless library, although you don't need to be a shapeless wizard yourself to simply use the library, and the corresponding paper demonstrates how some famously opaque compiler error messages can be made more meaningful for end users of the library.

Conclusion

Aside from being great, well-delivered talks, several factors made these presentations particularly interesting to me. First, all three had associated open-source Scala libraries. There is of course no substitute for actual code when it comes to exploring the implementation details and trying out the approach on your own test data sets. Second, these talks shared a common theme of using the type system and API design to supply a higher-level mechanism for specifying modeling choices and program behaviors. This can both make end-user code easier to understand and unlock opportunities for having the underlying machinery automatically do work on your behalf in terms of error-checking and optimization. Finally, all three talks illustrated some interesting connections between statistical machine learning and functional programming patterns, which I found interesting as a longer-term direction for trying to build practical machine learning systems.

Additional Resources

Learn how to analyze Killer Queen game data with machine learning and data science with Sumo Logic Notebooks
Interested in working with the Sumo Logic engineering team? We're hiring! Check out our open positions here
Sign up for a free trial of Sumo Logic
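To make the "monadic API over distributions" idea from Talk 1 concrete, here is a minimal, self-contained sketch of a distribution type with map and flatMap and a toy model built from them. The Dist trait, its combinators, and the example model are invented for this summary; they are not Rainier's actual API and only perform naive forward sampling, not real Bayesian inference.

```scala
import scala.util.Random

// Illustrative sketch only: a "distribution" is just something we can sample from.
trait Dist[A] { self =>
  def sample(rng: Random): A

  // map transforms each sampled value.
  def map[B](f: A => B): Dist[B] = new Dist[B] {
    def sample(rng: Random): B = f(self.sample(rng))
  }

  // flatMap lets one random quantity depend on another, enabling for-comprehensions.
  def flatMap[B](f: A => Dist[B]): Dist[B] = new Dist[B] {
    def sample(rng: Random): B = f(self.sample(rng)).sample(rng)
  }
}

object Dist {
  def normal(mu: Double, sigma: Double): Dist[Double] = new Dist[Double] {
    def sample(rng: Random): Double = mu + sigma * rng.nextGaussian()
  }
  def uniform(lo: Double, hi: Double): Dist[Double] = new Dist[Double] {
    def sample(rng: Random): Double = lo + (hi - lo) * rng.nextDouble()
  }
}

object MonadicModelSketch extends App {
  // A toy "model": slope and intercept priors combined into a prediction at x = 2.0.
  val prediction: Dist[Double] = for {
    slope     <- Dist.normal(0.0, 1.0)
    intercept <- Dist.uniform(-1.0, 1.0)
  } yield slope * 2.0 + intercept

  val rng = new Random(42)
  println(Seq.fill(5)(prediction.sample(rng)).mkString(", "))
}
```

The point is only to show how a model reads as ordinary for-comprehension code once distributions compose monadically; the real library adds actual inference machinery behind the same style of API.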

Blog

Sumo Logic Experts Reveal Their Top Enterprise Tech and Security Predictions for 2019

Blog

SnapSecChat: Sumo Logic's CSO Explains the Next-Gen SOC Imperative

Blog

How to Analyze Game Data from Killer Queen Using Machine Learning with Sumo Logic Notebooks

Blog

The Insider’s Guide to Sumo Cert Jams

What are Sumo Cert Jams?

Sumo Logic Cert Jams are one- and two-day training events held in major cities all over the world to help you ramp up your product knowledge, improve your skills and walk away with a certification confirming your product mastery. We started doing Cert Jams about a year ago to help educate our users around the world on what Sumo can really do, and to give you a chance to network and share use cases with other Sumo Logic users. Not to mention, you get a t-shirt. So far, we've had over 4,700 certifications from 2,700+ unique users across 650+ organizations worldwide. And we only launched the Sumo Cert Jam program in April! If you're still undecided, check out this short video where our very own Mario Sanchez, Director of the Sumo Logic Learn team, shares why you should get the credit and recognition you deserve!

Currently there are four certifications for Sumo Logic: Pro User, Power User, Power Admin and Security User. These are offered in a choose-your-own-adventure format. While everyone starts out with the Pro User certification to learn the fundamentals, you can take any of the remaining exams depending on your interest in DevOps (Power User), Security, or Admin. Once you complete Sumo Pro User, you can choose your own path to certification success. For a more detailed breakdown of the different certification levels, check out our web page, or our Top Reasons to Get Sumo Certified blog.

What's the Value?

Customers often ask me in one-on-one situations what the value of certification is, and I tell them that we have seen significant gains in user understanding, operator usage and search performance once we get users certified.

Our first Cert Jam in Delhi, India with members from the Bed, Bath and Beyond team showing their certification swag!

First, there's the ability to rise above "Mere Mortals" (those who haven't been certified) and write better and more complex queries. From parsing to correlation, usage increases significantly among users certified at Pro (Level 1), Power User (Level 2), Admin (Level 3) and Security. Certified users take advantage of more Sumo Logic features, not only getting more value out of their investment, but also creating more efficient, performant queries. And from a more general perspective, once you know how to write better queries and dashboards, you can create the kind of custom content that you want. When it comes to monitoring and alerting, certified users are more likely to create dashboards and alerts to stay on top of what's important to their organizations, further benefiting from Sumo Logic as part of their daily workload. Certified users also show an increase in the creation of searches, dashboards and alerts, as well as in the use of key optimization features such as Field Extraction Rules (FERs), scheduled views and partitions.

Join Us

If you're looking to host a Cert Jam at your company and have classroom space for 50, reach out to our team. We are happy to work with you and see if we can host one in your area. If you're looking for ways to get certified, or know someone who would benefit, check out our list of upcoming Cert Jams we're offering. Don't have Sumo Logic, but want to get started? Sign up for Sumo Logic for free!

Our Cert Jam hosted by Tealium in May. Everyone was so enthusiastic to be certified.

Blog

Understanding the Impact of the Kubernetes Security Flaw and Why DevSecOps is the Answer

Blog

Careful Data Science with Scala

This post gives a brief overview of some ideas we presented at the recent Scale By the Bay conference in San Francisco; for more details you can see a video of the talk or take a look at the slides.

The Problems of Sensitive Data and Leakage

Data science and machine learning have gotten a lot of attention recently, and the ecosystem around these topics is moving fast. One significant trend has been the rise of data science notebooks (including our own here at Sumo Logic): interactive computing environments that allow individuals to rapidly explore, analyze, and prototype against datasets. However, this ease and speed can compound existing risks. Governments, companies, and the general public are increasingly alert to the potential issues around sensitive or personal data (see, for example, GDPR). Data scientists and engineers need to continuously balance the benefits of data-driven features and products against these concerns. Ideally, we'd like technological assistance that makes it easier for engineers to do the right thing and avoid unintended data processing or revelation.

Furthermore, there is also a subtle technical problem known in the data mining community as "leakage". Kaufman et al won the best paper award at KDD 2011 for Leakage in Data Mining: Formulation, Detection, and Avoidance, which describes how it is possible to (completely by accident) allow your machine learning model to "cheat" because of unintended information leaks in the training data contaminating the results. This can lead to machine learning systems which work well on sample datasets but whose performance is significantly degraded in the real world. It can be a major problem, especially in systems that pull data from disparate sources to make important predictions. Oscar Boykin of Stripe presented an approach to this problem at Scale By the Bay 2017 using functional-reactive feature generation from time-based event streams.

Information Flow Control (IFC) for Data Science

My talk at Scale By the Bay 2018 discussed how we might use Scala to encode notions of data sensitivity, privacy, or contamination, thereby helping engineers and scientists avoid these problems. The idea is based on programming languages (PL) research by Russo et al, where sensitive data is put in a container data type (a "box") which is associated with some security level. Other code can apply transformations or analyses to the data in place (the Functor "map" operation in functional programming), but only specially trusted code with an equal or greater security level can "unbox" the data. To encode the levels, Russo et al propose using the lattice model of secure information flow developed by Dorothy E. Denning. In this model, the security levels form a partially ordered set with the guarantee that any given pair of levels has a unique greatest lower bound and least upper bound. This allows for a clear and principled mechanism for determining the appropriate level when combining two pieces of information. In the Russo paper and our Scale By the Bay presentation, we use two levels for simplicity: High for sensitive data, and Low for non-sensitive data.

To map this research to our problem domain, recall that we want data scientists and engineers to be able to quickly experiment and iterate when working with data. However, when data may come from sensitive sources or be contaminated with prediction target information, we want only certain, specially audited or reviewed code to be able to directly access or export the results. For example, we may want to lift this restriction only after data has been suitably anonymized or aggregated, perhaps according to some quantitative standard like differential privacy. Another use case might be that we are constructing data pipelines or workflows and we want the code itself to track the provenance and sensitivity of different pieces of data to prevent unintended or inappropriate usage. Note that, unlike much of the research in this area, we are not aiming to prevent truly malicious actors (internal or external) from accessing sensitive data; we simply want to provide automatic support to assist engineers in handling data appropriately.

Implementation and Beyond

Depending on how exactly we want to adapt the ideas from Russo et al, there are a few different ways to implement our secure data wrapper layer in Scala. Here we demonstrate one approach using typeclass instances and implicit scoping (similar to the paper), as well as two versions where we modify the formulation slightly to allow changing the security level as a monadic effect (i.e., with flatMap) having last-write-wins (LWW) semantics, and create a new Neutral security level that always "defers" to the other security levels High and Low.

Implicit scoping: Most similar to the original Russo paper, we can create special "security level" object instances, and require one of them to be in implicit scope when de-classifying data. (Thanks to Sergei Winitzki of Workday who suggested this at the conference!) A minimal sketch of this flavor appears at the end of this post.

Value encoding: For LWW flatMap, we can encode the levels as values. In this case, the security level is dynamically determined at runtime by the type of the associated level argument, and the de-classify method reveal() returns an Option[T] which is None if the level is High. This implementation uses Scala's pattern-matching functionality.

Type encoding: For LWW flatMap, we can encode the levels as types. In this case, the compiler itself will statically determine whether reveal() calls are valid (i.e., against the Low security level type), and simply fail to compile code which accesses sensitive data illegally. This implementation relies on some tricks derived from Stefan Zeiger's excellent Type-Level Computations in Scala presentation.

Data science and machine learning workflows can be complex, and in particular there are often potential problems lurking in the data handling aspects. Existing research in security and PL can be a rich source of tools and ideas to help navigate these challenges, and my goal for the talk was to give people some examples and starting points in this direction. Finally, it must be emphasized that a single software library can in no way replace a thorough organization-wide commitment to responsible data handling. By encoding notions of data sensitivity in software, we can automate some best practices and safeguards, but that will necessarily only be part of a complete solution.

Watch the Full Presentation at Scale by the Bay

Learn More
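As promised above, here is a minimal, self-contained sketch of the implicit-scoping flavor of the secure data wrapper. It is an illustration written for this summary under stated assumptions: the names Labeled, Clearance, Level and the two-point lattice are invented here, and this is not the actual library or code presented in the talk.

```scala
object SensitiveDataSketch extends App {

  // Two-point security lattice used for illustration: Low and High.
  sealed trait Level
  sealed trait Low  extends Level
  sealed trait High extends Level

  // Capability token: an implicit Clearance[L] in scope marks code trusted for level L.
  final class Clearance[L <: Level]
  implicit val lowClearance: Clearance[Low] = new Clearance[Low]

  // The "box": a value tagged with a security level.
  // `map` transforms the value in place without declassifying it.
  final class Labeled[L <: Level, A](private val value: A) {
    def map[B](f: A => B): Labeled[L, B] = new Labeled[L, B](f(value))
    // Only code that can summon a Clearance[L] may unbox the value.
    def reveal(implicit clearance: Clearance[L]): A = value
  }
  object Labeled {
    def apply[L <: Level, A](a: A): Labeled[L, A] = new Labeled[L, A](a)
  }

  val email: Labeled[High, String] = Labeled[High, String]("alice@example.com")

  // Allowed: transform the sensitive value without looking at it.
  val domain: Labeled[High, String] = email.map(_.split("@").last)

  // Would NOT compile here: no Clearance[High] is in scope.
  // val leaked: String = domain.reveal

  // Trusted, audited code can locally introduce the clearance to declassify.
  def auditedExport(): String = {
    implicit val trusted: Clearance[High] = new Clearance[High]
    domain.reveal
  }

  println(auditedExport())
}
```

The design mirrors the description above: ordinary code can freely map over boxed data, while reveal compiles only where an implicit clearance for the matching level is in scope, so declassification is confined to explicitly marked, reviewable blocks.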

Blog

Why European Users Are Leveraging Machine Data for Security and Customer Experience

To gain a better understanding of the adoption and usage of machine data in Europe, Sumo Logic commissioned 451 Research to survey 250 executives across the UK, Sweden, the Netherlands and Germany, and to compare this data with a previous survey of U.S. respondents who were asked the same questions. The research set out to answer a number of questions, including: Is machine data in fact an important source of fuel in the analytics economy? Do businesses recognize the role machine data can play in driving business intelligence? Are businesses that recognize the power of machine data leaders in their fields?

The report, "Using Machine Data Analytics to Gain Advantage in the Analytics Economy, the European Edition," released at DockerCon Europe in Barcelona this week, reveals that companies in the U.S. are currently more likely to use and understand the value of machine data analytics than their European counterparts, but that Europeans lead the U.S. in using machine data for security use cases.

Europeans Trail US in Recognizing Value of Machine Data Analytics

Let's dig deeper into the stats showing that U.S. respondents were more likely to use and understand the value of machine data analytics. For instance, 36 percent of U.S. respondents have more than 100 users interacting with machine data at least once a week, while in Europe, only 21 percent of respondents have that many users. Likewise, 64 percent of U.S. respondents said that machine data is extremely important to their company's ability to meet its goals, with 54 percent of European respondents saying the same. When asked if machine data tools are deployed on-premises, only 48 percent of European respondents responded affirmatively, compared to 74 percent of U.S. respondents.

The gap might be explained by the idea that U.S. businesses are more likely to have a software-centric mindset. According to the data, 64 percent of U.S. respondents said most of their company had software-centric mindsets, while only 40 percent of European respondents said the same. Software-centric businesses are more likely to recognize that machine data can deliver critical insights, from both an operational and business perspective, as they are more likely to integrate their business intelligence and machine data analytics tools. Software-centric companies are also more likely to say that a wide variety of users, including heads of IT, heads of security, line-of-business users, product managers and C-level executives, recognize the business value of machine data.

Europeans Lead US in Using Machine Data for Security

At 63 percent, European companies lead the U.S. in recognising the benefit of machine data analytics in security use cases. Given strict data privacy regulations in Europe, including the new European Union (EU) General Data Protection Regulation (GDPR), it only seems natural that security is a significant driver for machine data tools in the region.

Business Insight Recognized by Europeans as Valuable

Beyond security, the other top use cases cited for machine data in Europe are monitoring (55 percent), troubleshooting (48 percent) and business insight (48 percent). This means Europeans are clearly recognizing the value of machine data analytics beyond the typical security, monitoring and troubleshooting use cases — they're using it as a strategic tool to move the business forward.
When IT operations teams have better insight into business performance, they are better equipped to prioritize incident response and improve their ability to support business goals.

A Wide Array of European Employees in Different Roles Use Machine Data Analytics

The data further show that, in addition to IT operations teams, a wide array of employees in other roles commonly use machine data analytics. Security analysts, product managers and data analysts — some of whom may serve lines of business or senior executives — all appeared at the top of the list of roles using machine data analytics tools. The finding emphasizes that companies recognize the many ways that machine data can drive intelligence across the business.

Customer Experience and Product Development Seen as Most Beneficial to Europeans

Although security emerged as an important priority for users of machine data, improved customer experience and more efficient product development emerged as the top benefits of machine data analytics tools. Businesses are discovering that the machine data analytics tools they use to improve their security posture can also drive value in other areas, including better end-user experiences, more efficient and smarter product development, optimized cloud and infrastructure spending, and improved sales and marketing performance.

Barriers Preventing Wider Usage of Machine Data

The report also provided insight into the barriers preventing wider usage of machine data analytics. The number one capability that users said was lacking in their existing tools was real-time access to data (37 percent), followed by fast, ad hoc querying (34 percent). Another notable barrier to broader usage is the lack of capabilities to effectively manage different machine data analytics tools. European respondents also stated that the adoption of modern technologies makes it harder to get the data they need for speedy decision-making (47 percent). Whilst moving to microservices and container-based architectures like Docker makes it easier to deploy at scale, it seems hard to effectively monitor activities over time without the right approach to logs and metrics in place.

In Conclusion

European companies are adopting modern tools and technologies at a slower rate than their U.S. counterparts, and fewer of them currently have a 'software-led' mindset in place. Software-centric businesses are doing more than their less advanced counterparts to make the most of the intelligence available to them in machine data analytics tools. However, a desire for more continuous insights derived from machine data is there: the data show that once European organisations start using machine data analytics to gain visibility into their security operations, they start to see the value for other use cases across operations, development and the business. The combination of customer experience and compliance with security represents strong value for European users of machine data analytics tools. Users want their machine data tools to drive even more insight into the customer experience, which is increasingly important to many businesses, and at the same time help ensure compliance.

Additional Resources

Download the full 451 Research report for more insights
Check out the Sumo Logic DockerCon Europe press release
Download the Paf customer case study
Read the European GDPR competitive edge blog
Sign up for a free trial of Sumo Logic

Blog

Announcing Extended AWS App Support at re:Invent for Security and Operations

Blog

Complete Visibility of Amazon Aurora Databases with Sumo Logic

Sumo Logic provides digital businesses a powerful and complete view of modern applications and cloud infrastructures such as AWS. Today, we're pleased to announce complete visibility into the performance, health and user activity of the leading Amazon Aurora database via two new applications – the Sumo Logic MySQL ULM application and the Sumo Logic PostgreSQL ULM application. Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database available on the AWS RDS platform. Amazon Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. By providing complete visibility across your Amazon Aurora databases with these two applications, Sumo Logic delivers the following benefits via advanced visualizations:

Optimize your databases by understanding query performance, bottlenecks and system utilization
Detect and troubleshoot problems by identifying new errors, failed connections, database activity, warnings and system events
Monitor user activity by detecting unusual logins, failed events and geo-locations

In the following sections of this blog post, we discuss how these applications provide value to customers.

Amazon Aurora Logs and Metrics Sources

Amazon provides a rich set of log and metric sources for monitoring and managing Aurora databases. The Sumo Logic Aurora MySQL ULM app works on the following three log types:

AWS CloudTrail event logs
AWS CloudWatch metrics
AWS CloudWatch logs

For Aurora MySQL databases, error logs are enabled by default to be pushed to CloudWatch. Aurora MySQL also supports pushing slow query logs, audit logs, and general logs to CloudWatch; however, you need to enable this feature separately. The Sumo Logic Aurora PostgreSQL ULM app works on the following log types:

AWS CloudTrail event logs
AWS CloudWatch metrics

For more details on setting up logs, please check the documentation for the Amazon Aurora PostgreSQL app and the Amazon Aurora MySQL app.

Installing the Apps for Amazon Aurora

Analyzing each of the above logs in isolation to debug a problem or understand how your database environments are performing can be a daunting and time-consuming task. With the two new Sumo applications, you can instantly get complete visibility into all aspects of running your Aurora databases. Once you have configured your log sources, the Sumo Logic apps can be installed. Navigate to the Apps Catalog in your Sumo Logic instance and add the "Aurora MySQL ULM" or "Aurora PostgreSQL ULM" apps to your library after providing references to the sources configured in the previous step.

Optimizing Database Performance

As part of running today's digital businesses, customer experience is a key outcome, and toward that end, closely monitoring the health of your databases is critical. The following dashboards provide an instant view of how your Amazon Aurora MySQL and PostgreSQL databases are performing across various important metrics. Using the queries from these dashboards, you can build scheduled searches and real-time alerts to quickly detect common performance problems. The Aurora MySQL ULM Logs – Slow Query Dashboard allows you to view log details on slow queries, including the number of slow queries, trends, execution times, time comparisons, command types, users, and IP addresses. The Aurora MySQL ULM Metric – Resource Utilization Monitoring dashboard allows you to view analysis of resource utilization, including usage, latency, active and blocked transactions, and login failures.
The Aurora PostgreSQL ULM Metric – Latency, Throughput, and IOPS Monitoring Dashboard allows you to view granular details of database latency, throughput, IOPS and disk queue depth. It is important to monitor the performance of database queries, and latency and throughput are the key performance metrics.

Detect and Troubleshoot Errors

To provide the best service to your customers, you need to take care of issues quickly and minimize impact on your users. Database errors can be hard to detect and sometimes surface only after users report application errors. The following set of dashboards helps quickly surface unusual or new activity across your AWS Aurora databases. The Aurora MySQL ULM Logs – Error Logs Analysis Dashboard allows you to view details for error logs, including failed authentications, error outliers, top and recent warnings, log levels, and aborted connections.

Monitor User Activity

With cloud environments, it's becoming even more critical to investigate user behavior patterns and make sure your database is being accessed by the right staff. The following set of dashboards tracks all user and database activity and can help prioritize and identify patterns of unusual behavior for security and compliance monitoring. The Aurora MySQL ULM Logs – Audit Log Analysis Dashboard allows you to view an analysis of events, including accessed resources, destination and source addresses, timestamps, and user login information. These logs are specifically enabled to audit activities that are of interest from an audit and compliance perspective. The Aurora MySQL Logs – Audit Log SQL Statements Dashboard allows you to view details for SQL statement events, including top SQL commands and statements, trends, user management, and activity for various types of SQL statements. You can drill deeper into the various SQL statements and commands executed by clicking on the "Top SQL Commands" panel in the dashboard. This will open up the Aurora MySQL ULM – Logs – Audit Log SQL Statements dashboard, which will help with identifying trends, specific executions, user management activities performed and dropped objects. The Aurora PostgreSQL ULM CloudTrail Event – Overview Dashboard allows you to view details for event logs, including geographical locations, trends, successful and failed events, user activity, and error codes. In case you need to drill down for details, the CloudTrail Event – Details dashboard will help you monitor the most recent changes made to resources in your Aurora database ecosystem, including creation, modification, deletion and reboot of Aurora clusters and instances.

Get Started Now!

The Sumo Logic apps for Amazon Aurora help you optimize, troubleshoot and secure your AWS Aurora database environments. To get started, check out the Sumo Logic MySQL ULM application and the Sumo Logic PostgreSQL ULM application. If you don't yet have a Sumo Logic account, you can sign up for a free trial today. For more great DevOps-focused reads, check out the Sumo Logic blog.

November 27, 2018

Blog

The Latest Trends for Modern Apps Built on AWS

Blog

Comparing a Multi-Tenant SaaS Solution vs. Single Tenant

Blog

An Organized Workflow for Prototyping

In the world of agile there's a demand to solve grey areas throughout the design process at lightning speed. Prototypes help the scrum team test ideas and refine them. Without prototypes, we can't test ideas until the feature or product has been built, which can be a recipe for disaster. It's like running a marathon without training. During a two-week sprint, designers often need to quickly turn around prototypes in order to test. It can be hectic to juggle meetings, design and prototyping without a little planning. The guiding principles below, inspired by my time working with one of our lead product designers at Sumo Logic — Rebecca Sorensen, will help you build prototypes more effectively for usability testing under a time crunch.

Define the Scope

From the start, it's essential that we understand who the audience is and what the goal of the prototype is, so we can determine other parts of the prototyping process like content, fidelity level and tools for the job. We can easily find out the intent by asking the stakeholder what he or she wants to learn. By defining the scope from the beginning we are able to prioritize our time more effectively throughout the prototyping process and tailor the content for the audience. For testing, our audience is usually internal users or customers. The scrum team wants to know if the customer can complete a task successfully with the proposed design. Or they may also want to validate a flow to determine design direction. If we're testing internally, we have more flexibility showing a low- or mid-fidelity prototype. However, when testing with customers, sometimes we have to consider more polished prototypes with real data.

Set Expectations

There was a time when designers made last-minute changes to the prototype — sometimes while the prototype was being tested, because a stakeholder added late feedback — which impacted the outcome and did not leave enough time for the researcher to understand the changes. Before jumping into details, we create milestones to set delivery expectations. This helps the scrum team understand when to give feedback on the prototype and when the research team will receive the final prototype for testing. This timeline is an estimate and it might vary depending on the level of fidelity. We constantly experiment until we find our sweet spot. The best way to get started is to start from a desired end state, like debriefing the researcher on the final prototype, and work backward. The draft of the prototype doesn't have to be completely finished and polished. It just needs some structure so we can get it in front of the team for feedback. Sometimes, we don't have to add all the feedback. Instead, we sift through the feedback and choose what makes sense given the time constraints.

Tailor the Content for your Audience

Content is critical to the success of a prototype. The level of detail we need in the prototype depends on the phase of our design process.

Discovery

In the exploration phase we are figuring out what we are building and why, so at this point the content tends to be more abstract. We're trying to understand the problem space and our users, so we shouldn't be laser-focused on details; only structure and navigation matter. Abstraction allows us to have a more open conversation with users that's not solution focused. Sometimes we choose metaphors that allow us to be on the same playing field as our users to deconstruct their world more easily.
We present this in the form of manipulatives — small cut-outs of UI or empty UI elements the customer can draw on during a quick participatory design session. Cutting and preparing manipulatives is also a fun team activity.

Delivery

When we move into the delivery phase of design, where our focus is on how we are building the product, content needs to reflect the customer's world. We partner closely with our Product Manager to structure a script. Context in the form of relevant copy, charts, data and labels helps support the script and the various paths the user can take when interacting with the prototype. Small details like the data ink and data points, along with the correct labels, help us make the prototype more realistic so the user doesn't feel he's stepping into an unfamiliar environment. Even though a prototype is still an experiment, using real data gives us a preview of design challenges like truncation or readability. We are lucky to have real data from our product. CSVJSON helps us convert files into JSON format so we can use the data with chart plugins and CRAFT.

Collaborate to Refine

Prototyping is fun and playful — so much so that it can be easy to forget that there are also other people who are part of the process. Prototyping is also a social way to develop ideas with non-designers, so when choosing which tool to present our prototype in, we need to keep in mind collaboration outside the design team, not just the final output. We use InVision to quickly convey flows along with annotations, but it has a downside during this collaborative process. Annotations can leave room for interpretation since every stakeholder has his own vocabulary. Recently, a couple of our engineers in Poland started using UXPin. At first it was used to sell their ideas, but for usability testing it has also become a common area where we can work off each other's prototypes. They like the ability to duplicate prototypes and reshuffle screens so the prototypes can be updated quickly without having to write another long document of explanations. By iterating together we are able to create a common visual representation and move fast. UXPin came to the rescue when collaborating with cross-regional teams. It's an intuitive tool for non-designers that allows them to duplicate the prototype and make their own playground too. Tools will continue to change, so it's important to have an open mindset and be flexible about learning and making judgments about when to switch tools to deliver the prototype on time to research.

Architect Smartly

Although we are on a time crunch when prototyping for research, we can find time to experiment by adjusting the way we build our prototype.

Make a playground

Our lead product designer Rohan Singh invented the hamster playground to go wild with design explorations. The hamster playground is an experimental space which comes in handy when we need to quickly whip something up without messing up the rest of the design. It started as a separate page in our Sketch files, and now it is also present in our prototyping workspace. When we design something in high fidelity, we become attached to the idea right away. This can cripple experimentation. We need that sacred space, detached from the main prototype, that allows us to experiment with animations or dynamic elements. The hamster playground can also be a portable whiteboard or pen and paper.

Embrace libraries

Libraries accelerate the prototyping process exponentially!
For the tool you're commonly using to prototype, invest some time (hackathons or end of quarter) to create a pattern library of the most common interactions (this is not a static UI kit). If the prototype we're building has some of those common elements, we will save them into the library so other team members can reuse them on another project. Building an interactive library is time-consuming, but it pays off because it allows the team to easily drag, drop and combine elements like Legos.

Consolidate the flow

We try to remove non-essential items from the prototype and replace them with screenshots, or turn them into loops, so we can focus only on the area that matters for testing. Consolidation also forces us not to overwhelm the prototype with many artboards; otherwise we risk having clunky interactions during testing. The other advantage of consolidating is that you can easily map out interactions by triggers, states and animations/transitions.

Prepare Researchers for Success

Our job is not done until research, our partners, understand what we built. As a best practice, set up some time with the researcher to review the prototype. Present the limitations, discrepancies in different browsers and devices, and any other instructions that are critical for the success of the testing session. A short guide that outlines the different paths, with screenshots of what the successful interactions look like, can aid researchers a lot when they are writing the testing script.

Ready, Set…Prototype!

Just like marathoners, who intuitively know when to move fast, adjust and change direction, great prototypers work from principles to guide their process. Throughout the design process the scrum team constantly needs answers to many questions. By becoming an effective prototyper, not the master of x tool, you can help the team find the answers right away. The principles outlined above will guide your process so you are more aware of how you spend your time and know when you're prototyping too much, too little or the wrong thing. Organization doesn't kill experimentation; it makes more time for playfulness and solving the big grey areas.

This post originally appeared on Medium. Check it out here.

Additional Resources

Check out this great article to learn how our customers influence the Sumo Logic product and how UX research is key to improving overall experiences
Curious to know how I ended up at Sumo Logic doing product design/user experience? I share my journey in this employee spotlight article.
Love video games and data? Then you'll love this article from one of our DevOps engineers on how we created our own game (Sumo Smash bros) to demonstrate the power of machine data

Blog

Understanding Transaction Behavior with Slick + MySQL InnoDB

MySQL has always been among the top few database management systems used worldwide, according to DB-Engines, one of the leading ranking websites. And thanks to the large open source community behind MySQL, it also solves a wide variety of use cases. In this blog post, we are going to focus on how to achieve transactional behavior with MySQL and Slick. We will also discuss how these transactions resulted in one of our production outages.

But before going any further into the details, let's first define what a database transaction is. In the context of relational databases, a sequence of operations that satisfies some common properties is known as a transaction. This common set of properties, which determines the behavior of these transactions, is referred to as the atomic, consistent, isolated and durable (ACID) properties. These properties are intended to guarantee the validity of the underlying data in case of power failure, errors, or other real-world problems. The ACID model describes the basic supporting principles one should think about before designing database transactions. All of these principles are important for any mission-critical application.

One of the most popular storage engines used in MySQL is InnoDB, whereas Slick is the modern database query and access library for Scala. Slick exposes the underlying data stored in these databases as Scala collections, so that data stored in them is seamlessly available. Database transactions come with their own overhead, especially when we have long-running queries wrapped in a transaction. Let's understand the transaction behavior we get with Slick. Slick offers ways to execute transactions on MySQL:

// Read the matching coffee names, then delete each of them,
// all within a single database transaction.
val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _  <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

These transactions are executed with the help of the auto-commit feature provided by the InnoDB engine. We will go into this auto-commit feature later in this article, but first, let me tell you about an outage that happened on our production services at Sumo Logic. For the rest of the article, I will be talking about this minor outage, which happened due to a lack of understanding of this transaction behavior.

Whenever any user fires a query, the query follows this course of action before getting started:

Query metadata (i.e., user and customerID) is first sent to Service A.
Service A asks a common Amazon MySQL RDS instance for the number of concurrent sessions for this user running across all the instances of Service A.
If the number is greater than some threshold, we throttle the request and send a 429 to the user. Otherwise, we just add the metadata of the session to the table stored in RDS.

All of these actions are executed within the scope of a single Slick transaction. Recently we started receiving lots of lock wait timeouts on Service A. On debugging further, we saw that from the time we started getting lots of lock wait timeouts, there was also an increase in the average CPU usage across the Service A cluster.
Looking into some of these particular lock wait timeout issues, we noticed that whenever an instance in the cluster went through full GC cycles, we saw a higher number of lock wait timeouts across the cluster. Interestingly enough, these lock wait timeouts occurred all across the cluster and were not isolated to the single instance that suffered from the full GC cycles. Based on that, we knew that full GC cycles on one of the nodes were somehow responsible for causing those lock wait timeouts across the cluster. As already mentioned above, we used the transaction feature provided by Slick to execute all of the actions as a single command. So the next logical step was to dig deeper into the question: how does Slick implement these transactions?

We found out that Slick uses the InnoDB auto-commit feature to execute transactions. With auto-commit disabled, the transaction is kept open until it is committed from the client side, which essentially means that the connection executing the current transaction holds all the locks until the transaction is committed.

Auto-Commit Documentation from the InnoDB Manual

In InnoDB, all user activity occurs inside a transaction. If auto-commit mode is enabled, each SQL statement forms a single transaction on its own. By default, MySQL starts the session for each new connection with auto-commit enabled, so MySQL does a commit after each SQL statement if that statement did not return an error. If a statement returns an error, the commit or rollback behavior depends on the error. See Section 14.21.4, "InnoDB Error Handling". A session that has auto-commit enabled can perform a multiple-statement transaction by starting it with an explicit START TRANSACTION or BEGIN statement and ending it with a COMMIT or ROLLBACK statement. See Section 13.3.1, "START TRANSACTION, COMMIT, and ROLLBACK Syntax". If auto-commit mode is disabled within a session with SET auto-commit = 0, the session always has a transaction open. A COMMIT or ROLLBACK statement ends the current transaction and a new one starts.

Pay attention to the last sentence above. It means that if auto-commit is disabled, the transaction stays open, and therefore all the locks are still held by that transaction. All the locks, in this case, will be released only when we explicitly COMMIT the transaction. So in our case, our inability to execute the remaining commands within the transaction due to a long GC pause meant that we were still holding onto the locks on the table, which in turn meant that other JVMs executing transactions touching the same table (which is, in fact, the case) would also suffer from high latencies. But we needed to be sure that this was what was happening in our production environments. So we went ahead and reproduced the production issue on a local testbed, making sure that locks were still held by the transaction on the node undergoing high GC cycles.

Steps to Reproduce the High DB Latencies on One JVM Due to GC Pauses on Another JVM

Step One

We needed some way to know when the queries in the transactions were actually getting executed by the MySQL server:

mysql> SET global general_log = 1;
mysql> SET global log_output = 'table';
mysql> SELECT * from mysql.general_log;

The MySQL general log shows the recent queries that were executed by the server.

Step Two

We needed two different transactions to execute at the same time in different JVMs to understand this lock wait timeout.
Transaction One:

// Count the user's sessions and conditionally insert a new one,
// wrapped in a single Slick transaction.
val query = (for {
  ns <- userSessions.filter(_.email.startsWith(name)).length.result
  _  <- {
    println(ns)
    if (ns > n) DBIOAction.seq(userSessions += userSession)
    else DBIOAction.successful(())
  }
} yield ()).transactionally
db.run(query)

Transaction Two:

// Delete the user's session, if present.
db.run(userSessions.filter(_.id === id).delete)

Step Three

Now we needed to simulate long GC pauses, or pauses in one of the JVMs, to mimic the production environment. While mimicking those long pauses, we monitored the mysql.general_log table to find out when each command reached the MySQL server for execution. The chart below depicts the order in which the SQL statements were executed on both JVMs:

JVM 1 (adding the session of the user):
SET auto-commit = 0 (as in false)
SELECT count(*) FROM USERS where User_id = "temp" (locks acquired)
INSERT INTO USERS user_session
Introduced high latency on the client side for 40 seconds
COMMIT

JVM 2 (deleting the session of the user if present):
SET auto-commit = 0
DELETE FROM USERS where sessionID = "121" (started)
DELETE operation is blocked, waiting on the lock held by JVM 1
DELETE FROM USERS where sessionID = "121" (completed, after JVM 1 commits)
COMMIT

In the image below, you can see the SQL statements getting executed on both JVMs. This image shows a lock wait time of around 40 seconds on JVM 2 for the DELETE SQL command. We can clearly see from the logs how pauses in one JVM cause high latencies across the different JVMs querying the MySQL server.

Handling Such Scenarios with MySQL

More often than not, we need to handle scenarios where we execute MySQL transactions across JVMs. So how can we achieve low MySQL transaction latencies even when one of the JVMs pauses? Here are some solutions:

Using Stored Procedures

With stored procedures, we could easily extract this throttling logic into a function call and store it as a function on the MySQL server. Stored procedures can be called by clients with appropriate arguments and are executed all at once on the server side, without being affected by client-side pauses. Along with the use of transactions in the procedures, we can ensure that they are executed atomically and that the results are consistent for the entire duration of the transaction.

Delimit Multiple Queries

With this, we can create transactions on the client side and execute them atomically on the server side without being affected by the pauses. Note: You will need to enable allowMultiQueries=true, because this flag allows batching multiple queries together into a single query, and hence you will be able to run a transaction as a single query.

Better Indexes on the Table

With better indexes, we can ensure that SELECT statements with a WHERE condition touch minimal rows and hence take minimal row locks. Suppose we don't have any index on the table; in that case, any SELECT statement needs to take a shared row lock on all the rows of the table, which means that during the execution phase of this transaction all deletes or updates would be blocked. So it's generally advised that the WHERE condition in a SELECT be on an indexed column.

Lower Isolation Levels for Executing Transactions

With the READ UNCOMMITTED isolation level, we can read rows that have not yet been committed (a short Slick sketch of this appears at the end of this post).

Additional Resources

Want more articles like this? Check out the Sumo Logic blog for more technical content!
Read this blog to learn how to triage test failures in a continuous delivery lifecycle
Check out this article for some clear-cut strategies on how to manage long-running API queries using RxJS
Visit the Sumo Logic App for MySQL page to learn about cloud-native monitoring for MySQL
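As a rough illustration of the lower-isolation-level suggestion above, Slick 3 allows an isolation level to be attached to a transactional action via withTransactionIsolation. The sketch below is hypothetical: the table definition, names and threshold are illustrative stand-ins for the user-session table discussed in this post, not the production code.

object IsolationSketch {
  import slick.jdbc.MySQLProfile.api._
  import slick.jdbc.TransactionIsolation
  import scala.concurrent.ExecutionContext.Implicits.global

  // Minimal table definition mirroring the user-session table described above.
  final case class UserSession(id: Long, email: String)

  final class UserSessions(tag: Tag) extends Table[UserSession](tag, "user_sessions") {
    def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def email = column[String]("email")
    def *     = (id, email) <> (UserSession.tupled, UserSession.unapply)
  }

  val userSessions = TableQuery[UserSessions]

  // Same shape as Transaction One above, but run under READ UNCOMMITTED so the
  // counting SELECT does not block on uncommitted rows held by another JVM.
  def addSessionAction(name: String, n: Int, session: UserSession) =
    (for {
      ns <- userSessions.filter(_.email.startsWith(name)).length.result
      _  <- if (ns > n) DBIO.seq(userSessions += session)
            else DBIO.successful(())
    } yield ()).transactionally
      .withTransactionIsolation(TransactionIsolation.ReadUncommitted)

  // Usage, assuming a configured Database instance:
  // val db = Database.forConfig("sessionsDb")
  // db.run(addSessionAction("temp", 10, UserSession(0L, "temp@example.com")))
}

Whether a weaker isolation level is acceptable depends on the consistency guarantees the application needs; for strict throttling semantics, the stored-procedure or delimited-query approaches are the safer choices.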

Blog

Exploring Nordcloud’s Promise to Deliver 100 Percent Alert-Based Security Operations to Customers

Blog

Strategies for Managing Long-running API Calls with RxJS

Blog

Near Real-Time Log Collection From Amazon S3 Storage

Blog

SnapSecChat: Sumo Logic CSO Recaps HackerOne's Conference, Security@

Blog

Illuminate 2018 Product Update Q&A with Sumo Logic CEO Ramin Sayar

Blog

How to Triage Test Failures in a Continuous Delivery Lifecycle

Blog

Gain Visibility into Your Puppet Deployments with the New Sumo Logic Puppet App

Puppet is a software configuration management and deployment tool that is available both as an open source tool and as commercial software. It's most commonly used on Linux and Windows to pull the strings on multiple application servers at once. It includes its own declarative language to describe system configurations. In today's cloud environments, which consist of hundreds of distributed machines, Puppet can help reduce development time and resources by automatically applying these configurations. Just like with any other DevOps tool, there can be errors and configuration issues. However, with the new Sumo Logic Puppet integration and application, customers can now leverage the Sumo Logic platform to help monitor Puppet performance, configurations and errors.

Puppet Architecture and Logging

Puppet can apply required configurations across new and existing servers or nodes. You can configure systems with Puppet either in a client-server architecture or in a stand-alone architecture. The client-server architecture is the most commonly used architecture for Puppet implementations. Puppet agents apply the required changes and send reports to the Puppet master describing the run and the details of the client resources. These reports can help answer questions like "how often are the resources modified," "how many events were successful in the past day" and "what was the status of the most recent run?" In addition to reports, Puppet also generates an extensive set of log files. From a reporting and monitoring perspective, the two log files of interest are the Puppet server logs and the HTTP request logs. Puppet server messages and errors are logged to the file /var/log/puppetlabs/puppetserver/puppetserver.log. This logging can be configured using the /etc/puppetlabs/puppetserver/logback.xml file and can be used to monitor the health of the server. The /var/log/puppetlabs/puppetserver/puppetserver-access.log file contains the HTTP traffic being routed via your Puppet deployment. This logging can be configured using the file /etc/puppetlabs/puppetserver/request-logging.xml. Puppet agent requests to the master are logged into this file.

Sumo Logic Puppet App

The Sumo Logic Puppet app is designed to effectively manage and monitor Puppet metrics, events and errors across your deployments. With Sumo Logic dashboards you will be able to easily identify:

Unique nodes
Puppet node run activity
Service times
Catalog application times
Puppet execution times
Resource transitions (failures, out-of-sync, modifications, etc.)
Error rates and causes

Installation

In order to get started, the app requires three data sources:

Puppet server logs
Puppet access logs
Puppet reports

The Puppet server logs and Puppet access logs are present in the directory /var/log/puppetlabs/puppetserver/. Configure separate local file sources for both of these log files. Puppet reports are generated as YAML files. These need to be converted into JSON before being ingested into Sumo Logic; to ingest Puppet reports, you must configure a script source (a minimal conversion sketch appears at the end of this post). Once the log sources are configured, the Sumo Logic app can be installed. Simply navigate to the Apps Catalog in your Sumo Logic instance and add the Puppet app to the library after providing the sources configured in the previous step. For more details on app configuration, please see the instructions on Sumo Logic's DocHub.

Sumo Logic Puppet App Visualizations

In any given Puppet deployment, there can be a large number of nodes. Some of the nodes may be faulty, while others may be very active.
The Puppet server manages the nodes, and it may be suffering from issues itself. The Sumo Logic Puppet app consists of predefined dashboards and search queries that help you monitor the Puppet infrastructure. The Puppet Overview dashboard shown below gives you an overview of activity across nodes and servers. If a Puppet node is failing, you can quickly find out when the node made requests, what version it is running on and how much time the server is taking to prepare the catalog for the node.

Puppet Overview Dashboard

Let's take a closer look at the Error Rate panel. The Error Rate panel displays the error rates per hour. This helps identify when error rates spiked, and by clicking on the panel, you can identify the root cause at either the node level or the server level via the Puppet Error Analysis dashboard. In addition, this dashboard highlights the most erroneous nodes along with the most recent errors and warnings. With this information, it will be easier to drill down into the root cause of the issues. The Top Erroneous Nodes panel helps in identifying the most unstable nodes. Drill down to view the search query by clicking on the "Show in Search" icon highlighted in the above screenshot. The node name and the errors can be easily identified, and corrective actions can be performed by reviewing the messages in the search results as shown in the screenshot below.

With the help of the information on the Puppet – Node Puppet Run Analysis dashboard, node health can be easily determined across different deployments such as production and pre-production. The "Slowest Nodes by Catalog Application Time" panel helps you determine the slowest nodes, which can potentially be indicative of problems and issues within those nodes. From there, you can reference the Puppet Error Analysis dashboard to determine the root cause. The "Resource Status" panel helps you quickly determine the status of various resources, further details about which can be obtained by drilling down to the query behind it. By reviewing the panels on this dashboard, the highest failing or most out-of-sync resources can be easily determined, which may be indicative of problems on the respective nodes. To compare the average catalog application times, take a look at the "Average Catalog Application Time" and "Slowest Nodes by Catalog Application Time" panels. The resources panels show resources that failed, were modified, are out of sync or were skipped. Drilling down to the queries behind these panels will help in determining the exact list of resources with the selected status. Note: All the panels in the Puppet – Node Puppet Run Analysis dashboard and some panels of the Puppet Overview dashboard can be filtered based on the environment, such as production, pre-production, etc., as shown below.

Get Started Now!

The Sumo Logic app for Puppet monitors your entire Puppet infrastructure, potentially spanning hundreds of nodes, and helps determine the right corrective and preventative actions. To get started, check out the Sumo Logic Puppet app help doc. If you don't yet have a Sumo Logic account, you can sign up for a free trial today. For more great DevOps-focused reads, check out the Sumo Logic blog.
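As a rough illustration of the report-conversion step mentioned above, the sketch below reads a YAML file and re-emits it as JSON using Jackson's YAML data format module. It is a generic pass-through written for this post, not Sumo Logic's official script source; the file path is a placeholder, and note that real Puppet report files carry Ruby-specific type tags (for example !ruby/object:...) that may need to be stripped before generic YAML tooling will accept them.

import java.io.File
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory

// Illustrative YAML-to-JSON pass-through; path and usage are hypothetical.
object PuppetReportToJson extends App {
  // Parse the YAML report into a generic tree...
  val yamlMapper = new ObjectMapper(new YAMLFactory())
  val report     = yamlMapper.readTree(new File("/path/to/puppet/report.yaml"))

  // ...and print it back out as JSON, ready to be picked up by a script source.
  val jsonMapper = new ObjectMapper()
  println(jsonMapper.writerWithDefaultPrettyPrinter().writeValueAsString(report))
}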

Blog

Pokemon Co. International and Sumo Logic's Joint Journey to Build a Modern Day SOC

The world is changing. The way we do business, the way we communicate, and the way we secure the enterprise are all vastly different today than they were 20 years ago. This natural evolution of technology innovation is powered by the cloud, which has not only freed teams from on-premises security infrastructure, but has also provided them with the resources and agility needed to automate mundane tasks. The reality is that we have to automate in the enterprise if we are to remain relevant in an increasingly competitive digital world. Automation and security are a natural pairing, and when we think about the broader cybersecurity skills gap, we really should be thinking about how we can replace simple tasks through automation to make way for teams and security practitioners to be more innovative, focused and strategic.

A Dynamic Duo

That's why Sumo Logic and our partner, The Pokemon Co. International, are all in on bringing together the tech and security innovations of today and using those tools and techniques to completely redefine how we do security operations, starting with creating a new model for how a security operations center (SOC) should be structured and how it should function. So how exactly are we teaming up to build a modern day SOC, and what does it look like in terms of techniques, talent and tooling? We'll get into that, and more, in this blog post.

Three Pillars of the Modern Day SOC

Adopt Military InfoSec Techniques

The first pillar is all about mindset and adopting a new level of rigor and way of thinking for security. Both the Sumo Logic and Pokemon security teams are built on the backbone of a military technique called the OODA loop, originally coined by John Boyd, a U.S. Air Force fighter pilot and Pentagon consultant of the late twentieth century. Boyd created the OODA loop to implement a change in military doctrine focused on an air-to-air combat model. OODA stands for observe, orient, decide and act, and Boyd's thinking was that if you followed this model and ensured that your OODA loop was faster than your adversary's, then you'd win the conflict.

Applying that to today's modern security operations, all of the decisions made by your security leadership — whether it's around the people, process or tools you're using — should be aimed at reducing your OODA loop to a point where, when a situation happens, or when you're preparing for a situation, you can easily follow the protocol to observe the behavior, orient yourself, make effective and efficient decisions, and then act upon those decisions. Sound familiar? This approach is almost identical to most current incident response and security protocols, because we live in an environment where every six, 12 or 24 months we're seeing tactics and techniques change. That's why the SOC of the future is going to depend on a security team's ability to break down barriers and abandon older schools of thought for faster decision-making models like the OODA loop. This model is also applicable across an organization to encourage teams to be more efficient and collaborative cross-departmentally, and to move faster and with greater confidence in order to achieve mutually beneficial business goals.

Build and Maintain an Agile Team

But it's not enough to have the right processes in place. You also need the right people, collectively and transparently working towards the same shared goal.
Historically, security has been full of naysayers, but it's time to shift our mindset to one of transparency and enablement, where security teams are plugged into other departments and are able to move forward with their programs as quickly and as securely as they can without creating bottlenecks. This dotted-line approach is how Pokemon operates, and it's allowed the security team to share information horizontally, which empowers development, operations, finance and other cross-functional teams to also move forward in true DevSecOps spirit. One of the main reasons why this new and modern Sumo Logic security team structure has been successful is that it enables each function — data protection/privacy, SOC, DevSecOps and federal — to work in unison, not only with each other, but also cross-departmentally.

In addition to knowing how to structure your security team, you also need to know what to look for when recruiting new talent. Here are three tips from Pokemon's Director of Information Security and Data Protection Officer, John Visneski:

Go Against the Grain. Unfortunately, there are no purple security unicorns out there. Instead of trying to find the "ideal" security professional, go against the grain. Find people with the attitude and aptitude to succeed, regardless of direct security experience. The threat environment is changing rapidly, and burnout can happen fast, which is why it's more important to have someone on your team with those two qualities. Why? No one can know everything about security, and sometimes you have to adapt and throw old rules and mindsets out the window.

Prioritize an Operational Mindset. QA and test engineers are good at automation and at finding gaps in seams, skills that are very applicable to security. One of Pokemon's best security engineers didn't know a thing about security before joining, but he had a valuable skill set. Find talent pools that know how the sausage is made. The best and brightest security professionals often didn't start out in security; their value-add is that they are problem solvers first, and security pros second.

Think Transparency. The goal is to get your security team to a point where they're sharing information at a rapid enough pace and integrating themselves with the rest of the business. This allows core functions to help solve each other's problems and share use cases, and it can only be successful if you create a culture that is open and transparent.

The bottom line: Don't be afraid to think outside of the box when it comes to recruiting talent. It's more important to build a team based on want, desire and rigor, which is why bringing in folks with military experience has been vital to both Sumo Logic's and Pokemon's security strategies. Security skills can be learned. What delivers real value to a company are people that have a desire to be there, a thirst for knowledge and the capability to execute on the job.

Build a Modern Day Security Stack

Now that you have your process and your people, you need your third pillar — tool sets. This is the Sumo Logic reference architecture that empowers us to be more secure and agile. You'll notice that all of these providers are either born in the cloud or are open source. The Sumo Logic platform is at the core of this stack, but it's these partnerships and tools that enable us to deliver our cloud-native machine data analytics as a service, and provide SIEM capabilities that easily prioritize and correlate sophisticated security threats in the most flexible way possible for our customers.
We want to grow and transform with our customers' modern application stacks and cloud architectures as they digitally transform. Pokemon has a very similar approach to its security stack: the driving force behind Pokemon's modern toolset is the move away from the old-school customer mentality of presenting a budget and asking for services. The customer-vendor relationship needs to mirror a two-way partnership with mutually invested interests and clear benefits on both sides. Three vendors — AWS, CrowdStrike and Sumo Logic — comprise the core base of the Pokemon security platform, and the remainder of the stack is modular in nature. This plug-and-play model is key as the security and threat environments continue to evolve, because it allows for flexibility in swapping new vendors and tools in and out as they come along. As long as the foundation of the platform is strong, the rest of the stack can evolve to match the current needs of the threat landscape.

Our Ideal Model May Not Be Yours

We've given you a peek inside the security kimono, but it's important to remember that every organization is different, and what works for Pokemon or Sumo Logic may not work for every particular team dynamic. While you can use our respective approaches as a guide to implement your own modern day security operations, the biggest takeaway here is that you find a framework that is appropriate for your organization's goals and that will help you build success and agility within your security team and across the business. The threat landscape is only going to grow more complex, technologies more advanced and attackers more sophisticated. If you truly want to stay ahead of those trends, then you've got to be progressive in how you think about your security stack, teams and operations. Because regardless of whether you're running an on-premises, hybrid or cloud environment, the industry and the business are going to leave you no choice but to adopt a modern application stack, whether you want to or not.

Additional Resources

Learn about Sumo Logic's security analytics capabilities in this short video.
Hear how Sumo Logic has teamed up with HackerOne to take a DevSecOps approach to bug bounties in this SnapSecChat video.
Learn how Pokemon leveraged Sumo Logic to manage its data privacy and GDPR compliance program and improve its security posture.

Blog

The 3 Phases Pitney Bowes Used to Migrate to AWS

Blog

Exploring the Future of MDR and Cloud SIEM with Sumo Logic, eSentire and EMA

At Sumo Logic’s annual user conference, Illuminate, we announced a strategic partnership with eSentire, the largest pure-play managed detection and response (MDR) provider, that will leverage security analytics from the Sumo Logic platform to deliver full spectrum visibility across the organization, eliminating common blind spots that are easily exploited by attackers. Today’s digital organizations operate on a wide range of modern applications, cloud infrastructures and methodologies such as DevSecOps, that accumulate and release massive amounts of data. If that data is managed incorrectly, it could allow malicious threats to slip through the cracks and negatively impact the business. This partnership combines the innovative MDR and cloud-based SIEM technologies from eSentire and Sumo Logic, respectively, that provide customers with improved analytics and actionable intelligence to rapidly detect and investigate machine data to identify potential threats to cloud or hybrid environments and strengthen overall security posture. Watch the video to learn more about this joint effort as well as the broader security, MDR, and cloud SIEM market outlook from Jabari Norton, VP global partner sales & alliances at Sumo Logic, Sean Blenkhorn, field CTO and VP sales engineering & advisory services at eSentire, and Dave Monahan, managing research director at analyst firm, EMA. For more details on the specifics of this partnership, read the joint press release.

Blog

Accelerate Security and PCI Compliance Visibility with New Sumo Logic Apps for Palo Alto Networks

Blog

Artificial Intelligence vs. Machine Learning vs. Deep Learning: What's the Difference?

Blog

Illuminate 2018 Video Q&A with Sumo Logic CEO Ramin Sayar

Blog

Intrinsic vs Meta Tags: What’s the Difference and Why Does it Matter?

Tag-based metrics are typically used by IT operations and DevOps teams to make it easier to design and scale their systems. Tags help you to make sense of metrics by allowing you to filter on things like host, cluster, services, etc. However, knowing which tags to use, and when, can be confusing. For instance, have you ever wondered about the difference between intrinsic tags (or dimensions) and meta tags with respect to custom application metrics? If so, you're not alone. It is pretty common to get the two confused, but don't worry, because this blog post will help explain the difference.

Before We Get Started

Let's start with some background. Metrics in Carbon 2.0 take on the following format:

intrinsic_tags  meta_tags value timestamp

Note that there are two spaces between intrinsic_tags and meta_tags. If a tag is listed before the double space, then it is an intrinsic tag. If a tag is listed after the double space, then it is a meta tag. Meta_tags are also optional. If no meta_tags are provided, there must be two spaces between intrinsic_tags and value. (Illustrative example metric lines appear at the end of this post.)

Understanding Intrinsic Tags

Intrinsic tags may also be referred to as dimensions and are metric identifiers. If you have two data points sent with the same set of dimension values, then they will be values in the same metric time series. Data points sent with different dimensions belong to separate time series.

Understanding Meta Tags

On the other hand, meta tags are not used as metric identifiers. This means that if two data points have the same intrinsic tags or dimensions, but different meta tags, they will still be values in the same metric time series. Meta tags are meant to be used in addition to intrinsic tags so that you can more conveniently select the metrics.

Let's Look at an Example

To make that more clear, let's use another example. Let's say that you have 100 servers in your cluster that are reporting host metrics like "metric=cpu_idle." This would be an intrinsic tag. You may also want to track the version of your code running on that cluster. Now if you put the code version in an intrinsic tag, you'll get a completely new set of metrics every time you upgrade to a new code version. Unless you want to maintain the metrics "history" of the old code version, you probably don't want this behavior. However, if you put the version in a meta tag instead, then you will be able to change the version without creating a new set of metrics for your cluster. To take the example even further, let's say you have upgraded half of your cluster to a new version and want to compare the CPU idle of the old and new code versions. You could do this in Sumo Logic using the query "metric = cpu_idle | avg by version."

Knowing the Difference

To summarize, if you want two values of a given tag to be separate metrics at the same time, then that tag should be an intrinsic tag and not a meta tag. Hopefully this clears up some of the confusion regarding intrinsic versus meta tags. By tagging your metrics appropriately you will make them easier to search and ensure that you are tracking all the metrics you expect. If you already have a Sumo Logic account, then you are ready to start ingesting custom metrics. If you are new to Sumo Logic, start by signing up for a free account here.

Additional Resources

Learn how to accelerate data analytics with Sumo Logic's Logs to Metrics solution in this blog
Want to know how to transform Graphite data into metadata-rich metrics?
Check out our Metrics Rules solution
Read the case study to learn how Paf leveraged the Sumo Logic platform to derive critical insights that enabled them to analyze log and metric data, perform root-cause analysis, and monitor apps and infrastructure
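For reference, here are a couple of illustrative Carbon 2.0-style metric lines, made up for this post (the tag names, values and timestamps are hypothetical). Intrinsic tags appear before the double space, the optional meta tags after it, followed by the value and a Unix timestamp:

metric=cpu_idle host=server-01 cluster=prod  version=2.1.0 97.5 1541462400
metric=cpu_idle host=server-02 cluster=prod  version=2.2.0 88.2 1541462400

Because version sits after the double space, it is a meta tag: each host keeps reporting into the same cpu_idle time series when its code version changes, yet a query such as "metric = cpu_idle | avg by version" can still compare the old and new versions.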

Blog

Why is Oracle and Microsoft SQL Adoption Low for Developers on AWS?

Blog

Why Decluttering Complex Data in Legends is Hard

Blog

5 Best Practices for Using Sumo Logic Notebooks for Data Science

This year, at Sumo Logic’s third annual user conference, Illuminate 2018, we presented Sumo Logic Notebooks as a way to do data science in Sumo Logic. Sumo Logic Notebooks are an experimental feature that integrates Sumo Logic, notebooks and common machine learning frameworks. They are a bold attempt to go beyond what the current Sumo Logic product has to offer and enable a data science workflow leveraging our core platform. Why Notebooks? In the data science world, notebooks have emerged as an important tool for doing data science. Notebooks are active documents that are created by individuals or groups to write and run code, display results, and share outcomes and insights. Like every other story, a data science notebook follows a structure that is typical for its genre. We usually have four parts. We (a) start with defining a data set, (b) continue to clean and prepare the data, (c) perform some modeling using the data, and (d) interpret the results. In essence, a notebook should record an explanation of why experiments were initiated, how they were performed, and then display the results. Anatomy of a Notebook A notebook segments a computation into individual steps called paragraphs. A paragraph contains an input and an output section. Each paragraph executes separately and modifies the global state of the notebook. State can be defined as the ensemble of all relevant variables, memories, and registers. Paragraphs need not contain computations; they can also contain text or visualizations to illustrate the workings of the code. The input section (blue) will contain the instruction to the notebook execution engine (sometimes called a kernel or interpreter). The output section (green) will display a trace of the paragraph’s execution and/or an intermediate result. In addition, the notebook software will expose some controls (purple) for managing and versioning notebook content as well as operational aspects such as starting and stopping executions. Human Speed vs Machine Speed The power of the notebook is rooted in its ability to segment and then slow down computation. Common executions of computer programs are done at machine speed. Machine speed suggests that when a program is submitted to the processor for execution, it will run from start to end as fast as possible and only block for IO or user input. Consequently, the state of the program changes so fast that it is neither observable nor modifiable by humans. Programmers typically attach debuggers, physically or virtually, to stop programs during execution at so-called breakpoints and read out and analyze their state. Thus, they slow down execution to human speed. Notebooks make interrogating the state more explicit. Certain paragraphs are dedicated to making progress in the computation, i.e., advancing the state, whereas other paragraphs simply serve to read out and display the state. Moreover, it is possible to rewind state during execution by overwriting certain variables. It is also simple to kill the current execution, thereby deleting the state and starting anew. Notebooks as an Enabler for Productivity Notebooks increase productivity because they allow for incremental improvement. It is cheap to modify code and rerun only the relevant paragraph. So when developing a notebook, the user builds up state and then iterates on that state until progress is made. Running a stand-alone program, by contrast, incurs more setup time and might be prone to side effects.
A notebook will most likely keep all its state in working memory, whereas every new execution of a stand-alone program needs to build up that state every time it is run. This takes more time, and the required IO operations might fail. Working off program state in memory and iterating on it has proved to be very efficient. This is particularly true for data scientists, as their programs usually deal with a large amount of data that has to be loaded in and out of memory, as well as computations that can be time-consuming. From an organizational point of view, notebooks are a valuable tool for knowledge management. As they are designed to be self-contained, sharable units of knowledge, they lend themselves to: Knowledge transfer Auditing and validation Collaboration Notebooks at Sumo Logic At Sumo Logic, we expose notebooks as an experimental feature to empower users to build custom models and analytics pipelines on top of logs and metrics data sets. The notebooks provide the framework to structure a thought process. This thought process can be aimed at delivering a special kind of insight or outcome. It could be drilling down on a search, or an analysis specific to a vertical or an organization. We provide notebooks to enable users to go beyond what Sumo Logic operators have to offer, and to train and test custom machine learning (ML) algorithms on your data. Inside notebooks we deliver data using data frames as a core data structure. Data frames make it easy to integrate logs and metrics with third-party data. Moreover, we integrate with other leading data wrangling, model management and visualization tools/services to provide a blend of the best technologies to create value with data. Technology Stack Sumo Logic Notebooks are an integration of several software packages that makes it easy to define data sets using the Sumo Query language and use the result data set as a data frame in common machine learning frameworks. Notebooks are delivered as a Docker container and can therefore be installed on laptops or cloud instances without much effort. The most common machine learning libraries, such as Apache Spark, pandas, and TensorFlow, are pre-installed, but others are easy to add through Python’s pip installer, or using apt-get and other package management software from the command line. Changes can be made persistent by committing the Docker image. The key to Sumo Logic Notebooks is the integration of the Sumo Logic API data adapter with Apache Spark. After a query has been submitted, the adapter will load the data and ingest it into Spark. From there we can switch over to a Python/pandas environment or continue with Spark. The notebook software provides the interface to specify data science workflows. Best Practices for Writing Notebooks #1 One notebook, one focus A notebook contains a complete record of procedures, data, and thoughts to pass on to other people. For that purpose, it needs to be focused. Although it is tempting to put everything in one place, this might be confusing for users. It is better to write two or more notebooks than to overload a single notebook. #2 State is explicit A common source of confusion is that program state gets passed on between paragraphs through hidden variables. The set of variables that represents the interface between two subsequent paragraphs should be made explicit. Referencing variables from paragraphs other than the previous one should be avoided. #3 Push code into modules A notebook integrates code; it is not a tool for code development.
That would be an Integrated Development Environment (IDE). Therefore, a notebook should only contain glue code and maybe one core algorithm. All other code should be developed in an IDE, unit tested, version controlled, and then imported via libraries into the notebook. Modularity and all other good software engineering practices are still valid in notebooks. As in practice number one, too much code clutters the notebook and distracts from the original purpose or analysis goal. #4 Use descriptive variable names and tidy up your code Notebooks are meant to be shared and read by others. Others might not have an easy time following our thought process if we did not come up with good, self-explanatory names. Tidying up the code goes a long way, too. Notebooks impose an even higher quality standard than traditional code. #5 Label diagrams A picture is worth a thousand words. A diagram, however, will need some words to label axes, describe lines and dots, and convey other important information, such as sample size. A reader can have a hard time gauging the proportions or importance of a diagram without that information. Also keep in mind that diagrams are easily copy-pasted from the notebook into other documents or into chats, where they lose the context of the notebook in which they were developed. Bottom Line The segmentation of a thought process is what fuels the power of the notebook. Facilitating incremental improvements when iterating on a problem boosts productivity. Sumo Logic enables the adoption of notebooks to foster the use of data science with logs and metrics data. Additional Resources Visit our Sumo Logic Notebooks documentation page to get started Check out Sumo Logic Notebooks on DockerHub or Read the Docs Read our latest press release announcing new platform innovations, including our new Data Science Insights innovation
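
To illustrate the data-frame-centric, paragraph-by-paragraph workflow described above, here is a minimal sketch of what a few notebook paragraphs might look like. It uses plain pandas; fetch_sumo_results is a hypothetical stand-in for the Sumo Logic API data adapter, and the query and field names are invented for the example.

# Paragraph 1: define the data set.
# fetch_sumo_results is a hypothetical helper standing in for the
# Sumo Logic API data adapter; it fakes a small result set so the
# sketch runs on its own.
import pandas as pd

def fetch_sumo_results(query, time_range="-24h"):
    return pd.DataFrame({
        "status_code": [200, 200, 500, 404, 200],
        "latency_ms": [12, 18, 240, 35, 22],
    })

df = fetch_sumo_results("_sourceCategory=prod/nginx")

# Paragraph 2: clean and prepare the data (advance the state).
errors = df[df["status_code"] >= 500]

# Paragraph 3: model / aggregate.
p95_latency = df["latency_ms"].quantile(0.95)

# Paragraph 4: read out and display the state, then interpret the results.
print(f"error rows: {len(errors)}, p95 latency: {p95_latency:.0f} ms")
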

Blog

How to Monitor Azure Services with Sumo Logic

Blog

Illuminate Day Two Keynote Top Four Takeaways

Day two of Illuminate, Sumo Logic’s annual user conference, started with a security bang, hearing from our founders, investors, customers and a special guest (keep reading to see who)! If you were unable to attend the keynote in person, or watch via the Facebook Livestream, we’ve recapped the highlights below for you. If you are curious about the day one keynote, check out that recap blog post, as well. #1: Dial Tones are Dead, But Reliability Is Forever Two of our founders, Christian Beedgen and Bruno Kurtic, took the stage Thursday morning to kick off the second day keynote talk, and they did not disappoint. Sumo Logic founders Bruno Kurtic (left) and Christian Beedgen (right) kicking off the day two Illuminate keynote Although the presentation was full of cat memes, penguins and friendly banter, they delivered an earnest message: reliability, availability and performance are important to our customers, and are important to us at Sumo Logic. But hiccups happen; it’s inevitable, and that’s why Sumo Logic is committed to constantly monitoring for any hiccups so that we can troubleshoot instantly when they happen. The bottom line: our aspiration at Sumo Logic is to be the dial tone for those times when you absolutely need Sumo Logic to work. And we do that through total transparency. Our entire team has spent time on building a reliable service, built on transparency and constant improvement. It really is that simple. #2: The Platform is the Key to Democratizing Machine Data (and, Penguins) We also announced a number of new platform enhancements, solutions and innovations at Illuminate, all with the goal of improving our customers’ experiences. All of that goodness can be found in a number of places (linked at the end of this article), but what was most exciting to hear from Bruno and Christian on stage was what Sumo Logic is doing to address major macro trends. The first is the proliferation of users and access. What we’ve seen from our customers is that the Sumo Logic platform is brought into a specific group, like the security team or the development team, and then it spreads like wildfire until the entire company (or all of the penguins) wants access to the rich data insights. That’s why we’ve taken an API-first approach to everything we do. To keep your workloads running around the globe, we now have 20 availability zones across five regions, and we will continue to expand to meet customer needs. The second is cloud-scale economics, because Moore’s Law is, in fact, real. Data ingest trends are going up, and for years our customers have relied on Sumo Logic to manage mission-critical data in order to keep their modern applications running and secured. Not all data is created equal, and different data sets have different requirements. Sometimes, it can be a challenge to store data outside of the Sumo Logic platform, which is why our customers will now have brand new capabilities for basic and cold storage within Sumo Logic. (Christian can confirm that the basic storage is still secure — by packs of wolves). The third trend is the unification of modern apps and machine data. While the industry is buzzing about observability, one size does not fit all. To address this challenge, the Sumo Logic team asked: what can we do to deliver on the vision of unification? The answer is in the data.
For the first time ever, we will deliver the State of Modern Applications report live, where customers can push their data to dynamic dashboards, and all of this information will be accessible in new, easy-to-read charts that are API-first, templatized and, most importantly, unified. Stay tuned for more on the launch of this new site! #3: The State of Security from Greylock, AB InBev and Pokemon One of my favorite highlights of the second day keynote was the security panel, moderated by our very own CSO, George Gerchow, with guests from one of our top investors, Greylock Partners, and two of our customers, Anheuser-Busch InBev (AB InBev) and Pokemon. From left to right: George Gerchow, CSO, Sumo Logic; Sara Guo, partner, Greylock; Khelan Bhatt, global director, security architecture, AB InBev; John Visneski, director infosecurity & DPO, Pokemon Sara Guo, general partner at Greylock, spoke about three constantly changing trends, or waves, she’s tracking in security, and what she looks for when her firm is considering an investment: the environment, the business and the attackers. We all know the IT environment is changing drastically, and as it moves away from on-premises protection, it’s not a simple lift-and-shift process; we have to actually do security differently. Keeping abreast of attacker innovation is also important for enterprises, especially as cybersecurity resources continue to be sparse. We have to be able to scale our products, automate, know where our data lives and come together as a defensive community. When you think of Anheuser-Busch, you most likely think of beer, not digital transformation or cybersecurity. But there’s actually a deep connection, said Khelan Bhatt, global director, security architecture, AB InBev. As the largest beer distributor in the world, Anheuser-Busch has 500 different breweries (brands) in all corners of the world, and each one has its own industrial IoT components that are sending data back to massive enterprise data lakes. The bigger these lakes get, the bigger targets they become to attackers. Sumo Logic has played a big part in helping the AB InBev security team digitally transform their operations, and build secure enterprise data lakes to maintain their strong connection to the consumer while keeping that data secure. John Visneski, director of information security and data protection officer (DPO) for the Pokémon Company International, had an interesting take on how he and his team approach security: be a problem solver first, and a security pro second. Although John brought on Sumo Logic to help him fulfill security and General Data Protection Regulation (GDPR) requirements, our platform has become a key business intelligence tool at Pokemon. With over 300 million active users, Pokemon collects sensitive personally identifiable information (PII) from children, including names, addresses and some geolocation data. Sumo Logic has been key for helping John and his team deliver on the company’s core values: providing child and customer safety, trust (and uninterrupted fun)! #4: Being a Leader Means Being You, First and Foremost When our very special guest, former CIA Director George Tenet, took the stage, I did not expect to walk away with some inspiring leadership advice. In a fireside chat with our CEO, Ramin Sayar, George talked about how technology has changed the threat landscape, and how nation-state actors are leveraging the pervasiveness of data to get inside our networks and businesses.
Data is a powerful tool that can be used for good or bad. At Sumo Logic, we’re in it for the good. George also talked about what it means to be a leader and how to remain steadfast, even in times of uncertainty. Leaders have to lead within the context of who they are as human beings. If they try to adopt the persona of someone else, it destroys their credibility. The key to leadership is self-awareness of who you are, and understanding your limitations so that you can hire smart, talented people to fill those gaps. Leaders don’t create followers; they create other leaders. And that’s a wrap for Sumo Logic’s second annual user conference. Thanks to everyone who attended and supported the event. If we didn’t see you at Illuminate over the last two days, we hope you can join us next year! Additional Resources For data-driven industry insights, check out Sumo Logic’s third annual ‘State of Modern Applications and DevSecOps in the Cloud’ report. You can read about our latest platform innovations in our press release, or check out the cloud SIEM solution and Global Intelligence Service blogs. Check out our recent blog for a recap of the day one Illuminate keynote.

Blog

Illuminate Day One Keynote Top Five Takeaways

Today kicked off day one of Sumo Logic’s second annual user conference, Illuminate, and there was no better way to start the day than with a keynote presentation from our CEO, Ramin Sayar, and some of our most respected and valued customers, Samsung SmartThings and Major League Baseball (MLB). The event was completely sold out, and the buzz and excitement could be felt as customers, industry experts, thought leaders, peers, partners and employees made their way to the main stage. If you were unable to catch the talk in person or tune in for the Facebook livestream, then read on for the top five highlights from the day one keynote. #1: Together, We’ve Cracked the Code to Machine Data At Sumo Logic, we’re experts in all things data. But, to make sure we weren’t biased, we partnered with 451 Research earlier this year to better understand how the industry is using machine data to improve overall customer experiences in today’s digital world. We found that 60 percent of enterprises are using machine data analytics for business and customer insights, and to help support digital initiatives, usage and app performance. These unique findings have validated what we’ve been seeing within our own customer base over the past eight years — together, we can democratize machine data to make it easily accessible, understandable and beneficial to all teams within an organization. That’s why, as Ramin shared during the keynote, we’ve committed to hosting more meet-ups and global training and certification sessions, and providing more documentation, videos, Slack channels and other resources for our growing user base — all with the goal of ‘illuminating’ machine data for the masses, and helping customers win in today’s analytics economy. #2: Ask, and You Shall Receive Continued Platform Enhancements Day one was also a big day for some pretty significant platform enhancements and new solutions centered on three core areas: development, security and operations. The Sumo Logic dev and engineering teams have been hard at work, and have over 50 significant releases to show for it, all focused on meeting our customers’ evolving needs. Some of the newer releases on the Ops analytics side include Search Templates and Logs to Metrics. Search Templates empower non-technical users, like customer support and product management, to leverage Sumo Logic’s powerful analytics without learning the query language. Logs to Metrics allows users to extract business KPIs from logs and cost-effectively convert them to high-performance metrics for long-term trending and analysis. We’ve been hard at work on the security side of things as well, and are happy to announce the new cloud SIEM solution that’s going to take security analytics one step further. Our customers have been shouting from the rooftops for years that their traditional on-premises SIEM tools and rules-based correlation have let them down, and so they’ve been stuck straddling the line between old and new. With this entirely new, and first-of-its-kind, cloud SIEM solution, customers have a single, unified platform in the cloud to help them meet their modern security needs. And we’re not done yet; there’s more to come. #3: Samsung SmartThings is Changing the World of Connected IoT Scott Vlaminck, co-founder and VP of engineering at Samsung SmartThings, shared his company’s vision for SmartThings to become the definitive platform for all IoT devices, in order to deliver the best possible smart home experience for their customers.
And, as Scott said on stage, Sumo Logic helps make that possible by providing continuous intelligence across all of the operational, security and business data flowing through the SmartThings IoT platform, which receives about 200,000 requests per second! Scott talked about the company’s pervasive usage of the Sumo Logic platform, in which 95 percent of employees use Sumo Logic to report on KPIs, customer service, product insights, security metrics, app usage trends and partner health metrics to drive deeper customer satisfaction. Having a fully integrated tool available to teams outside of traditional IT and DevOps is what continuous intelligence means for SmartThings. #4: Security is Everyone’s Responsibility at MLB When Neil Boland, the chief information security officer (CISO) for Major League Baseball, took the stage, he shared how he and his security team are completely redefining what enterprise security means for a digital-first sports organization that has to manage, maintain and secure over 30 different clubs (which translates to 30 unique brands and 30 different attack vectors). Neil’s mission for 2018 is to blow up the traditional SIEM and MSSP models and reinvent them for his company’s 100 percent cloud-based initiatives. Neil’s biggest takeaway is that everyone at MLB is on the cybersecurity team, even non-technical groups like the help desk, and this shared responsibility helps strengthen overall security posture and continue to deliver uninterrupted sports entertainment to their fans. And Sumo Logic has been a force multiplier that helps Neil and his team achieve that collective goal. #5: Community, Community, Community Bringing the talk full circle, Ramin ended the keynote with a word about community, and how we are not only in it for our customers, but we’re in it with them, and we want to share data trends, usage and best practices of the Sumo Logic platform with our ecosystem to provide benchmarking capabilities. That’s why today at Illuminate, we launched a new innovation — Global Intelligence Service — that focuses on three key areas: Industry Insights, Community Insights and Data Science Insights. These insights will help customers extend machine learning and insights to new teams and use cases across the enterprise, and they are only possible with Sumo Logic’s cloud-native, multi-tenant architecture. For data-driven industry insights, check out Sumo Logic’s third annual ‘State of Modern Applications and DevSecOps in the Cloud’ report. You can read about our latest platform innovations in our press release, or check out the cloud SIEM solution and Global Intelligence Service blogs. Want the Day Two Recap? If you couldn’t join us live for day two of Illuminate, or were unable to catch the Facebook livestream, check out our second day keynote recap blog for the top highlights.

Blog

Announcing the Sumo Logic Global Intelligence Service at Illuminate 2018

In today’s hyper-connected world, a company’s differentiation is completely dependent upon delivering a better customer experience, at scale, and at a lower cost than the competition. This is no easy feat, and it involves a combination of many things, particularly adopting new technologies and architectures, as well as making better use of data and analytics. Sumo Logic is committed to helping our customers excel in this challenging environment by making it easier to adopt the latest application architectures while also making the most of their precious data. The Power of the Platform As a multi-tenant, cloud-native platform, Sumo Logic has a unique opportunity to provide context and data to our customers that is not available anywhere else. Why is this? First of all, when an enterprise wants to explore new architectures and evaluate options, it is very difficult to find broad overviews of industry trends based on real-time data rather than surveys or guesswork. Second, it is difficult to find reliable information about how exactly companies are using technologies at the implementation level, all the way down to the configurations and performance characteristics. Finally, once implemented, companies struggle to make the best use of the massive amount of machine data exhaust from their applications, particularly for non-traditional audiences like data scientists. It is with this backdrop in mind that Sumo Logic is announcing the Global Intelligence Service today during the keynote presentation at our second annual user conference, Illuminate, in Burlingame, Calif. This unprecedented initiative of data democratization is composed of three primary areas of innovation. Industry Insights — What Trends Should I be Watching? Sumo Logic is continuing to build on the success of its recently released third annual ‘State of Modern Applications and DevSecOps in the Cloud’ report to provide more real-time and actionable insights about industry trends. In order to stay on top of a constantly changing technology landscape, this report is expanding to include more frequent updates and instant-access options to help customers develop the right modern application or cloud migration strategy for their business, operational and security needs. (Chart depicting clusters of AWS services used frequently together.) Community Insights — What Are Companies Like Us, and Teams Like Ours, Doing? Sumo Logic is applying the power of machine learning to derive actionable insights for getting the most out of your technology investments. We have found that many engineering teams lack the right resources and education needed to make the best technology choices early on in their prototyping phases. And then, when the system is in production, it is often too late to make changes. That’s why Sumo Logic has an opportunity to save our customers pain and frustration by giving them benchmarking and comparison information when they most need it. We all like to think that our use cases are each a beautiful, unique snowflake. The reality is that, while each of us is unique, our uses of technology fall into some predictable clusters.
So, looking over a customer base of thousands, Sumo Logic can infer patterns and best practices about how similar organizations are using technologies. Using those patterns, we will be building recommendations and content for our customers that can be used to compare performance against a baseline of usage across their peers. (Chart depicting how performance behavior tends to cluster across customers.) Data Science Insights — Data Scientists Need Love, Too Data scientists are under more pressure than ever to deliver stunning results, while also getting pushback from society about the quality of their models and the biases that may or may not be there. At the end of the day, while data scientists have control over their models, they may have less control over the data. If the data is incomplete or biased in any way, that can directly influence the results. To alleviate this issue, Sumo Logic is providing an open source integration with the industry-standard Jupyter and Apache Zeppelin notebooks in order to make it easier for data scientists to leverage the treasure trove of knowledge currently buried in their application machine data. Empower the People Who Power Modern Business You may still be wondering: why does all of this matter? At the end of the day, it is all about making our customers successful by making their people successful. A business is only as effective as the people who do the work, and it is our mission at Sumo Logic to empower those users to excel in their roles, which in turn contributes to overall company growth and performance. And we also want to set users outside of the traditional IT, DevOps and security teams up for success by making machine data analytics more accessible to them. So, don’t forget that you heard it here first: democratizing machine data is all about empowering the people with love (and with unique machine data analytics and insights)! Additional Resources Download the 2018 ‘State of Modern Applications and DevSecOps in the Cloud’ report and/or read the press release for more detailed insights. Read the Sumo Logic platform enhancement release to learn more about our latest platform enhancements and innovations. Sign up for Sumo Logic for free.

Blog

Introducing Sumo Logic’s New Cloud SIEM Solution for Modern IT

Blog

Sumo Logic's Third Annual State of Modern Apps and DevSecOps in the Cloud Report is Here!

Blog

Why Cloud-Native is the Way to Go for Managing Modern Application Data

Are your on-premises analytics and security solutions failing you in today’s digital world? Don’t have the visibility you need across your full application stack? Unable to effectively monitor, troubleshoot and secure your microservices and multi-cloud architectures? If this sounds like your organization, then be sure to watch this short video explaining why a cloud-native, scalable and elastic machine data analytics platform approach is the right answer for building, running and securing your modern applications and cloud infrastructures. To learn more about how Sumo Logic is uniquely positioned to offer development, security and operations (DevSecOps) teams the right tools for their cloud environments, watch our Miles Ahead in the Cloud and DevOps Redemption videos, visit our website or sign up for Sumo Logic for free here. Video Transcription You’ve decided to run your business in the cloud. You chose this to leverage all the benefits the cloud enables – speed to rapidly scale your business; elasticity to handle the buying cycles of your customers; and the ability to offload data center management headaches to someone else so you can focus your time, energy and innovation on building a great customer experience. So, when you need insights into your app to monitor, troubleshoot or learn more about your customers, why would you choose a solution that doesn’t work the same way? Why would you manage your app with a tool that locks you into a peak support contract, one that’s not designed to handle the unpredictability of your data? Sumo Logic is a cloud-native, multi-tenant service that lets you monitor, troubleshoot, and secure your application with the same standards of scalability, elasticity and security you hold yourself to. Sumo Logic is built on a modern app stack for modern app stacks. Its scalable… elastic… resilient cloud architecture has the agility to move as fast as your app moves, quickly scaling up for data volume. Its advanced analytics based on machine learning are designed to cope with change. So, when that data volume spikes, Sumo Logic is there with the capacity and the answers you need. Sumo Logic is built with security as a first principle. That means security is baked in at the code level, and that the platform has the credentials and attestations you need to manage compliance for your industry. Sumo Logic’s security analytics and integrated threat intelligence also help you detect threats and breaches faster, with no additional costs. Sumo Logic delivers all this value in a single platform solution. No more swivel-chair analytics to slow you down or impede your decision-making. You have one place to see and correlate the continuum of operations, security and customer experience analytics – this is what we call continuous intelligence for modern apps. So, don’t try to support your cloud app with a tool that was designed for the old, on-premises world, or a pretend cloud tool. Leverage the intelligence solution that fully replicates what you’re doing with your own cloud business — Sumo Logic, the industry-leading, cloud-native machine data analytics platform delivered to you as a service. Sumo Logic. Continuous Intelligence for Modern Applications.

Blog

Top Reasons Why You Should Get Sumo Logic Certified, Now!

Blog

How Our Customers Influence the Sumo Logic Product

Sumo Logic is no different than most companies — we are in the service of our customers and we seek to build a product that they love. As we continue to refine the Sumo Logic platform, we’re also refining our feedback loops. One of those feedback loops is internal dogfooding and learning how our own internal teams, such as engineering, sales engineering and customer success, experience the newest feature. However, we know that that approach can be biased. Our second feedback loop is directly from our customers, whose thoughts are then aggregated, distilled and incorporated into the product. The UX research team focuses on partnering with external customers, as well as internal Sumo Logic teams that regularly use our platform, to hear their feedback and ensure that the product development team takes these insights into account as they build new capabilities. Our Product Development Process Sumo Logic is a late-stage startup, which means that we’re in the age of scaling our processes to suit larger teams and to support new functions. The processes mentioned are in various stages of maturity, and we haven’t implemented all of these to a textbook level of perfection (yet!). Currently, there are two facets to the product development process. The first is the discovery side, for capabilities that are entirely new, while the second is focused on delivery and improving capabilities that currently exist in the product. The two sides run concurrently, as opposed to sequentially, with the discovery side influencing the delivery side. The teams supporting both sides are cross-functional in nature, consisting of engineers, product managers and product designers. (Adapted from Jeff Patton & Associates.) Now that we’ve established the two aspects of product development, we’ll discuss how customer feedback fits into this. Customer feedback is critical to all product decisions at Sumo Logic. We seek out the opinions of our customers when the product development team has questions that need answers before they can proceed. Customer feedback usually manifests in two different categories: broad and granular. Broad Customer Questions The more high-level questions typically come from the discovery side. For example, we may get a question like this: “should we build a metrics product?” For the teams focused on discovery, UX research starts with a clear hypothesis and is more open-ended and high level. It may consist of whiteboarding with our customers or observing their use cases in their workspaces. The insights from this research might spawn a new scrum team to build a capability, or the insights could indicate we should focus efforts elsewhere. Granular Customer Questions By contrast, UX research for delivery teams is much more focused. The team likely has designs or prototypes to illustrate the feature that they’re building, and their questions tend to focus on discoverability and usability. For instance, they may be wondering if customers can find which filters apply to which dashboard panels. The outcomes from this research give the team the necessary data to make decisions and proceed with design and development. Occasionally, the findings from the discovery side will influence what’s happening on the delivery side. The UX Research Process at Sumo Logic The diagram below describes the milestones during our current UX research process, for both discovery and delivery teams.
As a customer, the most interesting pieces of this are the Research Execution and the Report Presentation, as these include your involvement as well as how your input impacts the product. UX Research Execution Research execution takes a variety of forms, from on-site observation to surveys to design research with a prototype. As a customer, you’re invited to all types of research, and we are always interested in your thoughts. Our ideal participants are willing to share how they are using the Sumo Logic platform for their unique operational, security and business needs, and to voice candid opinions. Our participants are also all over the emotional spectrum, from delighted to irritated, and we welcome all types. The immediate product development team takes part in the research execution. For example, if we’re meeting with customers via video conference, we’ll invite engineers, product management and designers to observe research sessions. There’s a certain realness for the product development team when they see and hear a customer reacting to their work, and we’ve found that it increases empathy for our customers. This is very typical for our qualitative UX research sessions, and what you can expect as a participant. In the above clip, Dan Reichert, a Sumo Logic sales engineer, discusses his vision for a Data Allocation feature to manage ingest. Research Presentation After the UX research team has executed the research, we’ll collect all data, video, photos and notes. We’ll produce a report with the key and detailed insights from the research, and we’ll present the report to the immediate product development team. These report readouts tend to be conversational, with a lengthy discussion of the results, anecdotes and recommendations from the UX researcher. I’ve found that the teams are very interested in hearing specifics of how our customers are using the product, and how their efforts will influence that. After the report readout, the product development team will meet afterward to discuss how they’ll implement the feedback from the study. The UX researcher will also circulate the report to the larger product development team for awareness. The insights are often useful for other product development teams, and occasionally fill in knowledge gaps for them. How Can I Voice My Thoughts and Get Involved in UX Research at Sumo Logic? We’d love to hear how you’re using Sumo Logic, and your feedback for improvement. We have a recruiting website to collect the basics, as well as your specific interests within the product. Our UX research team looks forward to meeting you!

Blog

Understanding Sumo Logic Query Language Design Patterns

Blog

A Look Inside Being a Web UI Engineering Intern at Sumo Logic

Hello there! My name is Sam and this summer I’ve been an intern at Sumo Logic. In this post I’ll share my experience working on the web UI engineering team and what I learned from it. A year ago I started my Master of Computer Science degree at Vanderbilt University and, since the program is only two years long, there’s only one internship slot before graduation. So I needed to find a good one. Like other students, I wanted the internship to prepare me for my future career by teaching me about work beyond just programming skills while also adding a reputable line to my resume. So after months of researching, applying, preparing and interviewing, I officially joined the Sumo Logic team in May. The Onboarding Experience The first day was primarily meeting a lot of new people, filling out paperwork, setting up my laptop and learning which snacks are best at the office (roasted almonds take the win). The first couple of weeks were a heads-down learning period. I was learning about the Sumo Logic machine data analytics platform — everything from why it is used and how it works to what it is built on. We also had meetings with team members who explained the technologies involved in the Sumo Logic application. In general, though, the onboarding process was fairly flexible and open-ended, with a ton of opportunities to ask questions and learn. Specifically, I enjoyed watching React courses as a part of my onboarding. In school I pay to learn this, but here I am the one being paid 🙂 Culture and Work Environment The culture and work environment are super nice and relaxed. The developers are given a lot of freedom in how and what they are working on, and the internship program is very adaptable. I was able to shape my role throughout the internship to focus on tasks and projects that were interesting to me. Of course, the team was very helpful in providing direction and answering my questions, but it was mostly up to me to decide what I would like to do. The phrase that I remember best was from my manager. In my second week at Sumo Logic he said: “You don’t have to contribute anything — the most important thing is for you to learn.” The thing that surprised me the most at Sumo Logic is how nice everyone is. This is probably the highest “niceness per person” ratio I’ve ever experienced in my life. Almost every single person I’ve met here is super friendly, humble, open-minded and smart. These aspects of the culture helped me greatly. Summer Outside Sumo Logic One of the important factors in choosing a company for me was its location. I am from Moscow, Russia, and am currently living in Nashville while I attend Vanderbilt, but I knew that this summer I definitely wanted to find an internship in the heart of the tech industry — Silicon Valley. Lucky for me, Sumo Logic is conveniently located right in the middle of it in Redwood City. I also enjoyed going to San Francisco on weekends to explore the city, skateboarding to Stanford from my home and visiting my friend at Apple’s Worldwide Developers Conference (WWDC) in San Jose. I liked the SF Bay Area so much that I don’t want to work anywhere else in the foreseeable future! Actual Projects: What Did I Work On? The main project that I worked on is a UI component library. As the company quickly grows, we strive to make the UI components more consistent — both visually and in how they are written — and to make the code more maintainable. We also want to simplify the communication about the UI between the Dev and Design teams.
I was very excited about the future impact and benefit of this project for the company, and had asked to join the team in this effort. A cool thing about this library is that it is a collection of fresh, independent React components that will then be used by developers in the creation of all parts of the Sumo Logic app. It is a pleasure to learn best practices while working with cutting-edge libraries like React. If that sounds interesting to you, check out this blog from one of my Sumo Logic colleagues on how to evaluate and implement react table alternatives into your project. Things I Learned That I Didn’t Know Before How professional development processes are structured How companies work, grow and evolve How large projects are organized and maintained How to communicate and work on a team What a web-scale application looks like from the inside And, finally, how to develop high-quality React components Final Reflection Overall, I feel like spending three months at Sumo Logic was one of the most valuable and educational experiences I’ve ever had. I received a huge return on my investment of time and moved much closer to my future goals of gaining relevant software development knowledge and skills to set me up for a successful career post-graduation. Additional Resources Want to stay in touch with Sumo Logic? Follow and connect with us on Twitter, LinkedIn and Facebook for updates. If you want to learn more about our machine data analytics platform, visit our “how it works” page!

Blog

Black Hat 2018 Buzzwords: What Was Hot in Security This Year?

It’s been a busy security year, with countless twists and turns, mergers, acquisitions and IPOs, and most of that happening in the lead-up to one of the biggest security conferences of the year — Black Hat U.S.A. Each year, thousands of hackers, security practitioners, analysts, architects, executives/managers and engineers from varying industries and from all over the country (and world) descend on the desert lands of the Mandalay Bay Resort & Casino in Las Vegas for more than a week of trainings, educational sessions, networking and the good kind of hacking (especially if you stayed behind for DefCon 26). Every Black Hat has its own flavor, and this year was no different. So what were some of the “buzzwords” floating around the show floor, sessions and networking areas? The Sumo Logic security team pulled together a list of the hottest, newest, and some old-but-good terms that we overheard and observed during our time at Black Hat last week. Read on for more, including a recap of this year’s show trends. And the Buzzword is… APT — Short for advanced persistent threat Metasploit — Provides information about security vulnerabilities and is used in pen testing Pen Testing (or Pentesting) — Short for penetration testing. Used to discover security vulnerabilities OSINT — Short for open source intelligence technologies XSS — Short for cross-site scripting, which is a type of attack commonly launched against web sites to bypass access controls White Hat — Security slang for an “ethical” hacker Black Hat — A hacker who violates computer security for little reason beyond maliciousness or personal gain Red Team — Tests the security program (Blue Team) effectiveness by using techniques that hackers would use Blue Team — The defenders against Red Team efforts and real attackers Purple Team — Responsible for ensuring the maximum effectiveness of both the Red and Blue Teams Fuzzing or Fuzz Testing — Automated software that provides invalid, unexpected or random data as inputs to a computer program that is typically expecting structured content, e.g., first name, last name, etc. Blockchain — Widely used by cryptocurrencies to distribute expanding lists of records (blocks), such as transaction data, which are virtually “chained” together by cryptography. Because of their distributed and encrypted nature, the blocks are resistant to modification of the data. SOC — Short for security operations center NOC — Short for network operations center Black Hat 2018 Themes There were also some pretty clear themes that bubbled to the top of this year’s show. Let’s dig into them. The Bigger, the Better….Maybe Walking the winding labyrinth that is the Mandalay Bay, you might have overheard conference attendees complaining that this year, Black Hat was bigger than in years past, and to accommodate this, the show was more spread out. The business expo hall was divided between two rooms: a bigger “main” show floor (Shoreline), and a second, smaller overflow room (Oceanside), which featured companies new to the security game, startups or those not ready to spend big bucks on flashy booths. While it may have been a bit confusing or a nuisance for some to switch between halls, the fact that the conference is outgrowing its own space is a good sign that security is an important topic and more organizations are taking a vested interest in it. Cloud is the Name, Security is the Game One of the many themes at this year’s show was definitely all things cloud.
Scanning the booths, you would have noticed terms around security in the cloud, how to secure the cloud, and similar messaging. Cloud has been around for a while, but seems to be having a moment in security, especially as new, agile cloud-native security players challenge some of the legacy on-premises vendors and security solutions that don’t scale well in a modern cloud, container or serverless environment. In fact, according to recent Sumo Logic research, 93 percent of responding enterprises face challenges with security tools in the cloud, and 49 percent state that existing legacy tools aren’t effective in the cloud. Roses are Red, Violets are Blue, FUD is Gone, Let’s Converge One of the biggest criticisms of security vendors (sometimes by other security vendors) is all of the language around fear, uncertainty and doubt (FUD). This year, it seems that many vendors have ditched the fearmongering and opted for collaboration instead. Walking the expo halls, there was a lot of language around “togetherness,” “collaboration” and the general positive sentiment that bringing people together to fight malicious actors is more helpful than going at it alone in siloed work streams. Everything was more blue this year. Usually, you see the typical FUD coloring: reds, oranges, yellows and blacks, and while there was still some of that, the conference felt brighter and more uplifting this year with purples, all shades of blues, bright greens, and surprisingly… pinks! There was also a ton of signage around converging development, security and operations teams (DevSecOps or SecOps) and messaging, again, that fosters an “in this together” mentality that creates visibility across functions and departments for deeper collaboration. Many vendors, including Sumo Logic, have been focusing on security education, offering and promoting their security training, certification and educational courses to make sure security is a well-understood priority for stakeholders across all lines of the business. Our recent survey findings also validate the appetite for converging workflows, with 54 percent of respondents citing a greater need for cross-team collaboration (DevSecOps) to effectively investigate, prioritize and correlate threats for faster remediation. Three cheers for that! Sugar and Socks and Everything FREE Let’s talk swag. Now this trend is not entirely specific to Black Hat, but it seems each year the booth swag gets sweeter (literally), with vendors offering doughnut walls, chocolates, popcorn and all sorts of tasty treats to reel people into conversation (and get those badge scans). There’s no shortage of socks either! Our friends at HackerOne were giving out some serious booth swag, and you better believe we weren’t headed home without grabbing some! Side note: Read the latest HackerOne blog or watch the latest SnapSecChat video to learn how our Sumo Logic security team has taken a DevSecOps approach to bug bounties that creates transparency and collaboration between hackers, developers, and external auditors to improve security posture. Sumo swag giveaways were in full swing at our booth, as well. We even raffled off a Vento drone for one lucky Black Hat winner to take home! Parting Thoughts As we part ways with 100 degree temps and step back into our neglected cubicles or offices this week, it’s always good to remember the why. Why do we go to Black Hat, DefCon, BSides, and even RSA?
It’s more than socializing and partying; it’s about connecting with our community, learning from each other and making the world a more secure and better place for ourselves, and for our customers. And with that, we’ll see you next year! Additional Resources For the latest Sumo Logic cloud security analytics platform updates, features and capabilities, read the latest press release. Want to learn more about Sumo Logic security analytics and threat investigation capabilities? Visit our security solutions page. Interested in attending our user conference next month, Illuminate? Visit the webpage, or check out our latest “Top Five Reasons to Attend” blog for more information. Download and read our 2018 Global Security Trends in the Cloud report or the infographic for more insights on how the security and threat landscape is evolving in today’s modern IT environment of cloud, applications, containers and serverless computing.

Blog

Top Five Reasons to Attend Illuminate18

Last year Sumo Logic launched its first user conference, Illuminate. We hosted more than 300 fellow Sumo Logic users who spent two days getting certified, interacting with peers to share best practices and mingling with Sumo’s technical experts (all while having fun). The result? Super engaged users with a new toolbox to take back to their teams to make the most of their Sumo Logic platform investment, and get the real-time operational and security insights needed to better manage and secure their modern applications and cloud infrastructures. Watch last year’s highlight reel below: This piece of feedback from one attendee sums up the true value of Illuminate: “In 48 hours I already have a roadmap of how to maximize the use of Sumo Logic at my company and got a green light from my boss to move forward.” — Sumo Logic Customer / Illuminate Attendee Power to the People This year’s theme for Illuminate is “Empowering the People Who Power Modern Business,” and the conference is expected to attract more than 500 attendees who will participate in a unique interactive experience including over 40 sessions, an Ask the Expert bar, a partner showcase and Birds of a Feather roundtables. Not enough to convince you to attend? Here are five more reasons: Get Certified – Back by popular demand, our multi-level certification program provides users with the knowledge, skills and competencies to harness the power of machine data analytics and maximize investments in the Sumo Logic platform. Bonus: we have a brand new Sumo Security certification available at Illuminate this year designed to teach users how to increase the velocity and accuracy of threat detection and strengthen overall security posture. Hear What Your Peers are Doing – Get inspired and learn directly from your peers like Major League Baseball, Genesys, USA TODAY NETWORK, Wag, Lending Tree, Samsung SmartThings, Informatica and more about how they implemented Sumo Logic and are using it to increase productivity, revenue and employee satisfaction, deliver the best customer experiences and more. You can read more about the keynote speaker lineup in our latest press release. Technical Sessions…Lots of Them – This year we’ve broadened our breakout sessions into multiple tracks including Monitoring and Troubleshooting, Security Analytics, Customer Experience and Dev Talk, covering tips, tricks and best practices for using Sumo Logic around topics including Kubernetes, DevSecOps, Metrics, Advanced Analytics, Privacy-by-Design and more. Ask the Experts – Get direct access to expert advice from Sumo Logic’s product and technical teams. Many of these folks will be presenting sessions throughout the event, but we’re also hosting an Ask the Expert bar where you can get all of your questions answered, see demos, get ideas for dashboards and queries, and see the latest Sumo Logic innovations. Explore the Modern App Ecosystem – Sumo Logic has a rich ecosystem of partners and we have a powerful set of joint integrations across the modern application stack to enhance overall manageability and security for you. Stop by the Partner Pavilion to see how Sumo Logic works with AWS, Carbon Black, CrowdStrike, JFrog, LightStep, MongoDB, Okta, OneLogin, PagerDuty, Relus and more. By now you’re totally ready for the Illuminate experience, right? Check out the full conference agenda here.
These two days will give you all of the tools you need (training, best practices, new ideas, peer-to-peer networking, access to Sumo’s technical experts and partners) so you can hit the ground running and maximize the value of the Sumo Logic platform for your organization. Register today, we look forward to seeing you there!

Blog

Get Miles Ahead of Security & Compliance Challenges in the Cloud with Sumo Logic

Blog

SnapSecChat: A DevSecOps Approach to Bug Bounties with Sumo Logic & HackerOne

Regardless of industry or size, all organizations need a solid security and vulnerability management plan. One of the best ways to harden your security posture is through penetration testing and inviting hackers to hit your environment to look for weak spots or holes in security. However, for today’s highly regulated, modern SaaS company, the traditional check-box compliance approach to pen testing falls short because it slows down innovation and scaling. That’s why Sumo Logic’s Chief Security Officer and his team have partnered with HackerOne to implement a modern bug bounty program that takes a DevSecOps approach. They’ve done this by building a collaborative community for developers, third-party auditors and hackers to interact and share information in an online portal, creating a transparent bug bounty program that uses compliance to strengthen security. Pushing the boundaries and breaking things collectively makes us stronger, and it also gives our auditors a peek inside the kimono and more confidence in our overall security posture. It also moves the rigid audit process into the DevSecOps workflow for faster and more effective results. To learn more about Sumo Logic’s modern bug bounty program, the benefits and the overall positive impact it’s had on not just the security team, but all lines of the business, including external stakeholders like customers, partners and prospects, watch the latest SnapSecChat video with Sumo Logic CSO, George Gerchow. And if you want to hear about the results of Sumo Logic’s four bounty challenge sprints, head on over to the HackerOne blog for more. If you enjoyed this video, then be sure to stay tuned for another one coming to a website near you soon! And don’t forget to follow George on Twitter at @GeorgeGerchow, and use the hashtag #SnapSecChat to join the security conversation! Stop by Sumo Logic’s booth (2009) at Black Hat this week, Aug. 8-9, 2018, at the Mandalay Bay in Las Vegas to chat with our experts and to learn more about our cloud security analytics and threat investigation capabilities. Happy hacking!

Blog

Building Replicated Stateful Systems using Kafka as a Commit Log

Blog

Employee Spotlight: A Dreamer with a Passion for Product Design & Mentoring

In this Sumo Logic Employee Spotlight we interview Rocio Lopez. A lover of numbers, Rocio graduated from Columbia University with a degree in economics, but certain circumstances forced her to forego a career in investment banking and instead begin freelancing until she found a new career that suited her talents and passions: product design. Intrigued? You should be! Read Rocio’s story below. She was a delight to interview! When Creativity Calls Q: So tell me, Rocio, what’s your story? Rocio Lopez (RL): I am a product designer at Sumo Logic and focus mostly on interaction design and prototyping new ideas that meet our customers’ needs. Q: Very cool! But, that’s not what you went to school for, was it? RL: No. I studied economics at Columbia. I wanted to be an investment banker. Ever since I was a little girl, I’ve been a nerd about numbers and I love math. Part of it was because I remember when the Peso was devalued and my mom could no longer afford to buy milk. I became obsessed with numbers and this inspired my college decision. But the culture and career path at Columbia was clear — you either went into consulting or investment banking. I spent a summer shadowing at Citigroup (this was during the height of the financial crisis), and although my passion was there, I had to turn down a career in finance because I was here undocumented. Q: That’s tough. So what did you do instead? RL: When I graduated in 2011, I started doing the things I knew how to do well like using Adobe Photoshop and InDesign to do marketing for a real estate company or even doing telemarketing. I eventually landed a gig designing a database for a company called Keller Williams. They hired an engineer to code the database, but there was no designer around to think through the customer experience so I jumped in. Q: So that’s the job that got you interested in product design? RL: Yes. And then I spent a few years at Cisco in the marketing organization where they needed help revamping their training platforms. I started doing product design without even knowing what it was until a lead engineer called it out. I continued doing small design projects, started freelancing and exploring on my own until I connected with my current manager, Daniel Castro. He was hiring for a senior role, and while I was not that senior, the culture of the team drew me in. Q: Can you expand on that? RL: Sure. The design team at Sumo Logic is very unique. I’ve spent about seven years total in the industry and what I’ve been most impressed by is the design culture here, and the level of trust and level-headedness the team has. I’ve never come across this before. You would think that because we’re designing an enterprise product that everyone would be very serious and buckled up, but it’s the opposite. The Life of a Dreamer Q: Let’s switch gears here. I heard you on NPR one morning, before I even started working at Sumo Logic. Tell me about being a dreamer. RL: People come to the U.S. undocumented because they don’t know of other ways to come legally or the available paths for a visa aren’t a match for them because they may not have the right skills. And those people bring their families. I fell into that category. I was born in Mexico but my parents came over to the U.S. seeking a better life after the Tequila crisis. I grew up in Silicon Valley and went to school like any other American kid. 
When Barack Obama was in office, he created an executive order known as the Deferred Action for Childhood Arrivals (DACA) program, since Congress had failed to pass legislation since 2001. To qualify for the program, applicants had to have arrived in the U.S. before age 16, have lived here continuously since June 15, 2007, and pass a rigorous background check by homeland security every two years. I fell into this category and was able to register in this program. Because most of these immigrants were brought here as young children, we’ve sort of been nicknamed “dreamers” after the 2001 DREAM Act (short for Development, Relief and Education for Alien Minors Act). Q: And under DACA you’ve been able to apply for a work permit? RL: That’s right. I have a work permit, I pay income taxes, and I was able to attend college just like a U.S. citizen, although I am still considered undocumented and that comes with certain limitations. For instance, my employer cannot sponsor me and I cannot travel outside the United States. The hope was that Congress would create a path to citizenship for Dreamers, but now that future is a bit uncertain after they failed to meet the deadline to pass a bill in March. For now I have to wait until the Supreme Court rules on the constitutionality of DACA to figure out my future plans. Q: I can only imagine how difficult this is to live with. What’s helped you through it? RL: At first I was a big advocate, but now I try to block it out and live in the present moment. And the opportunity to join the Sumo Logic design team came at the right time in my life. I can’t believe what I do every day is considered work. The team has a very unique way of nurturing talent and it’s something I wish more companies would do. Our team leaders make sure we have fun in addition to getting our work done. We usually do team challenges, dress-up days, etc. that really bring us all together to make us feel comfortable, encourage continued growth, and inspire us to feel comfortable speaking up with new ideas. I feel like the work I am doing has value and is meaningful, and we are at the positive end of the “data conversation.” I read the news and see the conversations taking place with companies like Facebook and Airbnb that are collecting our personal data. It’s scary to think about. And it feels good to be on the other side of the conversation; on the good side of data, and that’s what gets me excited and motivated. Sumo Logic is collecting data and encrypting it, and because we’re not on the consumer-facing side, we can control the lens of how people see that data. We can control not only the way our customers collect data but also how they parse and visualize it. I feel we’re at the cusp of a big industry topic that’s going to break in the next few years. Q: I take it you’re not on social media? RL: No. I am completely off Facebook and other social media platforms. When I joined Sumo Logic, I became more cautious of who I was giving my personal data to. Advice for Breaking into Design & Tech? Q: Good for you! So what advice do you have for people thinking of switching careers? RL: From 2011 to now I’ve gone through big career changes. There are a lot of people out there who need to understand how the market is shifting, that some industries, like manufacturing, are not coming back, and that requires an adaptive mindset.
The money and opportunity are where technology and data are, and if people can’t transition to these new careers in some capacity, they’re going to be left out of the economy and will continue to have problems adjusting. It’s a harsh reality, but we have to be able to make these transitions because 15 or 20 years from now, the world will look very different. I’ve been very active in mentoring people who want to break into technology but aren’t sure how. Q: What’s some of the specific advice related to a career path in UX/design that you give your mentees? RL: Sometimes you have to break away from traditions like school or a master’s program and prioritize the job experience. Design and engineering are about showing you’ve done something, showing a portfolio. If you can change your mindset to this, you will be able to make the transition more smoothly. I also want to reiterate that as people are looking for jobs or next careers, it’s important to find that place that is fun and exciting. A place where you feel comfortable and can be yourself and also continue to grow and learn. Find meaning, find value, and find the good weird that makes you successful AND happy. Stay in Touch Stay in touch with Sumo Logic & connect with us on Twitter, LinkedIn and Facebook for updates. Want to work here? We’re hiring! Check out our careers page to join the team. If you want to learn more about our machine data analytics platform, visit our “how it works” page!

August 1, 2018

Blog

Postmortems Considered Beautiful

Outages and postmortems are a fact of life for any software engineer responsible for managing a complex system. And it can be safely said that those two words – “outage” and “postmortem,” do not carry any positive connotations in the remotest sense of the word. In fact, they are generally dreaded by most engineers. While that sentiment is understandable given the direct impact of such incidents on customers and the accompanying disruption, our individual perspective matters a lot here as well. If we are able to look beyond the damage caused by such incidents, we might just realize that outages and postmortems shouldn’t be “dreaded,” but instead, wholeheartedly embraced. One has to only try, and the negative vibes associated with these incidents may quickly give way to an appreciation of the complexity in modern big data systems. The Accidental Harmony of Layered Failures As cliche as it may sound, “beauty” indeed lies in the eyes of the beholder. And one of the most beautiful things about an outage/postmortem is the spectacular way in which modern big data applications often blow up. When they fail, there are often dozens of things that fail simultaneously, all of which collude, resulting in an outage. This accidental harmony among failures and the dissonance among the guards and defenses put in place by engineers, is a constant feature of such incidents and is always something to marvel at. It’s almost as if the resonance frequencies of various failure conditions match, thereby amplifying the overall impact. What’s even more surprising is the way in which failures at multiple layers can collude. For example, it might so happen that an outage-inducing bug is missed by unit tests due to missing test cases, or even worse, a bug in the tests! Integration tests in staging environments may have again failed to catch the bug, either due to a missing test case or disparity in the workload/configuration of staging/production environments. There could also be misses in monitoring/alerting, resulting in increased MTTIs. Similarly, there may be avoidable process gaps in the outage handling procedure itself. For example, some on-calls may have too high of an escalation timeout for pages or may have failed to update their phone numbers in the pager service when traveling abroad (yup, that happens too!). Sometimes, the tests are perfect, and they even catch the error in staging environments, but due to a lack of communication among teams, the buggy version accidentally gets upgraded to production. Outages are Like Deterministic Chaos In some sense, these outages can also be compared to “deterministic chaos” caused by an otherwise harmless trigger that manages to pierce through multiple levels of defenses. To top it off, there are always people involved at some level in managing such systems, so the possibility of a mundane human error is never too far away. All in all, every single outage can be considered as a potential case study of cascading failures and their layered harmony. An Intellectual Journey Another very deeply satisfying aspect of an outage/postmortem is the intellectual journey from “how did that happen?” to “that happened exactly because X, Y, Z.” Even at the system level, it’s necessary to disentangle the various interactions and hidden dependencies, discover unstated assumptions and dig through multiple layers of “why’s” to make sense of it all. 
When properly done, root cause analysis for outages of even moderately complex systems demands a certain level of tenacity and perseverance, and the fruits of such labor can be a worthwhile pursuit in and of themselves. There is a certain joy in putting the pieces of a puzzle together, and outages/postmortems present us exactly with that opportunity. Besides the above intangibles, outages and their subsequent postmortems have other very tangible benefits. They not only help develop operational knowledge, but also provide a focused path (within the scope of the outage) to learn about the nitty-gritty details of the system. At the managerial level too, they can act as road signs for course correction and help get the priorities right. Of course, none of the above is an excuse to have more outages and postmortems! We should always strive to build reliable, fault-tolerant systems to minimize such incidents, but when they do happen, we should take them in stride, and try to appreciate the complexity of the software systems all around us. Love thy outages. Love thy postmortems. Stay in Touch Want to stay in touch with Sumo Logic? Follow & connect with us on Twitter, LinkedIn and Facebook for updates. Visit our website to learn more about our machine data analytics platform and be sure to check back on the blog for more posts like this one if you enjoyed what you read!

Blog

11 New Google Cloud Platform (GCP) Apps for Continued Multi-Cloud Support

Blog

Sumo Smash Bros: What Creating a Video Game Taught Us About the Power of Data

As a longtime DevOps engineer with a passion for gaming and creating things, I truly believe that in order to present data correctly, you must first understand the utility of a tool without getting hung up on the output (data). To understand why this matters, I’ll use Sumo Logic’s machine data analytics platform as an example. With a better understanding of how our platform works, you’ll be able to turn seemingly disparate data into valuable security, operational or business insights that directly service your organization’s specific needs and goals. The Beginning of Something Great Last year, I was sitting with some colleagues at lunch and suggested that it would be super cool to have a video game at our trade show booth. We all agreed it was a great idea, and what started as a personal at-home project turned into a journey to extract game data, and present it in a compelling and instructive manner. The following is how this simple idea unfolded over time and what we learned as a result. Super Smash Bros Meets Sumo Logic The overall idea was solid, however, after looking at emulators and doing hours of research (outside of office hours), I concluded that it was a lot harder to extract data from an old school arcade game even working with an emulator. My only path forward would be to use a cheat engine to read memory addresses, and all the work would be done in Assembly, which is a low-level ‘80s era programming language. It’s so retro that the documentation was nearly impossible to find and I again found myself at another impasse. Another colleague of mine who is a board game aficionado, suggested I find an open source game online that I could add code to myself in order to extract data. Before I started my search, I set some parameters. What I was looking for was a game that had the following characteristics. It should be multiplayer It would ideally produce different types of data It would manifest multiple win conditions: game and social Enter Super Smash Bros (SSB), which met all of the above criteria. If you are not familiar with this game, it’s originally owned/produced by Nintendo and the appeal is that you and up to three other players battle each other in “King of the Hill” until there is an “official” game winner. It helps to damage your opponent first before throwing them off the hill. The game win condition is whoever has the most number of lives when the game ends, wins. And the game ends when either time runs out or only one player has lives left. However, this leaves holes for friends to argue who actually won. If you’ve ever played this game (which is one of strategy), there is a second kind of condition — a social win condition. You can “officially” win by the game rules but there’s context attached to “how” you won — a social win. Creating Sumo Smash Bros I found an open source clone of Super Smash Bros written in Javascript which runs entirely in a web browser. It was perfect. Javascript is a simple language and with the help of a friend to get started, we made it so we could group log messages that would go to the console where a developer could access it and then send it directly into the Sumo Logic platform. PRO TIP: If you want game controllers for an online video game like Super Smash Bros, use Xbox controllers not Nintendo! 
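The game itself logs events from JavaScript in the browser, but the plumbing for shipping any such event to Sumo Logic can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual Sumo Smash Bros code: the collector URL, source token and event fields below are hypothetical placeholders for a hosted HTTP source.

```python
import json
import time
import requests  # pip install requests

# Hypothetical hosted HTTP source URL; replace <your-source-token> with your own collector token.
SUMO_HTTP_SOURCE = "https://collectors.sumologic.com/receiver/v1/http/<your-source-token>"

def send_game_event(event_type, player, **fields):
    """Ship one structured game event (e.g. a hit or an animation change) as a JSON log line."""
    event = {
        "timestamp": int(time.time() * 1000),
        "event_type": event_type,   # e.g. "hit", "animation_change", "lives_update"
        "player": player,
        **fields,
    }
    resp = requests.post(
        SUMO_HTTP_SOURCE,
        data=json.dumps(event),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()

# Example: player 2 hits player 4 for 12% damage.
send_game_event("hit", player="player2", target="player4", damage=12)
```

Once events like these land in the platform, the "social win" parameters described next are just queries over the stream.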
We would record certain actions in the code, such as: when a player’s animation changed, what move a player performed, who hit whom, when and for how much, and what each player’s lives were. For example, an animation change would be whenever a player was punched by an opponent. Now by the game’s standards, very limited data determines who is the “official” winner based on the predetermined rules, but with this stream of data now flowing into Sumo Logic, we could also identify the contextual “social win” and determine if and how it differed from the official result. Here’s an example of a “social” win condition: imagine there’s a group of four playing the game, and one of the players (player 1) hangs back avoiding brawls until two of the three opponents are out of lives. Player 1 jumps into action, gets a lucky punch on the other remaining player who has thus far dominated (and who is really damaged) and throws him from the ring to take the “official” game win. Testing the Theory When we actually played, the data showed exactly what I had predicted. First, some quick background on my opponents: Jason E. (AKA Jiggles) — He admits to having spent a good portion of his youth playing SSB, and he may have actually played in tournaments. Michael H. (AKA Killer) — He’s my partner in crime. We’re constantly cooking up crazy ideas to try out, both in and outside of work. He also had plenty of experience with the game. Mikhail M. (AKA MM$$DOLLAB) — He has always been a big talker. He too played a lot, and talked a big talk. Originally I had intended for us to pseudo-choreograph the game to get the data to come out “how I wanted” in order to show that while the game awarded a “winner” title to one player, the “actual winner” would be awarded by the friends to the player who “did the most damage to others” or some other parameter. It only took about three nanoseconds before the plan was out the window and we were fighting for the top. Our colleague Jason got the clear technical game win. We had recorded the game and had the additional streams of data, and when the dust had settled, a very different story emerged. For instance, Jason came in third place for our social win parameter of “damage dealt.” Watching the recording, it’s clear that Jason’s strategy was to avoid fighting until the end. When brawls happened, he was actively jumping around but rarely engaged with the other players. He instead waited for singled-out attacks. Smart, right? Maybe. We did give him the “game win,” however, based on the “damage dealt” social win rule, the order was: Michael, myself, then Jason, and Mikhail. Watch what happened for yourself: What’s the Bigger Picture? While this was a fun experiment, there’s also an important takeaway. At Sumo Logic, we ingest more than 100 terabytes of data each day — that’s the equivalent of data from about 200 Libraries of Congress per second. That data comes from all over — it’s a mix of log, event, metrics and security data coming not just from within an organization’s applications and infrastructure, but also from third-party vendors. When you have more information, you can see trends and patterns, make inferences, and make technical and business decisions — you gain an entirely new level of understanding beyond the 1s and 0s staring back at you on a computer screen. People also appreciate the data for different reasons. For example, engineers only care that the website they served you is the exact page you clicked on. They don’t care if you searched for hats or dog food or sunscreen.
But marketers care, a lot. Marketers care about your buying decisions and patterns and they use that to inform strong, effective digital marketing campaigns to serve you relevant content. At Sumo Logic, we don’t want our customers or prospects to get hung up on the data, we want them to look past that to first understand what our tool does, to understand how it can help them get the specific data they need to solve a unique problem or use case. “In the words of Sherlock Holmes, it’s a capital mistake to theorize before one has data.” — Kenneth Barry, Sumo Logic The types of data you are ingesting and analyzing only matters if you first understand your end goal, and have the proper tools in place — a means to an end. From there, you can extract and make sense of the data in ways that matter to your business, and each use case varies from one customer to another. Data powers our modern businesses and at Sumo Logic, we empower those who use this data. And we make sure to have fun along the way! Bonus: Behind the Scenes Video Q&A with Kenneth Additional Resources Visit our website to learn more about the power of machine data analytics and to download Sumo Logic for free to try it out for yourself Read our 2018 State of Modern Applications in the Cloud report Register to attend Illuminate, our annual user conference taking place Sept. 12-13, 2018 in Burlingame, Calif.

Blog

A Primer on Building a Monitoring Strategy for Amazon RDS

In a previous blog post, we talked about Amazon Relational Database Service (RDS). RDS is one of the most popular cloud-based database services today, extensively used by Amazon Web Services (AWS) customers for its ease of use, cost-effectiveness and simple administration. Although, as a managed service, RDS doesn’t require database administrators (DBAs) to do many of the day-to-day tasks, it still needs to be monitored for performance and availability. That’s because Amazon doesn’t auto-tune any database performance — this is a shared responsibility of the customer. That’s why there should be a monitoring strategy and processes in place for DBAs and operations teams to keep an eye on their RDS fleet. In this blog post, we will talk about an overall best-practice approach for doing this.

Why Database Monitoring

Keeping a database monitoring regimen in place, no matter how simple, can help address potential issues proactively before they become incidents and cost additional time and money. Most AWS infrastructure teams typically have decent monitoring in place for different types of resources like EC2, ELB, Auto Scaling Groups, logs, etc. Database monitoring often comes at a later stage or is ignored altogether. With RDS, it’s also easy to overlook due to the low-administration nature of the service. The DBA or the infrastructure managers should therefore invest some time in formulating and implementing a database monitoring policy. Please note that designing an overall monitoring strategy is an involved process and is not just about defining database counters to monitor. It also includes areas like defining Service Level Agreements, classifying incident types (Critical, Serious, Moderate, Low, etc.), creating a RACI (Responsible, Accountable, Consulted, Informed) matrix and defining escalation paths. A detailed discussion of all these topics is beyond the scope of this article, so we will concentrate on the technical part only.

What to Monitor

Database monitoring, or RDS monitoring in this case, is not about monitoring only database performance. A monitoring strategy should include the following broad categories and their components:

Availability – Is the RDS instance or cluster endpoint accessible from client tools? Is any instance stopping, starting, failing over or being deleted? Is there a failover of multi-AZ instances?
Recoverability – Is the RDS instance being backed up, both automatically and manually? Are individual databases being backed up successfully?
Health and Performance – What are the CPU, memory and disk space currently in use? What’s the query latency? What’s the disk read/write latency? What’s the disk queue length? How many database connections are active? Are there any blocking and waiting tasks? Are there any errors or warnings reported in database log files? Are these related to application queries, or to non-optimal configuration values? Are any of the scheduled jobs failing?
Manageability – Are there any changes in the RDS instances’ tags, security groups, instance properties, or parameter and option groups? Who made those changes and when?
Security – Which users are connecting to the database instance? What queries are they running?
Cost – How much is each RDS instance costing every month?

While many of these things can be monitored directly in AWS, Sumo Logic can greatly help with understanding all of the logs and metrics that RDS produces. In this article, we will talk about what AWS offers for monitoring RDS.
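Before getting into CloudWatch, here is a minimal sketch (not from the original post) of the kind of scripted spot-check the availability and recoverability categories above imply, using boto3. The region and the checks chosen are illustrative assumptions, not prescribed thresholds.

```python
import boto3

# Minimal availability/recoverability spot-check for an RDS fleet (illustrative sketch).
rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder

for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    status = db["DBInstanceStatus"]        # e.g. "available", "stopped", "failed"
    backups = db["BackupRetentionPeriod"]  # 0 means automated backups are disabled
    multi_az = db["MultiAZ"]

    if status != "available":
        print(f"[availability] {name} is '{status}'")
    if backups == 0:
        print(f"[recoverability] {name} has automated backups disabled")
    if not multi_az:
        print(f"[availability] {name} is not Multi-AZ")
```

A check like this can run on a schedule and feed its output into the same log pipeline as the rest of your monitoring.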
As we go along, we will point out where we think Sumo Logic can make the work easier.

Monitoring Amazon CloudWatch

You can start monitoring RDS using metrics from Amazon CloudWatch. Amazon RDS, like any other AWS service, exposes a number of metrics which are available through CloudWatch. There are three ways to access these metrics: from the AWS Console, using the AWS CLI, or using REST APIs. The image below shows some of these metrics from the RDS console. Amazon CloudWatch shows two types of RDS metrics: built-in metrics and enhanced monitoring metrics.

Built-in Metrics

These metrics are available from any RDS instance. They are collected from the hypervisor of the host running the RDS virtual machine. Some of the metrics may not be available for all database engines, but the important ones are common. It is recommended that the following RDS metrics are monitored from CloudWatch:

CPUUtilization – % CPU load in the RDS instance. A consistently high value means one or more processes are waiting for CPU time while one or more processes are blocking it.
DiskQueueDepth – The number of input and output requests waiting for the disk resource. A consistently high value means disk resource contention, perhaps due to locking, long-running update queries, etc.
DatabaseConnections – The number of database connections against the RDS instance. A sudden spike should be investigated immediately. It may not mean a DDoS attack, but a possible issue with the application generating multiple connections per request.
FreeableMemory – The amount of RAM available in the RDS instance, expressed in bytes. A very low value means the instance is under memory pressure.
FreeStorageSpace – Amount of disk storage available in bytes. A small value means disk space is running out.
ReadIOPS – The average number of disk read operations per second. Should be monitored for sudden spikes, which can mean runaway queries.
WriteIOPS – The average number of disk write operations per second. Should be monitored for sudden spikes, which can mean a very large data modification.
ReadLatency – The average time in milliseconds to perform a read operation from the disk. A higher value may mean a slow disk operation, probably caused by locking.
WriteLatency – The average time in milliseconds to perform a write operation to disk. A higher value may mean disk contention.
ReplicaLag – How far in time the read replica of a MySQL, MariaDB or PostgreSQL instance is lagging behind its master. A high lag value can mean read operations from the replica are not serving current data.

The Amazon RDS Aurora engine also exposes some extra counters which are really useful for troubleshooting. At the time of writing, Aurora supports MySQL and PostgreSQL only. We recommend monitoring these counters:

DDLLatency – The average time in milliseconds to complete Data Definition Language (DDL) commands like CREATE, DROP, ALTER, etc. A high value means the database is having performance issues running DDL commands. This can be due to exclusive locks on objects.
SelectLatency – The average time in milliseconds to complete SELECT queries. A high value may mean disk contention, poorly written queries, missing indexes, etc.
InsertLatency – The average time in milliseconds to complete INSERT commands. A high value may mean locking or a poorly written INSERT command.
DeleteLatency – The average time in milliseconds to complete DELETE commands. A high value may mean locking or a poorly written DELETE command.
UpdateLatency – The average time in milliseconds to complete UPDATE commands. A high value may mean locking or a poorly written UPDATE command.
Deadlocks – The average number of deadlocks happening per second in the database. More than 0 should be a concern – it means the application queries are running in such a way that they are blocking each other frequently.
BufferCacheHitRatio – The percentage of queries that can be served by data already stored in memory. It should be a high value, near 100, meaning queries don’t have to access the disk to fetch data.
Queries – The average number of queries executed per second. This should have a steady, average value. Any sudden spike or dip should be investigated.

You can use the AWS documentation for a complete list of built-in RDS metrics.

Enhanced Monitoring Metrics

RDS also exposes “enhanced monitoring metrics.” These are collected by agents running on the RDS instances’ operating system. Enhanced monitoring can be enabled when an instance is first created, or it can be enabled later. It is recommended to enable it because it offers a better view of the database engine. Like built-in metrics, enhanced metrics are available from the RDS console. Unlike built-in metrics though, enhanced metrics are not readily accessible from the CloudWatch Metrics console. When enhanced monitoring is enabled, CloudWatch creates a log group called RDSOSMetrics in CloudWatch Logs. Under this log group, there will be a log stream for each RDS instance with enhanced monitoring. Each log stream will contain a series of JSON documents as records. Each JSON document will show a series of metrics collected at regular intervals (by default every minute). Here is a sample excerpt from one such JSON document:

{
  "engine": "Aurora",
  "instanceID": "prodataskills-mariadb",
  "instanceResourceID": "db-W4JYUYWNNIV7T2NDKTV6WJSIXU",
  "timestamp": "2018-06-23T11:50:27Z",
  "version": 1,
  "uptime": "2 days, 1:31:19",
  "numVCPUs": 2,
  "cpuUtilization": {
    "guest": 0,
    "irq": 0.01,
    "system": 1.72,
    "wait": 0.27,
    "idle": 95.88,
    "user": 1.91,
    "total": 4.11,
    "steal": 0.2,
    "nice": 0
  },
  ...

It’s possible to create custom CloudWatch metrics from these logs and view those metrics from the CloudWatch console. This will require some extra work. However, both built-in and enhanced metrics can be streamed to Sumo Logic, from where you can build your own charts and alarms. Regardless of platform, it is recommended to monitor the enhanced metrics for a more complete view of the RDS database engine. The following counters should be monitored for Amazon Aurora, MySQL, MariaDB, PostgreSQL, or Oracle:

cpuUtilization (user) – % of CPU used by user processes.
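To make the two metric families above concrete, here is an illustrative boto3 sketch (not code from the post) that reads one built-in metric from CloudWatch and then the newest enhanced-monitoring record from the RDSOSMetrics log group. The region and instance identifier are placeholders; the log stream name reuses the resource ID shown in the excerpt above.

```python
from datetime import datetime, timedelta
import json
import boto3

REGION = "us-east-1"                            # placeholder
DB_INSTANCE = "prodataskills-mariadb"           # placeholder, matches the excerpt above
RESOURCE_ID = "db-W4JYUYWNNIV7T2NDKTV6WJSIXU"   # enhanced-monitoring log stream name

# 1) Built-in metric: average CPUUtilization for the last hour, in 5-minute datapoints.
cloudwatch = boto3.client("cloudwatch", region_name=REGION)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(f'{point["Timestamp"]:%H:%M} CPUUtilization avg={point["Average"]:.1f}%')

# 2) Enhanced metric: newest JSON record from the RDSOSMetrics log group.
logs = boto3.client("logs", region_name=REGION)
events = logs.get_log_events(
    logGroupName="RDSOSMetrics",
    logStreamName=RESOURCE_ID,
    limit=1,
    startFromHead=False,
)
if events["events"]:
    sample = json.loads(events["events"][0]["message"])  # each record is one JSON document
    cpu = sample["cpuUtilization"]
    print(sample["timestamp"], f'user={cpu["user"]}% system={cpu["system"]}% idle={cpu["idle"]}%')
```

The same two code paths are what a collector uses when streaming these metrics onward to an analytics platform.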

AWS

July 17, 2018

Blog

What is Blockchain, Anyway? And What Are the Biggest Use Cases?

Everyone’s talking about blockchain these days. In fact, there is so much hype about blockchains — and there are so many grand ideas related to them — that it’s hard not to wonder whether everyone who is excited about blockchains understands what a blockchain actually is. If, amidst all this blockchain hype, you’re asking yourself “what is blockchain, anyway?” then this article is for you. It defines what blockchain is and explains what it can and can’t do. Blockchain Is a Database Architecture In the most basic sense, blockchain is a particular database architecture. In other words, like any other type of database architecture (relational databases, NoSQL and the like), a blockchain is a way to structure and store digital information. (The caveat to note here is that some blockchains now make it possible to distribute compute resources in addition to data. For more on that, see below.) What Makes Blockchain Special? If blockchain is just another type of database, why are people so excited about it? The reason is that a blockchain has special features that other types of database architectures lack. They include: Maximum data distribution. On a blockchain, data is distributed across hundreds of thousands of nodes. While other types of databases are sometimes deployed using clusters of multiple servers, this is not a strict requirement. A blockchain by definition involves a widely distributed network of nodes for hosting data. Decentralization. Each of the nodes on a blockchain is controlled by a separate party. As a result, the blockchain database as a whole is decentralized. No single person or group controls it, and no single group or person can modify it. Instead, changes to the data require network consensus. Immutability. In most cases, the protocols that define how you can read and write data to a blockchain make it impossible to erase or modify data once it has been written. As a result, data stored on a blockchain is immutable. You can add data, but you can’t change what already exists. (We should note that while data immutability is a feature of the major blockchains that have been created to date, it’s not strictly the case that blockchain data is always immutable.) Beyond Data As blockchains have evolved over the past few years, some blockchain architectures have grown to include more than a way to distribute data across a decentralized network. They also make it possible to share compute resources. The Ethereum blockchain does this, for example, although Bitcoin—the first and best-known blockchain—was designed only for recording data, not sharing compute resources. If your blockchain provides access to compute resources as well as data, it becomes possible to execute code directly on the blockchain. In that case, the blockchain starts to look more like a decentralized computer than just a decentralized database. Blockchains and Smart Contracts Another buzzword that comes up frequently when discussing what defines a blockchain is a smart contract. A smart contract is code that causes a specific action to happen automatically when a certain condition is met. The code is executed on the blockchain, and the results are recorded there. This may not sound very innovative, but there are some key benefits and use cases. Any application could incorporate code that makes a certain outcome conditional upon a certain circumstance. If-this-then-that code stanzas are not really a big deal. 
What makes a smart contract different from a typical software conditional statement, however, is that because the smart contract is executed on a decentralized network of computers, no one can modify its outcomes. This feature differentiates smart contracts from conditional statements in traditional applications, where the application is controlled by a single, central authority, which has the power to modify it. Smart contracts are useful for governing things like payment transactions. If you want to ensure that a seller does not receive payment for an item until the buyer receives the item, you could write a smart contract to make that happen automatically, without relying on third-party oversight. Limitations of Blockchains By enabling complete data decentralization and smart contracts, blockchains make it possible to do a lot of interesting things that you could not do with traditional infrastructure. However, it’s important to note that blockchains are not magic. Most blockchains currently have several notable limitations. Transactions are not instantaneous. Bitcoin transactions take surprisingly long to complete, for example. Access control is complicated. On most blockchains, all data is publicly accessible. There are ways to limit access control, but they are complex. In general, a blockchain is not a good solution if you require sophisticated access control for your data. Security. While blockchain is considered a secure place for transactions and storing/sending sensitive data and information, there have been a few blockchain-related security breaches. Moving your data to a blockchain does provide an inherent layer of protection because of the decentralization and encryption features, however, like most things, it does not guarantee that it won’t be hacked or exploited. Additional Resources Watch the latest SnapSecChat videos to hear what our CSO, George Gerchow, has to say about data privacy and the demand for security as a service. Read a blog on new Sumo Logic research that reveals why a new approach to security in the cloud is required for today’s modern businesses. Learn what three security dragons organizations must slay to achieve threat discovery and investigation in the cloud.
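To ground the "append-only, tamper-evident database" idea in something concrete, here is a toy sketch (not a real blockchain client, and deliberately ignoring distribution, consensus and smart contracts) showing how hash-linking makes earlier records effectively immutable: changing any block breaks every hash that follows it.

```python
import hashlib
import json

def block_hash(body):
    # Hash a block's contents, which include the previous block's hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # the link to the previous block is broken
    return True

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))               # True
chain[0]["data"]["amount"] = 500   # tamper with history
print(verify(chain))               # False: the tampering is detectable
```

On a real blockchain, the same tamper-evidence is combined with wide replication and consensus, which is what makes rewriting history impractical rather than merely detectable.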

Blog

Comparing Europe’s Public Cloud Growth to the Global Tech Landscape

Blog

React Tables: How to Evaluate Options and Integrate a Table into Your Project

Blog

Thoughts from Gartner’s 2018 Security & Risk Management Summit

Blog

Deadline to Update PCI SSL & TLS Looms, Are You Ready?

Quick History Lesson

Early internet data communications were enabled through the use of a protocol called HyperText Transfer Protocol (HTTP) to transfer data between nodes on the internet. HTTP essentially establishes the "request-response" rules to be used between a "client" (i.e. web browser) and "server" (computer hosting a website) throughout the session. While the use of HTTP grew along with internet adoption, its lack of security protocols left internet communications vulnerable to attacks from malicious actors.

In the mid-nineties, Secure Sockets Layer (SSL) was developed to close this gap. SSL is a "cryptographic protocol" standard established to enable the privacy and integrity of the bidirectional data being transported via HTTP. You may be familiar with HTTPS, or HyperText Transfer Protocol over SSL (a.k.a. HTTP Secure). Transport Layer Security (TLS) version 1.0 (v1.0) was developed in 1999 as an enhancement to the then-current SSL v3.0 protocol standard. TLS standards matured over time with TLS v1.1 [2006] and TLS v1.2 [2008].

Early Security Flaws Found in HTTPS

While both SSL and TLS protocols remained effective for some time, in October of 2014 Google's security team discovered a vulnerability in SSL version 3.0. Skilled hackers were able to use a technique called Padding Oracle On Downgraded Legacy Encryption, widely referred to as the "POODLE" exploit, to bypass SSL security and decrypt sensitive (HTTPS) information including secret session cookies. By doing this, hackers could then hijack user accounts. In December 2014, the early versions of TLS were also found to be vulnerable to a new variant of the POODLE attack that enabled hackers to downgrade the protocol version to one that was more vulnerable.

POODLE Attacks Spur Changes to PCI Standards

So what do POODLE attacks have to do with Payment Card Industry Data Security Standards (PCI DSS) and compliance? PCI DSS Requirement 4.1 mandates the use of "strong cryptography and security protocols to safeguard sensitive cardholder data during transmission," and these SSL vulnerabilities (and similar variants) meant sensitive data associated with payment card transactions was also open to these risks. In April of 2015 the PCI Security Standards Council (SSC) issued a revised set of industry standards, PCI DSS v3.1, which stated "SSL has been removed as an example of strong cryptography in the PCI DSS, and can no longer be used as a security control after June 30, 2016."

This deadline applied to both organizations and service providers to remedy this situation in their environments by migrating from SSL to TLS v1.1 or higher. The Council also included an information supplement, "Migrating from SSL and Early TLS," as a guide. However, due to early industry feedback and pushback, in December of 2015 the PCI SSC issued a bulletin extending the deadline to June 30, 2018 for both service providers and end users to migrate to higher, later versions of TLS standards. In April of 2016 the PCI SSC issued PCI DSS v3.2 to formalize the deadline extension and added Appendix A2 to outline the requirements for conforming with these standards.

Sumo Logic Is Ready, Are You?

The Sumo Logic platform was built with a security-by-design approach and we take security and compliance very seriously.
As a company, we continue to lead the market in securing our own environment and providing the tools to help enable our customers to do the same. Sumo Logic complied with the PCI DSS 3.2 service provider level one standards in accordance with the original deadline (June 30, 2016), and received validation from a third-party expert, Coalfire.

If your organization is still using these legacy protocols, it is important to take steps immediately and migrate to the newest versions to ensure compliance by the approaching June 30, 2018 deadline. If you are unsure whether these vulnerable protocols are still in use in your PCI environment, don't wait until it's too late to take action (a quick single-endpoint spot check is sketched at the end of this post). If you don't have the resources to perform your own audit, the PCI Standards Council has provided a list of "Qualified Security Assessors" that can help you in those efforts.

What About Sumo Logic Customers?

If you are a current Sumo Logic customer, in addition to ensuring we comply with PCI DSS standards in our own environment, we continually make every effort to inform you if one or more of your collectors are eligible for an upgrade. If you have any collectors that might still be present in your PCI DSS environment that do not meet the new PCI DSS standards, you would have been notified through the collectors page in our UI (see image below). It's worthwhile to note that TLS v1.1 is still considered PCI compliant; however, at Sumo Logic we are leapfrogging the PCI requirements and, moving forward, we will only be supporting TLS v1.2. If needed, you can follow these instructions to upgrade (or downgrade) as required.

Sumo Logic Support for PCI DSS Compliance

Sumo Logic provides a ton of information, tools and pre-built dashboards to our customers to help with managing PCI DSS compliance standards in many cloud and non-cloud environments. A collection of these resources can be found on our PCI Resources page. If you are a cloud user, and are required to manage PCI DSS elements in that type of environment, note that in April 2018 the PCI SSC Cloud Special Interest Group issued an updated version 3.0 of their previous version 2.0 guidelines, last released in February 2013. Be looking for another related blog to provide a deeper dive on this subject. PCI SSC Cloud Computing Guidelines version 3.0 include the following changes:

Updated guidance on roles and responsibilities, scoping cloud environments, and PCI DSS compliance challenges.
Expanded guidance on incident response and forensic investigation.
New guidance on vulnerability management, as well as additional technical security considerations on topics such as Software Defined Networks (SDN), containers, fog computing and the internet of things (IoT).
Standardized terminology throughout the document.
Updated references to PCI SSC and external resources.

Additional Resources

For more information on the compliance standards Sumo Logic supports, visit our self-service portal. You'll need a Sumo Logic account to access the portal.
Visit our DocHub page for specifics on how Sumo Logic helps support our customers' PCI compliance needs.
Sign up for Sumo Logic for free to learn more.
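As promised above, here is a quick way to spot-check whether an endpoint in your environment still negotiates the legacy protocols discussed in this post. This is an illustrative sketch, not an official audit tool: the hostnames are placeholders, the result depends on what your local OpenSSL build is willing to offer, and it is no substitute for a QSA-led assessment.

```python
import socket
import ssl

def max_legacy_handshake(host, port=443):
    """Return the negotiated protocol if the server accepts TLS 1.1 or lower, else None."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE                 # we only care about the protocol version here
    ctx.maximum_version = ssl.TLSVersion.TLSv1_1    # refuse to offer TLS 1.2 or newer
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()                # e.g. "TLSv1.1" or "TLSv1"
    except (ssl.SSLError, OSError):
        return None                                 # handshake refused: no legacy TLS accepted

for endpoint in ["collector.example.com", "api.example.com"]:   # placeholder hostnames
    legacy = max_legacy_handshake(endpoint)
    status = f"still accepts {legacy}" if legacy else "rejects TLS 1.1 and below"
    print(f"{endpoint}: {status}")
```

An endpoint that "still accepts" a legacy version is a candidate for the migration work described above.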

Blog

DevOps Redemption: Don't Let Outdated Data Analytics Tools Slow You Down

Blog

SnapSecChat: The Demand for Security as a Service

Blog

Log Management and Analytics for the AWS ELB Classic Service

Quick Refresher

Earlier this year, we showed you how to monitor Amazon Web Services Elastic Load Balancer (AWS ELB) with CloudWatch. This piece is a follow-up to that, and will focus on Classic Load Balancers. Classic Load Balancers provide basic load balancing across multiple Amazon EC2 instances and operate at both the request level and connection level. Classic Load Balancers are intended for applications that were built within the EC2-Classic network. AWS provides the ability to monitor your ELB configuration with detailed logs of all the requests made to your load balancers. There is a wealth of data in the logs generated by ELB, and it is extremely simple to set up.

How to Get Started: Setting up AWS ELB Logs

Logging is not enabled in AWS ELB by default. It is important to set up logging when you start using the service so you don't miss any important details!

Step 1: Create an S3 Bucket and Enable ELB Logging

Note: If you have more than one AWS account (such as ops, dev, and so on) or multiple regions that generate Elastic Load Balancing data, you'll probably need to configure each of these separately. Here are the key steps you need to follow:

Create an S3 bucket to store the logs (Note: Want to learn more about S3? Look no further (link))
Allow AWS ELB access to the S3 bucket
Enable AWS ELB logging in the AWS Console
Verify that it is working

Step 2: Allow Access to External Log Management Tools

To add AWS ELB logs to your log management strategy, you need to give access to your log management tool! The easiest way to do that is by creating a special user and policy.

Create a user in AWS Identity and Access Management (IAM) with Programmatic Access. For more information about this, refer to the appropriate section of the AWS User Guide. Note: Make sure to store the Access Key ID and Secret Access Key credentials in a secure location. You will need to provide these later to give access to your tools!

Create a Custom Policy for the new IAM user. We recommend you use the following JSON policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucketVersions",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::your_bucketname/*",
        "arn:aws:s3:::your_bucketname"
      ]
    }
  ]
}

Note: All of the Action parameters shown above are required. Replace the "your_bucketname" placeholders in the Resource section of the JSON policy with your actual S3 bucket name. Refer to the Access Policies section of the AWS User Guide for more info.

What do the Logs look like?

ELB logs are stored as .log files in the S3 buckets you specify when you enable logging. The file names of the access logs use the following format:

bucket[/prefix]/AWSLogs/aws-account-id/elasticloadbalancing/region/yyyy/mm/dd/aws-account-id_elasticloadbalancing_region_load-balancer-name_end-time_ip-address_random-string.log

bucket – The name of the S3 bucket.
prefix – The prefix (logical hierarchy) in the bucket. If you don't specify a prefix, the logs are placed at the root level of the bucket.
aws-account-id – The AWS account ID of the owner.
region – The region for your load balancer and S3 bucket.
yyyy/mm/dd – The date that the log was delivered.
load-balancer-name – The name of the load balancer.
end-time – The date and time that the logging interval ended. For example, an end time of 20140215T2340Z contains entries for requests made between 23:35 and 23:40 if the publishing interval is 5 minutes.
ip-address – The IP address of the load balancer node that handled the request.
For an internal load balancer, this is a private IP address.
random-string – A system-generated random string.

The following is an example log file name:

s3://my-loadbalancer-logs/my-app/AWSLogs/123456789012/elasticloadbalancing/us-west-2/2014/02/15/123456789012_elasticloadbalancing_us-west-2_my-loadbalancer_20140215T2340Z_172.160.001.192_20sg8hgm.log

Syntax

Each log entry contains the details of a single request made to the load balancer. All fields in the log entry are delimited by spaces. Each entry in the log file has the following format:

timestamp elb client:port backend:port request_processing_time backend_processing_time response_processing_time elb_status_code backend_status_code received_bytes sent_bytes "request" "user_agent" ssl_cipher ssl_protocol

The following table explains the different fields in the log file. Note: ELB can process HTTP requests and TCP requests, and the differences are noted below:

timestamp – The time when the load balancer received the request from the client, in ISO 8601 format.
elb – The name of the load balancer.
client:port – The IP address and port of the requesting client.
backend:port – The IP address and port of the registered instance that processed this request.
request_processing_time – [HTTP listener] The total time elapsed, in seconds, from the time the load balancer received the request until the time it sent it to a registered instance.
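Once the logs are flowing into S3, each line follows the space-delimited format above, with the request and user agent quoted. Here is a small parsing sketch (illustrative, not part of the original post) that uses Python's shlex module to keep those quoted fields intact; the sample line is a made-up entry in the documented format.

```python
import shlex

FIELDS = [
    "timestamp", "elb", "client_port", "backend_port",
    "request_processing_time", "backend_processing_time", "response_processing_time",
    "elb_status_code", "backend_status_code", "received_bytes", "sent_bytes",
    "request", "user_agent", "ssl_cipher", "ssl_protocol",
]

def parse_elb_line(line):
    """Parse one Classic ELB access log entry into a dict of named fields."""
    values = shlex.split(line)      # shlex respects the quoted request and user_agent fields
    return dict(zip(FIELDS, values))

# Made-up sample entry following the format described above.
sample = ('2014-02-15T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 '
          '0.000073 0.001048 0.000057 200 200 0 29 '
          '"GET http://www.example.com:80/ HTTP/1.1" "curl/7.38.0" - -')

entry = parse_elb_line(sample)
print(entry["elb_status_code"], entry["request"])
```

A parser like this is the first step whether you are loading the logs into your own scripts or shipping them to a log analytics service.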

AWS

June 19, 2018

Blog

Transform Graphite Data into Metadata-Rich Metrics using Sumo Logic’s Metrics Rules