Frank Reno

Frank Reno is a senior technical product manager at Sumo Logic, where he focuses on all things containers, orchestration and open source. He works with customers and product teams to design and build solutions that leverage technologies like Docker and Kubernetes. He is also an active contributor to Sumo Logic's open source projects and to the broader open source community.

Posts by Frank Reno

Blog

Helping solve the Kubernetes challenge: Sumo Logic at the helm

Blog

Understanding the Impact of the Kubernetes Security Flaw and Why DevSecOps is the Answer

Blog

How to Use the New Sumo Logic Terraform Provider for Hosted Collectors

Over the years, automation has become a key component of managing the entire software release lifecycle. Automation helps teams get code from development into the hands of users faster and more reliably. While this principle is critical to your source code and continuous integration and delivery processes, it is equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure has emerged to prevent environment drift and ensure your infrastructure is provisioned consistently and reliably.

What Is Infrastructure as Code?

Infrastructure as code (IaC) is a practice in which infrastructure is defined using a declarative model and version controlled right alongside your source code. The desired infrastructure is declared in a higher-level descriptive language, and every aspect of it, including servers, networks, firewalls and load balancers, can be expressed in this model. The infrastructure is then provisioned from the defined model automatically, with no manual intervention: a tool interacts with the relevant APIs to spin up your infrastructure as needed. IaC ensures that your infrastructure can be created and updated reliably, safely and consistently anytime you need. Practicing IaC without the proper tools can be challenging, because it requires a lot of time-consuming scripting. Luckily, several tools exist to help DevOps teams practice IaC, including one well-known and widely used tool, Terraform.

Why Terraform?

Terraform is an open source tool developed by HashiCorp to address the needs of IaC. You can use Terraform to create, manage and update infrastructure resources such as physical machines, virtual machines (VMs), load balancers and firewalls; it provides ways to represent almost any type of infrastructure. In Terraform, you use a "provider" to define these resources. A provider understands the APIs and contracts required to create, manage and update them. Providers exist for IaaS offerings such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure and OpenStack, as well as for SaaS offerings like Terraform Enterprise and Cloudflare. Each provider defines a declarative model for the infrastructure resources it offers.

Introducing the Sumo Logic Terraform Provider

That's why today we're happy to announce that we have released a Terraform provider for Sumo Logic, just in time for HashiConf '18 this week in San Francisco. This provider lets you treat your Sumo Logic Hosted Collectors and Sources as code, ensuring consistency across your cloud infrastructure monitoring for AWS, GCP, Azure and other cloud environments supported by Terraform. Using our Terraform provider alongside the provider you already use to manage your cloud infrastructure, you can easily set up Sumo Logic to monitor those resources and tie that setup directly to the provisioning of the infrastructure itself. For example, if you are using the AWS provider to create an Elastic Load Balancer (ELB) and an S3 bucket to capture its access logs, you can also define a Sumo Logic Hosted Collector with an ELB source that brings those logs from the S3 bucket straight into the Sumo Logic platform. This configuration is declared in code and version controlled, giving you a consistent and reliable way to both create and monitor your infrastructure.
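To make that example concrete, here is a minimal, illustrative sketch of what such a configuration might look like. The aws_elb and aws_s3_bucket resources come from the standard AWS provider and the sumologic_collector resource from the Sumo Logic provider; the sumologic_s3_source resource name and its arguments are placeholders for whatever S3-based source type your version of the Sumo Logic provider exposes, so check the provider documentation before using it. The S3 bucket policy that allows the ELB to write its logs is omitted for brevity.

provider "aws" {
  region = "us-west-2"   # illustrative region
}

# S3 bucket that will receive the ELB access logs
resource "aws_s3_bucket" "elb_logs" {
  bucket = "my-elb-access-logs"   # placeholder bucket name
}

# Classic ELB configured to ship its access logs to the bucket above
resource "aws_elb" "web" {
  name               = "web-elb"
  availability_zones = ["us-west-2a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  access_logs {
    bucket   = "${aws_s3_bucket.elb_logs.id}"
    interval = 60
  }
}

# Hosted Collector for the AWS infrastructure
resource "sumologic_collector" "aws_infra" {
  name     = "AWS Infrastructure"
  category = "aws/prod"
}

# Placeholder source definition: a source on the Hosted Collector that reads
# the ELB logs from the bucket. Consult the Sumo Logic provider documentation
# for the exact source types and arguments it supports.
resource "sumologic_s3_source" "elb_logs" {
  name         = "ELB Access Logs"
  collector_id = "${sumologic_collector.aws_infra.id}"
  path         = "${aws_s3_bucket.elb_logs.id}"
}

Because the source references both the collector and the bucket, Terraform creates the monitoring pieces together with the infrastructure they observe, which is exactly the link between provisioning and monitoring described above.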
Let's walk through an example of how to use the Terraform provider. In this example, we will use the Sumo Logic Terraform provider to create a Hosted Collector with an HTTP Source. I will be demonstrating this example on my Mac.

Step-by-Step Instructions

The first thing we need to do is install Terraform. I will be installing it with Homebrew, a package manager for macOS. Once Terraform is installed, we need to download the Sumo Logic Terraform provider from its GitHub releases page; in this case, I will be downloading the macOS binary. We then copy it into the Terraform plugins directory. Next, we initialize Terraform to make sure it is ready to go by running 'terraform init.'

With the provider in place and Terraform initialized, we can now define a configuration file for our Hosted Collector and HTTP Source. The following Terraform configuration will create a Hosted Collector with an HTTP Source.

provider "sumologic" {
  access_id   = "sumo-logic-access-id"
  access_key  = "sumo-logic-access-key"
  environment = "us2"
}

resource "sumologic_collector" "example_collector" {
  name     = "Hosted Collector"
  category = "my/source/category"
}

resource "sumologic_http_source" "example_http_source" {
  name         = "HTTP Source"
  category     = "my/source/category"
  collector_id = "${sumologic_collector.example_collector.id}"
}

Let's break down the file above. The provider section defines the required properties for the Sumo Logic provider. You need to create an Access ID and Access Key, which serve as the credentials for the Sumo Logic API. The environment should be set based on where your Sumo Logic account is located; in this case it is US2. (A short sketch of supplying these credentials through Terraform variables, rather than hard-coding them, follows at the end of this post.) There are then two resource sections, where we define the Hosted Collector and the HTTP Source. An HTTP Source requires the ID of the collector you wish to assign it to, so in the HTTP Source resource section you will see that we reference the ID of the Hosted Collector we created above.

With the file in place, we can now spin up our collector. To have Terraform create it for us, we simply run 'terraform apply.' Terraform will prompt us to confirm that we want to perform this action. After entering yes, you should see output indicating that our resources have been created. And now, if we go to Sumo Logic and look at our Collector page, we see our Hosted Collector and HTTP Source, just as we defined them in our configuration file!

Our Terraform provider allows you to configure your Hosted Collectors and Sources with all the same properties you would expect, and you can see all the available options on our documentation page.

More To Come

At Sumo Logic, we are developing many new APIs to give our users full control over the provisioning of all their Sumo Logic configurations. As these APIs roll out, we will update our provider to expose the additional resources. This will allow users to manage all aspects of Sumo Logic, including Collection, User Management and Content, as code, and to automate every aspect of Sumo Logic.

Additional Resources

Download our 2018 State of Modern Apps and DevSecOps in the Cloud report for trending insights into how some of the world's top cloud-savvy companies like Twitter, Airbnb, Adobe and Salesforce build and manage their modern applications.

Want to know how to integrate Sumo Logic's monitoring platform into your Terraform-scripted cloud infrastructure for EC2 resources? Read the blog.

Thinking about adopting modern microservices-based infrastructure? Check out part one and part two of our blog series on how to manage Kubernetes/Docker with Sumo Logic.
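As referenced in the walkthrough above, hard-coding the Access ID and Access Key in the provider block is convenient for a demo but not ideal for real use. Here is a minimal sketch (not part of the original walkthrough) of one standard Terraform pattern: declare the credentials as input variables and supply them at apply time.

# Declare the credentials as input variables instead of hard-coding them.
variable "sumologic_access_id" {}
variable "sumologic_access_key" {}

provider "sumologic" {
  access_id   = "${var.sumologic_access_id}"
  access_key  = "${var.sumologic_access_key}"
  environment = "us2"
}

The values can then be supplied with 'terraform apply -var-file=sumologic.tfvars' or through environment variables named TF_VAR_sumologic_access_id and TF_VAR_sumologic_access_key, which keeps the credentials out of version control.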

Blog

Monitoring Kubernetes: What to Monitor (Crash Course, Part 2)

Blog

Monitoring Kubernetes: The K8s Anatomy (Crash Course, Part 1)

Blog

Gain Full Visibility into Microservices Architectures Using Kubernetes with Sumo Logic and Amazon EKS

Blog

The DockerCon Scoop - Containers, Kubernetes and more!

Ahhh DockerCon, the annual convention for khaki pant enthusiasts. Oh, wait, not that Docker. Last week DockerCon kicked off with 5,500 developers, IT Ops engineers and enterprise professionals from across the globe. With the announcement of new features like LinuxKit and the Moby project, Docker is doubling down on creating tools that enable mass innovation while simplifying and accelerating the delivery cycle. Docker is turning a corner, becoming a mature platform for building mission-critical, enterprise-class applications. Throughout all of this, monitoring and visibility into your infrastructure remain critical to success.

Current Trends

In the world of containers, there are three trends we are seeing here at Sumo Logic. First is the rapid migration to containers, which provide great portability of code and easier deployments. Second is the need for visibility. While migrating to containers simplifies the deployment process, it is a double-edged sword: the ability to monitor your containers' health, access the container logs and monitor the cluster your containers run on is critical to maintaining the health of your application. The last trend is the desire to consolidate tools. You may have numerous tools helping you monitor your applications, and having multiple tools introduces "swivel chair" syndrome, where you have to switch back and forth between them to diagnose issues as they happen. You may start with a tool showing you metrics on CPU and memory, indicating something is going wrong, but metrics only give you part of the visibility you need; you have to turn to your logs to figure out why it is happening.

Monitoring Your Containers and Environment

Sumo Logic's Unified Logs and Metrics are here to give you full visibility into your applications. To effectively monitor your applications, you need the whole picture: metrics give you insight into what is happening, and logs give you insight into why. The union of the two lets you perform root cause analysis on production issues and quickly address the problem. Sumo Logic can quickly give you visibility into your Docker containers by leveraging our Docker Logs and Docker Stats sources, and our Docker application gives you immediate visibility into the performance of your containers across all of your Docker hosts.

Collecting Logs and Metrics From Kubernetes

At DockerCon, we saw increased use of Kubernetes and received many questions about how to collect data from Kubernetes clusters. We have created a demo environment that is fully monitored by Sumo Logic: a modern application built on a microservices architecture, running in containers on Kubernetes. So how do we collect that data? We created a Fluentd plugin to gather the logs from the nodes in the cluster and enrich them with metadata available in Kubernetes. This metadata is pulled into Sumo Logic, giving you increased ability to search and mine your data. We run the Fluentd plugin as a DaemonSet, which ensures we collect the logs from every node in our cluster. For metrics, we leverage Heapster's ability to output to a Graphite sink and use a Graphite Source on our collector to get the metrics into Sumo Logic.
Since Heapster can monitor metrics at the cluster, container and node level, we just need to run it and the collector as a Deployment to get access to all the metrics Heapster has to offer.

What's Next

What if you are not running on Kubernetes? In a previous post, we discussed multiple ways to collect logs from containers. Given the fast-paced growth in the container community, however, that guidance is due for a refresh, and we will publish a follow-up post that dives deeper into it.
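The original demo deploys the Fluentd plugin with a Kubernetes DaemonSet manifest. Purely as an illustration, and to stay in the same Terraform idiom as the provider walkthrough earlier on this page (this is not how the demo environment itself was set up), here is a rough sketch of running a node-level log agent using the Terraform Kubernetes provider's kubernetes_daemonset resource. The image name, environment variable and endpoint URL are placeholders, not real Sumo Logic artifacts.

provider "kubernetes" {
  config_path = "~/.kube/config"   # assumes a local kubeconfig
}

# Run one copy of the log-collecting agent on every node in the cluster.
resource "kubernetes_daemonset" "log_agent" {
  metadata {
    name      = "log-agent"
    namespace = "kube-system"
  }

  spec {
    selector {
      match_labels = {
        app = "log-agent"
      }
    }

    template {
      metadata {
        labels = {
          app = "log-agent"
        }
      }

      spec {
        container {
          name  = "fluentd"
          image = "your-registry/fluentd-log-agent:latest"   # placeholder image

          # Placeholder variable pointing the agent at a collection endpoint,
          # for example the HTTP Source URL created in the Terraform walkthrough above.
          env {
            name  = "COLLECTOR_ENDPOINT"
            value = "https://collectors.example.com/receiver/v1/http/REDACTED"
          }

          # Mount the node's log directory so the agent can read container logs.
          volume_mount {
            name       = "varlog"
            mount_path = "/var/log"
            read_only  = true
          }
        }

        volume {
          name = "varlog"

          host_path {
            path = "/var/log"
          }
        }
      }
    }
  }
}

Applying this configuration schedules one agent pod per node, which is the property the DaemonSet provides; the demo environment described above uses Sumo Logic's own Fluentd plugin and a plain YAML manifest rather than this Terraform form.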