Resource Center

Browse our library of ebooks, solutions briefs, research reports, case studies, webinars and more.


Blog

Monitor Cloud Run for Anthos with Sumo Logic

Blog

Serverless Computing for Dummies: AWS vs. Azure vs. GCP

Blog

Monitor your Google Anthos clusters with the Sumo Logic Istio app 

Blog

Multi-Cloud Security Myths

Blog

Sumo Logic provides real-time visibility, investigation and response of G Suite Alerts

Blog

The Cloud SIEM market is validated by Microsoft, Google, and AWS

Blog

Clearing the Air: What Is Cloud Native?

Blog

The Key Message from KubeCon NA 2018: Prometheus is King

Blog

Exploring Nordcloud’s Promise to Deliver 100 Percent Alert-Based Security Operations to Customers

Blog

How to Use the New Sumo Logic Terraform Provider for Hosted Collectors

Over the years, automation has become a key component in managing the entire software release lifecycle. Automation helps teams get code from development into the hands of users faster and more reliably. While this principle is critical to your source code and continuous integration and delivery processes, it is equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure has emerged to prevent environment drift and ensure your infrastructure is consistently and reliably provisioned.

What Is Infrastructure as Code?

Infrastructure as code (IaC) is a principle where infrastructure is defined using a declarative model and version controlled right alongside your source code. The desired infrastructure is declared in a higher-level descriptive language, and every aspect of it, including servers, networks, firewalls and load balancers, can be expressed in this model. The infrastructure is then provisioned from the defined model automatically, with no manual intervention required: a tool interacts with the underlying APIs to spin up your infrastructure as needed. IaC ensures that your infrastructure can be created and updated reliably, safely and consistently anytime you need. Practicing IaC without the proper tools can be challenging, because it requires a lot of time-consuming scripting. Luckily, several tools exist to help DevOps teams practice IaC, including one well-known and widely used tool: Terraform.

Why Terraform?

Terraform is an open source tool developed by HashiCorp to address the needs of IaC. Terraform can be used to create, manage and update infrastructure resources such as physical machines, virtual machines (VMs), load balancers and firewalls, and it provides ways to represent almost any type of infrastructure.
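To make the declarative model concrete, here is a toy sketch of the core idea (hypothetical code, not the internals of Terraform or any real tool): the desired state is declared as data, and a provisioner computes the actions needed to converge the actual infrastructure toward it.

```python
# Toy illustration of the declarative IaC model (hypothetical, not a real tool).
# The desired infrastructure is declared as data; a "provisioner" computes
# which actions are needed to converge the actual state toward it.

desired = {
    "web-server": {"type": "vm", "size": "large"},
    "firewall": {"type": "firewall", "allow": [443]},
}

actual = {
    "web-server": {"type": "vm", "size": "small"},  # exists, but has drifted
    "old-db": {"type": "vm", "size": "medium"},     # no longer declared
}

def plan(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(plan(desired, actual))
# → [('update', 'web-server'), ('create', 'firewall'), ('delete', 'old-db')]
```

Because only the desired end state is declared, running the same plan against an environment that already matches it yields no actions, which is exactly the drift-prevention property described above.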
In Terraform, you use a “provider” to define these resources. A provider understands the various APIs and contracts required to create, manage and update them. Providers exist for IaaS offerings such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure and OpenStack, as well as for SaaS offerings like Terraform Enterprise and Cloudflare. Each provider defines a declarative model for the infrastructure resources it offers.

Introducing the Sumo Logic Terraform Provider

That’s why today we’re happy to announce that we have released a Terraform provider for Sumo Logic, just in time for HashiConf ’18 this week in San Francisco. This provider lets you treat your Sumo Logic Hosted Collectors and Sources as code, ensuring consistency across your cloud infrastructure monitoring for AWS, GCP, Azure and other cloud environments supported by Terraform. Using our Terraform provider alongside the provider you already use to manage your cloud infrastructure, you can easily set up Sumo Logic to monitor those resources and tie that setup directly to the provisioning of the infrastructure itself. For example, if you are using the AWS provider to create an Elastic Load Balancer (ELB) and an S3 bucket to capture its logs, you can also define a Sumo Logic Hosted Collector with an ELB source that brings those logs from the S3 bucket straight into the Sumo Logic platform. This configuration is declared in code and version controlled, giving you a consistent and reliable way to create and monitor your infrastructure.

Let’s walk through an example of how you can use the Terraform provider. We will use the Sumo Logic Terraform provider to create a Hosted Collector with an HTTP Source. I will be demonstrating this example on my Mac.

Step-by-Step Instructions

The first thing we need to do is install Terraform. I will be installing it using Homebrew, a package manager for macOS.
Once Terraform is installed, we need to download the Sumo Logic Terraform provider from the GitHub release page (in this case, the binary for macOS) and copy it into the Terraform plugins directory. Next, we initialize Terraform to ensure it is ready to go by running ‘terraform init’.

With the provider in place and Terraform initialized, we can now define a configuration file for our Hosted Collector and HTTP Source. The following Terraform configuration will create a Hosted Collector with an HTTP Source:

provider "sumologic" {
  access_id   = "sumo-logic-access-id"
  access_key  = "sumo-logic-access-key"
  environment = "us2"
}

resource "sumologic_collector" "example_collector" {
  name     = "Hosted Collector"
  category = "my/source/category"
}

resource "sumologic_http_source" "example_http_source" {
  name         = "HTTP Source"
  category     = "my/source/category"
  collector_id = "${sumologic_collector.example_collector.id}"
}

Let’s break down the above file. The provider section defines the required properties for the Sumo Logic provider. You need to create an Access ID and Access Key, which serve as the credentials for the Sumo Logic API. The environment should be set based on where your Sumo Logic account is located; in this case it is US2. There are then two resource sections, where we define the Hosted Collector and the HTTP Source. An HTTP Source requires the ID of the collector you wish to assign it to, so in the HTTP Source resource section we reference the ID of the Hosted Collector we defined above.

With the file in place and all inputs filled in, we can now spin up our collector. To have Terraform create this for us, we simply run terraform apply. Terraform will prompt us to confirm the action; after entering yes, you should see output indicating that our resources have been created.
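Once the resources exist, one quick way to check that the new HTTP Source accepts data is to POST a test log line to the source’s unique upload URL (displayed in the Sumo Logic UI after the source is created). Below is a minimal sketch using only the Python standard library; the URL shown is a placeholder, so substitute the one displayed for your own source:

```python
import urllib.request

# Placeholder URL: copy the real one from your HTTP Source's configuration.
SOURCE_URL = "https://collectors.us2.sumologic.com/receiver/v1/http/YOUR_UNIQUE_TOKEN"

def build_log_request(url, message):
    """Build a POST request carrying one plain-text log line."""
    return urllib.request.Request(
        url,
        data=message.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )

req = build_log_request(SOURCE_URL, "Hello from my Terraform-provisioned collector")
# urllib.request.urlopen(req)  # uncomment to actually send the line
print(req.get_method())  # → POST
```

The request is only constructed here, not sent, so the sketch is safe to run without credentials; uncommenting the urlopen call performs the actual upload.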
And now if we go to Sumo Logic and look at our Collector page, we see our Hosted Collector and HTTP Source, just as we defined them in our configuration file! Our Terraform provider allows you to configure your Hosted Collectors and Sources with all the same properties you would expect, and you can see all the available options on our Documentation page.

More To Come

At Sumo Logic, we are developing many new APIs to give our users full control over the provisioning of all their Sumo Logic configurations. As these APIs roll out, we will update our provider to expose the additional resources. This will allow users to manage all aspects of Sumo Logic, including Collection, User Management and Content, as code, and to automate every aspect of Sumo Logic.

Additional Resources

Download our 2018 State of Modern Apps and DevSecOps in the Cloud report for trending insights into how some of the world’s top cloud-savvy companies like Twitter, Airbnb, Adobe and Salesforce build and manage their modern applications. Want to know how to integrate Sumo Logic’s monitoring platform into your Terraform-scripted cloud infrastructure for EC2 resources? Read the blog. Thinking about adopting modern microservices-based infrastructure? Check out part one and part two of our blog series on how to manage Kubernetes/Docker with Sumo Logic.

Blog

Sumo Logic's Third Annual State of Modern Apps and DevSecOps in the Cloud Report is Here!

Blog

11 New Google Cloud Platform (GCP) Apps for Continued Multi-Cloud Support

Blog

How to Build a Scalable, Secure IoT Platform on GCP in 10 Days

Blog

Comparing Kubernetes Services on AWS vs. Azure vs. GCP

Blog

Kubernetes Development Trends

Blog

Monitoring k8s-powered Apps with Sumo Logic

Blog

Introducing the State of Modern Applications in the Cloud Report 2017

Blog

Dockerizing Microservices for Cloud Apps at Scale

Last week I introduced Sumo Logic Developers’ Thought Leadership Series, where JFrog’s co-founder and chief architect, Fred Simon, came together with Sumo Logic’s chief architect, Stefan Zier, to talk about optimizing continuous integration and delivery using advanced analytics. In Part 2 of this series, Fred and Stefan dive into Docker and Dockerizing microservices. Specifically, I asked Stefan about initiatives within Sumo Logic to Dockerize parts of its service. What I didn’t realize was the scale at which these Dockerized microservices must be delivered.

Sumo Logic is in the middle of Dockerizing its architecture and is doing it incrementally. As Stefan says, “We’ve got a 747 in mid-air and we have to be cautious as to what we do to it mid-flight.” The goal in Dockerizing Sumo Logic is to gain more speed out of the deployment cycle. Stefan explains, “There’s a project right now to do a broader-stroke containerization of all of our microservices. We’ve done a lot of benchmarking of Artifactory to see what happens if a thousand machines pull images from Artifactory at once. That is the type of scale that we operate at. Some of our microservices have a thousand-plus instances of the service running, and when we do an upgrade we need to pull a thousand-plus in a reasonable amount of time, especially when we’re going to do continuous deployment. You can’t say ‘well, we’ll roll the deployment for the next three hours, then we’re ready to run the code.’ That’s not quick enough anymore. It has to be minutes at most to get the code out there.”

The Sumo Logic engineering team has learned a lot in going through this process. In terms of adoption and learning curve, Stefan suggests:

Developer education – Docker is a new and foreign thing, and the benefits are not immediately obvious to people.
Communication – Talk through why it’s important, why it’s going to help and how to use it.
Workshops – Sumo Logic runs hands-on workshops in-house to get its developers comfortable with using Docker.
Culture – Build a culture around Docker.
Plan for change – The tool chain is still evolving. You have to anticipate the evolution of the tools and plan for it.

As a lesson learned, Stefan explains, “We’ve had some fun adventures on Ubuntu. In production we run automatic upgrades for all our patches, so you get security upgrades automatically. It turns out that when you get an upgrade to the Docker daemon, it kills all the running containers. We had one or two instances, fortunately not in production, where all containers across the fleet went away. Eventually we traced it back to the Docker daemon, and now we’re explicitly holding back Docker daemon upgrades and making them explicit upgrades so that we are in control of the timing. We can do it machine by machine instead of the whole fleet at once.”

JFrog on Dockerizing Microservices

Fred likewise shared JFrog’s experiences, pointing out that JFrog’s customers asked early on for Docker support, so JFrog has been in it from the early days of Docker; Artifactory has supported Docker images for more than two years. To Stefan’s point, Fred says, “We had to evolve with Docker. So we Dockerized our pure SaaS [product] Bintray, which is a distribution hub for all the packages around the world. It’s highly distributed across all the continents, CDN-enabled, [utilizes a] MongoDB cluster, CouchDB, and all of this problematic distributed software. Today Bintray is fully Dockerized. We use Kubernetes for orchestration.” One of the win-wins for JFrog developers is that the components a developer is not working on are delivered via Docker, as the exact same containers that will run in production, on their own local workstation. “We use Vagrant to run Docker inside a VM with all the images, so the developer can connect to microservices exactly the same way.”
So the developer has the immediate benefit that he doesn’t have to configure and install components developed by the other teams. Fred also mentioned that Xray, which was just released, is fully Dockerized. Xray analyzes any kind of package within Artifactory, including Docker images, Debian, RPM, zip, jar and war files, and determines what it contains. “That’s one of the things with Docker images: it’s getting hard to know what’s inside them. Xray is based on 12 microservices, and we needed a way to put their software in the hands of our customers, because Artifactory is both SaaS and on-prem; we do both. So JFrog does fully Docker and Docker Compose delivery. So developers can get the first image and all images from Bintray.” “The big question to the community at large,” Fred says, “is how do you deliver microservices software to your end customer? There is still some work to be done here.”

More Docker Adventures – TL;DR

Adventures is a way of saying: we went on this journey, not everything went as planned, and here’s what we learned from our experience. If you’ve read this far, I’ve provided a good summary of the first 10 minutes, so you can jump ahead to learn more. Each of the topics is marked by a slide so you can quickly jump to a topic of interest. Those include:

Promoting containers – Why it’s important to promote your containers at each stage in the delivery cycle rather than retag and rebuild.
Docker shortcuts – How Sumo Logic is implementing Docker incrementally, taking a hybrid approach rather than doing pure Docker.
Adventures Dockerizing Cassandra.
Evolving conventions for Docker distribution.

New Shifts in Microservices

What are the new shifts in microservices? In the final segment of this series, Fred and Stefan dive into microservices and how they put pressure on your developers to create clean APIs. Stay tuned for more adventures building, running and deploying microservices in the cloud.
