In today’s ever-changing business landscape, businesses that operate on a software-driven model will be the most successful. These businesses recognize the power of transforming the enormous volumes of data generated by digital operations into real-time insights that drive further success. The ability to do this in real time, all the time, across multiple functional disciplines lies at the heart of continuous intelligence.
The OpenTelemetry Collector is a new, vendor-agnostic agent that can receive and send metrics and traces in many formats. It is a powerful tool in a cloud-native observability stack, especially when your applications use multiple distributed tracing formats, such as Zipkin and Jaeger, or when you want to send data to multiple backends, such as an in-house solution and a vendor. This article will walk you through configuring and deploying the OpenTelemetry Collector for such scenarios.
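To make the multi-format, multi-backend scenario concrete, here is a minimal sketch of a Collector configuration; the listen addresses and the OTLP backend endpoint are illustrative placeholders, not values from this article:

```yaml
receivers:
  jaeger:              # accept spans in Jaeger format
    protocols:
      thrift_http:
        endpoint: 0.0.0.0:14268
  zipkin:              # accept spans in Zipkin format
    endpoint: 0.0.0.0:9411

processors:
  batch:               # batch spans before export to reduce overhead

exporters:
  otlp:                # hypothetical backend speaking OTLP
    endpoint: backend.example.com:4317
  logging:             # also log telemetry locally, as a second backend

service:
  pipelines:
    traces:
      receivers: [jaeger, zipkin]
      processors: [batch]
      exporters: [otlp, logging]
```

A single traces pipeline fans in from both receivers and fans out to both exporters, which is exactly the multi-format, multi-backend case described above.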
The countdown is on to our 4th annual Illuminate user conference, October 6-7, 2020! This year we are going virtual to keep everyone healthy and safe, and while we will miss seeing all of our customers and partners, we are excited to host the premier education platform for machine data analytics, helping businesses accelerate digital transformation and improve customer experiences.
Persistence is, in effect, an attacker's ability to maintain access to a compromised host through intermittent network access, system reboots, and (to a certain degree) remediation activities. Whether an attacker can compromise a system or network and successfully carry out their objectives typically depends on maintaining some form of persistence on the target system or network.
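To make this concrete, the sketch below (a simplification, assuming a Linux host; the directory list is illustrative, not exhaustive) enumerates locations commonly abused for persistence, such as cron jobs and systemd units:

```python
from pathlib import Path

# Illustrative sample of locations attackers commonly use to persist.
COMMON_PERSISTENCE_DIRS = [
    Path("/etc/cron.d"),
    Path("/etc/systemd/system"),
    Path("/var/spool/cron/crontabs"),
]

for directory in COMMON_PERSISTENCE_DIRS:
    if not directory.is_dir():
        continue
    try:
        for entry in sorted(directory.iterdir()):
            print(f"{directory}/{entry.name}")
    except PermissionError:
        # Some of these paths require root to read.
        print(f"{directory}: permission denied")
```

Reviewing such locations, and diffing them over time, is one simple way to spot persistence mechanisms that survive reboots.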
Last week Sumo Logic announced our new Observability Suite, which included the public introduction of the closed beta for our distributed tracing capabilities as part of our Microservices Observability solution. This new solution will provide end-to-end visibility into user transactions across services, as well as seamless integration with performance metrics and logs, to accelerate issue resolution and root-cause analysis. In this blog, we’ll explore the new solution in detail.
As more and more applications move to the cloud, the complexity of application architectures inevitably increases. It is a burden we willingly take on because the benefits (flexible deployment, technology diversity, independent scaling, and much more) tend to far outweigh the costs. But during this transition, most organizations face a dilemma: divert resources to the tooling necessary for effective monitoring and troubleshooting of these systems, i.e. observability, or slow the rate of migration to the cloud.
Automation is a key component in managing the entire software release lifecycle. While we know it is critical to the Continuous Integration/Continuous Delivery (CI/CD) process, it is now becoming equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure has emerged to prevent environment drift and ensure your infrastructure is provisioned consistently and reliably.
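Assuming the principle referred to here is infrastructure as code (IaC), the following minimal sketch uses Pulumi's Python SDK as one example of the idea; the resource name is hypothetical:

```python
import pulumi
import pulumi_aws as aws

# The desired state is declared in code: every run of `pulumi up`
# converges the real environment to this definition, so environments
# cannot silently drift apart.
bucket = aws.s3.Bucket("app-logs")

pulumi.export("bucket_name", bucket.id)
```

Because the definition lives in version control, provisioning is repeatable and reviewable, which is what prevents environment drift.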
I have been spending a considerable amount of time recently on distributed tracing topics. In my previous blog, I discussed the pros and cons of various approaches to collecting distributed tracing data. Now I would like to draw your attention to the analysis backend: what does it take to be good at analyzing transaction traces? As mentioned in that blog, one of the most important outcomes of adopting open source tracing standards is the freedom to choose the right analysis backend, as long as it supports those standards. So, what is the requirement list for a distributed tracing backend? What should it do, and what are the absolute must-haves? We have looked at many free, open source, and commercial offerings on the market and found a few tools that are good in one area or another, but none that fully matches the complete list.
There has been increasing buzz over the past decade about the benefits of using a microservice architecture. Let’s explore what microservices are and are not, and contrast them with traditional monolithic applications. We’ll discuss the benefits of using a microservices-based architecture, as well as the effort and planning required to transition from a monolithic architecture to a microservices architecture.
Today’s organizations face the challenge of managing many different applications and services within their technology stack. The more public-facing platforms an organization uses, the larger its public attack surface. Without proper protection, the organization and its community can become an easy target for malicious actors.
Technology has a way of circling back to the same ideas over time, but with new approaches that learn from previous iterations. Service-Oriented Architecture (SOA) and Microservices Architecture (MSA) are two such evolutionary approaches. Ideas that had proven themselves were reused; where the lessons were painful, new methods and ideas were introduced.
We’re excited to announce the first release of our new dashboard framework: Dashboard (New). Built on top of a scalable, flexible, and extensible charting system, the new dashboards give customers deep control over their visuals, enable metadata-rich workflows, and let them create dashboards in a dashboard-first GUI.
Compared to even just a few years ago, the tools available to data scientists and machine learning engineers today are remarkable in their variety and ease of use. However, the availability and sophistication of such tools belie the ongoing challenges of implementing end-to-end data analytics use cases in the enterprise and in production.
Many organizations use AWS as their cloud infrastructure, and they typically have multiple AWS accounts for production, staging, and development. Inevitably, this results in losing track of the various experimental AWS resources instantiated by your developers. Eventually, you will be paying AWS bills for resources that could have been identified and deleted in time.
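As a rough sketch of how you might regain that visibility, the boto3 snippet below inventories EC2 instances in one region across several accounts; the profile names and region are hypothetical placeholders for your own AWS CLI profiles:

```python
import boto3

# Hypothetical AWS CLI profiles, one per account.
PROFILES = ["production", "staging", "development"]
REGION = "us-east-1"

for profile in PROFILES:
    session = boto3.session.Session(profile_name=profile)
    ec2 = session.client("ec2", region_name=REGION)
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(profile, instance["InstanceId"], instance["State"]["Name"])
```

Extending the loop over all regions, and to other services such as EBS volumes or load balancers, turns this into a basic inventory of resources that may be candidates for cleanup.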
Customers regularly ask me what types of data sources they should send to their SIEMs to get the most value out of the solution. These conversations often arise because customers are locked into a SIEM product where they have to pay more for consumption: more log data equals more money. As a result, enterprises have to make a difficult choice about which log sources and data they guess are the most important. This often leads to blind spots from a logging perspective and forces your analysts to pivot to other tools and consoles to get whatever additional context and detail they can during an investigation.