In today’s ever-changing business landscape, the businesses that operate on a software-driven model will be the most successful. These businesses recognize the power of transforming the enormous volumes of data generated by digital operations into real-time insights that propel further success. The ability to do this in real time, all the time, across multiple functional disciplines, lies at the heart of continuous intelligence.
As more and more applications move to the cloud, the complexity of application architectures inevitably increases. It is a burden we willingly take on because the benefits (flexible deployment, technology diversity, independent scaling, and much more) tend to far outweigh the costs. But during this transition, most organizations face a dilemma: divert resources to the tooling necessary for effective monitoring and troubleshooting of these systems (i.e., observability), or slow the rate of migration to the cloud.
The Continuous Intelligence Platform for AWS Observability is the first solution to leverage all of the telemetry generated by AWS services to accelerate issue resolution, automatically determine the root cause of failures, and help customers optimize usage of AWS services to improve uptime and performance.
Automation is a key component of managing the entire software release lifecycle. While we know it is critical to the Continuous Integration/Continuous Delivery (CI/CD) process, it is now becoming equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure has emerged to prevent environment drift and ensure your infrastructure is consistently and reliably provisioned.
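The core idea behind this principle is declarative, desired-state provisioning: you declare what the environment should look like, and a reconcile step computes exactly which changes are needed, so repeated runs are idempotent and environments cannot silently drift apart. Here is a minimal sketch of that reconciliation loop; the resource names and fields are hypothetical and not tied to any particular tool’s API.

```python
# A simplified sketch of declarative infrastructure reconciliation.
# The desired state is declared in code; reconcile() diffs it against
# the actual environment and emits only the changes needed, which is
# what prevents environment drift across repeated runs.

desired = {
    "web-server": {"type": "t3.medium", "count": 2},
    "database": {"type": "db.r5.large", "count": 1},
}

def reconcile(desired, actual):
    """Return the create/update/delete actions needed to make
    `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Example: one resource has drifted (wrong count), one is missing.
actual = {"web-server": {"type": "t3.medium", "count": 1}}
print(reconcile(desired, actual))
```

Because the output of a second run against an already-correct environment is an empty action list, applying the same definition twice is safe, which is the property that makes this style of automation reliable.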
I have been spending a considerable amount of time recently on distributed tracing topics. In my previous blog, I discussed the pros and cons of various approaches to collecting distributed tracing data. Now I would like to draw your attention to the analysis back end: what does it take to be good at analyzing transaction traces? As mentioned in that blog, one of the most important outcomes of adopting open source tracing standards is the freedom to choose the right analysis back end, as long as it supports those standards. So, what is the requirement list for a distributed tracing back end? What should it do, and what are the absolute must-haves? We have looked at many free, open source, and commercial offerings on the market and found a few tools that are good in one area or another, but none that fully match the complete list.
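To make one of those must-haves concrete: any analysis back end must at minimum assemble the individual spans emitted by instrumented services into a trace tree and surface where the time went. The sketch below illustrates that, using span fields (trace IDs, span IDs, parent IDs) that follow common tracing conventions; it is a simplified stand-in, not any specific vendor’s data model.

```python
# A minimal sketch of one core job of a tracing back end: assembling
# spans into a parent/child tree and walking the slowest branch to
# approximate the trace's critical path.
from collections import defaultdict

spans = [
    {"span_id": "a", "parent_id": None, "service": "frontend", "duration_ms": 120},
    {"span_id": "b", "parent_id": "a", "service": "auth", "duration_ms": 15},
    {"span_id": "c", "parent_id": "a", "service": "orders", "duration_ms": 90},
    {"span_id": "d", "parent_id": "c", "service": "db", "duration_ms": 70},
]

def slowest_path(spans):
    """From the root span, repeatedly follow the slowest child."""
    children = defaultdict(list)
    root = None
    for s in spans:
        if s["parent_id"] is None:
            root = s
        else:
            children[s["parent_id"]].append(s)
    path = []
    node = root
    while node:
        path.append(node["service"])
        kids = children[node["span_id"]]
        node = max(kids, key=lambda s: s["duration_ms"]) if kids else None
    return path

print(slowest_path(spans))  # ['frontend', 'orders', 'db']
```

A real back end does far more (indexing, aggregation, anomaly detection across millions of traces), but if it cannot answer “which call path made this request slow?” it fails the most basic requirement.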
There has been increasing buzz in the past decade about the benefits of using a microservice architecture. Let’s explore what microservices are and are not, as well as contrast them with traditional monolithic applications. We’ll discuss the benefits of using a microservices-based architecture and the effort and planning that are required to transition from a monolithic architecture to a microservices architecture.
Today’s organizations face the challenge of managing many different applications and software platforms within their technology stack. The more public-facing platforms an organization uses, the greater its public attack-surface risk. Without proper protection, the organization and its community can become an easy target for malicious actors.
Technology has a way of circling around to the same ideas over time, but with different approaches that learn from previous iterations. Service Oriented Architecture (SOA) and Microservices Architecture (MSA) are such evolutionary approaches. Where lessons learned made sense, they were reused; and where painful lessons were learned, new methods and ideas were introduced.
We’re excited to announce the first release of our new dashboard framework: Dashboard (New). Built on top of a scalable, flexible, and extensible charting system, the new dashboards give customers deep control over their visuals, enable metadata-rich workflows, and support building dashboards in a dashboard-first GUI.
Compared to even just a few years ago, the tools available for data scientists and machine learning engineers today are of remarkable variety and ease of use. However, the availability and sophistication of such tools belies the ongoing challenges in implementing end-to-end data analytics use cases in the enterprise and in production.
Many organizations use AWS as their cloud infrastructure, and they generally maintain multiple AWS accounts for production, staging, and development. Inevitably, this can result in losing track of the various experimental AWS resources instantiated by your developers. Eventually, you end up paying AWS bills for resources that could have been identified and deleted in time.
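A simple first step toward reclaiming that spend is sweeping each account’s inventory for resources that carry no ownership tag or have outlived a reasonable experiment window. The sketch below shows the idea; the resource records and the `owner` tag convention are hypothetical stand-ins for what a per-account inventory listing would return.

```python
# An illustrative sketch of catching forgotten experimental resources:
# flag anything with no 'owner' tag, or anything older than a cutoff.
from datetime import datetime, timedelta, timezone

def find_candidates(resources, max_age_days=14, now=None):
    """Return IDs of resources with no 'owner' tag or older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    flagged = []
    for r in resources:
        if "owner" not in r.get("tags", {}) or r["launched"] < cutoff:
            flagged.append(r["id"])
    return flagged

now = datetime(2020, 6, 1, tzinfo=timezone.utc)
resources = [
    {"id": "i-001", "launched": datetime(2020, 5, 30, tzinfo=timezone.utc),
     "tags": {"owner": "team-a"}},   # recent and owned: keep
    {"id": "i-002", "launched": datetime(2020, 5, 30, tzinfo=timezone.utc),
     "tags": {}},                    # no owner tag: flag
    {"id": "i-003", "launched": datetime(2020, 3, 1, tzinfo=timezone.utc),
     "tags": {"owner": "team-b"}},   # far past the cutoff: flag
]
print(find_candidates(resources, now=now))  # ['i-002', 'i-003']
```

Run periodically across each account (production, staging, development), a report like this turns “we think we’re paying for dead resources” into a concrete cleanup list.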
Customers regularly ask me what types of data sources they should be sending to their SIEMs to get the most value out of the solution. The driver for these conversations is often that the customers have been locked into a SIEM product where they have to pay more for consumption. More log data equals more money, and as a result, enterprises have to make a difficult choice about which log sources and data they guess are the most important. This often leads to blind spots from a logging perspective and requires that your analysts pivot to other tools and consoles to get whatever additional context and detail they can during an investigation.
Ever since JASK was founded, we have integrated heavily with threat intelligence platforms to gain context into attacker activity through indicators of compromise (IOCs). Now that we have joined Sumo Logic, our customers have the ability to pull in more data than ever, making this feature even more powerful.