What is Containerization?
Software containers are a form of OS virtualization where the running container includes just the minimum operating system resources, memory and services required to run an application or service. Containers enable developers to work with identical development environments and stacks. But they also facilitate DevOps by encouraging the use of stateless designs.
The primary use of containers has been to simplify DevOps, with smooth developer-to-test-to-production flows for services, often deployed in the cloud. A Docker image can be created once and deployed identically across any environment in seconds. Containers offer developers benefits in three areas:
- Instant startup of operating system resources
- Container environments that can be replicated, templatized, and blessed for production deployments
- A small footprint that leads to greater performance with a higher security profile
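That portability starts with a Dockerfile, the recipe from which an image is built. As a minimal sketch (the base image, file names, and port here are illustrative assumptions, not from any particular project):

```dockerfile
# Illustrative only: base image, file names, and port are assumptions
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building this once (`docker build -t myapp .`) produces an image that runs the same way on a laptop, a test server, or a production host.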
The combination of instant startup that comes from OS virtualization, and the reliable execution that comes from namespace isolation and resource governance makes containers ideal for application development and testing. During the development process, developers can quickly iterate. Because its environment and resource usage are consistent across systems, a containerized application that works on a developer’s system will work the same way in a production system.
The instant startup and small footprint also benefit cloud scenarios. More application instances can fit onto a machine than if each were in its own VM, which allows applications to scale out quickly.
New Containerization Benefits
Composition and Clustering Unifies Disparate Containers
For efficiency, many of the operating system files, directories and running services are shared between containers and projected into each container’s namespace. This sharing makes deploying multiple containers on a single host extremely efficient. That’s great for a single application running in a container. In practice, though, containers making up an application may be distributed across machines and cloud environments.
The magic for making this happen is composition and clustering. Computer clustering is where a set of computers is loosely or tightly connected and works together so that it can be viewed as a single system. Similarly, container cluster managers handle the communication between containers, manage resources (memory, CPU, and storage), and manage task execution. Cluster managers also include schedulers that manage dependencies between the tasks that make up jobs, and assign tasks to nodes.
Docker Simplifies Containerization
Docker needs no introduction. Containerization has been around for decades, but it is Docker that has reinvigorated this ancient technology. Docker’s appeal is that it provides a common toolset, packaging model and deployment mechanism that greatly simplify the containerization and distribution of applications. These “Dockerized” applications can run on any Linux host. And support for Docker continues to grow, with organizations like AWS, Google, Microsoft, and Apache building it into their platforms.
To manage composition and clustering, Docker offers Docker Compose, which gives you a way of defining and running multi-container distributed applications. Developers can then use Docker Swarm to turn a pool of Docker hosts into a single, virtual Docker host. Swarm transparently manages the scaling of your application across multiple hosts.
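A Compose file declares the containers that make up an application and how they connect. As a sketch, with service names and images chosen purely for illustration:

```yaml
# Illustrative docker-compose.yml: service names and images are assumptions
version: "2"
services:
  web:
    image: httpd:2.4          # Apache httpd pulled from Docker Hub
    ports:
      - "8080:80"             # expose the web server on the host
    depends_on:
      - db                    # start the database first
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker-compose up` starts both containers together; pointed at a Swarm cluster, the same definition can be scheduled across multiple hosts.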
[Read More: What is Docker Swarm?]
Another benefit of Docker is Docker Hub, the massive and growing ecosystem of applications packaged in Docker containers. Docker Hub is a registry for Dockerized applications, currently with well over 235,000 public repositories. Need a web server in a container? Pull the Apache httpd image. Need a database? Pull the MySQL image. Whatever major service you need, there’s probably an image for it on Docker Hub. Docker has also helped form the Open Container Initiative (OCI) to ensure the packaging format remains universal and open.
Amazon ECS Helps Manage Containers
If you’re running on AWS, Amazon EC2 Container Service (ECS) is a container management service that supports Docker containers and lets you run applications on a managed cluster of Amazon EC2 instances. ECS provides cluster management, including task management and scheduling, so you can scale your applications dynamically. Amazon ECS also eliminates the need to install and manage your own cluster manager. ECS lets you launch and kill Docker-enabled applications, query the state of your cluster, and access other AWS services (e.g., CloudTrail, ELB, EBS volumes) and features like security groups via API calls.
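In ECS, the unit you schedule is a task, described by a task definition. As an illustration (the family name, image, and resource values below are assumptions, not a recommendation):

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd:2.4",
      "memory": 256,
      "cpu": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```

Registering this definition and then running it as a task or service is what lets ECS place and scale the container across the EC2 instances in your cluster.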
New Containerization Challenges
While both DevOps and containers are helping improve software quality and break down monolithic applications, the emphasis on automation and continuous delivery also leads to new issues. Software developers are challenged with log files that may be scattered across a variety of isolated containers, each with its own log system dependencies. Developers often implement their own logging solutions, and with them, language dependencies. As Christian Beedgen noted at a recent Docker Meetup, this is particularly true of containers built with earlier versions of Docker. To summarize, organizations are faced with:
- Organizing applications made up of different components that run across multiple containers and servers
- Securing containers, which rely on namespace isolation
- Patching and updating containers that are already deployed to production
- Collecting logs that are no longer stored in one uniform place, but scattered across a variety of isolated containers
A Model for Comprehensive Monitoring
The Sumo Logic App for Docker uses a container that includes a collector and a script source to gather statistics and events from the Docker Remote API on each host. The app wraps events into JSON messages, then enumerates all running containers and listens to the event stream, essentially creating a log of container events. In addition, the app collects configuration information obtained using Docker’s Inspect API. The app also collects host and daemon logs, giving developers and DevOps teams a way to monitor their entire Docker infrastructure in real time.
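The event-wrapping step can be sketched in Python. The envelope field names and the shape of the incoming event below are assumptions for illustration, not Sumo Logic’s actual schema; the raw event mimics an entry from Docker’s `/events` stream:

```python
import json
from datetime import datetime, timezone

def wrap_event(event: dict, host: str) -> str:
    """Wrap a raw Docker event into a single JSON log message.

    `event` mimics an entry from the Docker Remote API event stream;
    the envelope fields here are illustrative assumptions.
    """
    message = {
        "host": host,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "container_id": event.get("id"),
        "status": event.get("status"),
        "image": event.get("from"),
    }
    return json.dumps(message)

# Example: a container 'start' event as emitted by the Docker event stream
raw = {"status": "start", "id": "abc123", "from": "httpd:2.4"}
print(wrap_event(raw, host="docker-host-1"))
```

Emitting one self-describing JSON message per event is what lets a central service search and correlate logs without caring which container or language produced them.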
Using this approach, developers no longer have to synchronize between different logging systems (which might require Java or Node.js), agree on specific dependencies, or risk breaking code in other containers.
If you’re running Docker on AWS, you can of course monitor your container environment as described above. But Sumo Logic also provides a collection of apps supporting all things AWS, including out-of-the-box solutions for AWS CloudTrail, AWS Config, AWS ELB, and many others, giving you a comprehensive view of your entire environment.
Get Started with Containers
Sumo Logic delivers a comprehensive strategy for monitoring Docker infrastructure with a native collection source for events, stats, configurations and logs, and provides views into things like container performance for CPU, memory, and the network. There’s no need to parse different log formats, or manage logging dependencies between containers. Sumo Logic’s advanced machine-learning and analytics capabilities enable DevOps teams to analyze, troubleshoot, and perform root cause analysis of issues surfacing from distributed container-based applications and from Docker containers themselves.
- Docker from Code to Container
- Kubernetes vs Docker
- Application Containers vs System Containers
- Design Microservices Architecture with Containers