
November 17, 2022 By Colin Fernandes and Zoe Hawkins

How to design a microservices architecture with Docker containers

Application development trends are guiding industries, tech and non-tech alike, toward a more cloud-native and distributed model built on digital-first strategies. Many organizations are adopting new technologies and distributed workflows, and software development pipelines enable teams to collaborate efficiently and maintain productivity. Organizations that were early to embrace modern application development strategies and tools, including containerization and multi-cloud environments, now have a head start.

What is Docker?

Docker is an open source OS-level virtualization technology, best known as a platform for software containers. These containers provide a means to package an application, including its own filesystem, into a single, replicable unit.

Born out of open source collaboration, Docker containers helped revolutionize the software development world. By encasing software in containers that include all the resources it needs to run on a server (tools, runtime, system libraries, and more), Docker lets software perform the same way across multiple hosting platforms, such as AWS, Google Cloud, and Microsoft Azure. Docker’s container technology is at the forefront of portable, scalable development.

Today, developers use Docker to build modules called microservices, which decentralize packages and divide tasks into separate, stand-alone integrations that collaborate. Developers for a nationwide pizza chain can build a microservices application for taking an order, processing a payment, and creating a ‘make’ ticket for the cooks, and a delivery ticket for the drivers. These microservices would then operate together to get pizzas cooked and delivered all over the country.
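As a hedged illustration of the pizza example above (the function names and data shapes are invented, not from the article), the decomposition can be sketched as a set of independent services with narrow contracts; in production, each would run in its own container behind its own API:

```python
# Each function stands in for a separate microservice. They share no state;
# they only pass small, well-defined records to one another.

def order_service(items):
    """Take an order: a list of (pizza, price) pairs."""
    return {"order_id": 1, "items": items, "total": sum(p for _, p in items)}

def payment_service(order):
    """Charge the order total and return a payment confirmation."""
    return {"order_id": order["order_id"], "paid": order["total"]}

def kitchen_service(order):
    """Create a 'make' ticket for the cooks."""
    return {"order_id": order["order_id"],
            "make": [name for name, _ in order["items"]]}

def delivery_service(order, payment):
    """Create a delivery ticket once payment has cleared."""
    return {"order_id": order["order_id"],
            "dispatch": payment["paid"] == order["total"]}

# The services collaborate to get a pizza cooked and delivered:
order = order_service([("margherita", 12.0), ("pepperoni", 14.0)])
payment = payment_service(order)
ticket = kitchen_service(order)
delivery = delivery_service(order, payment)
```

Because each service owns a single task, any one of them could be rewritten, scaled, or redeployed without touching the others.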

When people talk about Docker, they usually mean Docker Engine, the runtime that allows you to build and run containers. But before you can run a Docker container, it must be built, starting with a Dockerfile.

The Dockerfile defines everything needed to build the container image, including the base OS, network specifications, and file locations. From a Dockerfile, you build a Docker image: the portable, static artifact that the Docker Engine runs. In other words, the Dockerfile is the set of instructions, and the image is the template from which Docker containers are created.
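As a sketch, a Dockerfile for a small, hypothetical Python service (the file names and port are placeholders, not from the article) might look like this:

```dockerfile
# Base image: the OS and runtime layer
FROM python:3.12-slim

# File locations inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Network specification: the port the service listens on
EXPOSE 8080

# Command executed when a container starts from this image
CMD ["python", "app.py"]
```

`docker build -t my-service .` produces the image, and `docker run -p 8080:8080 my-service` starts a container from it.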

To manage composition and clustering, Docker offers Docker Compose, which allows you to define and run multi-container applications. Developers can then use Docker Swarm to turn a pool of Docker hosts into a single, virtual Docker host; Swarm transparently manages scaling your application across multiple hosts.
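For illustration, a minimal docker-compose.yml for a hypothetical two-service application (service names and images are placeholders) might look like this:

```yaml
# docker-compose.yml — one web service built locally, one database from a registry
services:
  web:
    build: ./web          # built from a local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: mysql:8.0      # pulled from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker compose up` starts both containers and wires them onto a shared network, so `web` can reach the database simply by the hostname `db`.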

Another benefit of Docker is Docker Hub, the massive and growing ecosystem of containerized microservices. Docker Hub is a registry for Dockerized applications, currently hosting well over 235,000 public repositories. Need a web server in a container? Need a database in a container? Pull the MySQL image. Whatever major service you need, there’s probably an image for it on Docker Hub. Docker has also helped form the Open Container Initiative (OCI) to ensure the packaging format remains universal and open.

If you are running on AWS, Amazon EC2 Container Service (ECS) is a container management service that supports Docker containers and allows you to run applications on a managed cluster of Amazon EC2 instances. ECS provides cluster management, including task management and scheduling, so you can scale your applications dynamically. Amazon ECS also eliminates the need to install and manage your own cluster manager. ECS allows you to launch and kill Docker-enabled applications, query the state of your cluster, and access other AWS services (e.g. CloudTrail, ELB, EBS volumes) and features like security groups via API calls.
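For illustration, a minimal ECS task definition for such a Docker-enabled application might look like the following (the family, container name, image, and resource values are placeholders):

```json
{
  "family": "web-service",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my-registry/web:latest",
      "cpu": 128,
      "memory": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080 }
      ]
    }
  ]
}
```

It can be registered with `aws ecs register-task-definition --cli-input-json file://task.json` and then launched as a service or a one-off task on the cluster.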

Designing with microservices in Docker requires new thinking and approaches, but also creates unparalleled abilities for building stable, scalable integrations. Here’s a look at the ins and outs of microservices, and how to make them work for you.

What are microservices?

Developing microservices is the art of breaking down the old model of building one large application, i.e. a “monolithic” application, and forming a new model where specialized, cloud-hosted sub-applications, each charged with a very specific task, work together. Microservices distribute application load and can help ensure stability through replicable, scalable services interacting.

But what’s the right approach for breaking a monolithic application apart? When deconstructing an application into modules, engineers tend to follow planned decomposition patterns, sorting the new software modules into logical working groups.

For example, a grocery chain’s shipping and tracking software that currently uses one application for fruit might decompose into modules that process bananas, oranges, etc. This may improve aspects of tracking, but decomposing software along logical subdomains, fruit types in this instance, can have unforeseen consequences for the business.

Author and highly regarded software development expert Martin Fowler examines the trap of hyper-focus on decomposition by subdomain:

“When looking to split a large application into parts, often management focuses on the technology layer, leading to UI teams, server-side logic teams, and database teams. When teams are separated along these lines, even simple changes can lead to a cross-team project taking time and budgetary approval.”

Microservice architecture takes a different approach to organizing modules. It decomposes applications around business capabilities, building cross-functional teams to develop, support, and continually deploy microservices. Fowler emphasizes the “products, not projects” approach to business-focused decomposition: delivering a package isn’t a one-time project with a team that breaks up on completion, but an ongoing, collaborative commitment to continually delivering excellent products.

Microservices also decentralize traditional storage models found in monolithic application development. Microservices work best with native management of their own data stores, either repeated instances of the same database technology or a blend of separate database types as most appropriate for each service. This is the full realization of an approach first described by developer Scott Leberknight, which he called polyglot persistence. The ability to mix and match data store types presents myriad possibilities for microservice developers.

The advantages of the microservice approach are still being explored. So, as with all systems, be aware of potential pitfalls and limitations of the practice.

Challenges of building a microservice architecture

The power and possibilities of microservices come with trade-offs: the following common areas must be addressed in design and managed on an ongoing basis.

Service tracking

Services distributed across multiple hosts can be hard to track. Rather than one place to tweak a monolithic application, collaborating microservices scattered throughout your environment must be inventoried and quickly accessible.

Rapid resource scaling

Each microservice consumes far fewer resources than monolithic applications, but remember that the number of microservices in production will grow rapidly as your architecture scales. Without proper management, many little hosts can consume as much compute power and storage, or more, as a monolithic application.

Inefficient minimal resourcing

If you’re using the Amazon Web Services environment, there is a bottom limit to the resources you can assign to any task. Microservices may be so small that they require only a portion of a minimal EC2 instance, resulting in wasted resources and costs that exceed the actual resource demand of the microservice.
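A back-of-the-envelope sketch of the problem, using assumed figures (a small instance with 1 GiB of memory and a 64 MiB microservice; the numbers are illustrative, not AWS pricing or sizing data):

```python
# One tiny microservice pinned to its own small instance:
instance_mem_mib = 1024   # assumed memory of a minimal instance
service_mem_mib = 64      # assumed working set of one microservice

utilization = service_mem_mib / instance_mem_mib
waste = 1 - utilization

# With one service per instance, roughly 94% of the memory
# is paid for but never used by the service.
```

This is exactly the gap that packing multiple containers onto one instance (see below) closes.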

Increased deployment complexity

Microservices stand alone and can be developed in many programming languages. But each language depends on its own libraries and frameworks, so a polyglot architecture multiplies the dependencies to install and maintain. This increases resource overhead (and costs) and makes deployment a complex consideration.

But these obstacles aren’t insurmountable. This is where groundbreaking container technology like Docker can step in and fill existing gaps.

Docker to the rescue for microservices

Docker’s container technology, now emulated by other container services, helps address the biggest challenges of building a microservice architecture in the following ways.

Task isolation

Create a Docker container for each individual microservice. This solves the problem of resource bloat from over-provisioned instances idling under the almost non-existent strain of a lone service, and multiple containers can be run per instance.

Support multiple coding languages

Bundle everything required to run each language, including libraries and framework dependencies, into linked containers to simplify managing multiple platforms.

Database separation

Use containers to host one or more data volumes, then reference them from other microservices and containers. Chris Evans at ComputerWeekly explains the concept:

“The benefit of this method of access is that it abstracts the location of the original data, making the data container a logical mount point. It also allows ‘application’ containers accessing the data container volumes to be created and destroyed, while keeping the data persistent in a dedicated container.”
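A hedged sketch of this pattern in Compose terms, using named volumes, the modern equivalent of the data-container approach Evans describes (the image names are placeholders):

```yaml
# Application containers come and go; the named volumes persist.
services:
  app:
    image: my-app:latest            # disposable application container
    volumes:
      - appdata:/var/lib/app        # data outlives this container
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data

volumes:
  appdata:
  dbdata:
```

Destroying and recreating `app` or `db` leaves `appdata` and `dbdata` intact, which is the persistence guarantee the quote describes.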

Automate monitoring

Gain deep insight into data flow within your architecture by monitoring individual container logs with powerful logging and machine learning tools like Sumo Logic, saving your team time and accelerating the continuous delivery pipeline.

Five principles to enable your architecture

Designing an efficient microservice architecture is no accident. Sumo Logic’s own Mike Mackrory outlines five principles for staying in control of a complex environment powered by microservices:

  1. Cultivate a solid foundation. Everything starts with people, so make sure yours are ready to live and breathe in a microservices world.

  2. Begin with the API. Simple math: one microservice starts with one API.

  3. Ensure separation of concerns. Each microservice must have a single, defined purpose. If it starts feeling like a service should take on another responsibility, add a new microservice (and a new API) instead.

  4. Production approval through testing. Write comprehensive testing parameters for each microservice, then combine them into a full testing suite for use in your continuous delivery pipeline.

  5. Automate deployment. And everything else. Automate code analysis, container security scans, pass/fail testing, and every other possible process in your microservice environment.

Build your teams, and your general approach to a microservice architecture, gradually, carefully, and in the same DevOps spirit of continual feedback and improvement.

How do Docker and Kubernetes relate?

Kubernetes and Docker are both comprehensive de facto solutions for intelligently managing containerized applications, and both provide powerful capabilities. From this, some confusion has emerged: “Kubernetes” is now sometimes used as shorthand for an entire container environment based on Kubernetes. In reality, the two are not directly comparable; they have different roots and solve different problems.

Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool, Docker Swarm, that can be used to orchestrate and schedule containers on machine clusters.

Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm. It is meant to coordinate clusters of nodes at scale in production efficiently. It works around the concept of pods, the scheduling units in the Kubernetes ecosystem (each of which can contain one or more containers), which are distributed among nodes to provide high availability. You can easily run images built with Docker on a Kubernetes cluster, but Kubernetes itself is not a complete standalone solution; it is designed to be extended with custom plugins.
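For illustration, a minimal Pod manifest, the smallest schedulable unit Kubernetes works with (the names and image are placeholders):

```yaml
# pod.yaml — a single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.27       # any container image, e.g. one built with Docker
      ports:
        - containerPort: 80
```

`kubectl apply -f pod.yaml` submits the pod to the cluster, and the Kubernetes scheduler places it on an available node.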

Kubernetes and Docker are fundamentally different technologies, but they work well together, and both facilitate the management and deployment of containers in a distributed architecture. The main difference is that Docker is a container technology platform, while Kubernetes is a container orchestrator for platforms like Docker. It’s common to compare Kubernetes and Docker, but a better comparison is Kubernetes vs. Docker Swarm. Take a closer look at Kubernetes vs. Docker.

Make the move to microservices

Microservices are the modern successor to older models, built to make software collaborate and scale. The long-term efficacy of the approach is still to be determined, but there’s no denying the capabilities it brings to designing and managing complex infrastructures in a DevOps environment.

Want to dive deeper into the worlds of microservices and Docker? Learn more about benchmarking microservices and check out the power and versatility of the Docker App for Sumo Logic to monitor and analyze your Docker containers.
