Containers are portable assets that let you design and deploy software with little overhead for your development team. They take your monolithic code base and turn it into lightweight modules that you can manage and interconnect more easily, without worrying that one small module will take down your entire application. This gives you more granular control over your code, but it also means your platform now has many moving parts.
The problem with so many moving parts is that it’s difficult to keep track of them. If you have one container that connects to another, you must remember to update each one to ensure stability across your platform. Multiply that by dozens of containers and you now have a code management problem.
Kubernetes removes much of the overhead of deploying containers. If you’ve ever worked in an enterprise environment, you know that deployments can take all day between compiling code, testing it, and ensuring all services are updated. Several automation tools have sprung up over the years to provide a better way to deploy, but these tools are mainly built for monolithic code bases. Orchestration and automation solve many of the problems of manual methods: you don’t forget files, you update across all servers, and changes can be rolled back.
Another difference between Kubernetes and other deployment options is that container deployments are continuous. There is no waiting to compile and then deploy binaries one by one. Instead, Kubernetes rolls new changes out to your containers in the background as they are pushed. It’s a way to deploy code rapidly without halting work for scheduled release windows. If you have a service that consistently needs changes, Kubernetes and containers can handle the workload with little interference from your developers.
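As a rough sketch of how that background rollout is expressed (the app name, image, and port here are hypothetical), a Kubernetes Deployment can declare a rolling-update strategy so new container versions replace old ones gradually, without taking the whole service down:

```yaml
# Illustrative Deployment manifest; name, labels, and image are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep at least 2 of 3 replicas serving during a rollout
      maxSurge: 1         # allow 1 extra pod while new versions start up
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example.com/web-app:1.2.0   # bumping this tag triggers a rolling update
          ports:
            - containerPort: 8080
```

Updating the image tag (for example with `kubectl set image`) causes Kubernetes to replace pods a few at a time rather than all at once, which is what keeps deployments continuous.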
Bottom line: Kubernetes, and containers in general, are a natural fit for cloud environments because containers are portable and lightweight, and they can run in most cloud and on-premises environments. In particular, containers are a way to achieve a multi-cloud strategy, because they run both on premises and on the major cloud providers. This cross-compatibility makes containers an attractive option for reducing the risk of adopting microservices in the cloud.
Pros and Cons of Amazon Web Services (AWS)
Of the three top services (AWS, Azure, GCP), AWS is the market-share leader in cloud hosting, so we’ll look at it first. It has three container environments: ECS, EKS, and Fargate. ECS is the best option if you have little experience with containers and already host your services on AWS. It’s the “container light” of the three options. Deploying to ECS has been called “containers-as-a-service,” and it’s considered a good starting point for anyone who wants to determine whether containers are right for the organization. You don’t need to install anything with ECS; you simply let the service automate your deployments directly in the cloud using AWS CloudFormation.
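To give a sense of what that CloudFormation automation looks like (this is a minimal illustrative sketch, not a production template; the resource names and image are made up), a template can register a one-container ECS task definition that ECS then runs for you:

```yaml
# Minimal illustrative CloudFormation sketch: registers a single-container,
# Fargate-compatible ECS task definition. Names and image are hypothetical.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: web-app
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc
      Cpu: '256'        # 0.25 vCPU
      Memory: '512'     # 512 MiB
      ContainerDefinitions:
        - Name: web
          Image: nginx:latest
          Essential: true
          PortMappings:
            - ContainerPort: 80
```

An ECS service pointing at this task definition would then keep the desired number of copies running; CloudFormation handles registration and updates as part of the stack, so there is nothing to install locally.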
For a fuller Kubernetes and container experience, you can opt for Amazon EKS. If you’re already hosting Kubernetes locally, EKS simply moves your existing environment to the cloud. EKS is well positioned to become the most popular way to manage containers: 63 percent of container users surveyed by the Kubernetes project were already running on AWS.
AWS Fargate is the new kid on the block, Amazon’s latest release for container users, and a way for developers with no experience in the underlying infrastructure to work with containers. Fargate lets you deploy containers without managing servers or clusters. AWS has confirmed that Fargate will work with EKS as well, so there will be several options and combinations to match individual needs.
Hosted Kubernetes on AWS is what makes it attractive to developers who are just learning the container ecosystem. If you need to experiment with containers and aren’t sure whether they fit your development environment, AWS and its Kubernetes service are a good starting point. The downside is that EKS is considered difficult to set up and requires some technical background with containers, but it’s also a fully scalable and customizable solution that puts the business in control of Kubernetes and the way it works with local development.
Pros and Cons of Azure
If you primarily work in a Windows environment, deploying in Azure seems like the natural solution. Azure’s container service, AKS (Azure Kubernetes Service, which grew out of the earlier Azure Container Service), is reported to be slightly slower during deployments, but it’s still an improvement over the old tradition of manually moving .NET code from staging or development to production. Microsoft is the newest entrant to the cloud container market, offering container services only since 2015, so it continues to improve its service.
Even though Azure sounds like it would naturally be just for Windows instances, it actually supports Linux images, which means you can deploy Linux containers on Azure as long as you select a Linux operating system from the Azure dashboard. You aren’t limited to just Windows, but you do have limitations with hybrid containers: where AWS supports hybrid deployments, Azure limits each deployment to either Linux or Windows.
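In Kubernetes terms, the OS split shows up at scheduling time: in a cluster with both node types, the standard `kubernetes.io/os` node label steers each pod to a matching node. The pod name and image below are illustrative:

```yaml
# Illustrative pod spec fragment; the well-known kubernetes.io/os label
# selects which OS the pod may be scheduled on in a mixed-OS cluster.
apiVersion: v1
kind: Pod
metadata:
  name: linux-worker
spec:
  nodeSelector:
    kubernetes.io/os: linux   # use "windows" for Windows Server containers
  containers:
    - name: app
      image: nginx:latest
```

A platform that supports hybrid deployments can host both selectors in one cluster; one that doesn’t forces you to choose an OS per deployment.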
In October 2017, Azure released AKS (Azure Kubernetes Service), which is similar to AWS EKS. AKS deploys seamlessly on Azure-hosted VMs, but its biggest advantage is that the service itself is free; you pay only for the VM resources that you use in Azure.
The downside to Azure is that, while AKS actually predates AWS EKS, Kubernetes adoption is much higher on AWS and GCP, and Azure usually trails both AWS and Google Kubernetes Engine (GKE) in supporting the latest Kubernetes versions. However, if you already have Azure and want to experiment with containers, Azure makes it easy to deploy and gives you detailed analytics you can use to determine whether it’s the right platform for you. It should be noted, though, that Azure also has a competing service to AWS ECS called Service Fabric, which is worth a look as well.
Pros and Cons of Google Cloud Platform (GCP)
Google originally created Kubernetes, so working with its platform puts you ahead of the game in almost every aspect. New versions and features are available to you immediately, while other platforms must catch up. Google also excels in big data, machine learning, and artificial intelligence (AI) technologies, so if that’s your focus, GCP is the service to work with.
The main issue with GCP is that it’s not the most popular choice for IaaS. It doesn’t have the small-business cloud offerings of AWS and Azure, so its platform as a whole is less attractive to corporations that want to integrate the cloud into their internal networks. There is no Active Directory integration as there is with Azure, or IAM as with AWS.
Google is pushing its platform as a better way to handle DevOps. DevOps departments are a hybrid of operations people and developers. The developers spend their time finding better ways to handle operations management, and this is where GCP and Kubernetes can help. If you’re looking for a better way to automate deployments within a DevOps team, GCP has its advantages.
| Platform | Pros | Cons |
|----------|------|------|
| AWS | Security, reliability, and scalability | Potentially more expensive than other options |
| | Options for lightweight services to experience Kubernetes and containers before you move your local environment to the cloud | Difficult to use for new developers unfamiliar with container services and Kubernetes |
| | Fargate gives developers a way to deploy containers with no understanding of server infrastructure | |
| Azure | More intuitive for Windows developers | Slower during deployments |
| | Supports Windows and Linux containers | Does not support hybrid containers |
| | AKS is free; you only pay for VM resources | |
| GCP | Original creator of Kubernetes, so introduction of new features is faster | Doesn’t integrate with IaaS cloud requirements |
| | Perfect for developers that want to work with AI and big data | |
| | Works well with DevOps teams | |
Cost comparisons are difficult because of the “pay as you go” structure of cloud computing. An individual developer just tinkering with Kubernetes and containers won’t pay nearly as much as an enterprise that requires high-powered computing resources. Costs are also relative to the resources you use, and each platform has a minimum amount of resources allocated to a cluster.
Still, here are some rough figures to help you estimate costs when choosing a Kubernetes platform. This comparison assumes 5 master nodes and 15 worker nodes, each with 4 vCPUs and 16GB of RAM.
| | AWS | Google Cloud Platform | Microsoft Azure |
|---|-----|----------------------|-----------------|
| Price per compute hour | $0.20 | $0.19 | $0.20 |
| Compute hours per month | 14,400 | 10,800 | 14,400 |
| Monthly cost | $2,880 | $2,052 ($1,539 with discounts) | $2,880 |
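The arithmetic behind these figures can be double-checked with a few lines of Python. One assumption here, not stated in the original comparison: the GCP column bills only the 15 worker nodes (GKE historically did not charge for master nodes), while AWS and Azure bill all 20 nodes, over a 30-day (720-hour) month:

```python
# Sanity check of the cost table above.
# Assumption: GCP bills 15 worker nodes only; AWS and Azure bill all 20.
HOURS_PER_MONTH = 30 * 24  # 720 hours

def monthly_cost(billable_nodes, rate_per_hour):
    """Return (compute hours, dollar cost) for one month."""
    hours = billable_nodes * HOURS_PER_MONTH
    return hours, hours * rate_per_hour

aws_hours, aws_cost = monthly_cost(20, 0.20)      # 14,400 h, ~$2,880
gcp_hours, gcp_cost = monthly_cost(15, 0.19)      # 10,800 h, ~$2,052
azure_hours, azure_cost = monthly_cost(20, 0.20)  # 14,400 h, ~$2,880

# The table's discounted GCP figure corresponds to a 25% effective discount.
gcp_discounted = gcp_cost * 0.75                  # ~$1,539
```

Plugging in different node counts or rates makes it easy to rerun the comparison for your own cluster size.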
GCP is the clear winner if you’re looking for a platform on a budget. Google offers up to a 30 percent sustained-use discount, which brings the cost below $2,000 per month.
How to decide
So, what’s the best service to use with Kubernetes? There’s no clear answer because in the end, it all comes down to which platform you’re comfortable with. If you already have Azure or AWS, it’s probably best to stick with the same platform. However, if you’re looking for a new player and want to get into AI and machine learning, Google has an attractive offer that actually costs less in the end.