
GKE vs AKS vs EKS

The state of managed Kubernetes offerings from the industry’s top cloud providers.

Update: This post was last updated on June 15, 2018.

In this post, we will try to maintain up-to-date information on Google Kubernetes Engine (GKE) by Google Cloud Platform, Azure Kubernetes Service (AKS) by Microsoft Azure and Elastic Container Service for Kubernetes (EKS) by Amazon Web Services.

Here is a table for quick reference. Though it is non-exhaustive, it covers what is required for most use cases.

PS: Hasura has been running Kubernetes clusters, big (~100 nodes) and small, on Azure, AWS and GCP for a few years, and we are dedicated to staying vendor-agnostic. Our dream come true would be GKE’s onboarding combined with Azure’s resource-group-based IAM and billing.

Now, let’s explore some of the main aspects of using these platforms:

Onboarding

Creating a Kubernetes cluster is a straightforward experience on both GKE and AKS. Using their native CLIs, you can create, upgrade or delete clusters with a single command (sketched below), which makes getting started very simple and smooth. For more advanced setups, GKE also provides a fairly comprehensive flow through its UI. AKS has kept its UI very minimal and doesn’t offer anything beyond what its CLI does.
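As a minimal sketch of that single-command flow (flag names and defaults roughly as of mid-2018; the cluster, zone and resource-group names are placeholders):

    # GKE: one command creates the control plane and a default node pool
    gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

    # AKS: assumes an existing resource group named "demo-rg"
    az aks create --resource-group demo-rg --name demo-cluster --node-count 3 --generate-ssh-keys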

EKS does not have a one-page onboarding experience like the others. The process involves multiple steps and requires configuration beyond the web console (such as connecting the worker nodes to the cluster via kubectl). This is clearly a bottleneck if you are in the business of creating and deleting clusters on demand. But respite is not far away, with the community building tools like ‘eksctl’ that simplify the provisioning process considerably.
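For reference, a hedged sketch of the eksctl flow (flags as of the project’s early releases; the cluster name, region and node count are placeholders):

    # eksctl provisions the control plane, a node group and the kubeconfig entry in one go
    eksctl create cluster --name demo-cluster --region us-west-2 --nodes 3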

With regard to available Kubernetes versions, GKE and EKS provide Kubernetes v1.10, which is the latest stable upstream version. GKE also offers clusters in alpha mode, i.e. a fully enabled Kubernetes package with alpha features included. AKS is not far off, just one minor version behind at v1.9.
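If you want to check which versions a provider currently exposes, the CLIs can list them; a small sketch (zone and location names are placeholders, and the az invocation reflects more recent azure-cli releases):

    # GKE: list valid master and node versions for a zone
    gcloud container get-server-config --zone us-central1-a

    # AKS: list Kubernetes versions available in a region
    az aks get-versions --location eastus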

From our usage experience, we found GKE provisioning to be the fastest, followed by AKS and then EKS. Typically, creation took ~2 minutes on GKE, ~5 minutes on AKS and ~15 minutes on EKS.

Post provisioning, all three vendors provide full access to the cluster using kubectl.
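A quick sketch of wiring up kubectl after provisioning (names follow the earlier placeholders; the EKS note reflects the tooling as of mid-2018):

    # GKE: merge the cluster's credentials into ~/.kube/config
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a

    # AKS
    az aks get-credentials --resource-group demo-rg --name demo-cluster

    # EKS needed a kubeconfig wired to the heptio-authenticator-aws / aws-iam-authenticator
    # plugin at the time (newer AWS CLI releases later added `aws eks update-kubeconfig`)

    # Sanity check on any of them
    kubectl get nodes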

Availability

Availability is crucial if you are running business-critical applications on Kubernetes.

GKE provides high availability for its clusters in two modes: multi-zonal and regional. In multi-zonal mode, there is only one master node, but there can be worker nodes in different zones. In regional mode, the master nodes are also spread across all the zones, providing even better HA.
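A hedged sketch of the two modes from the gcloud CLI (zone and region names are placeholders; note that --num-nodes applies per zone, and older gcloud releases used --additional-zones instead of --node-locations):

    # Regional cluster: masters and nodes replicated across the region's zones
    gcloud container clusters create demo-regional --region us-central1 --num-nodes 1

    # Multi-zonal cluster: one master in the primary zone, nodes spread across the listed zones
    gcloud container clusters create demo-multizone --zone us-central1-a \
        --node-locations us-central1-a,us-central1-b,us-central1-c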

AKS does not have high availability for its master nodes as of this date. The worker nodes are part of Availability Zones, so they do provide some HA. We are not sure what Azure’s roadmap looks like regarding complete HA in the future.

EKS also provides HA, with master and worker nodes spread across multiple availability zones, very similar to GKE’s regional mode.

Scalability

GKE, AKS and EKS all give you the ability to scale up the number of nodes very easily, just by using the UI (or the CLI, as sketched below).
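A minimal sketch using the placeholder names from before (in older gcloud releases the resize flag was --size, which later became --num-nodes):

    # GKE: resize the default node pool to 5 nodes
    gcloud container clusters resize demo-cluster --zone us-central1-a --size 5

    # AKS: scale the agent pool to 5 nodes
    az aks scale --resource-group demo-rg --name demo-cluster --node-count 5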

GKE and EKS provide further customisation in their ability to scale up. Unlike AKS, where you can only add more nodes of the same type, GKE and EKS provide the ability to add different ‘node pools’ (or ‘node groups’), which allows machines of different types to join the worker pool. Whereas in GKE adding a node pool is a single-step process, in EKS you additionally need to connect the new node group to the cluster manually.
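On GKE, for instance, adding a differently sized node pool is one command; a sketch (the pool name and machine type are arbitrary):

    # Add a second node pool with a larger machine type to an existing GKE cluster
    gcloud container node-pools create highmem-pool \
        --cluster demo-cluster --zone us-central1-a \
        --machine-type n1-highmem-4 --num-nodes 2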

Also, in GKE you can configure your cluster to use the Cluster Autoscaler, which will automatically scale your nodes up or down based on the requested workload. This is great if you repeatedly run short-lived processes like batch jobs. Note that although both AKS and EKS have their worker nodes running in “auto-scaling” groups, those auto-scaling policies are not Kubernetes-aware. The Cluster Autoscaler can also be set up manually on these providers.
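On GKE this is just a flag on the cluster or node pool; a minimal sketch:

    # Enable the Kubernetes-aware Cluster Autoscaler on an existing GKE node pool
    gcloud container clusters update demo-cluster --zone us-central1-a \
        --enable-autoscaling --min-nodes 1 --max-nodes 10 --node-pool default-pool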

Add-ons

There are many add-on features that enhance the usability of a Kubernetes cluster, and the availability of these features as turn-key services is where the providers differentiate themselves most clearly.

For instance, all three providers offer the option of running K8s on GPU-powered nodes. This enables heavy machine-learning / image-processing frameworks like TensorFlow to leverage their full potential.
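On GKE, for example, GPU nodes are added as a node pool with an accelerator attached; a sketch (the accelerator type must be one available in your zone, and the NVIDIA device-plugin DaemonSet still has to be installed before pods can request GPUs):

    # Add a GPU node pool to an existing GKE cluster
    gcloud container node-pools create gpu-pool \
        --cluster demo-cluster --zone us-central1-a \
        --accelerator type=nvidia-tesla-k80,count=1 \
        --machine-type n1-standard-4 --num-nodes 1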

GKE supports Calico as its network-policy provider, which enables Network Policies to be defined for inter-pod communication. AKS supports Network Policies through the kube-router project, which has to be installed manually. EKS also provides Calico integration, though it has to be set up manually. Network policies are crucial for securing the platform, especially in a multi-tenant environment.
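On GKE, policy enforcement is a create-time flag, after which standard NetworkPolicy objects apply; a minimal default-deny sketch:

    # Create a GKE cluster with Calico-backed NetworkPolicy enforcement enabled
    gcloud container clusters create demo-netpol --zone us-central1-a --enable-network-policy

    # Deny all ingress traffic to pods in the default namespace
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    EOF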

AKS offers a new and incubating Kubernetes feature called Service Catalog to its users. With Service Catalog, a client can request Azure services, which are then provisioned and bound, with the credentials made available to applications running on the cluster.
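As a rough, hedged sketch of the Service Catalog flow (the class and plan names below are placeholders, not real Azure broker catalog entries; check the Open Service Broker for Azure documentation for actual names):

    # Provision a service instance and bind its credentials into a secret
    kubectl apply -f - <<'EOF'
    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceInstance
    metadata:
      name: example-db
    spec:
      clusterServiceClassExternalName: example-service   # placeholder class
      clusterServicePlanExternalName: basic               # placeholder plan
    ---
    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceBinding
    metadata:
      name: example-db-binding
    spec:
      instanceRef:
        name: example-db
      secretName: example-db-credentials   # credentials appear in this secret
    EOF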

AKS also supports integration with the virtual-kubelet project. With virtual-kubelet, pods can be backed by the provider’s CaaS service (in this case, ACI) instead of VMs. This enables a “serverless” experience for workloads on Kubernetes. Along the same lines, EKS is also working on providing integration with its own CaaS, Fargate.
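A hedged sketch of pinning a pod onto the virtual-kubelet (ACI) node; the node name and toleration follow the project’s conventions at the time but depend on how the connector is deployed, so treat them as assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: aci-example
    spec:
      containers:
      - name: web
        image: nginx
      nodeSelector:
        kubernetes.io/hostname: virtual-kubelet   # assumed name of the connector node
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
    EOF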

We foresee many more features from upstream Kubernetes and its incubator projects making their way to these managed providers.

Pricing

Both GKE and AKS charge you only for the infrastructure that is visible to you. This means the master nodes, cluster management and other services are free of cost.

EKS differs from the others here, as it charges for the master nodes as well. At $0.20 per hour for the (combined) master nodes, which works out to roughly $144 for a full month, EKS is considerably costlier.

Clearly, even from a pricing perspective, if you want to run a K8s cluster on these providers it makes sense to use their managed service instead of deploying it yourself. Of course, a self-deployed cluster might be inevitable for a few custom cases.

Here is a simple pricing table for an 8-node (1 core, 3.5 GB RAM per node) cluster:

 ╔═══════════╦══════════════════════╦═════════════════════════════╗
 ║           ║ Short-term (100 hrs) ║ Long-term (3 yrs committed) ║
 ║           ║ per month            ║ per month                   ║
 ╠═══════════╬══════════════════════╬═════════════════════════════╣
 ║    GKE    ║ $40                  ║ $125                        ║
 ╠═══════════╬══════════════════════╬═════════════════════════════╣
 ║    AKS    ║ $60                  ║ $150                        ║
 ╠═══════════╬══════════════════════╬═════════════════════════════╣
 ║    EKS    ║ $50 + $20 (master)   ║ $150 + $144 (master)        ║
 ╚═══════════╩══════════════════════╩═════════════════════════════╝

Endnote

It’s very exciting to see big cloud providers getting into the space of managed Kubernetes. These clusters come with cluster management, high availability, scalability, automation and many other plugin services. The Kubernetes ecosystem is evolving with great speed with such platforms competing with each other and providing solid value to end users.

Do you have any other criteria which you’d like to see in this qualitative comparison of Kubernetes offerings? Please let me know in the comments below!
