Kubernetes is the answer. Now, what was the problem?

Over the last several years, an insidious technology has seeped into the technosphere: containerisation, and the orchestration thereof by Kubernetes. For a large subsection of tech, the consumers of containers, the complexity has been successfully hidden. The Kubernetes solutions provided by the public cloud hyper-scalers – Azure, AWS and Google – and, to a greater or lesser extent, the private cloud providers appear at first glance easy to consume. Kubernetes seems very much to be the serene swan gliding over the lake. However, pull away the pretty façade, dip your head under the water, and the swan's feet are paddling faster than a hamster on a wheel. To a developer, Kubernetes and containers make things simple: with a standard deployment interface and a common development platform, it manages the infrastructure for you, because it is a platform.

This is the beauty, and the danger, of Kubernetes: it is a platform. It hides the complexity of its moving parts by providing a standard interface for deploying your applications. It can also handle several application life-cycle functions – automatic growth and contraction, create, modify and destroy operations, and auto-recovery – all of which are manna from heaven for application developers. Another excellent benefit of Kubernetes and containerisation is that it is cloud agnostic. That's right, Kubernetes-deployed containers are cross-cloud deployable.
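
As a sketch of how that automatic growth and contraction is expressed in practice, here is a minimal HorizontalPodAutoscaler manifest; the target Deployment name "web" and the numbers are illustrative assumptions, not recommendations:

# Minimal autoscaling sketch: Kubernetes adds or removes replicas of a
# (hypothetical) "web" Deployment to hold average CPU near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa             # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # assumed application Deployment
  minReplicas: 2            # contraction floor
  maxReplicas: 10           # growth ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

The point is not the specific numbers but that the life-cycle behaviour is declared, not coded; the platform does the rest.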

OK, that is a little bit of marketing there. Several prerequisites must be met to deploy a container cross-cloud successfully: for example, matching Kubernetes platform versions, the same (or a compatible) underlying container technology, network connectivity between clouds, and so on. Why you would want to do this is an entirely different question and will be the subject of a later article.

All this is great, but it remains fiercely complicated under the covers. Before we get to that, though, a little bit of history.

For anybody over the age of forty, containers are NOT new. Docker did not invent the container concept; all they did was make it viable, and they were in the right place at the right time. However, the success of Docker brought with it the law of unintended consequences. Suddenly there were containers running all over the place, and deploying, managing and removing them was hard. Google, which had faced this problem at massive scale, had already built an internal tool called Borg to orchestrate the life-cycle of its containerised applications and services.

I am Borg; I bring order to your Containers. (Copyright: Paramount Pictures)

Borg was the root of Kubernetes and contributed several of its core features, namely these four:

  1. Pods. A pod is the smallest deployable unit in Kubernetes.  It is a group of one or more containers.  Containers that are part of the same Pod are guaranteed to be scheduled together onto the same machine and can share state via local volumes.
  2. Services. These are used to expose the application running on the aforementioned Pod.
  3. Labels. These are key/value pairs assigned to a Pod or other object to identify certain attributes of that object to the user. A given key can appear only once per object; the snippet below shows one typical combination, but "release" could equally be "canary", "environment" could be "dev" or "qa", "tier" could be "backend" or "cache", and so on. For example:
"metadata": {
   "labels": {
      "release": "stable",
      "environment": "production",
      "tier": "frontend",
      "partition": "customerA",
      "track": "daily"
   }
}
  4. IP-per-Pod. Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod and connects them to other Pods running in the cluster. (A short sketch tying these four concepts together follows below.)
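
To make those four concepts concrete, here is a minimal sketch of a Pod and a Service that finds it via its labels; the names and the nginx image are illustrative assumptions only:

# A minimal Pod carrying the labels the Service below will select on.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod             # illustrative name
  labels:
    tier: frontend
    environment: production
spec:
  containers:
    - name: web
      image: nginx:1.25     # any container image will do
      ports:
        - containerPort: 80
---
# A Service exposing every Pod whose labels match its selector.
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # illustrative name
spec:
  selector:
    tier: frontend          # label matching, as described above
    environment: production
  ports:
    - port: 80
      targetPort: 80

Note that the Service never names the Pod directly; it selects on labels, which is precisely why labels matter so much.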

Kubernetes consumers interact either directly with the Kubernetes API and code, or through a pretty interface that guides your hand and leads you gently through the barbed jungle. It is the abstraction of the underlying complexity that gives Kubernetes its power. But it is the complexity of Kubernetes that is its Achilles heel. From an infrastructure perspective, it is not an easy beast to tame. Building out a Kubernetes stack from scratch is not a trivial task if you are still running a traditional data centre. That said, it is not as tricky as five years ago, when it would involve the mystic arts of Linux whispering, fighting "make" commands, package versions and other prerequisites. Even when using a packaged deployment, it is still a trial. VMware's Tanzu stack, an option from version 7 of vSphere, is not seamless, and Red Hat's OpenShift is not the most obvious of solutions either.

ohhh, a pretty graphical interface (Azure AKS)

So, from the developer's perspective, containers and Kubernetes are simple to consume and almost perfect. However, if you talk to the infrastructure and operations side of the equation, you will soon find that the horror stories are manifold. In fact, several websites are dedicated to reporting Kubernetes failures; see, for example, "Kubernetes Failure Stories".

Kubernetes is complicated

Kubernetes and container deployments are a complicated beast; the number of moving parts that go into delivering a functional solution is often baffling.

At its core, a highly available Kubernetes cluster is a set of controller machines (typically three) managing three or more worker nodes. These worker nodes run your containers. OK, that doesn't sound complicated. Well, let's dig a little deeper: there is etcd, the kube-apiserver, the kube-scheduler and the kube-controller-manager (which contains the node, replication and endpoint controllers), and that is just the control plane; every worker node then runs its own kubelet and kube-proxy on top.
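
To give a feel for that, here is a minimal kubeadm ClusterConfiguration sketch; every value shown (the endpoint, the version, the CIDR ranges) is an assumption you must decide on before the first control-plane node even boots:

# A bare-bones kubeadm control-plane configuration sketch.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.0                             # assumed version
controlPlaneEndpoint: "k8s-api.example.internal:6443"  # assumed VIP/DNS name
etcd:
  local:
    dataDir: /var/lib/etcd                             # cluster state lives here
networking:
  podSubnet: 10.244.0.0/16                             # Pod CIDR, carved up per node
  serviceSubnet: 10.96.0.0/12                          # virtual IPs for Services

And that is before choosing a CNI network plugin, a storage provisioner, an ingress controller or a monitoring stack.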

All this infrastructure to deliver an application.

But why does it have to be so complicated? According to Appvia, the reason is simple: "Kubernetes defines a complex infrastructure so that applications can be simple. All those things that an application developer typically had to consider when coding a new application, like security, logging, redundancy and scaling, are all built into the Kubernetes fabric". So there you have it; Kubernetes is complicated so that developers can have an easier time creating their applications without worrying about things like security, logging, scaling and redundancy. More work for Operations again.
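
As a small illustration of that trade, here is a minimal Deployment sketch in which redundancy and auto-recovery are simply declared; the name and image are assumptions for illustration:

# Redundancy and self-healing declared rather than coded: Kubernetes
# keeps three replicas running and restarts any container that fails
# its liveness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # redundancy: three copies at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # illustrative image
          livenessProbe:    # auto-recovery: restart on probe failure
            httpGet:
              path: /
              port: 80

The developer writes none of the restart or replication logic; Operations, on the other hand, now owns the machinery that makes it all true.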

All this infrastructure just to deliver an application; this would be acceptable if that were the end of it. Back in the early days, when containers were stateless, it was. But, with vendors forever driving new features, containers are no longer the cattle we were led to believe they were. Now we have stateful containers and storage containers, and Operations needs to know how to handle these to prevent outages. That means new monitoring capabilities and a deep understanding of how your infrastructure works.
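
For a taste of what "no longer cattle" means, here is a minimal StatefulSet sketch (names, image and sizes are illustrative assumptions): each replica gets a stable identity and its own PersistentVolumeClaim that survives restarts, and somebody in Operations has to know where that storage lives.

# A stateful workload: stable per-replica identity plus a dedicated
# volume per replica, retained across restarts and rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                    # illustrative name
spec:
  serviceName: db             # headless Service assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15  # illustrative stateful workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:       # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi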

Kubernetes requires so much configuration.

Kubernetes is the only technology that does not appear to have received the simplification memo.

Kubernetes is simple – sorry, was I supposed to be simple?

There are no shortcuts to deploying Kubernetes; it is not like deploying a VMware environment or building out infrastructure on AWS or Azure. There is no easy option. You cannot just deploy and go. Deploying your container image and your Kubernetes distribution is just the start. You don't believe me? Read the standard documentation set at Kubernetes.io. That is just to get started, not optimised.

I have often said that any solution that is more complicated than the problem it purports to solve is no solution. My gut feeling is that this is the case with Kubernetes. That said, I do feel that it has its place. For those that need a cloud-agnostic solution, Kubernetes is currently the only answer to that problem. This is unlike virtual machines, whose format depends on the platform they are deployed on (VHD on Azure, VMX on VMware), or functions, which again are platform-specific (Lambda on AWS, Function Apps on Azure).

Is it worth the investment needed to understand the platform? Consultant's answer: it depends. Are you deploying containerised applications at scale? Do you require the ability to move from cloud to cloud at the drop of a hat? If you are happy to consume a managed service, then arguably no, you do not need in-depth knowledge of the platform's infrastructure; use AKS, EKS or GKE. However, if you are running VMware or OpenShift, etc., you will need to understand it implicitly. Tanzu Kubernetes Grid and OpenShift are good products, but simple to deploy they are not. Is Kubernetes a great addition to your CV? Very much so. Are there many environments deploying containerised applications? If you work at Azure, AWS, GCP, Netflix, Facebook or Twitter, then the answer is yes, you are deploying them every second. However, the vast majority of enterprise and LME environments are only now starting to look at application modernisation.

Summary

Personally, I feel it is a technology that has not matured enough for mainstream adoption; there is still too much of the mystic arts surrounding containers and Kubernetes in general, even after eight years of the project being in the wild (Kubernetes was first released in September 2014). It is still very much a pioneer technology; after eight years it should have reached late-settler status, even the town-planner stage. Consider the strides in simplicity of use that VMware ESX/ESXi made between its initial release in 2001 and version 4.1 in 2010, against the obvious lack of similar improvement in Kubernetes.

This leads me to believe that Kubernetes is still a solution looking for a problem, because it is still technology first and not simplicity first.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, or mail sales@amazic.com.