
Kubernetes Management Challenges Are on the Rise

A pair of reports suggest that IT organizations are struggling to manage Kubernetes environments as the number of workloads deployed on the cloud-native platform continues to steadily increase.

A survey of 800 C-level executives and senior IT operations and DevOps professionals conducted by Pepperdata, a provider of a platform for controlling cloud costs, finds organizations on average have now deployed 10 Kubernetes clusters, with 19% having deployed more than 11.

However, more than half (57%) cited a significant or unexpected amount of spending on compute, storage, networking infrastructure and/or infrastructure-as-a-service as their biggest challenge. Not surprisingly, nearly 44% are responding by implementing cloud cost reduction and finance operations (FinOps) initiatives at a time when more organizations than ever are navigating uncertain economic times, the survey finds.

A second report, meanwhile, suggests organizations find it especially challenging to set memory limits. An analysis of more than 150,000 workloads running in Kubernetes clusters deployed by hundreds of different organizations, conducted by Fairwinds, a provider of managed services, finds 30% of organizations have memory limits set too high on at least 50% of their workloads.
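For context, memory requests and limits are declared per container in a workload's manifest. The minimal Deployment below is an illustrative sketch rather than an example drawn from the Fairwinds data; the names, image and values are assumptions. A limit set well above what a workload actually uses is the kind of overprovisioning the report flags.

```yaml
# Minimal illustrative Deployment showing where memory requests and limits live.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # assumed workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"      # capacity the scheduler reserves for the container
          limits:
            cpu: "500m"
            memory: "512Mi"      # hard cap; values far above real usage signal overprovisioning
```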

Exactly who is responsible for managing Kubernetes clusters in most organizations varies widely. In some instances, developers who lack IT operations experience are managing clusters inefficiently. Many of those developers routinely overprovision Kubernetes clusters despite the platform’s inherent ability to dynamically scale resources up and down as required.

At the other end of the spectrum, IT operations teams that lack the skills, tools and expertise required to programmatically automate Kubernetes cluster management are getting more involved. Many of those IT professionals might not entirely understand the need to, for example, right-size containers. The Fairwinds report finds organizations that implemented Kubernetes guardrails were able to correct 36% more issues where CPU and memory configurations were missing than those that did not have guardrails in place. IT teams leveraging guardrails were also able to repair 15% more image vulnerabilities than those not using them.
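Guardrails can take many forms, and Fairwinds' own tooling is not shown here. One built-in Kubernetes mechanism that addresses missing CPU and memory configurations is a LimitRange, which applies default requests and limits to containers that omit them. The namespace and values in the sketch below are assumptions for illustration only.

```yaml
# Illustrative LimitRange: containers in the (assumed) "team-a" namespace that
# omit resource settings receive these defaults instead of running unconstrained.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
  namespace: team-a      # assumed namespace
spec:
  limits:
  - type: Container
    defaultRequest:       # applied when a container sets no requests
      cpu: "250m"
      memory: "256Mi"
    default:              # applied when a container sets no limits
      cpu: "500m"
      memory: "512Mi"
```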

In theory, at least, cloud-native applications should be less expensive to deploy than legacy monolithic applications that consume a dedicated amount of infrastructure allocated to a specific virtual machine. Cloud-native applications deployed on Kubernetes clusters should be able to dynamically scale up and, just as importantly from a cost perspective, back down. In reality, many IT teams are applying the same practices they use to build and deploy monolithic applications to cloud-native application environments, largely because organizations are rushing to deploy cloud-native applications without first making sure the proper orchestration and management controls are in place.
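The dynamic scale-up and scale-down described above is typically expressed declaratively, for instance with a HorizontalPodAutoscaler. The sketch below is purely illustrative; the target Deployment name, replica bounds and CPU threshold are assumptions rather than recommendations from either report.

```yaml
# Illustrative HorizontalPodAutoscaler: scales the (assumed) "web" Deployment
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # assumed workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Pairing an autoscaler with realistic requests and limits is what allows a cluster to reclaim capacity when demand drops.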

The reason for this is that Kubernetes is simultaneously the most powerful and the most complex platform to find its way into the enterprise. There is no standard management plane, so IT organizations are required to master a range of lower-level application programming interfaces (APIs) and YAML files to manage each cluster. Some organizations may have DevOps teams in place that can programmatically invoke those APIs and manage what can quickly become a wall of YAML files, but the average IT administrator who lacks programming skills needs a set of graphical tools to effectively manage any IT platform. There is usually not enough DevOps talent available, so most organizations will at some point need to press traditional IT administrators into Kubernetes management service.

Ultimately, organizations will need to determine to what degree they want DevOps teams to manage infrastructure alongside application code versus relying on IT administrators to handle those tasks. The less time DevOps teams spend managing and optimizing infrastructure, the more time they should have to focus on applications. If they elect to have IT administrators take on that work, providing those administrators with a control plane to manage multiple Kubernetes clusters at a higher level of abstraction becomes absolutely essential.
