As Kubernetes clusters become more widely employed within enterprise IT organizations, a long-simmering debate about where best to deploy applications is now coming to a head.
A cloud-native application is typically deployed on a Kubernetes cluster, but most enterprise IT organizations are managing hundreds of legacy monolithic applications that were originally deployed on virtual machines. In effect, IT organizations that embrace cloud-native applications to take advantage of elastic infrastructure will find the total cost of IT increasing as they allocate resources to manage two very different classes of applications.
There is, however, a way to run legacy monolithic applications that rely on virtual machines within a Kubernetes cluster. Originally developed by Red Hat, the open source KubeVirt software makes it possible to deploy workloads running on the open source Kernel-based Virtual Machine (KVM) hypervisor within containers that can be managed just like any other pod running on a Kubernetes cluster.
Today KubeVirt is being advanced as an incubation-level project by the Cloud Native Computing Foundation (CNCF). The maintainers of the project just released version 1.0 of KubeVirt to ensure that a core set of application programming interfaces (APIs) for the project remains stable going forward.
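For readers who want a sense of what that looks like in practice, here is a minimal sketch, assuming a cluster that already has KubeVirt installed and the official Kubernetes Python client on hand; the namespace, VM name and container disk image below are illustrative placeholders rather than anything prescribed by the project.

```python
# Minimal sketch: declaring a KubeVirt VirtualMachine through the standard
# Kubernetes API machinery. Assumes KubeVirt is already installed on the
# cluster; the namespace, VM name and disk image are placeholders.
from kubernetes import client, config


def create_demo_vm() -> None:
    config.load_kube_config()  # or load_incluster_config() when running inside the cluster
    api = client.CustomObjectsApi()

    # A VirtualMachine is just another custom resource; once created, KubeVirt
    # launches the guest inside a pod that Kubernetes schedules like any other.
    vm_manifest = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "legacy-app-vm", "namespace": "default"},
        "spec": {
            "running": True,
            "template": {
                "spec": {
                    "domain": {
                        "devices": {
                            "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                        },
                        "resources": {"requests": {"memory": "1Gi"}},
                    },
                    "volumes": [
                        {
                            "name": "rootdisk",
                            # containerDisk ships the VM image inside a container image
                            "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                        }
                    ],
                }
            },
        },
    }

    api.create_namespaced_custom_object(
        group="kubevirt.io",
        version="v1",
        namespace="default",
        plural="virtualmachines",
        body=vm_manifest,
    )


if __name__ == "__main__":
    create_demo_vm()
```

The point of the sketch is that the virtual machine is expressed as just another custom resource: once the object is created, KubeVirt runs the guest inside a pod, and the VM can be listed, monitored and deleted with the same tooling used for any other Kubernetes workload.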
Of course, KVM is not the most widely employed virtualization platform in the enterprise. VMware doesn't provide a way to run its virtual machines on Kubernetes clusters; it prefers to make a case for running Kubernetes on top of its virtual machines. However, there are toolkits available to convert virtual machines from other platforms into KVM format. In the wake of the proposed acquisition of VMware by Broadcom, some IT teams are hedging their bets by exploring that option along with others.
Others are simply looking for ways to move legacy applications into cloud computing environments in a way that reduces licensing costs for proprietary software. Regardless of motivation, however, it's clear Kubernetes clusters are starting to be pervasively deployed across the enterprise. As more cloud-native applications are deployed on those clusters, it's only a matter of time before many IT teams discover that the number of workloads running natively on Kubernetes exceeds the number of monolithic applications running on virtual machines. Beyond the cost of the infrastructure required, the total cost of maintaining, securing and hiring the personnel required to manage two completely separate application stacks will inevitably start to add up.
Of course, it's not clear to what degree monolithic applications might experience a performance penalty if they were deployed on KubeVirt, but there are clearly many of these applications where performance is not a primary concern. IT teams may always wind up running a subset of the more mission-critical monolithic applications as they are today on virtual machines, but there is also a clear opportunity to rationalize some of the existing IT infrastructure that many less critical applications run on today. It's also important to remember that many of those legacy monolithic applications are running on older IT infrastructure. If they were moved to a Kubernetes cluster running on modern infrastructure, they might run just as fast or, for that matter, even faster.
It's worth noting that KubeVirt hasn't been easy to deploy, but providers of Kubernetes management frameworks are starting to address that issue. Spectro Cloud, for example, recently extended its Palette management platform to make it simpler to deploy KubeVirt.
There are, naturally, multiple ways to reduce the total cost of IT. It may prove simpler to just replace a legacy monolithic application with a modern cloud-native alternative or a software-as-a-service (SaaS) application that a vendor manages on behalf of multiple organizations. There will, however, always be instances of custom legacy applications that, for one reason or another, can't be replaced just yet, so IT teams will, for many years to come, need to employ various methods, including rehosting applications, to rein in the cost of deploying, updating, securing and managing those applications.