When organizations scale up Kubernetes and other cloud native environments for day 2 operations, inefficiency and rising costs become significant problems. Kubernetes is complex, and resource management can quickly get out of hand. Before you know it, your monthly cloud bill could double or triple. What you need is a way to better manage Kubernetes resources.
The premise of Kubernetes was that scaling an application across multiple nodes is part of the design model: if you build your application well, it should simply scale up and down. Although Kubernetes provides the flexibility to scale, in reality the process is complicated. Scaling and resource optimization in Kubernetes introduce inefficiency that ultimately bleeds into developer productivity and degrades the user experience.
The complexity of Kubernetes
When you deploy an application on Kubernetes, there are many resource settings to configure: replicas, CPU and memory requests and limits, and so on. Multiply the different resource settings by the number of containers that make up your application, and the number of combinations is essentially infinite. Every combination of settings produces a different outcome, which makes tuning a significantly complex process. This complexity – Kubernetes resource management complexity – forces you to choose among three options:
- Over-provisioning: As companies start using Kubernetes, they tend to over-provision just to be safe. As you scale up for day 2 operations, this quickly becomes a costly affair.
- Risking application performance: When companies scale back on their resources and try to be more frugal with the resources allocated to an application, they are introducing risks that will end up affecting the user experience.
- Slowing down time-to-market so that the DevOps team can focus on efficiency: Companies can tune the application to run more efficiently by changing one variable at a time. In practice this doesn't work: the variables are interrelated, and the number of combinations is far too large for humans to explore.
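To get a sense of why manual tuning is infeasible, here is a rough back-of-envelope calculation of the configuration search space. The parameter granularities and the five-container app are illustrative assumptions, not fixed rules:

```python
# Back-of-envelope estimate of the Kubernetes tuning search space.
# All granularities below are illustrative assumptions.

cpu_request_steps = 40      # e.g. 100m..4000m in 100m increments
memory_request_steps = 32   # e.g. 128Mi..4Gi in 128Mi increments
cpu_limit_steps = 40
memory_limit_steps = 32
replica_steps = 10          # e.g. 1..10 replicas

# Combinations of request/limit settings for a single container
per_container = (cpu_request_steps * memory_request_steps *
                 cpu_limit_steps * memory_limit_steps)

containers = 5              # a modest microservice application
total = (per_container ** containers) * replica_steps

print(f"~{total:.1e} possible configurations")
```

Even at this coarse granularity the space exceeds 10^30 configurations, which is why changing one variable at a time can never cover it.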
StormForge’s holistic approach to Kubernetes
When StormForge was founded in 2015, it was essentially a machine learning lab. As the StormForge team began to port their code over to Kubernetes, they quickly realized that the machine learning they were using to optimize data center power utilization could also be adapted to the Kubernetes problem. Machine learning, as it turns out, is well-suited to complex, multi-variable optimization problems. This insight led to two products in the years that followed. The first, Optimize Pro, performs proactive optimization in a non-production environment through experimentation.
The second, Optimize Live, employs continuous optimization and works in a production environment by using the generated observability data.
The first step in using Optimize Pro is to create a load test that runs the scenario you want to experiment with. After the load test is set up, you configure your experiment; cost and performance are usually the two primary goals. You can then kick off the experiment. The process is fully automated, with StormForge managing the entire experiment.
An experiment is made up of a number of trials. In each trial, the application is started in a non-production environment and load is applied using the load test built earlier. The results – the goals you specified – are measured and analyzed by machine learning, which comes back with a new set of recommendations, updates the manifest, and repeats the process. With every iteration, the machine learning learns more and builds a better understanding of the application's complex multi-parameter space. Over time, it homes in on the configurations that produce optimal outcomes. In short, Optimize Pro runs different scenarios and machine learning tells you the best way to configure your application.
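The trial loop described above can be sketched generically. This is not StormForge's actual API; the cost and latency models are toy formulas, and a trivial random-search stand-in replaces the real machine learning recommender:

```python
import random

def run_trial(config):
    """Stand-in for deploying the app and running the load test.
    Returns (cost, latency) measurements for one trial (toy models)."""
    cpu, mem = config["cpu_m"], config["memory_mi"]
    cost = cpu * 0.01 + mem * 0.005      # toy cost model
    latency = 2000 / cpu + 500 / mem     # toy performance model
    return cost, latency

def suggest(history):
    """Stand-in for the ML recommender: simple random search here.
    A real optimizer would model the parameter space from history."""
    return {"cpu_m": random.choice(range(100, 4001, 100)),
            "memory_mi": random.choice(range(128, 4097, 128))}

random.seed(0)
history = []
for trial in range(20):                  # each pass = one trial
    config = suggest(history)
    cost, latency = run_trial(config)
    # Combine the goals into one score; real tools expose trade-off curves
    score = cost + 10 * latency
    history.append((config, score))

best_config, best_score = min(history, key=lambda h: h[1])
print("best configuration found:", best_config)
```

The key idea is the feedback loop: each trial's measurements feed the recommender, which proposes the next configuration to try.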
A few years after the creation of Optimize Pro, StormForge realized that experimentation was only one way to understand and configure an application. So the team came up with another product, Optimize Live, which takes the observability data your environment already generates and uses machine learning to recommend how to make your production environment more efficient. As recommendations are updated, you can either implement them automatically or insert an approval step. Once you trust the machine learning recommendations, you can put them on autopilot, helping your application maintain maximum efficiency as it runs in production.
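The apply-or-approve step can be sketched as a simple gate. The function and parameter names below are hypothetical illustrations, not StormForge's API:

```python
def apply_recommendation(current, recommended, auto_approve=False,
                         approve=None):
    """Hypothetical sketch of an approval gate: apply ML
    recommendations automatically ("autopilot") or route them
    through a human approval callback first."""
    if auto_approve:
        return recommended               # autopilot: apply directly
    if approve is not None and approve(current, recommended):
        return recommended               # reviewer approved the change
    return current                       # otherwise keep current settings

current = {"cpu_m": 1000, "memory_mi": 2048}
recommended = {"cpu_m": 500, "memory_mi": 1024}

# With an approval step: accept only CPU reductions (illustrative policy)
gated = apply_recommendation(
    current, recommended,
    approve=lambda cur, rec: rec["cpu_m"] <= cur["cpu_m"])

# On autopilot: recommendations are applied continuously
autopilot = apply_recommendation(current, recommended, auto_approve=True)
```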
So, there are two approaches:
You either educate developers and platform teams about their actual workload requirements and help them make better provisioning decisions or you help transparently adjust and rectify the provisioning errors.
The goal at StormForge is to make its platform and machine learning part of the standard DevOps process: a continuous, systematic optimization loop between production and pre-production. This takes the burden off developers, freeing them from manually tuning Kubernetes applications so they can focus on innovation and improve their overall productivity.
StormForge helps optimize Kubernetes and improve efficiency by applying machine learning to non-production experimentation, scenario analysis, and production environments. This enables easy, rapid deployment and configuration and increases developer efficiency.
If you have questions about this topic, feel free to book a meeting with one of our solutions experts or email email@example.com.