
8 Best Practices to Improve Your Kubernetes Deployments

Kubernetes is all about simplifying deployments. However, if not managed well, a bad deployment can result in downtime and a poor user experience. In this article, we look at 8 best practices to follow when deploying an application on Kubernetes.

1. Ensure Pods are Highly Available

High availability of pods means setting up pods so that there is no single point of failure. Kubernetes allocates a pod to a worker node based on several factors, such as the availability of node resources, taints, tolerations, and affinity and anti-affinity rules. However, Kubernetes does not ensure high availability of pods by default. Below are a number of ways to make pods highly available.

1.1. Pod Disruption Budget

Kubernetes offers features that help you run highly available applications even when you introduce frequent voluntary disruptions. Create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of pods that can be down simultaneously due to voluntary disruptions. For example, a PDB can be set for front-end pods so that Kubernetes never takes down all the replicas serving the application at the same time. There are two ways to specify a pod disruption budget: as a number of pods or as a percentage.

Example:
Configure a pod disruption budget for an nginx deployment.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 3
  selector:
    matchLabels:
      app: nginx

Deploy the pod disruption budget and the nginx deployment, get the deployment, and list the pods with the -o option.

Now drain the node. In the output of the drain command, you'll find that you're unable to drain the node completely, because evicting the remaining pods would violate the Pod Disruption Budget.
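The budget can also be expressed as a percentage rather than an absolute number of pods. The following is a minimal variant of the same PDB (the name nginx-pdb-percentage is only illustrative) that allows at most 25% of the matching pods to be unavailable during voluntary disruptions:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb-percentage   # illustrative name
spec:
  maxUnavailable: "25%"        # at most 25% of matching pods may be down voluntarily
  selector:
    matchLabels:
      app: nginx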

1.2. Affinity and Anti-Affinity

This is another method of ensuring high availability: scheduling pods on different nodes. You can make Kubernetes spread pods across worker nodes using pod anti-affinity.

There are two types of pod affinity and anti-affinity rules:

  1. requiredDuringSchedulingIgnoredDuringExecution
  2. preferredDuringSchedulingIgnoredDuringExecution

requiredDuringSchedulingIgnoredDuringExecution is a "hard" rule: the scheduler only places the pod if the rule can be satisfied. preferredDuringSchedulingIgnoredDuringExecution is a "soft" preference: the scheduler tries to honor it but will still schedule the pod if it cannot. Both types can be used for affinity as well as anti-affinity rules, as shown in the sketch below.
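As a minimal sketch (assuming the pods carry the label app: nginx), the pod template below uses a hard anti-affinity rule with the kubernetes.io/hostname topology key so that no two nginx pods land on the same worker node:

spec:
  affinity:
    podAntiAffinity:
      # hard rule: never schedule two pods with label app=nginx on the same node
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: nginx
        topologyKey: kubernetes.io/hostname
  containers:
  - image: nginx
    name: nginx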

1.3. Horizontal Pod Autoscaling

Another method to improve the availability of pods is to configure a HorizontalPodAutoscaler (HPA). When the load on the pods increases, the HPA increases the number of replicas; when the load decreases, it scales the number of replicas back down.

For a complete example, see the HorizontalPodAutoscaler Walkthrough in the Kubernetes documentation.
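As a minimal sketch (assuming a Deployment named nginx already exists), the HorizontalPodAutoscaler below keeps between 3 and 10 replicas and scales on average CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa               # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                 # assumes an existing Deployment called nginx
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add replicas when average CPU use exceeds 70%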

2. Provide Required Resources to the Pods

When you create a pod, specify the requests and limits for the resources its containers need. The most common resources are CPU and memory (RAM), though other resource types are available. Without resource requests and limits, pods can start consuming more and more resources on a node; this can prevent other pods from being scheduled and can even cause node failure. A resource request specifies the minimum amount of a resource reserved for the container, while a resource limit specifies the maximum amount the container is allowed to use.

spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "400m"
      limits:
        memory: "256Mi"
        cpu: "800m"

3. Secure the pods

Securing pods is very important. One way to do this is with role-based access control (RBAC), which lets you grant access to resources based on the roles of individual users. RBAC in Kubernetes uses the rbac.authorization.k8s.io API group. The RBAC API defines four kinds of Kubernetes objects:

  1. Role
  2. ClusterRole
  3. RoleBinding
  4. ClusterRoleBinding

3.1. Role and ClusterRole

An RBAC Role or ClusterRole contains rules that represent a set of permissions. A Role always sets permissions within a particular namespace; when you create a Role, you specify the namespace it belongs to. A ClusterRole, by contrast, grants permissions at the cluster level.

Role Example:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

A ClusterRole looks almost the same; the only differences are that the kind is ClusterRole instead of Role and that the metadata has no namespace field, because ClusterRoles are not namespaced.
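For illustration, a minimal ClusterRole sketch (the name pod-cluster-role is hypothetical) that grants the same read-only permissions cluster-wide:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-cluster-role        # hypothetical name; no namespace, ClusterRoles are cluster-wide
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]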

3.2. RoleBinding and ClusterRoleBinding

A RoleBinding grants the permissions defined in a Role to a set of subjects. Subjects can be users, groups, or service accounts. A RoleBinding grants permissions within a particular namespace, whereas a ClusterRoleBinding grants permissions across the whole cluster.

Cluster Role Binding Example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fleet-server
subjects:
- kind: ServiceAccount
  name: fleet-server
  namespace: eck-shared
roleRef:
  kind: ClusterRole
  name: fleet-server
  apiGroup: rbac.authorization.k8s.io

A RoleBinding looks the same as a ClusterRoleBinding except that the kind is RoleBinding instead of ClusterRoleBinding and the metadata must also specify the namespace in which the binding applies, as in the sketch below.
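As a minimal sketch (the binding name and service account name are hypothetical), a RoleBinding that grants the pod-role defined above to a service account in the default namespace could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-role-binding        # hypothetical name
  namespace: default            # RoleBindings are namespaced
subjects:
- kind: ServiceAccount
  name: pod-reader              # hypothetical service account
  namespace: default
roleRef:
  kind: Role
  name: pod-role                # the Role defined in section 3.1
  apiGroup: rbac.authorization.k8s.io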

4. Monitor the pods

Monitoring Kubernetes clusters is very important. You need to monitor not only at the cluster level but also every layer of the Kubernetes system: the physical nodes, the pods, and the control plane. There are many tools available in the market for Kubernetes monitoring, including:

  • cAdvisor
  • Prometheus
  • Elastic stack
  • Fluentd

Prometheus is the most widely used monitoring tool, and since it is open source, you can get started with it right away with zero commitment.

5. Use Small Container Images

Use smaller container images and avoid including unnecessary libraries. Smaller images take up less storage space and can be pulled much faster, which speeds up pod startup.

6. Perform Readiness and Liveness Probes

The kubelet uses a liveness probe to know when to restart a container. For example, a running application may end up in a deadlock; a liveness probe can catch this so that the container is restarted.

The kubelet uses readiness probes to find out when a container is ready to accept traffic. Readiness probes control which pods can be used as backends for a Service: when a pod is not ready, it is removed from the Service's load balancing.

spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "128Mi"
        cpu: "400m"
      limits:
        memory: "256Mi"
        cpu: "800m"
    livenessProbe:
      httpGet:
        path: /prodhealth
        port: 8080

In this example, the kubelet sends an HTTP GET request to the /prodhealth path on port 8080 to check whether the application is still running. If it receives a successful HTTP response, it marks the container as healthy.

Because nginx does not serve a /prodhealth path and does not listen on port 8080 by default, the probe fails and the kubelet keeps restarting the container.
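A readiness probe is configured in the same way. The following is a minimal sketch that assumes nginx is serving its default page on port 80, so the probe succeeds and the pod is added to the Service endpoints:

spec:
  containers:
  - image: nginx
    name: nginx
    readinessProbe:
      httpGet:
        path: /                # nginx serves its default page here
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds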

7. Use different namespaces

Namespaces provide a mechanism for isolating groups of resources within a single cluster. They are intended for environments where many users and teams work in the same cluster. For example, we can create separate namespaces for environments such as production, testing, and development, and grant each team member permissions only for the intended namespaces, so that everyone can access only their own resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: development
  labels:
    app: nginx
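The namespace itself is created with a small manifest. A minimal sketch for a development environment:

apiVersion: v1
kind: Namespace
metadata:
  name: development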

8. Use Labels

Keeping track of all the resources in a cluster is difficult. This is where labels help. Labels are key-value pairs that identify resources. For example, if two similar applications run in the same cluster but are used by different teams, say development and production, labels can be used to distinguish them.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    env: prod
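Labels are also what selectors match on. As a sketch (the Service name nginx-prod is hypothetical), a Service that routes traffic only to the production pods above could select them by both labels:

apiVersion: v1
kind: Service
metadata:
  name: nginx-prod             # hypothetical name
spec:
  selector:                    # only pods carrying both labels receive traffic
    app: nginx
    env: prod
  ports:
  - port: 80
    targetPort: 80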

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, or email sales@amazic.com.
