
Let your Pods do the talking

In this article, we’ll let your Pods do the talking.

Almost all applications deployed to Kubernetes need to access cloud-native resources sooner or later. The application might need to talk to an SQL database, use S3 to back up its data, or process messages asynchronously through a message queue. Whatever the reason, your application needs to access these services in a secure way.


Overview of the problem

Microservices are packaged inside containers, Pods run one or more of those containers, and Kubernetes Worker Nodes run the Pods. Every Worker Node uses a specific role with permissions to access cloud resources.

Most of the time these permissions are far too broad. There is no good reason why an entire Worker Node, with many services running on it, should use one role to access all cloud resources. In fact, the Worker Node should not have access to specific services (e.g. the database) at all; that should be the responsibility of the application itself. Furthermore, you don’t want application A to have access to resources which belong to application B: segregation of duties is at stake. On top of this, it’s difficult to trace changes if all resources are controlled by one role.

Permissions need to be narrowed down. As of today, all three major cloud providers offer solutions to do so. Security teams are always eager for such solutions, and DevOps teams should embrace them too: they contribute to the principle of least privilege and are a great step forward in securing your Kubernetes cluster.

High-level overview of the solution

Before we dive into the details, let’s summarize the high-level solution, which is almost identical for every cloud provider.

  • An identity (e.g. a role) is stored at the level of the cloud provider (Azure Active Directory, an IAM role in AWS, Workload Identity in Google Cloud).
  • Map the identity at the cloud provider level to the required permissions. In Kubernetes, you represent it with a Service Account (a technical, non-personal account).
  • Assign the Service Account to a Pod.
  • The Pod uses the permissions of the previously mentioned identity to control the cloud resources it has access to, as sketched below.
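A minimal sketch of the last two steps, assuming hypothetical names (the Service Account, namespace and image below are placeholders). The provider-specific annotations and bindings discussed in the rest of this article are what tie the Service Account to an actual cloud identity:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-sa                    # hypothetical Service Account
  # provider-specific annotation goes here (see the sections below)
---
apiVersion: v1
kind: Pod
metadata:
  name: payments
spec:
  serviceAccountName: payments-sa      # the Pod now acts with this identity
  containers:
    - name: app
      image: example.org/payments:1.0.0   # placeholder image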

All three cloud provider implementations share the same principle: use Kubernetes RBAC to control the authorization of the Pod. Another great thing: developers can deploy these solutions alongside their applications, shifting security even further left.

Azure Pod Identity

Microsoft promotes an open-source project called Azure Pod Identity to tackle this problem.

Overview

You need Azure Active Directory in combination with Azure Kubernetes Service. Azure Active Directory centralizes identity management: a change to an identity is reflected in your Kubernetes cluster automatically. From a Kubernetes perspective, you need to define Roles (permissions per namespace) and/or ClusterRoles (permissions across the entire cluster). Finally, bind these roles/cluster roles to the users and/or groups in Azure Active Directory.
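A hedged sketch of such a binding: a namespaced Role that allows reading Pods, bound to an Azure Active Directory group by its object ID. The namespace and the group object ID below are pure placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-readers
  namespace: team-a
subjects:
  - kind: Group
    name: "00000000-0000-0000-0000-000000000000"   # Azure AD group object ID (placeholder)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io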

Using these concepts as a starting point, this is the flow of what would happen when a developer interacts with the cluster:

  • The developer authenticates to the cluster using Azure Active Directory.
  • Azure Active Directory generates a token and sends it to the developer.
  • The developer interacts with the cluster with the help of the token, for example to create a new Service for their application.
  • Kubernetes intercepts the request and validates the token against the group membership(s) stored in Azure Active Directory.
  • The RBAC permissions of the developer are evaluated and applied at the Kubernetes level.
  • Finally, Azure Active Directory and Kubernetes RBAC grant or deny the request.

Pod identities

A best practice within the Azure domain is to use Pod identities to avoid storing credentials in containers or Pods. Pod identities are only supported for Linux-based container images. When implemented, workloads can automatically request access to cloud resources using Azure Active Directory as a central point of entry.

There is no need anymore to embed (static) credentials in container images or inject them as Kubernetes secrets. Remember, Kubernetes secrets are not that secure: they are just base64 encoded. Moreover, when not using Pod identities, your secrets need to be created and assigned manually. Rotating these secrets is painful, so a lot of companies don’t do it on a regular basis. Pod identity overcomes these challenges.
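To see why a secret on its own is not much of a safeguard: anyone with read access to it can decode it in one line (db-credentials and the password key are placeholder names):

kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 --decode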


The “Azure Pod Identity” project consists of two important Kubernetes components:

  • An NMI (Node Managed Identity) server, which intercepts requests from Pods that want to access an Azure resource. It is important to run this component on every Worker Node of the cluster to make sure all Pods can use it.
  • A MIC (Managed Identity Controller), a central Pod with the correct permissions to query the Kubernetes API server. It provides the mapping between the Azure identity in Active Directory and the Pod itself.

Luckily, there is a Helm chart that installs both components. They share the same version and are updated in tandem.
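Under the hood, the project introduces custom resources that describe the mapping between a managed identity and your Pods. A hedged sketch, in which the identity name, client ID, resource ID and selector are placeholders (the exact field casing depends on the project version you install):

apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: demo-identity
spec:
  type: 0                          # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo-identity
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: demo-identity-binding
spec:
  azureIdentity: demo-identity     # refers to the AzureIdentity above
  selector: demo-app               # Pods labeled aadpodidbinding: demo-app use this identity

A Pod opts in by carrying the label aadpodidbinding: demo-app; the NMI then resolves its metadata requests to the mapped identity.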

Considerations

All good solutions come with some drawbacks. Azure Pod Identity is no exception. A few examples:

  • Despite being promoted by Microsoft, Azure Pod Identity is an open-source project created by the community, and Microsoft does not provide official support for it.
  • Changes to the project happen very regularly, including breaking changes; there were breaking changes between versions 1.5.8, 1.6.0 and 1.6.2, for example. To make things even more complicated, those changes affected the application itself (the container image), the Helm chart which deploys the MIC and NMI components, as well as the way teams need to implement all of this in their workloads.
  • The NMI component relies on a set of firewall rules based on iptables. NMI runs as root and this cannot be changed.

You have to take these considerations into account when deciding whether this solution is useful for you.

AWS – IAM roles for Service Accounts

AWS has branded its solution “IAM Roles for Service Accounts” (IRSA). It’s interesting to note that AWS took the ideas, suggestions, and feedback of its users very seriously. In September 2019 they came up with this solution, which also builds on earlier initiatives like kube2iam and kiam.

IRSA is based on two access models: IAM for cloud-native services and RBAC for fine-grained access to Kubernetes resources. Access is granted on a Pod level rather than an instance (worker node) level.

Key characteristics are:

  • CloudTrail supports auditing of access to resources and the related events.
  • Strict isolation: containers can only fetch credentials based on their own Service Account.
  • Fewer dependencies on other third-party solutions: AWS integrated it all. The principle of least privilege also applies here.

Overview

A high-level overview is as follows.

First, you need to create an OpenID Connect (OIDC) provider. You can do this using eksctl (v0.5.0+) or Terraform.
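With eksctl this typically boils down to a single command (the cluster name is a placeholder):

eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve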

Second: create an IAM Service Account using eksctl. This creates an IAM role and attaches a policy to it which holds the needed permissions. Besides this, it also creates a Kubernetes Service Account and annotates it with the IAM role.
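A hedged sketch of such a command; the Service Account name, namespace, cluster name and the attached AWS-managed policy are only examples:

eksctl create iamserviceaccount \
  --name s3-backup-sa \
  --namespace backups \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve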

Finally, change an existing Pod to use the Service Account. This provides the “integration point” towards the cloud resources. You can also reference the Service Account in a Deployment that takes care of the Pods. Nice to know: the webhook that intercepts and modifies Pod creation requests is a mutating admission webhook, which is a standard Kubernetes mechanism.
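A minimal sketch of that integration point, reusing the hypothetical Service Account from the previous step. Pods that reference it get the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables plus a projected token volume injected by the webhook:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-backup
  namespace: backups
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-backup
  template:
    metadata:
      labels:
        app: s3-backup
    spec:
      serviceAccountName: s3-backup-sa      # the annotated Service Account created by eksctl
      containers:
        - name: app
          image: example.org/s3-backup:1.0.0   # placeholder image; the AWS SDK picks up the injected credentials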

Considerations

As of today, teams do not face a big list of limitations. Some considerations are as follows:

  • IRSA is created by AWS itself and it is also possible to use this solution on self-hosted Kubernetes clusters.
  • With the current state, it’s also possible to use IRSA using cross-account roles.
  • It’s also good to know that IRSA uses OIDC federated access: a Pod assumes an IAM role via the AWS Security Token Service (STS). The OIDC provider issues JSON Web Tokens (JWTs) which are exchanged to assume the IAM roles; a hedged sketch of that exchange follows below.
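Recent AWS SDKs perform this exchange automatically when they detect the injected environment variables; conceptually it boils down to a call like the following (the role ARN, session name and account ID are placeholders):

aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::111122223333:role/s3-backup-role \
  --role-session-name backup-session \
  --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE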

Google Workload Identity

Google offers a solution similar to AWS’s: “Workload Identity”. It follows the same concepts of IAM and RBAC. Before you dive in, mind the terminology: Kubernetes Service Accounts (KSA) and Google Service Accounts (GSA) are different things with a different scope.

Overview

By default, Google Cloud does not trust credentials that originate from a Kubernetes identity system. Therefore, Google introduced a concept called “Workload Identity Pool” to understand and trust external identity systems such as KSAs. By intercepting calls towards the Compute Engine metadata server, Pods can authenticate to the Google Cloud APIs and access the resources they have been granted.

An important concept here is the following “member” which is constructed automatically:

serviceAccount:some.workload-pool.id.goog[k8s_namespace/ksa_name]

  • Here, some.workload-pool.id.goog represents the Workload Identity Pool which is configured for the cluster.
  • ksa_name is the Kubernetes Service Account which requests access to the Google Cloud API(s).
  • k8s_namespace represents the namespace in which the KSA is deployed.

Google Cloud uses Workload Identity to authenticate KSAs, and the corresponding IAM role authorizes KSAs to act as GSAs. This way you can hook into the cloud-native way of handling roles and permissions.
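That authorization is typically granted by allowing the KSA member to impersonate the GSA through the Workload Identity User role; a hedged sketch that reuses the placeholder names from above:

gcloud iam service-accounts add-iam-policy-binding gsa_name@gsa_project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:some.workload-pool.id.goog[k8s_namespace/ksa_name]"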

You need to annotate the Kubernetes Service Account with the member described above:

kubectl annotate serviceaccount \
  --namespace k8s_namespace \
  ksa_name \
  iam.gke.io/gcp-service-account=gsa_name@gsa_project.iam.gserviceaccount.com
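A workload then simply references the annotated KSA. The Pod name, image and the optional node selector (which Google’s documentation suggests when not all node pools have Workload Identity enabled) are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: gcs-reader
  namespace: k8s_namespace
spec:
  serviceAccountName: ksa_name              # the annotated KSA from the command above
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"
  containers:
    - name: app
      image: example.org/gcs-reader:1.0.0   # placeholder image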

Be sure to use separate Kubernetes Service Accounts for each and every cluster. If you don’t, the same member name is resolved from a given KSA name, KSA namespace and Identity Pool, which results in a single identity shared by workloads in every cluster that uses the same Service Account name and namespace.

Considerations

It’s good to know some of the considerations before you implement this solution:

  • No support for Windows Worker Nodes, just like Azure Pod Identity.
  • The Workload Identity Pool is automatically created for you by Google. Only one is supported and it’s fixed.
  • No support for self-hosted Kubernetes clusters.
  • When a new Pod starts, it takes a few seconds before the components that handle authentication and authorization are ready. Therefore you need to retry calls made while the Pod is not yet fully started.

Additional resources

In this article, I gave a high-level overview of the different solutions which the three major cloud providers offer. There are many more options, settings, and configurations for each solution.


Therefore, follow the how-tos and tutorials for each solution and try things out on your own cluster.

Conclusion

Pods can now access cloud resources following the principle of least privilege. There is no need anymore to give an entire Worker Node access to those resources. This greatly improves the security of your Kubernetes cluster. It also enables ways to securely access resources in another cloud account, and it improves the integration of different services. Yet another reason to utilize the full power of Kubernetes for your microservices.
