
Building out a Kubernetes cluster with Terraform

If you remember back to our last article in the series, the boss had just had an epiphany and decided that our perfectly functional LAMP stack was not resilient enough and that Cloud Native with Kubernetes for the compute layer was the way to go. After a bit of reading, we understood that it was not asking for the moon on a plate for dinner, and we started planning out the end environment. So what exactly will our new environment look like? At a high level, it will still be a three-tier application: we will have our web server, our application layer, and our database. All that is changing is the delivery platform, and we’ll be building out a Kubernetes cluster with Terraform.

In our previous post we gave an overview of the components that made up a Kubernetes cluster and explained the working parts of the management stack. However, we only briefly touched on the worker or minion nodes. These are the rank and file of the cluster, the section that serves out the containerized resources to be consumed by the installed applications.

This is part twelve in the ongoing series on implementing HashiCorp Terraform. Earlier posts in this series are:
How simple Terraform plans make hybrid and multi-cloud a reality: an introduction
Deploying a LAMP Stack with Terraform – AMIs, network & security
Deploying a LAMP Stack with Terraform – Databases & Webservers
How to create resilient Terraform code
Deploying and configuring HashiCorp Vault to service Terraform
Deploying a LAMP Stack with Terraform Modules
How to configure Azure for Terraform
Preparing to migrate Terraform to Azure
Building out an enhanced LAMP stack on Azure part 1
Building out an enhanced LAMP stack on Azure part 2
Building out an enhanced LAMP stack on Azure part 3
Create cache, databases and DDoS protection in Azure with Terraform
It is time to go Cloud Native – moving the LAMP stack to Kubernetes


Containers, and by association Kubernetes as their orchestration and lifecycle manager, appear to be very complicated; a veritable multi-headed beast to control. For example, we will need to understand single-node intra-pod communication, inter-node pod communication, pod-to-service communication, and also pod-to-external-resource communication.

So whilst we work through the code for building out a Kubernetes cluster with Terraform, in the form of an Amazon AWS EKS cluster, we will also discuss the principles surrounding the code.

You will recognize a significant amount of this code when we are deploying our EKS cluster in AWS. This is to be expected as we have traveled a long way on this journey.

Deploying an EKS cluster, native Kubernetes on AWS

Our main.tf file has not significantly changed, but for ease and separation, we will create a separate workspace within Terraform called EKS. Currently, there are two ways to create new workspaces: the deprecated env command and the newer workspace command:

terraform workspace new eks

The command works in the same manner as the original env option. Use the list option to see your workspaces:

terraform workspace list
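
The output should look something like this, assuming only the default workspace and our new eks workspace exist:

  default
* eks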

As we can see from the * sign, we are currently in the eks environment/workspace.

Let’s start to deploy our first EKS cluster

For all its vaunted ease of use, Kubernetes is one of the most complicated technology stacks that I have utilized in a long time. From the perspective of automation, one of the drawbacks of using EKS is that there must be a few things in place to correctly use an EKS cluster and authenticate with it. Many of the more tedious aspects of configuring EKS can be handled with Terraform, but a few items must be addressed before you can begin.

First, make sure that kubectl is installed. To use EKS, your AWS CLI must be at least version 1.16.73, and finally, you need aws-iam-authenticator, which allows you to authenticate to the EKS cluster using IAM. This information is not easily found.

Installing kubectl

As already mentioned, kubectl is a command-line tool that allows commands to be run against a Kubernetes cluster. To install it on Linux or WSL2, download the binary:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl

Next, make the binary executable:

chmod +x ./kubectl

Finally, move the binary to a location in your path:

sudo mv ./kubectl /usr/local/bin/kubectl

You can verify everything is working correctly by issuing:

kubectl version --client

Installing AWS CLI

Make sure that the latest versions of glibc, groff, and less are installed on your instance if you are using Linux, or WSL on Windows, and then use the following commands to install the AWS CLI:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

If you have been following the series, you should already have the AWS CLI installed. That said, you may need to upgrade it if your version is older than 1.16.73.

Installing aws-iam-authenticator

Amazon EKS uses IAM to provide authentication to the Kubernetes cluster through the AWS IAM Authenticator for Kubernetes. You can configure the stock kubectl client to work with Amazon EKS by installing the AWS IAM Authenticator for Kubernetes and modifying your kubectl configuration file to use it for authentication. To install AWS IAM Authenticator for Kubernetes run the following commands:

curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Verify that it is working:

aws-iam-authenticator help

From here on in it’s Terraform

The next piece that is required is a VPC (Amazon Virtual Private Cloud) to run the Kubernetes cluster in. AWS recommends creating a new VPC, and this will be done using Terraform. There is nothing from a technical perspective to stop the use of an existing VPC, as long as it is appropriately configured. In either case, ensure that the VPC meets the following requirements:

Create public and private subnets across three availability zones. This will give the cluster high availability and protect the worker nodes from the public internet. A little explanation about the subnets:

  • Public subnets are for resources that will be addressable on the public internet such as an Application Load Balancer. Ensure each public subnet has a route to the internet using an Internet Gateway.
  • Private subnets are for resources that should not be accessible from the internet, such as your worker nodes. Ensure each private subnet has a route to the internet using a NAT Gateway.

Tag your public and private subnets appropriately so that Kubernetes can create load balancers:

  • Public subnets should be tagged with Key kubernetes.io/role/elb and Value 1
  • Private subnets should be tagged with Key kubernetes.io/role/internal-elb and Value 1

Make sure that the subnets’ IP address ranges are sufficiently large to run the workload. Kubernetes clusters created using EKS use the IP address space defined in the subnets for the pods; too small a subnet range places an artificial limit on the number of pods that can run in the cluster.
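
The VPC module we use below applies these tags for us, but for reference, if you were building the subnets by hand, tagging a public subnet would look something like this (a minimal sketch; the aws_vpc resource name, availability zone and cluster name are placeholders for illustration only):

resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.eks.id   # hypothetical hand-built VPC
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-2a"

  # Tags that let Kubernetes discover this subnet for public load balancers
  tags = {
    "kubernetes.io/cluster/amazic-eks" = "shared"   # replace with your cluster name
    "kubernetes.io/role/elb"           = "1"
  }
}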

Building out a Kubernetes cluster with Terraform

The first piece of code we are going to look at is the VPC module. The first line shows a data source; data sources allow data to be fetched or computed for use elsewhere in a Terraform configuration, which lets a configuration make use of information defined outside of Terraform, or defined by another, separate Terraform configuration. Think of it as a dynamic variable; not entirely correct, but a reasonable analogy. These identifiers must be unique within a module. What this line does is interrogate the availability zones in the selected region and return them for later consumption.

data "aws_availability_zones" "available" {}

Next, we programmatically build up the cluster name, because a Kubernetes cluster must be uniquely named. Here we are using a locals value; think of this as a regional variable that can only be used within the module that created it. We define the cluster name as an expression built from a fixed prefix “amazic-eks-” and a trailing derived value “${random_string.suffix.result}”.

locals {
  cluster_name = "amazic-eks-${random_string.suffix.result}"
}

Here is our old friend the random value generator. It creates a random value of 8 characters in length without any special characters, and the result is appended to the cluster_name value.

resource "random_string" "suffix" {
length  = 8
special = false
}
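
One small assumption worth calling out: random_string comes from the hashicorp/random provider. On Terraform 0.13 and later you may need to declare that provider explicitly; a minimal sketch:

terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}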

Now that we have created our internal variables, we can start on the main course of this module. After the declaration, we can see the source and the version options. The source option is self-explanatory, but let us have a little discussion about version control. Here we are stating the minimum version of the module that this configuration will accept, denoted by the use of the “>=” operator. This means any attempt to run this code with a module version under 12.2.0 will error out and not complete any plan, apply or state manipulation actions.

module "vpc" {
source  = "path_to_module/eks/aws"
version >= "12.2.0"

The next section of code is recognizable, in that it is exactly like a standard VPC resource block creation, where we would have declared resource "aws_vpc" "my_vpc" {}. We define the name of the VPC and its CIDR, and pass the private and public subnets their IP ranges. If you look at the azs option, you can see that we are using the value obtained by our earlier declared data source.

  name            = "amazic-vpc"
  cidr            = "10.0.0.0/16"
  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

The next two lines in this section define how the NAT gateways assigned to your private subnets will work; we are not going to delve into this in detail, so if you wish to gain a greater understanding, read the AWS documentation on NAT Gateways. Finally, “enable_dns_hostnames” tells Terraform to enable automatic DNS hostnames in the VPC.

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

The final section denotes the tags; these are used to manage your resources. For a detailed explanation of tags and EKS resources, read the associated AWS documentation.

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
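
As a side note, the version argument above constrains the module version, not the Terraform CLI itself; if you also want to pin the Terraform binary, that is done separately with required_version (a minimal sketch, and the exact minimum version shown is an assumption):

terraform {
  required_version = ">= 0.12.26"
}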

Now that we have created our VPC, we can move on to deploying the worker nodes with our “eks” module. These lines state where we get our code from and what the cluster name is, which is derived from the local value declared earlier in the VPC code. The subnets value is taken from the private subnets output of the VPC module.

module "eks" {
source       = "path_to_module/eks/aws"
cluster_name = local.cluster_name
subnets      = module.vpc.private_subnets

The tags are just a method of organizing your resources.

  tags = {
    Environment = "Development"
    GithubRepo  = "YourRepoNameHere"
    GithubOrg   = "YourOrganizationGitNameHere"
  }

This is a recognizable line; here we are telling Terraform to deploy these worker nodes in the previously created VPC.

  vpc_id = module.vpc.vpc_id

Now we are onto the main course of this section of the module: the creation of the worker nodes. What we have here are two worker groups: one using the smallest recommended EC2 instance (t2.small) as the node size, and a second using the slightly larger t2.medium instance. We will be discussing the security group details later, but suffice to say that the security information regarding ingress and egress is contained within them.

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}
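
Depending on the version of the EKS module you are using, the worker group maps can also carry auto-scaling bounds alongside the desired capacity; a minimal sketch (the asg_min_size and asg_max_size keys are an assumption here, so check the module documentation for your version):

  worker_groups = [
    {
      name                 = "worker-group-1"
      instance_type        = "t2.small"
      asg_desired_capacity = 2
      asg_min_size         = 1   # assumed key; verify against your module version
      asg_max_size         = 4   # assumed key; verify against your module version
    },
  ]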

Finally, we have a couple of data declarations that will be picked up later.

data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}

The next section we are looking at is the security details for the environment. Here we are creating three security groups. The first resource block starts with the ubiquitous vpc_id, and rather than name we are using name_prefix. We use this rather than the standard name because Terraform will append a random string to the stated prefix to guarantee uniqueness across the environment.

resource "aws_security_group" "worker_group_mgmt_one" {
name_prefix = "worker_group_mgmt_one"
vpc_id      = module.vpc.vpc_id

Next, we manage the ingress to the environment. This block can be specified multiple times; here we are saying that we allow TCP traffic from port 22 to port 22 (from_port and to_port define a port range, so a single port is expressed by setting both to the same value). The source cidr_blocks is set to 10.0.0.0/8, which allows SSH from anywhere in that private range and comfortably covers the VPC and the soon-to-be-deployed pods.

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = [
      "10.0.0.0/8",
    ]
  }
}
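
One thing to be aware of with aws_security_group: when Terraform creates a security group, it removes the default “allow all” egress rule that AWS would otherwise add, so if instances relying solely on this group need outbound access, you would declare an egress block inside the resource; a minimal sketch:

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"            # all protocols
    cidr_blocks = ["0.0.0.0/0"]   # allow all outbound traffic
  }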

We are not going to investigate the second worker group rule, as it is very similar to the first. The final security group outlines access to the entire environment; here we have three cidr_blocks defined, covering the RFC 1918 private address ranges.

resource "aws_security_group" "all_worker_mgmt" {
name_prefix = "all_worker_management"
vpc_id      = module.vpc.vpc_id
ingress {
from_port = 22
to_port   = 22
protocol  = "tcp"

cidr_blocks = [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
]
}
}

We have defined several outputs that will be displayed at the end of our successful run. These are:

The EKS endpoint for the control plane.

output "cluster_endpoint" {
description = "Endpoint for EKS control plane."
value       = module.eks.cluster_endpoint
}

The security group IDs that are attached to the cluster control plane.

output "cluster_security_group_id" {
description = "Security group ids attached to the cluster control plane."
value       = module.eks.cluster_security_group_id
}

The output of the kubectl config file that was automatically generated during the build process

output "kubectl_config" {
description = "kubectl config as generated by the module."
value       = module.eks.kubeconfig
}

The Kubernetes configuration needed to authenticate to the cluster.

output "config_map_aws_auth" {
description = "A kubernetes configuration to authenticate to this EKS cluster."
value       = module.eks.config_map_aws_auth
}

The region that the cluster was deployed into.

output "region" {
description = "AWS region"
value       = var.region
}
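
This output references a var.region variable; if it is not already declared in your variables file from earlier in the series, the declaration would look something like this (the default region shown here is an assumption):

variable "region" {
  description = "AWS region to deploy the EKS cluster into"
  default     = "us-east-2"
}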

The cluster name; note that you can see the auto-generated random 8-character string appended to the name.

output "cluster_name" {
description = "Kubernetes Cluster Name"
value       = local.cluster_name
}

The final thing that we need to declare is the Kubernetes provider in our main.tf file (remember, this is the root file of the deployment code), and we need to give it the necessary configuration details for Kubernetes authentication. What we are saying here is: do not load a configuration file; connect to the host that is exposed as a data source output of the EKS build; authenticate with the token, which is again declared in the eks-cluster.tf file; and finally use the base64decode of the raw certificate created during the deployment.

provider "Kubernetes" {
load_config_file       = "false"
host                   = data.aws_eks_cluster.cluster.endpoint
token                  = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
}
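
With the provider wired up to those EKS data sources, Terraform can also manage objects inside the cluster in the same configuration. As a quick illustrative sketch only (this namespace is hypothetical and not part of the deployment):

resource "kubernetes_namespace" "lamp" {
  metadata {
    name = "lamp-stack"
  }
}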

One final thing to note is that EKS has costs associated with it, so remember to issue a terraform destroy to remove the deployment when you have finished.

Summary

This has been a long post, but we have covered a lot of detail and traveled far. As can be seen from the AWS Console screen-grab below, we were successful in building out a Kubernetes cluster with Terraform.

AWS Console screen-grab of the completed Kubernetes cluster

The next post in this series looks at deploying the containers which run the compute layer of the LAMP stack on the worker nodes and integrating Kubernetes into our Vault server for authentication purposes.