
Deploying Consul on HCP – Part 2

Our series on Service Mesh has reached its third instalment with this post. In the first post, we went over the features and use cases of a Service Mesh, and in the second article, we demonstrated how to set up a Consul cluster on HCP by utilising the user interface. This time, we are going to talk about how to automate the deployment of Consul.

OK, so let's get started. Log in to your HCP portal and select the Consul box. Once that page has loaded, click the create cluster box and choose your poison, AWS or Azure; this time we will deploy our cluster on Azure.

Creating our Consul Cluster

Make sure that the radio button for Azure is selected and then click next to continue.

Let's choose Azure today

Currently, “HCP Automated deployment” on Azure is only recommended for development environments because the generated code will only deploy a single host rather than a cluster. There are two possible architectures that can be deployed; the first runs on AKS (Kubernetes on Azure).

Architecture on AKS

The other runs on Azure VMs.

Architecture of Consul with a VM-based deployment

For the purpose of this article, let us mix it up a bit and for a change deploy Consul on AKS (Kubernetes on Azure). Further, we are going to create a new VNet too. Verify that your portal looks like the image below, and remember to choose the HCP region that is relevant to you. We chose “UK South” for both options; depending upon your location, there may be a difference between your Azure tenancy's default region and the HCP region.

Where do you want it and how is it deployed?

Next, we need our credentials to authenticate Terraform with the HCP environment.

Everybody needs some security: HCP ID and secrets

If you have not generated a Service Principal and key for HCP, you will need to click the “generate service principal and key” link. Once generated, the necessary information will be displayed in the code box; click “copy code” to capture the information, and remember to save it securely as it will not be shown again. The next section shows the generated Terraform code.

We have our code – let's get started

Click on the “Copy Code” button and copy it into your favourite editor. It must be stated that this code is not production ready and will need some work. The lack of variables makes the code less useful than it should be, but this is a relatively easy thing to change. That said, it does have modules, although depending on your corporate policies regarding online and open-source modules, you may need to pull these into your own Git repository and add them manually to your deployment directory structure. A full walkthrough of that is beyond the scope of this article, but the sketch below shows where the change would land.
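If you do decide to vendor the modules, the only change is to the source argument of each module call; roughly as follows (the local path is hypothetical):

module "example" {
   # Registry source as generated by the HCP portal, e.g.:
   #   source  = "hashicorp/hcp-consul/azurerm//modules/hcp-aks-client"
   #   version = "~> 0.3.1"
   # Vendored copy committed to your own repository instead
   # (local paths do not take a version argument):
   source = "./modules/hcp-aks-client"
}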

Adding Variables to DRY out this code

To make this code more flexible and reusable for a production environment, we need to remove the values that are unique to this particular deployment; this technique is called DRYing out your code (Don't Repeat Yourself). To do this, create a file called variables.tf in the root folder of your code. For deeper insight into variables and their uses, have a read of the Customize Terraform Configuration with Variables tutorial on the HashiCorp site.
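As a minimal sketch to get started, and reusing the values the portal generated as defaults, variables.tf could begin like this:

variable "hvn_region" {
   description = "HCP region for the HashiCorp Virtual Network"
   type        = string
   default     = "uksouth"
}

variable "hvn_id" {
   description = "ID of the HashiCorp Virtual Network"
   type        = string
   default     = "amazic-test-consul-hvn"
}

variable "cluster_id" {
   description = "ID of the HCP Consul cluster"
   type        = string
   default     = "amazic-test-consul"
}

variable "region" {
   description = "Azure region for the VNet and AKS cluster"
   type        = string
   default     = "uksouth"
}

These names line up with the references you will see in the locals block later on (var.hvn_region, var.hvn_id, var.cluster_id and var.region).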

Fully DRYing out the code is beyond the scope of this particular article; therefore, as this is a test and development environment which will be destroyed after we have finished, we will deploy the environment using the code exactly as created by the HCP portal.

What does this Code do?

We will start by looking at what exactly this code is doing. There is a working assumption that you have some knowledge of Terraform; if not, please read our introduction to Terraform series.

The first section is the locals block. This is a special form of variable definition; personally, there is nothing in this block that could not have been declared as standard input variables. Local values as declared here are better thought of as constants.

locals {
   hvn_region     = var.hvn_region #"uksouth"
   hvn_id         = var.hvn_id #"amazic-test-consul-hvn"
   cluster_id     = var.cluster_id #"amazic-test-consul"
   network_region = var.region #"uksouth"
   vnet_cidrs     = ["10.0.0.0/16"]
   vnet_subnets = {
      "subnet1" = "10.0.1.0/24",
      "subnet2" = "10.0.2.0/24",
   }
}
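To make the distinction concrete, an input variable can be set from outside the configuration, whereas a local is fixed inside it and is typically used for constants or derived values. A small illustration (the cluster_name local is hypothetical, purely to show a derived value):

variable "cluster_id" {
   type    = string
   default = "amazic-test-consul"   # can be overridden with -var, a .tfvars file or TF_VAR_cluster_id
}

locals {
   cluster_name = "${var.cluster_id}-aks"   # derived value, only changeable by editing the code
}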

What is interesting is the number of providers this solution will utilise, some of which you may not have been exposed to previously.

terraform {
   required_providers {
      azurerm = {
         source                = "hashicorp/azurerm"
         version               = "~> 2.65"
         configuration_aliases = [azurerm.azure]
      }
      azuread = {
         source  = "hashicorp/azuread"
         version = "~> 2.14"
      }
      hcp = {
         source  = "hashicorp/hcp"
         version = ">= 0.23.1"
      }
      kubernetes = {
         source  = "hashicorp/kubernetes"
         version = ">= 2.4.1"
      }
      helm = {
         source  = "hashicorp/helm"
         version = ">= 2.3.0"
      }
      kubectl = {
         source  = "gavinbunney/kubectl"
         version = ">= 1.11.3"
      }
   }
   required_version = ">= 1.0.11"
}
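One line worth a second look is configuration_aliases = [azurerm.azure] on the azurerm provider. It tells Terraform that this configuration expects an aliased azurerm provider configuration named azure, which is then passed explicitly to the resources or modules that need it. As a rough sketch of what that aliased configuration looks like (not necessarily the exact block the portal generates):

provider "azurerm" {
   alias    = "azure"
   features {}
}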

The most interesting is the kubectl provider, which is written by Gavin Bunney and is used to deploy the demo application, HashiCups. More on that later, when we discuss the modules.

When configuring the providers, the most interesting thing is that the three providers related to deploying and configuring Kubernetes and the applications running on it (kubectl, kubernetes, and helm) all accept the same inputs, as shown below.

provider "kubectl" {
   client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
   client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
   cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
   host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
   load_config_file       = false
   password               = azurerm_kubernetes_cluster.k8.kube_config.0.password
   username               = azurerm_kubernetes_cluster.k8.kube_config.0.username
}

The parameters being passed here are the static credentials supplied by AKS. They read the certificate data from the AKS cluster resource and pass it to the Kubernetes-based providers.
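For comparison, the kubernetes provider is fed the same AKS outputs directly, and the helm provider takes them nested inside a kubernetes block; roughly like this (a sketch of the pattern rather than a verbatim copy of the generated file):

provider "kubernetes" {
   host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
   client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
   client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
   cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
}

provider "helm" {
   kubernetes {
      host                   = azurerm_kubernetes_cluster.k8.kube_config.0.host
      client_certificate     = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_certificate)
      client_key             = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.client_key)
      cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8.kube_config.0.cluster_ca_certificate)
   }
}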

provider "hcp" {
   client_id     = "<HCP Client ID here>"
   client_secret = "<HCP Client Secret here>"
}

The block above is where you will copy the HCP client ID and Secret you created earlier.
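Rather than hard-coding the secret in a .tf file, you could declare the two values as sensitive input variables, or leave the arguments out entirely and let the provider read them from the HCP_CLIENT_ID and HCP_CLIENT_SECRET environment variables. A minimal sketch:

variable "hcp_client_id" {
   type      = string
   sensitive = true
}

variable "hcp_client_secret" {
   type      = string
   sensitive = true
}

provider "hcp" {
   client_id     = var.hcp_client_id
   client_secret = var.hcp_client_secret
}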

What resources are created in Azure?

The majority of the resource creation is self-explanatory: resource groups, VNets, NSGs, the AKS service and so on. The interesting blocks relate to the creation of the Consul service.

resource "hcp_hvn" "hvn" {
   cidr_block     = "172.25.32.0/20"
   cloud_provider = "azure"
   hvn_id         = local.hvn_id
   region         = local.hvn_region
}

The above block creates the HashiCorp Virtual Network (HVN), an internal network within the HCP platform. It is recommended that you do not change this; note that the HVN CIDR block must not overlap with the VNet CIDR defined in the locals block (10.0.0.0/16), which is why a different range is used here.

module "hcp_peering" {
   source  = "hashicorp/hcp-consul/azurerm"
   version = "~> 0.3.1"
   hvn    = hcp_hvn.hvn
   prefix = local.cluster_id
   security_group_names = [azurerm_network_security_group.nsg.name]
   subscription_id      = data.azurerm_subscription.current.subscription_id
   tenant_id            = data.azurerm_subscription.current.tenant_id
   subnet_ids = module.network.vnet_subnets
   vnet_id    = module.network.vnet_id
   vnet_rg    = azurerm_resource_group.rg.name
}

This is our first Consul module, and there is nothing complex about it. We will look deeper into the modules after working through the “main.tf” file.

resource "hcp_consul_cluster" "main" {
   cluster_id         = local.cluster_id
   hvn_id             = hcp_hvn.hvn.hvn_id
   public_endpoint    = true
   tier               = "development"
   min_consul_version = "v1.14.0"
}

The two main options in this block of code are “public_endpoint”, which takes a Boolean value; “true” means that the endpoint has a public interface, and the default is false, so if the option is omitted, false is assumed. The second option is tier; there are three potential values: “development”, “standard” and “plus”.

resource "hcp_consul_cluster_root_token" "token" {
   cluster_id = hcp_consul_cluster.main.id
}

The cluster root token is used to bootstrap the cluster's ACL system.
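If you need that token after the run, for example to log in to the Consul UI, it can be exposed as a sensitive output; a minimal sketch:

output "consul_root_token" {
   value     = hcp_consul_cluster_root_token.token.secret_id
   sensitive = true
}

# Retrieve it later with: terraform output -raw consul_root_token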

What about the Modules?

There are four main modules included with the deployment:

  • network
  • aks_consul_client
  • hcp_peering
  • demo_app

network

We will start with the networking module, which can be found here. That said, it is a defunct branch and the current branch appears to be here. This part of the code should be reviewed to check whether any security updates are needed, although that is out of the scope of this article. The module as currently written will deploy your VNet and subnets and will assign NSGs to the subnets it creates; however, the rules needed to protect the network are out of scope for this module.

module "network" {
   source              = "Azure/vnet/azurerm"
   version             = "~> 2.6.0"
   address_space       = local.vnet_cidrs
   resource_group_name = azurerm_resource_group.rg.name
   subnet_names        = keys(local.vnet_subnets)
   subnet_prefixes     = values(local.vnet_subnets)
   vnet_name           = "${local.cluster_id}-vnet"
   
   # Every subnet will share a single route table
   route_tables_ids = { for i, subnet in keys(local.vnet_subnets) : subnet => azurerm_route_table.rt.id }

   # Every subnet will share a single network security group
   nsg_ids = { for i, subnet in keys(local.vnet_subnets) : subnet => azurerm_network_security_group.nsg.id }

   depends_on = [azurerm_resource_group.rg]
}
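A quick note on the two for expressions: keys(local.vnet_subnets) returns the subnet names, and each expression builds a map from subnet name to a single shared ID. With the locals defined earlier they evaluate to something like this (shown as a comment for illustration):

# route_tables_ids and nsg_ids resolve to maps of the form:
# {
#    "subnet1" = azurerm_route_table.rt.id   # or azurerm_network_security_group.nsg.id
#    "subnet2" = azurerm_route_table.rt.id
# }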

Another piece worth highlighting is the depends_on option. This effectively prevents Terraform from creating a particular resource or module before a dependency has been created, in this case the resource group. It is a very useful piece of code; place it in your armoury.

aks_consul_client

The next module is the money module: it deploys the Consul clients onto AKS and joins them to the HCP Consul cluster.

module "aks_consul_client" {
   source  = "hashicorp/hcp-consul/azurerm//modules/hcp-aks-client"
   version = "~> 0.3.1"

   cluster_id = hcp_consul_cluster.main.cluster_id

   # strip out url scheme from the public url
   consul_hosts       = tolist([substr(hcp_consul_cluster.main.consul_public_endpoint_url, 8, -1)])
   consul_version     = hcp_consul_cluster.main.consul_version
   k8s_api_endpoint   = azurerm_kubernetes_cluster.k8.kube_config.0.host
   boostrap_acl_token = hcp_consul_cluster_root_token.token.secret_id
   datacenter         = hcp_consul_cluster.main.datacenter

   # The AKS node group will fail to create if the clients are
   # created at the same time. This forces the client to wait until
   # the node group is successfully created.
   depends_on = [azurerm_kubernetes_cluster.k8]
}

An interesting line in this block is the “consul_hosts” line.

consul_hosts = tolist([substr(hcp_consul_cluster.main.consul_public_endpoint_url, 8, -1)])

The “tolist” function converts its input to a list; in this case, the contents of the ([ ]). The more interesting part is the “substr” function: substr(string, offset, length) extracts a substring from the input string, starting at the character position given by the offset and continuing for the number of characters given by the length. There is a get-out-of-jail-free card: if you only want to strip characters from the beginning of a string and keep everything else, you pass -1 as the length. Our code uses hcp_consul_cluster.main.consul_public_endpoint_url as its input and strips the first eight characters (“https://”) from the beginning of the string; thanks to the “-1” in the length position it returns all the remaining characters of the URL.
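You can check this behaviour yourself in terraform console with a made-up URL (the hostname below is purely illustrative):

# In "terraform console":
#   substr("https://my-cluster.consul.example.com", 8, -1)
#     returns "my-cluster.consul.example.com"   (the "https://" scheme is stripped)
#   tolist([substr("https://my-cluster.consul.example.com", 8, -1)])
#     returns a one-element list containing that hostname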

hcp_peering

The next module is the hcp_peering module, which creates the peering between the HCP Consul HVN and your tenant VNet; it is the same module call we already saw when walking through main.tf.

module "hcp_peering" {
   source  = "hashicorp/hcp-consul/azurerm"
   version = "~> 0.3.1"
   hvn    = hcp_hvn.hvn
   prefix = local.cluster_id
   security_group_names = [azurerm_network_security_group.nsg.name]
   subscription_id      = data.azurerm_subscription.current.subscription_id
   tenant_id            = data.azurerm_subscription.current.tenant_id
   subnet_ids = module.network.vnet_subnets
   vnet_id    = module.network.vnet_id
   vnet_rg    = azurerm_resource_group.rg.name
}

demo_app

The final module deploys the HashiCorp demo application, HashiCups, into Kubernetes.

module "demo_app" {
   source  = "hashicorp/hcp-consul/azurerm//modules/k8s-demo-app"
   version = "~> 0.3.1"
   depends_on = [module.aks_consul_client]
}

Summary

Deploying Consul via Terraform is not a difficult task; in fact, the core code is automatically written for you by HashiCorp. This is all well and good and will get you started on your journey, but it is not fit for purpose in a production environment. In our next article we will look at hardening the code, making it production ready, and deploying it from Terraform Cloud.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, mail to sales@amazic.com.
