
Building auto-scaling groups in Azure with Terraform

In our last article, we started to deploy our LAMP infrastructure to Azure. However, we soon hit some difficulties with the azurerm provider not supporting some key needs; there were plenty of workarounds, but eventually we decided to take the path of least resistance and create a Packer image. In this post, we will start building auto-scaling groups in Azure with Terraform. For a refresher, please read the rest of the posts in the series:

Terraform

 

Preparations – Importing our Image

Before we start deploying our auto-scaling groups, we need to import our previously deployed resource group and VM image file. If you forget to do this and attempt to run your Terraform code, you will receive the following error when you issue your "terraform apply".

This is the first time we have had to do this; so far, we have only needed the following commands:

  • terraform fmt
  • terraform 0.12upgrade
  • terraform plan
  • terraform apply
  • terraform init
  • terraform destroy

Today we are going to introduce another new option: terraform import. We use this command to import pre-existing resources into the Terraform state, so that Terraform knows what already exists and therefore what needs to be created, modified, or destroyed on a later terraform apply.
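The general form of the command is shown below; the address is the resource block as named in your configuration, and the ID is the provider-specific identifier of the existing object (for azurerm resources, the full Azure resource ID, as we will see shortly):

terraform import <resource address> <resource ID>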

Apply Error

As already alluded to, we created our Azure image with Packer, and we already have our resource group "AmazicDevResourceGroup" set up in Azure. So this and our image file "Amazic-Image" need to be imported into our Terraform state file. To do this, we first need to create the import.tf file; this is a minimal file that just contains the resource block and brackets. (Note: you will have to supply your credentials to this file too.)

resource "azurerm_resource_group" "main" {}

Once the file has been created, we will need to obtain the ID of the Azure subscription. There are several ways to do this; we used the Azure CLI and issued the following command:

az account list --output table

Terraform account list

This shows that we currently have two subscriptions, so we will need to set our focus on the correct one. To do this, issue the following command, remembering to substitute your subscription ID (note: this command does not return a visual response):

az account set --subscription "<Your Subscription ID here>"

The final piece of the jigsaw is obtaining the ID of the resource group. To obtain this, issue the following command:

az group show --name AmazicDevResourceGroup --query id --output tsv

Terraform show Resource ID

We finally have all the necessary information to import the current infrastructure into our state file. To do so, issue the following command, remembering to replace the zeros with your own subscription ID.

terraform import azurerm_resource_group.main /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/AmazicDevResourceGroup

The final piece of the puzzle, now that the import has been successful, is to edit our "azurerm_resource_group" "main" block and add the name and location, so that it will now read:

resource "azurerm_resource_group" "main" {
  name     = "AmazicDevResourceGroup"
  location = "northeurope"
}

You can verify that the import has been successful by issuing another new Terraform command option:

terraform state list
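If the import was successful, the output will list the address of the imported resource, in our case:

azurerm_resource_group.main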

Terraform State List

Building out the Network


Remember that Terraform is declarative, so the order in which you write your resource blocks is irrelevant; Terraform works out the correct creation order in your chosen provider from the dependencies between them.
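As a minimal sketch (the resources and names below are purely illustrative and not part of our deployment), the subnet here is declared before the virtual network it belongs to, yet Terraform still creates the virtual network first, because the references between the blocks define the dependency graph:

# Declared "out of order" on purpose: the references below tell Terraform
# to create the virtual network before the subnet.
resource "azurerm_subnet" "demo" {
  name                 = "demoSubnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.demo.name
  address_prefixes     = ["10.1.0.0/24"]
}

resource "azurerm_virtual_network" "demo" {
  name                = "demoVNET"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  address_space       = ["10.1.0.0/16"]
}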

So let's start looking at the code relating to the virtual network. If you remember back to when we configured the azurerm provider, we noted that there was no network configuration in that code block; that is because you undertake it here, with this code block.

# Create virtual network
resource "azurerm_virtual_network" "image" {
  name                = "${var.vmname}VNET"
  address_space       = [var.addressprefix]
  location            = var.regionname
  resource_group_name = azurerm_resource_group.main.name
}

The configuration of the machine subnet and associated security groups is relatively straightforward. As can be seen, we have a single subnet ("azurerm_subnet") to which public IP addresses are dynamically assigned ("azurerm_public_ip"). Finally, there are three inbound port rules set up to allow HTTP, HTTPS, and SSH traffic.
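As a sketch of what such a dynamically assigned public IP could look like (the resource and name below are illustrative rather than taken from our deployment):

# Illustrative only: a dynamically allocated public IP of the kind described above
resource "azurerm_public_ip" "vm" {
  name                = "amazicVMPublicIP"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  allocation_method   = "Dynamic"
}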

So what will this infrastructure look like from a logical perspective?

The image below gives a logical overview.

Logical Infrastructure

Now that we have an idea of what the environment will look like and the overview of resource block placement in terms of logical architecture, let’s start to investigate what each code block does. For ease of reading, we have replaced variables with real-world examples.

The first code block of importance is the resource group. As mentioned in our last post in this series, it is a logical container for related Azure resources, loosely analogous to the role a VPC plays in AWS. Remember that this section was filled out after we imported the resource group into our terraform.tfstate file.

# Create a resource group if it doesn't exist
resource "azurerm_resource_group" "main" {
  name     = "AmazicDevResourceGroup"
  location = "northeurope" # (var.region)
}

Next, we need to build out the actual networks and associated rules regarding access. As shown in the above image the first code block of importance is the azurerm_virtual_network; this is the equivalent of the CIDR setting in the VPC block used with AWS.

resource "azurerm_virtual_network" "main" {
name                = "amazicVNET"
location            = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
address_space       = ["10.0.0.0/16"]
}

This hooks in the name and location of the parent resource group. Next, we need to create the four subnets required to carry the network traffic to our services, under the virtual network resource. For this, we use the azurerm_subnet block.

resource "azurerm_subnet" "compute" {
name                 = "amazicSubnet"
resource_group_name  = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes     = ["10.0.0.0/24"]
}

resource "azurerm_subnet" "redis" {
name                 = "amazicReddisSubnet"
resource_group_name  = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes     = ["10.0.1.0/24"]
}

The Redis and compute subnets only have four options: the name, mappings to the resource_group_name and virtual_network_name, and finally address_prefixes. This option can take a list, but in our case each subnet has a single address range.

However, if you look at the MySQL and storage subnets, you will notice a fifth option in the code block: service_endpoints. A service endpoint provides secure and direct connectivity from an Azure Virtual Network (VNet) to an Azure service over an optimized route across the Azure backbone network. Endpoints allow you to lock down your Azure resources to only those virtual networks that require access, and they enable private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.

resource "azurerm_subnet" "mysql" {
name                 = "amazicMySQLSubnet"
resource_group_name  = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes     = ["10.0.2.0/24"]
service_endpoints    = ["Microsoft.Sql"]
}

resource "azurerm_subnet" "storage" {
name                 = "AmazicVirtualMachineSTRGSubnet"
resource_group_name  = azurerm_resource_group.main.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes     = ["10.0.3.0/24"]
service_endpoints    = ["Microsoft.Storage"]
}

We will also look at our first security code block. Below we have the azurerm_network_security_group; this code block manages what is allowed into our resource group. By default, when you create a resource in Azure, access to it is denied, i.e. you have to set up the necessary access via an allow list. From a security perspective, this is a much safer profile than using deny lists, which are fraught with unneeded open ports. Take this example: I have 50,000 M&M sweets; 49,397 are red, orange, yellow, green, and brown, and 603 are my favorite blue color. Isn't it simpler to just pick out the 603 blue candies rather than removing every other color? That is the difference between allow and deny. When working with allow lists, you instantly know when you do not have enough ports open to use your application; when managing deny lists, you are far less aware of unneeded ports being left open.

Looking at this code block, we can see that we have four main options: the obligatory name, and our two binding options, location and resource_group_name. As usual, these options bind this rule to the required resource group.

The interesting option is the security_rule block; this is the functional section and is used to configure our ingress and egress rules. As you can see, all our rules are ingress rules allowing HTTP, HTTPS, and SSH traffic to flow into the environment. If you have configured firewalls before, these options will look familiar. The main concept to understand with this block is traffic flow: what is source and what is destination. The destination is your target device, in this case our Linux servers running the webserver, and the source is any client device that has access. For ease, and as we are only building a demo environment, we have left source_port_range and source_address_prefix configured with the wildcard "*"; in a production environment these would be configured with a defined port range and IP prefix to limit the potential ingress source clients, unless it is a fully public service.

Our destination_port_range has been set to an individual port, although you can set a range here if needed. Finally, under destination_address_prefix you would set the subnet of, in our case, the auto-scaling group's virtual machines.

# Create Network Security Group and ingress rules
resource "azurerm_network_security_group" "main" {
  name                = "AmazicDevMLampNSG"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  security_rule {
    name                       = "HTTP"
    priority                   = 900
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "HTTPS"
    priority                   = 901
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "SSH"
    priority                   = 902
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

For a deeper look at the available options for this code block, head to the documentation page on the subject.

Configuring the Load Balancer


Now that we have looked at the core networking code blocks, we will move on to the load balancer. This is a core component of our enhanced LAMP Stack deployment. Think of it as a traffic officer, passing traffic to the least utilized path.

In our deployment, we will be using the load balancer to manage traffic flow to the LAMP stack virtual machines. This is a reasonably complicated block of code and there are a lot of moving parts, so buckle up.

The first block of code is relatively straightforward: azurerm_lb has the obligatory name, location, and resource_group_name attributes to christen the device and bind the code to the relevant region and resource group.

Next, we have sku. This option relates to Azure licensing and determines which features are available to the load balancer. One thing to note here is that there are cost implications to take into consideration when using the Standard SKU, as it is only available with a paid subscription.

The next option is frontend_ip_configuration, which sets the network ingress details. Again we have the obligatory name option, and then public_ip_address_id. Notice that we have set this to an as-yet-unexplained identifier – please hold, caller, we will get to that later.

# Load Balancer resources
resource "azurerm_lb" "main" {
  name                = "amazicDevLB"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "amazicLBFE"
    public_ip_address_id = azurerm_public_ip.lbpip.id
  }

  tags = {
    environment = "AmazicDevLamp"
  }
}

As promised, here is the azurerm_public_ip block. One thing to note is that the sku option appears again; it needs to match the sku you entered in the azurerm_lb block.

The allocation_method determines whether the IP address is dynamically assigned to the resource (in which case it may change across a power cycle) or statically assigned, so that it survives a device restart.

resource "azurerm_public_ip" "lbpip" {
  name                = "amazicPublicIP"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  sku                 = "Standard"
  allocation_method   = "Static"
}

Next we move on to the backend, where the azurerm_lb_backend_address_pool is defined. All that is necessary here is a unique name and a reference to the load balancer it belongs to.

resource "azurerm_lb_backend_address_pool" "main" {
  resource_group_name = azurerm_resource_group.main.name
  loadbalancer_id     = azurerm_lb.main.id
  name                = "amazicLBBEPool"
}

This next section is very important, as it effectively provides the inputs the load balancer uses to make its traffic-flow decisions. A probe is used to detect the status and health of the relevant backend resources; if a probe fails, inbound traffic to that backend resource is impacted.

The probes are also attached to the associated azurerm_lb_rule blocks discussed later, and we created one for each traffic type allowed. The SSH probe is slightly different in form as it is a default TCP probe; the two other probes have protocol and request_path options, which are specific to HTTP and HTTPS probes.

resource "azurerm_lb_probe" "SSH" {
  resource_group_name = azurerm_resource_group.main.name
  loadbalancer_id     = azurerm_lb.main.id
  name                = "ssh-running-probe"
  port                = 22
}

resource "azurerm_lb_probe" "http" {
  resource_group_name = azurerm_resource_group.main.name
  loadbalancer_id     = azurerm_lb.main.id
  name                = "amazcLBHTTPProbe"
  protocol            = "Http"
  port                = 80
  request_path        = "/"
}

resource "azurerm_lb_probe" "https" {
  resource_group_name = azurerm_resource_group.main.name
  loadbalancer_id     = azurerm_lb.main.id
  name                = "amazicLBHTTPSProbe"
  protocol            = "Https"
  port                = 443
  request_path        = "/"
}

As we have a frontend IP configuration attached to our load balancer, we need to configure some azurerm_lb_rule blocks. These define how traffic arriving on the frontend public IP address is translated and forwarded to the backend addresses, and we can use this to force inbound communication onto a nonstandard port.

NOTE: if this is done, you must remember to edit the associated network security group rules too.

resource "azurerm_lb_rule" "http" {
  resource_group_name            = azurerm_resource_group.main.name
  loadbalancer_id                = azurerm_lb.main.id
  name                           = "amazicLBHTTPRule"
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 80
  frontend_ip_configuration_name = azurerm_lb.main.frontend_ip_configuration[0].name
  backend_address_pool_id        = azurerm_lb_backend_address_pool.main.id
  probe_id                       = azurerm_lb_probe.http.id
}

resource "azurerm_lb_rule" "https" {
  resource_group_name            = azurerm_resource_group.main.name
  loadbalancer_id                = azurerm_lb.main.id
  name                           = "amazicLBHTTPSRule"
  protocol                       = "Tcp"
  frontend_port                  = 443
  backend_port                   = 443
  frontend_ip_configuration_name = azurerm_lb.main.frontend_ip_configuration[0].name
  backend_address_pool_id        = azurerm_lb_backend_address_pool.main.id
  probe_id                       = azurerm_lb_probe.https.id
}
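As a hedged illustration of the NOTE above (the port 8080, rule name, and priority are hypothetical and not part of our deployment): if the backend web servers listened on 8080 rather than 80, the load-balancing rule would need a different backend_port, and a matching security_rule with destination_port_range = "8080" would also have to be added to azurerm_network_security_group.main so the forwarded traffic is allowed in.

# Hypothetical only: forward public port 80 to a backend listening on 8080
resource "azurerm_lb_rule" "http_alt" {
  resource_group_name            = azurerm_resource_group.main.name
  loadbalancer_id                = azurerm_lb.main.id
  name                           = "amazicLBHTTPAltRule"
  protocol                       = "Tcp"
  frontend_port                  = 80
  backend_port                   = 8080
  frontend_ip_configuration_name = azurerm_lb.main.frontend_ip_configuration[0].name
  backend_address_pool_id        = azurerm_lb_backend_address_pool.main.id
  probe_id                       = azurerm_lb_probe.http.id
}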

The final code block related to the load balancer is the azurerm_lb_nat_pool block. If you were wondering why there was no rule for the SSH protocol, here is the reason why.

resource "azurerm_lb_nat_pool" "main" {
  resource_group_name            = azurerm_resource_group.main.name
  loadbalancer_id                = azurerm_lb.main.id
  name                           = "amazicLBNATPool"
  protocol                       = "Tcp"
  frontend_port_start            = 50000
  frontend_port_end              = 50119
  backend_port                   = 22
  frontend_ip_configuration_name = azurerm_lb.main.frontend_ip_configuration[0].name
}

Summary


This post has been a bit of a long haul and we have covered a lot of ground. We introduced two new Terraform command-line options and ran through the Terraform code used to start building auto-scaling groups in Azure. In our next article, we will concentrate on the compute side of the equation and bring it all together into a complete solution, finally deploying the infrastructure into Azure.
