
Migrating Terraform from AWS to Azure: changing the provider code

In our previous Terraform post we explained how to configure the “azurerm” provider to connect to Azure. We showed two ways of doing this: the standard method of connecting directly with static credentials, and using Vault to provide single-use tokens for access to Azure. In this post in the series on migrating Terraform from AWS to Azure, we start changing the provider code.

This post starts to rewrite our Terraform LAMP stack code so that it can be deployed to Azure. At the same time we will add a little more of that resilience we promised: auto-scaling groups, load balancers, and multi-availability zone deployments.

Getting started on environment transformation

The first file you need to look at is “main.tf”. If you remember, this is where the providers are specified, together with any modules.

As a reminder, here it is:

provider "vault" {
  address = var.vault_addr
   token   = var.vault_token
}
data "vault_aws_access_credentials" "creds" {
   backend = "aws"
   role    = "TerraformDeploy"
 }

provider "aws" {
   access_key = data.vault_aws_access_credentials.creds.access_key
   secret_key = data.vault_aws_access_credentials.creds.secret_key
   region     = var.region
}

module "vpc" {
   source = "D:\\Terraform\\Stage\\modules\\VPC"
   cluster_name = "Stage-LampStack"
}

module "lamp-stack" {
   source = "D:\\Terraform\\Stage\\modules\\lamp"
}

Here you can see that we have two providers and two module statements. First, we will look at the provider statements. For ease we will ignore and remove the Vault provider statement; this means we will have to put the credentials in the variables.tf file.

provider "azurerm" { 
   # The "feature" block is required for AzureRM provider 2.x.
   # If you are using version 1.x, the "features" block is not allowed.
   version = "~>2.0"

   subscription_id = "00000000-0000-0000-0000-000000000000"
   client_id = "00000000-0000-0000-0000-000000000000"
   client_secret = "secret here "
   tenant_id = "00000000-0000-0000-0000-000000000000"

   features {}   
}
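Since we said the credentials would live in variables.tf, here is a minimal sketch of what those entries could look like. The variable names are illustrative rather than taken from the project code, and in practice you would feed the secret in via an environment variable (TF_VAR_client_secret) rather than committing a default:

variable "subscription_id" {
   description = "Azure subscription to deploy into"
   default     = "00000000-0000-0000-0000-000000000000"
}

variable "client_id" {
   description = "Service principal application (client) ID"
   default     = "00000000-0000-0000-0000-000000000000"
}

variable "client_secret" {
   description = "Service principal secret - supply via TF_VAR_client_secret, do not commit it"
   default     = ""
}

variable "tenant_id" {
   description = "Azure AD tenant ID"
   default     = "00000000-0000-0000-0000-000000000000"
}

The provider block would then reference var.subscription_id, var.client_id, var.client_secret and var.tenant_id instead of the literal values.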

We will also have to redirect the modules from the current location, which holds the AWS-biased .tf files, to a new location that holds the Azure files.

Our resultant file will look something like this:

provider "azurerm" {
   # The "feature" block is required for AzureRM provider 2.x.
   # If you are using version 1.x, the "features" block is not allowed.
   version = "~>2.0"

   subscription_id = "00000000-0000-0000-0000-000000000000"
   client_id = "00000000-0000-0000-0000-000000000000"
   client_secret = "secret here "
   tenant_id = "00000000-0000-0000-0000-000000000000"

  features {}
  }

module "vpc" {
   source = "C:\\Amazic\\Azure\\Dev\\Modules\\VPC"
   cluster_name = "Stage-LampStack"
   }

module "lamp-stack" {
   source = " C:\\Amazic\\Azure\\Dev\\Modules\\lamp"
}

If, however, we are redirecting our authentication to use Vault to provide tokenized credentials, our main.tf file will look something like this:

provider "azurerm" {
# The "feature" block is required for AzureRM provider 2.x.
# If you are using version 1.x, the "features" block is not allowed.
version = "~>2.0"

   subscription_id = "00000000-0000-0000-0000-000000000000"
   tenant_id = "00000000-0000-0000-0000-000000000000"
   client_id = “${data.vault_generic_secret.azure.data[“client_id”]}”
   client_secret = “${data.vault_generic_secret.azure.data[“client_secret”]}”

   features {}
   }

provider “vault” {
   address = var.vault_addr
   auth_login {
      path = “azure\\creds\\Azure-Terraform”
      parameters = {
      role_id   = "00000000-0000-0000-0000-0000000000"
      #var.login_approle_role_id
      secret_id = <your approle secret ID here>
      #var.login_approle_secret_id
      }    
   }
}

data “vault_generic_secret” “azure” {
   path = “azure\\creds\\Azure-Terraform”
}

module "vpc" {
   source = "C:\\Amazic\\Azure\\Dev\\Modules\\VPC"
   cluster_name = "Stage-LampStack"
   }

module "lamp-stack" {
   source = " C:\\Amazic\\Azure\\Dev\\Modules\\lamp"
   }

What else will need to change?

To be fair, almost everything. Sure, AWS and Azure are both public clouds, but that is where the similarity ends. They handle authentication differently and they handle networking differently.
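To illustrate the networking difference: in AWS everything hangs off a VPC, whereas in Azure every resource lives in a resource group and the network itself is a virtual network (VNet). A minimal sketch of the equivalent building blocks; the resource names and the resource group are illustrative, not part of the project code:

# AWS: a single VPC resource carries the address space
resource "aws_vpc" "lamp" {
   cidr_block = "10.0.0.0/16"
}

# Azure: a resource group plus a virtual network inside it
resource "azurerm_resource_group" "lamp" {
   name     = "Stage-LampStack-rg"
   location = "West Europe"
}

resource "azurerm_virtual_network" "lamp" {
   name                = "Stage-LampStack-vnet"
   address_space       = ["10.0.0.0/16"]
   location            = azurerm_resource_group.lamp.location
   resource_group_name = azurerm_resource_group.lamp.name
}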

First let’s remind ourselves of what we are attempting to do here, and at the same time let’s add a couple of improvements that the boss wanted, like caching, load balancing, autoscaling and DDoS attack protection:

  • Deploy a Virtual Machine on a Managed Disk with your preferred Linux OS distribution.
  • Install Apache, your preferred PHP version and any other packages you require.
  • Deallocate and generalize the Virtual Machine.
  • Capture the Virtual Machine Disk Image to generate the custom golden image (see the sketch after this list).
  • Deploy networking resources (load balancer, etc).
  • Deploy the Azure Cache for Redis.
  • Deploy the Azure Database for MySQL.
  • Create the Azure Storage account and container.
  • Create your Virtual Machine Scale Set, ensuring it references the captured Disk Image as the OS Disk.
  • Set up the autoscale settings.
  • Enable protection against DDoS attacks.
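The capture step above ends up as an azurerm_image resource pointing at the deallocated, generalized build VM. A minimal sketch, reusing the illustrative resource group from the earlier snippet and assuming a build VM called azurerm_virtual_machine.build (both names are illustrative):

resource "azurerm_image" "golden" {
   name                = "lamp-golden-image"
   location            = azurerm_resource_group.lamp.location
   resource_group_name = azurerm_resource_group.lamp.name

   # The source VM must already be deallocated and generalized before the image is captured
   source_virtual_machine_id = azurerm_virtual_machine.build.id
}

The scale set created later in the list then references this image as its OS disk.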

What will this look like?

Before we move on to looking at the code, we should look at what we are going to be deploying. If you remember back to our AWS deployment, it was a single Linux host running Apache and PHP, attached to a single RDS instance of MySQL. Architecturally it looked similar to the diagram below.

AWS-LAMP

As we have already alluded to, whilst migrating our code from the “aws” provider and resources to the “azurerm” provider and its associated resources, we are going to add a few features to build resilience into the Azure design. We will deploy an autoscaling compute layer coupled to a load balancer, a cache layer based on Redis to accelerate reads, and finally a resilient database layer provided by a master and two slave servers running MySQL.

Azure_LAMP

One final thing that we will add is DDoS protection.
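At the Terraform level, DDoS protection is its own resource that is then attached to the VNet. A minimal sketch, reusing the illustrative resource group from earlier (note that the Standard DDoS protection plan carries a significant monthly charge):

resource "azurerm_network_ddos_protection_plan" "lamp" {
   name                = "lamp-ddos-plan"
   location            = azurerm_resource_group.lamp.location
   resource_group_name = azurerm_resource_group.lamp.name
}

# Then, inside the azurerm_virtual_network block:
#
#   ddos_protection_plan {
#      id     = azurerm_network_ddos_protection_plan.lamp.id
#      enable = true
#   }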

How our folders will be set out

You will recognize this folder layout: each environment of Dev, Test, Stage, and Production has the same folders. This layout means that our code is separated into a logical grouping of services.

AzureFolderStructure

As your deployment becomes more complicated you will be thankful you moved to a modular format. Multi-thousand-line scripts are a nightmare to read and even harder to update.

Now that we have the groundwork set, let’s move on to the first stage of the conversion: looking at our variables.

What Terraform variables will we need to change?

Apart from the new variables associated with the new services (Redis, load balancers, etc.), we will use this migration as an opportunity to DRY out our code somewhat; the AWS-deployed LAMP stack code has quite a few easy targets, such as CIDR and subnet blocks.

So, from the perspective of networking, we have the three subnets and the VNet (this is the equivalent of the AWS VPC).

variable "vnetaddressprefix" {default = "10.0.0.0/16"}
variable "addressprefix" {default = "10.0.0.0/16"}
variable "subnetprefix" {default = "10.0.0.0/24"}
variable "storagesubnetaddressprefix" {default = "10.0.3.0/24"}
variable "mysqlsubnetaddressprefix" {default = "10.0.2.0/24"}
variable "redissubnetaddressprefix" {default = "10.0.1.0/24"}
variable "subnetaddressprefix" {default = "10.0.0.0/24"}

At first glance this seems a significant increase from our AWS environment. It is, however, more an indictment of how Azure does networking than a significant complication.
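As a rough sketch of where these prefixes land, the VNet takes the /16 and each service gets its own subnet carved out of it. Reusing the illustrative resource group and VNet names from the earlier snippet, and showing only two of the subnets:

resource "azurerm_subnet" "web" {
   name                 = "web"
   resource_group_name  = azurerm_resource_group.lamp.name
   virtual_network_name = azurerm_virtual_network.lamp.name
   address_prefixes     = [var.subnetaddressprefix]
}

resource "azurerm_subnet" "mysql" {
   name                 = "mysql"
   resource_group_name  = azurerm_resource_group.lamp.name
   virtual_network_name = azurerm_virtual_network.lamp.name
   address_prefixes     = [var.mysqlsubnetaddressprefix]
}

The Redis and storage subnets follow exactly the same pattern with their own prefix variables.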

We also need to add variables for each of the new services. First, MySQL:

variable "mysqlusername" {default = "azuremysqluser"}
variable "mysqlpassword" {default = "CHang3thisP4Ssw0rD"}
variable "mysqldbname" {default = "amazic-db"}
variable "mysqlbackupretaineddays" {default = 7}
variable "mysqlgeoredundantbackup" {default = "Disabled"}
variable "mysqlsku" {default = "GP_Gen5_2"}
variable "mysqlskucapacity" {default = 2}
variable "mysqlskutier" {default = "GeneralPurpose"}
variable "mysqlskufamily" {default = "Gen5"}
variable "mysqlstoragembsize" {default = 51200}
variable "mysqlversion" {default = "5.7"}

Then Redis:

variable "redisvmfamily" {default = "C"}
variable "redisvmcapacity" {default = 1}
variable "redissku" {default = "Standard"}
variable "redisshardstocreate" {default = 0}

And for auto-scaling and the load balancer:

variable "vmssautoscalermaxcount" {default = 10}
variable "vmssautoscalermincount" {default = 2}
variable "vmssautoscaleroutincrease" {default = 1}
variable "vmssautoscalerindecrease" {default = 1}
variable "vmssautoscalerupthreshold" {default = 50}
variable "vmssautoscaleruptimewindow" {default = "PT5M"}
variable "vmssautoscalerdownthreshold" {default = 30}
variable "vmssautoscalerdowntimewindow" {default = "PT5M"}

There are several other variables that have been created to DRY out the code. Have a look at the variables.tf file to review them.

Summary

This ends part one of migrating Terraform from AWS to Azure. Changing the provider code is not that involved once we substitute the cloud-specific entities, such as networking. With these changes, we’ve laid the groundwork for the next post, where we’ll build out our LAMP stack on Microsoft Azure.
