
Doing real work with Terraform Cloud – part 3

This is the third article in our series on Terraform Cloud. In our first article, we introduced Terraform Cloud, explained the functions and features of the free tier, and went through the initial process of creating an account and connecting your new environment to your VCS of choice.  The second article dug deeper into Terraform Cloud and worked through the creation of our first workspace in preparation for doing real work.

This is the article where we will create the magic and configure Terraform Cloud to do real work.

As a reminder, we will be re-creating the magic of this series of articles, with a few updates to the code to take account of new practices and features that have improved the capabilities of Terraform.

How simple Terraform plans make hybrid and multi-cloud a reality: an introduction
Deploying a LAMP Stack with Terraform – AMIs, network & security
Deploying a LAMP Stack with Terraform – Databases & Webservers
How to create resilient Terraform code
Deploying and configuring HashiCorp Vault to service Terraform
Deploying a LAMP Stack with Terraform Modules
Migrating Terraform from AWS to Azure: credentials & secrets – Amazic
Migrating Terraform from AWS to Azure: changing the provider code – Amazic
Building images and VMs in Azure with Terraform – Amazic
Building auto-scaling groups in Azure with Terraform – Amazic
Create cache, databases and DDoS protection in Azure with Terraform – Amazic

For those of you who want to recreate the environment, you will find the code here.

Amazic Three Tier Application

As a reminder, this is what the architecture will look like when it has been deployed.

high level overview of the Architecture to be deployed.

One of the first things we need to consider is what changes to the layout or format of the code are required to make it usable in Terraform Cloud.

Let’s have a bit of a Terraform refresher on our code base

When using our code from our local machine, we built out a file system with the following folders and files to separate our modules and environments.  From the image below, you can see that we have four module directories: compute (holding our EC2 instance code), database (holding the MySQL code), loadbalancing, and network.

Local folder structure

There are also several sub-directories under the terraform folder to separate out the environments (Production, Staging, Testing, and Development).

Our main TF files are set out in the relevant environment folders under the terraform folder, as shown in the above filesystem. Starting with the main.tf file: this is where our locals block is configured and each module is declared, together with the associated variable values (these variables are either passed directly as var.XXXX values taken from the variable.tf file, also in the root directory, or used as input into a data construct in the module).
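As a rough sketch, assuming the folder layout above, the root main.tf looks something like this (the module paths, variable names, and tag values are illustrative, not the exact code from the repository):

```hcl
locals {
  common_tags = {
    Environment = var.environment
    Project     = "amazic-three-tier"
  }
}

module "network" {
  source   = "../../modules/network"
  vpc_cidr = var.vpc_cidr
  tags     = local.common_tags
}

module "compute" {
  source     = "../../modules/compute"
  subnet_ids = module.network.private_subnet_ids # output assumed to be exposed by the network module
  tags       = local.common_tags
}
```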

The last file in the root folder is the “terraform.tfvars” file. This file holds the values for the declared variables, and it is the only file that is modified between the different environments and deployments.
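A terraform.tfvars along these lines is all that differs between environments (the variable names and values here are illustrative):

```hcl
# terraform.tfvars – the only file edited per environment
environment   = "staging"
region        = "eu-west-1"
vpc_cidr      = "10.10.0.0/16"
instance_type = "t3.micro"
```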

Running a “terraform plan” should give the following output.

Output of local OSS terraform plan.

Adding the Environment to Terraform Cloud

Log in to your Terraform Cloud environment and select the workspace we created earlier.  The first thing we need to do is recreate, in our Terraform Cloud instance, the variables that we previously stored in our tfvars file.  To do this, navigate to your workspace.

Ready Steady Go, Let’s get started

Open the menu and click the Variables option; the following form will appear.  I have already added two variables, the access_key and the secret_key.  To add the next variable, click “+ Add variable”.

Variables – the way to add uniqueness.

This will open an inline form for you to enter the necessary values.

Adding new Variables

Click “Save variable” to complete.   I will leave the rest of the variable creation to you, the reader.

Note: When entering values for things like usernames and passwords, remember to mark them as sensitive by checking the “Sensitive” box.  This way the entered value is shown as “Sensitive – Write only”.
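The “Sensitive” checkbox also has a counterpart in code: Terraform 0.14 and later let you flag a variable as sensitive in its declaration, which redacts the value from plan and apply output. A minimal sketch (the variable name is illustrative):

```hcl
variable "db_password" {
  description = "Password for the MySQL database"
  type        = string
  sensitive   = true # redacted from plan/apply output, much like the TFC checkbox
}
```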

Now we have our inputs created, it is time to test the plan.

Testing your code in TFC

Terraform Cloud has a number of methods of running code; let’s review these before we move on to actually testing whether we are correctly configured.

Click on the left menu and select the Settings item; the sub-menu we are interested in is General.  There are several options here that we will now look at.

Firstly Execution Mode

Options – Execution Mode

We are going to use TFC as the central repository of our world of IaC, so select Remote.

What this decision means is that every time you activate a run, either an apply or a plan, HashiCorp will invoke a virtual environment to run your code in, rather than using your local machine.
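If you also want to trigger remote runs from your local CLI, rather than only from VCS pushes, the code can point at the workspace with a “remote” backend block; a sketch, with placeholder organization and workspace names:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "amazic-demo" # placeholder – your TFC organization

    workspaces {
      name = "amazic-three-tier" # placeholder – the workspace created earlier
    }
  }
}
```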

Secondly Apply Method

Options – Apply Method

This is simply the TFC version of “-auto-approve”.  If you are going to integrate TFC into a pipeline, then you would most likely want to set this option to Auto Apply; in our case, we are leaving it as Manual Apply.

Thirdly Terraform Version

Options – Terraform Version

Personally, I love this section, as it pins the version of Terraform for your workspace.  I have lost count of the number of times I have upgraded my local version of Terraform and introduced a breaking change.  I am looking at you, version 0.12.
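You can mirror the workspace pin in the code itself with a required_version constraint, so a mismatched local CLI fails fast; the exact constraint shown here is just an example:

```hcl
terraform {
  # "~> 0.12.29" allows 0.12.x patch releases only – adjust to taste
  required_version = "~> 0.12.29"
}
```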

Fourthly Terraform Working Directory

Options – Terraform Working Directory

By default, TFC will run from the root directory of your repository; if your root code is set in another subfolder, this is where you tell TFC where your “main.tf” and other files are hidden.

Fifthly Remote State Sharing

Options – Remote State Sharing

Layering State files just got a whole lot easier.

Finally User Interface.

Options – User Interface

More about this later, but suffice it to say that we will be using the “Structured Run Output” in our environment; it is the new shiny, and I like shiny objects.

Now it is time to run a terraform plan directly inside Terraform Cloud.

Your first Plan.

You may have noticed the “Actions” button in your workspace and been wondering about it.  In the words of the eighties Mod song by Secret Affair, now is the “Time for Action”.

Actions and Options

You will notice two options when you click the button: “Start new run” and “Lock workspace”.

Clicking “Lock workspace” simply locks the workspace, and Terraform cannot run anything in there.

Managing a lock – or how to stop people messing.

This must be manually removed by clicking the “Manage lock” button.

Oh look, I’ve locked the workspace, that will stop them.

As this is a restrictive mode that is usually invoked when changes need to be carried out, clicking the big blue button will invoke an “are you really sure you want to do this?” form, with a scary big RED button.

ohhh, a Scary Red button

Press the Scary Red button to release the lock on your workspace.

You do receive a calming green response though 😊

calm and serenity have returned, workspace unlocked

Click the Actions button again, but this time choose the “Start new run” option. This will load the following form.


let’s plan some action.

There are a number of run options:

  • “Plan and Apply (Standard)” – this is the default option and is exactly the same as running “terraform apply” locally.
  • “Refresh State” – this is similar to “terraform plan -refresh-only” or “terraform apply -refresh-only” and can be used to update state files to reflect manual changes.
  • “Plan Only” – this is similar to just running “terraform plan” without the -out option.
  • “Allow Empty Apply” – this is used to upgrade a state file from the Terraform 0.11 format to Terraform 0.12 and later.
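For reference, the rough local-CLI equivalents of these run options are as follows (note that -refresh-only requires Terraform 0.15.4 or later):

```
# Approximate local-CLI equivalents of the TFC run options
terraform apply                # "Plan and Apply (Standard)"
terraform plan -refresh-only   # "Refresh State" (Terraform 0.15.4+)
terraform plan                 # "Plan Only" – no saved -out file
```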

We are going to select “Plan only” for our first run.  You will note that there is a final option, “Choose Terraform version”.

oh look there was something hidden.

This drop-down box will have the current default workspace version pre-entered here, but this is where you can do a test before you upgrade your TF version.

We are going to leave the version at the default to start with, as this is the version the code was tested against locally.

Click the “start run” button

It’s too late now, we’re planning

After a period of time, you should receive a response as shown below

Our plan was successful

Scrolling down to the bottom of the page, you will notice that you can add a comment.

You can add comments,

NOTE:  A bit of a warning, when I first created this environment and attempted the plan, I received the following error.

A nasty Error – just recycle your access and secret keys.

After going down multiple rabbit holes, I found that the credentials which worked perfectly locally would not work via TFC.  After recreating my access_key and secret_key and updating the variables in TFC, the code started to run correctly.
If you receive this error or a similar one, it is worth resetting your keys.


Your first deploy

OK, now that we have proven connectivity to AWS, and that our plan is still successful now that we have migrated to Terraform Cloud, let’s do our first deploy.  Click your Actions button again, but this time choose the run type of “Plan and Apply”.

look our plan is queued.

Notice that we now have an apply pending section.  Once the plan has run successfully, you will notice that the following has appeared.

We now have a pending apply

One of the improvements is that we can now add a comment rather than a simple “yes”.

Adding Comments for the win

Click add comment.

That is more impressive than Yes

Click “Confirm & Apply”; you will be requested to add another comment.  Then click “Confirm run”.  You will notice that the Apply Pending state has changed to Apply Queued.

Actions Stations, we are queued to go.

Once the run commences, “Apply Queued” changes to “Apply Running”.

And we’re off, Terraform Cloud deploying resources in AWS – magic

This section will take some time, depending on the resources being created. Once the run has completed, the status changes once again and we see this.

That’s all folks, Mischief complete

Let’s check that everything is working as expected.  At the bottom of the page we can find the outputs and notification that we have successfully created a state file.

What are our Outputs.

Copy the load-balancer address and enter it in your browser of choice, the following should appear.

Oh look, we have a response from our resources.
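The load-balancer address shown in the outputs comes from an output block in the root module; a minimal sketch, assuming the loadbalancing module exports the DNS name under this name:

```hcl
output "lb_dns_name" {
  description = "Public DNS name of the application load balancer"
  value       = module.loadbalancing.lb_dns_name # assumed module output name
}
```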

Cleaning up your mess

Because we are all good citizens and don’t like paying unnecessary bills, it is important to clean up; we were all told by our parents to clear up any mess we have made.  To do this with OSS Terraform, we would just issue a terraform destroy at the command line to clear up the environment.  What is the equivalent TFC procedure?

Time to destroy all life in this workspace.

Navigate back to the left menu and select “Destruction and Deletion

Destruction and deletion

There are two options in this section, and it is very important to understand the difference between them.  The first, “Destroy Infrastructure”, is the equivalent of “terraform destroy”.

Destroy your Infrastructure.

This is what we are going to do to clean up our environment, but first a brief discussion about “Delete Workspace”.

Delete your workspace, Goodnight sweetheart

This does NOT delete your deployed resources.  It deletes the workspace; doing this before you have removed the deployed resources will leave them orphaned and unable to be managed by Terraform.

Delete the Resources

Click the Queue destroy plan button and enter the workspace name in the text box.

Queue Destruction, the end is near.

Finally, click the big red scary “Queue destroy plan” button.

You will be forwarded back to the now familiar run environment

Destruction Triggered

You will be asked again to confirm and apply as with a deployment.

Are you sure? really sure.

Notice that we are now running the destroy.  Once again this will take a while.

Past the point of no return.

After a successful destroy you should receive something similar to the below.

And it has gone.

Remember to verify the deletion in your AWS Console.

Summary

Once again, we have travelled a long way with this series of articles: starting with setting up the environment and adding the code repository, then moving on to build a three-tier infrastructure in AWS, consisting of load-balancers, EC2 instances, and an RDS database, using Terraform Cloud.  And all this was deployed remotely in the cloud, using GitHub for the repository and Terraform Cloud as the remote CI/CD pipeline.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, mail to sales@amazic.com.
