
Deploying a LAMP Stack with Terraform – AMIs, network & security


In our first post on Terraform, found here, we showed how simple it is to install a provider (in our case the AWS provider) and used it to deploy our first automated virtual machine on AWS infrastructure. It is safe to say that the demo in that post was not all that impressive, but it did show how easy it is to get started. In this second post, we build on that example to deploy a set of servers that reflects an operating environment common in the real world: a LAMP stack, the standard building block of many web sites, deployed with Terraform.

A LAMP stack is a server, or set of servers, linked together to provide a three-tier application stack based on open-source software. The environment consists of the following constituent parts:

  • Linux
  • Apache
  • MySQL
  • PHP

For Linux, Apache and PHP, we’ll instantiate an EC2 virtual machine with the right AMI and binaries. For MySQL, we’ll use the Amazon RDS database service and provide the code to instantiate a managed database instance.

For all of this to work, some glue is needed to marry the components together: building the supporting network, creating the security groups, deploying a standard AMI image and, finally, updating the binaries and installing the necessary applications.

Obviously, this is a significantly more complicated environment than our previous example, but we will follow the same methodology. We will also introduce some new concepts, such as variables and provisioners:

  • Variables – provide a method of dynamically assigning a value to an attribute in Terraform code. By using variables your code becomes modular and reusable: simply edit the Terraform variable file, or the values declared in your deployment script, to alter how the environment is delivered.
  • Provisioners – used to execute scripts on the deployed machines to install applications, configure them, and so on.
  • Output – used to display values and messages on the console when a Terraform recipe is applied; very useful for showing progress and results.
  • Modules – allow you to reuse the same sets of code multiple times, used in conjunction with variables.

For a more in-depth look at these features, review the official documentation. For now, it’s important to understand that Terraform’s HCL (HashiCorp Configuration Language) behaves much like an actual programming or scripting language (especially since v0.12), so you can apply most programming-language concepts to Terraform. The short sketch below shows a variable, a provisioner and an output working together.
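
This is a hypothetical illustration, not part of the LAMP deployment; the AMI, login user and key path are placeholders, and the syntax is the 0.11 style used throughout this post:

variable "greeting" { default = "hello from terraform" }

resource "aws_instance" "example" {
  ami           = "ami-12345678"   # placeholder AMI, replace with a real one
  instance_type = "t2.micro"

  ## provisioner: run a command on the new instance once it is reachable over SSH
  provisioner "remote-exec" {
    inline = ["echo ${var.greeting}"]

    connection {
      type        = "ssh"
      user        = "ec2-user"                      # assumed login user for the AMI
      private_key = "${file("~/.ssh/example.pem")}" # assumed key pair
    }
  }
}

## output: print the instance's public DNS name to the console after apply
output "example_public_dns" {
  value = "${aws_instance.example.public_dns}"
}

## a reusable block of code can also be packaged and called as a module, e.g.
## module "network" { source = "./modules/network" }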

If you look back on the first post in the series, you will remember three key commands. These are:

  • terraform init – initializes the Terraform working directory, i.e. installs the necessary providers to deploy your code to the desired platform (in our case AWS).
  • terraform plan – works through your code and, instead of applying it to the desired platform, generates an output that shows you the execution plan.
  • terraform apply – runs the code against the platform of choice to build the desired environment.

Due to the more complicated nature of deploying a LAMP stack with Terraform, it also makes sense to use the terraform graph command, which generates a DOT-format dependency graph of the resources to be deployed and can be rendered with a tool such as Graphviz.

Please Declare your Variables


The first section in the script is the variable declarations. We add these to make the code reusable. Here we see the common syntax of a Terraform block: the type of object we are creating, the name we want to give it, and then its configuration options.

variable "access_key" { default = "Your Access Key Here" }
variable "secret_key" { default = "You Secret Key Here" }
variable "region" { default = "us-east-1" }
variable "vpc_cidr" { default = "10.0.0.0/16" }
variable "subnet_one_cidr" { default = "10.0.1.0/24" }
variable "subnet_two_cidr" { default = ["10.0.2.0/24", "10.0.3.0/24"] }
variable "route_table_cidr" { default = "0.0.0.0/0" }
variable "host" {default = "aws_instance.my_web_instance.public_dns"}
variable "web_ports" { default = ["22", "80", "443", "3306"] }
variable "db_ports" { default = ["22", "3306"] }
variable "images" {
  type = "map"
  default = {
    "us-east-1"      = "ami-02e98f78"
    "us-east-2"      = "ami-04328208f4f0cf1fe"
    "us-west-1"      = "ami-0799ad445b5727125"
    "us-west-2"      = "ami-032509850cf9ee54e"
    "ap-south-1"     = "ami-0937dcc711d38ef3f"
    "ap-northeast-2" = "ami-018a9a930060d38aa"
    "ap-southeast-1" = "ami-04677bdaa3c2b6e24"
    "ap-southeast-2" = "ami-0c9d48b5db609ad6e"
    "ap-northeast-1" = "ami-0d7ed3ddb85b521a6"
    "ca-central-1"   = "ami-0de8b8e4bc1f125fe"
    "eu-central-1"   = "ami-0eaec5838478eb0ba"
    "eu-west-1"      = "ami-0fad7378adf284ce0"
    "eu-west-2"      = "ami-0664a710233d7c148"
    "eu-west-3"      = "ami-0854d53ce963f69d8"
    "eu-north-1"     = "ami-6d27a913"
  }
}

Each variable relates to a piece of information that is used in the core code stanzas, and it is good practice to make the variable names human readable. As can be seen above, we have created variables such as “access_key” and “region”. One variable, “host”, has a special entry:

aws_instance.my_web_instance.public_dns

This is a reference to a Terraform resource attribute that resolves to the public-facing DNS address of your newly deployed instance.

Another interesting variable is “images”. Because AWS uses a different AMI ID for the same image in each region, we create a map of region-to-AMI pairs so that the correct AMI is chosen for whichever region we deploy to; without it, the script would fail when deploying the virtual machine.
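
To see how the map is consumed, the AMI for the web instance can be looked up by region with the built-in lookup() function when the instance is declared. The full web instance is built out in the next post; the instance type below is purely illustrative:

resource "aws_instance" "my_web_instance" {
  ami           = "${lookup(var.images, var.region)}"  # pick the AMI matching the chosen region
  instance_type = "t2.micro"                           # illustrative instance size
}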

The next section initializes the “provider” and authenticates the Terraform session. Here we see the first use of our newly created variables:

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

Variables are referenced in code wrapped in curly brackets and prefaced with a $ sign. Here we are assigning our access key, our secret key, and the region we are deploying the code to, in our case us-east-1 (North Virginia).

Building the Environment

Our next major section introduces the resource block, the building block of Terraform; with it we build out the infrastructure.

The basic syntax is resource “what you are building” “what you are calling it”, followed by the options needed to configure the resource. Resource blocks define the objects that are deployed to make up the environment.

It is important to note that, unlike Ansible, which is procedural and needs the deployment steps spelled out in order, Terraform is declarative: only the end state needs to be declared. When coupled with a provider, Terraform is intelligent enough to work out the deployment route needed to build your environment. The building blocks still need to be specified, but not in any particular order.

The first section of code creates the VPC, AWS’s name for a virtual datacenter. This mirrors the physical world, where we would start with the datacenter, or at least a subset of one.

Create the VPC with Terraform

As already alluded to, the first brick to be created is the “aws_vpc”.

resource "aws_vpc" "myvpc" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
tags = {
Name = "myvpc"
}
}

The two major options in this section are cidr_block, which sets the IP address range for the deployment, and enable_dns_hostnames, which tells AWS to create DNS names for the deployed resources.

Building the Network with Terraform


Next we configure the public subnet to allow the machines to communicate with the outside world. Again, note the common syntax of resource “what you are building” “what you are calling it”; here the resource is “aws_subnet”.

Within this section you will note references such as “aws_vpc.myvpc.id” and “data.aws_availability_zones.availability_zones.names[0]”. These are attributes of other resources and data sources that are resolved as your code runs, and they help bind everything together. The map_public_ip_on_launch option assigns an externally facing IP address to resources launched in the subnet.

resource "aws_subnet" "myvpc_public_subnet" {
    vpc_id = "${aws_vpc.myvpc.id}"
    cidr_block = "${var.subnet_one_cidr}"
    availability_zone = "${data.aws_availability_zones.availability_zones.names[0]}"
    map_public_ip_on_launch = true
    tags = {
        Name = "myvpc_public_subnet"
    }
}
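
One thing worth calling out: the availability-zone names used above come from a data source that is not shown in these excerpts. A minimal declaration, which simply asks AWS for the zones available in the selected region, looks like this:

## look up the availability zones available in the current region
data "aws_availability_zones" "availability_zones" {}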

We are not going to delve into the creation of the two private subnets, as the syntax and options are very similar; a sketch of what they could look like follows.
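
For reference, here is a sketch of those two private subnets, assuming they follow the same pattern and draw their CIDR blocks from the subnet_two_cidr list declared earlier (two different availability zones are used because the RDS database will later want subnets in at least two zones):

resource "aws_subnet" "myvpc_private_subnet_one" {
  vpc_id            = "${aws_vpc.myvpc.id}"
  cidr_block        = "${element(var.subnet_two_cidr, 0)}"
  availability_zone = "${data.aws_availability_zones.availability_zones.names[0]}"
  tags = {
    Name = "myvpc_private_subnet_one"
  }
}

resource "aws_subnet" "myvpc_private_subnet_two" {
  vpc_id            = "${aws_vpc.myvpc.id}"
  cidr_block        = "${element(var.subnet_two_cidr, 1)}"
  availability_zone = "${data.aws_availability_zones.availability_zones.names[1]}"
  tags = {
    Name = "myvpc_private_subnet_two"
  }
}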

Creating Traffic Flows with Terraform

Next we create the routing tables and internet gateway, and associate the relevant subnets with the correct routing tables. This is where we start to configure the environment.

Again, note the common syntax.

resource "aws_internet_gateway" "myvpc_internet_gateway" {
  vpc_id = "${aws_vpc.myvpc.id}"
  tags = {
    Name = "myvpc_internet_gateway"
  }
}
## create public route table (associated with the internet gateway)
resource "aws_route_table" "myvpc_public_subnet_route_table" {
  vpc_id = "${aws_vpc.myvpc.id}"
  route {
    cidr_block = "${var.route_table_cidr}"
    gateway_id = "${aws_internet_gateway.myvpc_internet_gateway.id}"
  }
  tags = {
    Name = "myvpc_public_subnet_route_table"
  }
}
## create private subnet route table
resource "aws_route_table" "myvpc_private_subnet_route_table" {
  vpc_id = "${aws_vpc.myvpc.id}"
  tags = {
    Name = "myvpc_private_subnet_route_table"
  }
}
## create default route table
resource "aws_default_route_table" "myvpc_main_route_table" {
  default_route_table_id = "${aws_vpc.myvpc.default_route_table_id}"
  tags = {
    Name = "myvpc_main_route_table"
  }
}
## associate public subnet with public route table
resource "aws_route_table_association" "myvpc_public_subnet_route_table" {
  subnet_id      = "${aws_subnet.myvpc_public_subnet.id}"
  route_table_id = "${aws_route_table.myvpc_public_subnet_route_table.id}"
}
## associate private subnets with private route table
resource "aws_route_table_association" "myvpc_private_subnet_one_route_table_assosiation" {
  subnet_id      = "${aws_subnet.myvpc_private_subnet_one.id}"
  route_table_id = "${aws_route_table.myvpc_private_subnet_route_table.id}"
}
resource "aws_route_table_association" "myvpc_private_subnet_two_route_table_assosiation" {
  subnet_id      = "${aws_subnet.myvpc_private_subnet_two.id}"
  route_table_id = "${aws_route_table.myvpc_private_subnet_route_table.id}"
}

The common layout makes the code easy to understand: each stanza is a defined block that adds another brick to the building. We can also now see the advantage of human-readable resource and variable names.

It is all about Security


Security is very important; you really do not want a totally open network. Terraform allows you to create AWS security groups together with their ingress and egress rules, and each tier of the environment gets its own defined set of rules.

In this instance we are creating an “aws_security_group”. This is a wrapper object that contains the rules, so there is little to configure, and, yes, you guessed it, once again the common syntax:

resource "aws_security_group" "web_security_group"

We name it, give it a pithy description and bind it to the VPC.

## create security group for web
resource "aws_security_group" "web_security_group" {
  name        = "web_security_group"
  description = "Allow all inbound traffic"
  vpc_id      = "${aws_vpc.myvpc.id}"
  tags = {
    Name = "myvpc_web_security_group"
  }
}

We then do the same for the firewall rules, but here we create an “aws_security_group_rule”. These are a little more complicated, as we are defining the direction of the traffic flow and which ports are open.

## create security group ingress rule for web
resource "aws_security_group_rule" "web_ingress" {
  count             = "${length(var.web_ports)}"
  type              = "ingress"
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = "${element(var.web_ports, count.index)}"
  to_port           = "${element(var.web_ports, count.index)}"
  security_group_id = "${aws_security_group.web_security_group.id}"
}
## create security group egress rule for web
resource "aws_security_group_rule" "web_egress" {
  count             = "${length(var.web_ports)}"
  type              = "egress"
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  from_port         = "${element(var.web_ports, count.index)}"
  to_port           = "${element(var.web_ports, count.index)}"
  security_group_id = "${aws_security_group.web_security_group.id}"
}

Unfortunately, we cannot create a single combined ingress and egress rule, so there are always at least two rule resources per security group; whether a rule is ingress or egress is defined by the “type” option. If you are opening multiple ports, you will need a list to hold them. This is handled by the count = "${length(var.web_ports)}" option, which is then used by the from_port and to_port options, via element() and count.index, to loop through the defined ports and open each one.

NOTE – if you want to leave the network wide open, you can set the protocol to "-1", which covers all protocols (with from_port and to_port set to 0). This is obviously not recommended and should only be used for testing or troubleshooting purposes; a sketch follows.
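
A sketch of what that wide-open rule would look like, strictly for testing:

## TESTING ONLY – allow all protocols and ports inbound
resource "aws_security_group_rule" "web_ingress_all" {
  type              = "ingress"
  protocol          = "-1"            # -1 means all protocols
  from_port         = 0
  to_port           = 0
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.web_security_group.id}"
}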

Again, we are not going to examine the security group or rules for the database in detail, as they follow the same syntax; a sketch is shown below for reference.
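
Here is a sketch of what the database security group and its ingress rule might look like, assuming they mirror the web definitions, use the db_ports variable and only accept traffic from inside the VPC (the exact listing appears in the full script):

## create security group for the database
resource "aws_security_group" "db_security_group" {
  name        = "db_security_group"
  description = "Allow inbound traffic from the web tier"
  vpc_id      = "${aws_vpc.myvpc.id}"
  tags = {
    Name = "myvpc_db_security_group"
  }
}

## create security group ingress rule for the database ports
resource "aws_security_group_rule" "db_ingress" {
  count             = "${length(var.db_ports)}"
  type              = "ingress"
  protocol          = "tcp"
  cidr_blocks       = ["${var.vpc_cidr}"]   # only traffic originating inside the VPC
  from_port         = "${element(var.db_ports, count.index)}"
  to_port           = "${element(var.db_ports, count.index)}"
  security_group_id = "${aws_security_group.db_security_group.id}"
}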

Summary

Step-by-step, we’re discovering how to use Terraform in a real-world example. In this post, we’ve learned how to define the AWS constructs, like the AMI, networking and security. In the next post we will build out the web server and the RDS-based MySQL database.
