
Deploying a LAMP Stack with Terraform – Databases & Webservers


In this third post in the Terraform series, and the second on deploying a LAMP stack, we continue explaining the code used to deploy the solution. Please review part one to see how the network was created and configured. In this post we will build out the web server and the RDS-based MySQL database.

The Terraform code for this is in a text file here; remember to rename the file with a *.tf extension. The PHP index page is shown below. You will need to save it to a file called index.php and store it in the same directory as your Terraform code.

<?php
// Note: the mysql_* functions used in the original were deprecated in PHP 5.5
// and removed in PHP 7; the procedural mysqli_* API is the drop-in replacement.
// Passing 'mydb' as the fourth argument selects the database at connect time.
$link = mysqli_connect('Your Database Server address here', 'myuser', 'mypassword', 'mydb');
if (!$link)
{
    die('Could not connect: ' . mysqli_connect_error());
}

$data = mysqli_query($link, "SELECT visits FROM counter");
if (!$data)
{
    die('Query failed: ' . mysqli_error($link));
}

$add = mysqli_query($link, "UPDATE counter SET visits = visits + 1");
if (!$add)
{
    die('Update failed: ' . mysqli_error($link));
}

print "<table><tr><th>Visits</th></tr>";
while ($value = mysqli_fetch_array($data))
{
    print "<tr><td>" . $value['visits'] . "</td></tr>";
}
print "</table>";

mysqli_close($link);
?>
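The page assumes that the mydb database already contains a counter table holding a single visits row. That schema is not shown in this series, but based on the queries above it could be created with something like the following (an inferred sketch, not the author's actual schema):

```sql
-- Assumed schema, inferred from the SELECT and UPDATE in index.php
CREATE TABLE counter (
    visits INT NOT NULL DEFAULT 0
);

-- Seed the single row that the UPDATE statement increments
INSERT INTO counter (visits) VALUES (0);
```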

Creating the web server with Terraform

The first major active resource to be created is the web server. Once again the resource block, as with all Terraform blocks, follows a common form: the block type, the type of resource to create, and the name you give it, followed by arguments specific to the resource being created. The stanza below shows the code that will deploy the web instance of the LAMP stack. For full details on all the options available to configure a server, please review the Terraform documentation, but the example below will create a ready-to-go web server.

resource "aws_instance" "my_web_instance" {
  ami                    = "${lookup(var.images, var.region)}"
  instance_type          = "t2.large"
  key_name               = "myprivate"
  vpc_security_group_ids = ["${aws_security_group.web_security_group.id}"]
  subnet_id              = "${aws_subnet.myvpc_public_subnet.id}"
  tags = {
    Name = "my_web_instance"
  }
  volume_tags = {
    Name = "my_web_instance_volume"
  }
  provisioner "remote-exec" { #install apache, mysql client, php
    inline = [
      "sudo mkdir -p /var/www/html/",
      "sudo yum update -y",
      "sudo yum install -y httpd",
      "sudo service httpd start",
      "sudo usermod -a -G apache centos",
      "sudo chown -R centos:apache /var/www",
      "sudo yum install -y mysql php php-mysql",
      ]
  }
  provisioner "file" { #copy the index file from local to remote
    source      = "d:\\terraform\\index.php"
    destination = "/tmp/index.php"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mv /tmp/index.php /var/www/html/index.php"
    ]
  }
  connection {
    type = "ssh"
    user = "centos"
    host = self.public_ip
    #copy <private.pem> to your local instance to the home directory
    #chmod 600 id_rsa.pem
    private_key = "${file("d:\\terraform\\private\\myprivate.pem")}"
  }
}

The first line defines the AMI, supplying the image appropriate to the region in which the deployment is taking place:

ami                    = "${lookup(var.images, var.region)}"

This code block also introduces the lookup function, which takes the contents of the “images” map variable and the “region” variable and selects the correct AMI for the region being deployed to. In our case we are deploying a CentOS 7 image in North Virginia.
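The variables themselves are declared elsewhere in the configuration. As a reminder of the shape lookup expects, a minimal sketch might look like this (the AMI ID is illustrative only; substitute the current CentOS 7 AMI for your region):

```hcl
variable "region" {
  default = "us-east-1" # North Virginia
}

variable "images" {
  type = map(string)
  default = {
    us-east-1 = "ami-02eac2c0129f6376b" # illustrative CentOS 7 AMI ID
  }
}
```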

The instance_type defines the size of the virtual machine to be deployed, and the key_name relates to the keypair created in your AWS Console. The other arguments reference the security group created in this stanza, which was defined in part one:

resource "aws_security_group" "web_security_group" {
  name        = "web_security_group"
  description = "Allow all inbound traffic"
  vpc_id      = "${aws_vpc.myvpc.id}"
  tags = {
    Name = "myvpc_web_security_group"
  }
}

and the subnet that the VM will be attached to, which was created with this stanza, also outlined in part one:

resource "aws_subnet" "myvpc_public_subnet" {
  vpc_id                  = "${aws_vpc.myvpc.id}"
  cidr_block              = "${var.subnet_one_cidr}"
  availability_zone       = "${data.aws_availability_zones.availability_zones.names[0]}"
  map_public_ip_on_launch = true
  tags = {
    Name = "myvpc_public_subnet"
  }
}
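As an aside, the key_name used by the instance assumes a keypair called myprivate already exists in your AWS Console. If you would rather manage it in Terraform as well, a sketch (assuming you have generated the key locally, for example with ssh-keygen) would be:

```hcl
# Hypothetical alternative: manage the keypair in Terraform rather than the console
resource "aws_key_pair" "myprivate" {
  key_name   = "myprivate"
  public_key = "${file("d:\\terraform\\private\\myprivate.pub")}"
}
```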

Tags are used to define information that can later be used to identify items for billing or auditing purposes, or to mark an object for another action, for example being automatically added to a backup job.

The next section of interest is the provisioner command. Where the resource command is the master builder, the provisioner is the labourer of the team: this command allows Terraform to manipulate the deployed virtual machine. It is used in conjunction with the connection block, which controls how Terraform accesses the machine to deploy files, install applications, apply updates and so on.

In our connection example we are using the SSH key we defined earlier in our code to allow the user ‘centos’ to log on to the server, update the image’s packages, deploy a number of applications including Apache (httpd), PHP and the MySQL client, copy files, and manipulate the permissions and ownership of files on the machine.

Deploying the Database Service with Terraform

Amazon Relational Database Service (Amazon RDS) is a relational database service that is easy to set up, operate, and scale. It provides cost-efficient and resizable capacity to AWS customers, and it reduces complexity by removing the hardware build-out along with other tasks such as database setup, patching, and backups. According to AWS, this frees you to focus on the application.

RDS as a service is available on several database instance types, optimized for memory, performance or I/O, and provides you with six familiar database engines to choose from: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server. As this is a LAMP stack, we will be creating a MySQL instance.

Again Terraform comes to the rescue. For all the advanced configuration options again please review the documentation, but here’s a working example:

resource "aws_db_instance" "my_database_instance" {
  allocated_storage      = 20
  storage_type           = "gp2"
  engine                 = "mysql"
  engine_version         = "5.7"
  instance_class         = "db.t2.micro"
  port                   = 3306
  vpc_security_group_ids = ["${aws_security_group.db_security_group.id}"]
  db_subnet_group_name   = "${aws_db_subnet_group.my_database_subnet_group.name}"
  name                   = "mydb"
  identifier             = "mysqldb"
  username               = "myuser"
  password               = "mypassword"
  parameter_group_name   = "default.mysql5.7"
  skip_final_snapshot    = true
  tags = {
    Name = "my_database_instance"
  }
}
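Note that the db_subnet_group_name argument references a database subnet group created as part of the networking in part one. For reference, a minimal definition would look something like the sketch below (the subnet names here are assumptions, so check your part one code for the real ones; RDS requires the group to span at least two availability zones):

```hcl
# Hypothetical reference sketch - the real definition lives in part one
resource "aws_db_subnet_group" "my_database_subnet_group" {
  name       = "my_database_subnet_group"
  subnet_ids = ["${aws_subnet.myvpc_private_subnet_one.id}", "${aws_subnet.myvpc_private_subnet_two.id}"]
  tags = {
    Name = "my_database_subnet_group"
  }
}
```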

If you remember, in the first post I mentioned a Terraform command called terraform graph. What this does is produce a graphical representation of the deployed environment.

terraform graph -type=plan | "c:\\Program Files (x86)\\Graphviz2.38\\bin\\dot.exe" -Tpng > graph.png

Running this command produces the image shown below, which gives us a visual overview of the deployment.

Running the Code

Now that we have run through the code, it is time to run it. Since the introduction of version 0.12 there have been a number of changes to the way you write your Terraform code. If you have code that was written for an earlier version, run the command terraform fmt to rewrite it into the canonical style expected by the latest version. If successful you should receive a response similar to the one below.

Terraform fmt


With the introduction of version 0.10, HashiCorp split the core code from the code for the providers (i.e. the code that provides access to AWS, Azure, GCP, etc.). This allows the maintainers of each provider to update their code at their own cadence, adding extra functionality outside of the core release stream. However, it does mean that every time you start work on a new deployment you must run terraform init to initialize the environment and download the latest providers, in our case the AWS one. On a successful initialization the response should be similar to the one below.

Terraform Init

It is also recommended that you run the terraform validate command to confirm that the code is syntactically valid.

terraform_validate


Next it is recommended to run the terraform plan command. This works through your code and provides a dry run of the deployment without applying anything to your environment.

On successful completion you will see a response similar to the one below, showing that 28 resources will be added to your AWS environment when the plan is applied.

Now to deploy the infrastructure you just issue the command:

terraform apply -auto-approve

The -auto-approve flag means that you are not asked to approve the changes to the environment. This is great for true automation, and is reasonably safe here as you have already reviewed the changes with the terraform plan command.
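One last suggestion before applying: output values make Terraform print the addresses you need once the run completes, in particular the RDS endpoint, which is the value to place into index.php in place of 'Your Database Server address here'. A minimal sketch:

```hcl
# Print the web server address and the database endpoint after terraform apply
output "web_public_ip" {
  value = "${aws_instance.my_web_instance.public_ip}"
}

output "db_endpoint" {
  value = "${aws_db_instance.my_database_instance.endpoint}"
}
```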

The entire process

The entire apply process looks like this:

Summary

I think you will agree that the work to build out this deployment was well worth the effort. With this script we can produce a fully repeatable LAMP stack deployment without any issues or risk of configuration drift due to manual intervention: every deployment is the same.

In the next post in this series we will look at productionizing the code, where we will move on to more complex topics, like modular code and scaling the deployment to produce a cluster environment that is available across multiple availability zones.
