
Introduction to HashiCorp Packer

Packer is a tool from HashiCorp for building machine images in an automated, repeatable way.

Why building images is important in a DevOps World

Now, why would you need a tool like Packer? Building images is not something everybody does every day. Traditionally it is a manual process: once configured, the image is forgotten about, or at best the procedure used to build it is captured in a runbook that gets saved on a share or in a document library, never to be read again until somebody corrupts the gold master image or it becomes too bloated with updates.


It is then that you discover that the process you thought was captured in the aforementioned document is missing several important steps, omits programs that are prerequisites for others installed later, and assumes a level of knowledge that only ever existed in the head of the original template builder. As a guide it is completely useless, and you end up starting from scratch.

And let’s not forget how manual those runbooks tend to be. They document a process, but the process is still a manual, step-by-step job done by an engineer: error-prone and time-consuming, to say the least.

So why even do this yourself? AWS has hundreds of AMIs, and so do Google and Azure. Just search for ‘OVA’ and you’ll find thousands of images for VMware platforms. Surely you could just pick the best one and customize it? You could, but you would have no control over what is in the image; you could only build on top, not change what is already there. And what if the maintainer doesn’t patch important security holes?

And even if you’ve found the perfect pre-built image, doesn’t it make sense to automate all the customizations you need to do?

So building your own set of customized images, and making that build repeatable, has value: it automates a process, saves you time, and produces a consistent result. You can guarantee that every build deployed is up-to-date and secure before letting it loose on your production environment. An added benefit is that your Terraform code can be simplified, as there is no need to shoehorn application installation into it; this makes for a quicker deployment process, with only post-deployment configuration required.

Read more on how Packer fits into the portfolio of HashiCorp products in this whitepaper: Increasing Developer Velocity in the Cloud Operating Model.

Let’s talk about Packer

There are many tools out there that are capable of automating the creation of an image. Most of these were traditionally aimed at the Microsoft ecosystem, used for building Gold masters for VDI (Virtual Desktop Infrastructure) or templates of server operating systems.

They were never intended to do much configuration and installation beyond the bare operating system, resulting in many homegrown scripts and an entire community of people contributing their code. Commercial templating tools were expensive and still required you to cobble together tooling to make them work for you. Packer provides a single interface for Windows, Linux, and even macOS builds.

The rest of this post concentrates on building first a CentOS image for deployment into a vSphere environment, and then on altering it for deployment into AWS as an AMI.

Getting started with Packer

We’ll start by deploying our Packer environment on a CentOS 7 Linux machine. Installing Packer is a pretty simple process: open up a terminal on your Linux machine, move into the right directory, download the binary, unzip it, clean up, and verify that Packer works:

cd /usr/local/bin
wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
unzip packer_1.5.5_linux_amd64.zip
rm packer_1.5.5_linux_amd64.zip
packer --version
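Before trusting the binary, it is good practice to verify it against the SHA256SUMS file that HashiCorp publishes alongside each release. The real-world commands are shown in the comments; the runnable part below demonstrates the same `sha256sum --check` workflow on a local stand-in file, so the filenames in the runnable lines are made up for illustration:

```shell
# Real-world use (same 1.5.5 release as above):
#   wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_SHA256SUMS
#   grep linux_amd64 packer_1.5.5_SHA256SUMS | sha256sum --check
#
# Self-contained demonstration of the checking step:
printf 'stand-in for the packer zip' > packer_demo.zip
sha256sum packer_demo.zip > SHA256SUMS   # record the checksum
sha256sum --check SHA256SUMS             # prints "packer_demo.zip: OK"
rm packer_demo.zip SHA256SUMS
```

If the file had been tampered with or corrupted in transit, the check would report `FAILED` instead.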

Building your first Virtual Machine

Building your first image will look reasonably familiar if you have been following our Terraform series. So what are the building blocks of a Packer deployment? First, there is a JSON file that provides the necessary details to configure the environment.


NOTE: Since the 1.5.x release, Packer can also read HCL-formatted files. However, this is still a beta feature and not yet production-ready, so we’ll use the JSON format in this post.
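For the curious, a sketch of what the HCL form looks like is shown below. This follows the syntax as finalized in later Packer releases (the 1.5.x beta accepted a similar structure), and the server name and credentials are placeholders:

```hcl
# Hedged sketch of an HCL-style Packer template (placeholder values)
source "vsphere-iso" "centos" {
  vcenter_server = "vcenter.example.test"
  username       = "root"
  password       = "mypassword"
  vm_name        = "packer-centos"
}

build {
  sources = ["source.vsphere-iso.centos"]

  provisioner "shell" {
    inline = ["ls /"]
  }
}
```

The structure mirrors the JSON file: `source` blocks play the role of the builders stanza, and `provisioner` blocks sit inside `build`.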


Below is a sample JSON file (taken from HashiCorp’s website) that shows two stanzas: builders and provisioners.

{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server":      "vcenter.vsphere65.test",
      "username":            "root",
      "password":            "mypassword",
      "insecure_connection": "true",
      "vm_name": "packer-ubuntu",
      "host":     "myvmware.host.test",
      "guest_os_type": "ubuntu64Guest",
      "ssh_username": "sshuser",
      "ssh_password": "mypassword",
      "CPUs":             1,
      "RAM":              1024,
      "RAM_reserve_all": true,
      "disk_controller_type":  "pvscsi",
      "disk_size":        32768,
      "disk_thin_provisioned": true,
      "network_card": "vmxnet3",
      "iso_paths": [
        "[datastore1] ISO/ubuntu-16.04.3-server-amd64.iso"
      ],
      "floppy_files": [
        "{{template_dir}}/preseed.cfg"
      ],
      "boot_command": [
        "<enter><wait><f6><wait><esc><wait>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs>",
        "/install/vmlinuz",
        " initrd=/install/initrd.gz",
        " priority=critical",
        " locale=en_UK",
        " file=/media/preseed.cfg",
        "<enter>"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["ls /"]
    }
  ]
}

Builder stanza

The most important stanza is the builders stanza. It is very much like a provider stanza in Terraform, in that it defines what is going to be deployed, and how and where.

HashiCorp has four builders available that allow Packer to build new virtual machines on VMware products. These are:

  • vmware-iso – builds a VM from an ISO file. It is supported on Workstation, Fusion, and Player, and can install instances directly on an ESXi server over SSH. From VMware’s perspective, however, this approach is legacy.
  • vmware-vmx – imports an existing VMware machine from a VMX file, runs provisioners on top of that VM, and exports the result as a new image. Again, this is only supported on Workstation, Fusion, and Player.
  • vsphere-iso – very similar to vmware-iso in that it starts the build from an ISO file, but it uses the vSphere API rather than esxcli. Because it talks to vCenter, it is the recommended method; another benefit is that there is no requirement to enable SSH access on your vSphere clusters.
  • vsphere-clone – clones a VM from an existing template, modifies it, and saves it as a new template. It also uses the vSphere API rather than esxcli to build on a remote ESXi instance, so you can build VMs even without SSH access to your vSphere cluster.

As we are deploying a new virtual machine into a VMware environment, the choice comes down to vmware-iso or vsphere-iso. Based on the notes above, vsphere-iso is the better method: it is more tightly integrated, and there is no need to lower the defense profile of your ESXi hosts by activating SSH.

Provisioner stanza

The final stanza is the provisioners stanza. This section is optional, but it fulfills a very important function: just like the provisioners section in a Terraform file, it is where OS customization is carried out and where applications, built-in or third-party, are installed.

Packer vs. Terraform

Now, why would you use Packer when all of this can be done with Terraform? Deploying pre-configured machine images, rather than programmatically building out each server (into, for example, an AWS Auto Scaling group), significantly speeds up a scale-up in times of congestion. It is a matter of separating different parts of the process: the image is assumed to contain the stack ‘below’ what you are developing, and to change not as part of your development process, but as security patches and software updates for the operating system and middleware come in.

Our Template file

Our JSON file differs from the sample file above; we also have a kickstart configuration file to enable post-installation configuration. We will look at both files and explain what is happening.

Here you see the first difference from the file above: we have a variables section. It is not strictly necessary, but it makes the JSON file more readable and provides a one-stop shop for your template build details at the top of the file.

{
   "variables": {
      "vsphere-server": "ESXi-server-name-here",
      "vsphere-user": "administrator user level account name here",
      "vsphere-password": "password here",
      "vsphere-datacenter": "vSphere Datastore name here",
      "vsphere-cluster": "vSphere Cluster name here",
      "vsphere-network": "VM Network name here",
      "vsphere-datastore": "vSphere Datastore name here",
      "vm-name": "CentOS7-Template",
      "vm-cpu-num": "1",
      "vm-mem-size": "1024",
      "vm-disk-size": "25600",
      "iso_url": "Path to OS ISO here"
   },
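Hard-coding credentials in the template file is risky if the file ends up in version control. Packer’s template engine includes an env function that can pull these values from environment variables instead; note that env is only valid inside the variables block. A sketch (the variable names mirror those above, and the environment variable names are our own choice):

```json
{
   "variables": {
      "vsphere-user": "{{env `VSPHERE_USER`}}",
      "vsphere-password": "{{env `VSPHERE_PASSWORD`}}"
   }
}
```

You would then export VSPHERE_USER and VSPHERE_PASSWORD in the shell before running packer build.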

The next stanza is the builders section. Its first line is type, which tells Packer which builder to use.

The next settings are self-explanatory for anyone who has ever built a VMware-based virtual machine. Note that we set an SSH username and password; these are for the virtual machine, not the ESXi environment, are used later in the build process, and are in fact the root account and password.

Another line to note is floppy_files, which loads a floppy drive with a ks.cfg file. This feeds into the later boot_command line, where we load the file during the installation process to configure the machine.

Also note the line convert_to_template, which we have set to true: once the machine is deployed, Packer will issue a command to vCenter to convert it to a template.

The vast majority of inputs are passed into the builders stanza from the preceding variables stanza.

   "builders": [
      {
      "type": "vsphere-iso",
      "vcenter_server": "{{user `vsphere-server`}}",
      "username": "{{user `vsphere-user`}}",
      "password": "{{user `vsphere-password`}}",
      "insecure_connection": "true",
      "datacenter": "{{user `vsphere-datacenter`}}",
      "cluster": "{{user `vsphere-cluster`}}",
      "network": "{{user `vsphere-network`}}",
      "datastore": "{{user `vsphere-datastore`}}",
      "vm_name": "{{user `vm-name`}}",
      "notes": "Build via Packer",
      "boot_wait": "10s",
      "boot_order": "disk,cdrom,floppy",
      "guest_os_type": "centos7_64Guest",
      "ssh_username": "root",
      "ssh_password": "server",
      "CPUs": "{{user `vm-cpu-num`}}",
      "RAM": "{{user `vm-mem-size`}}",
      "RAM_reserve_all": false,
      "disk_controller_type": "pvscsi",
      "disk_size": "{{user `vm-disk-size`}}",
      "disk_thin_provisioned": true,
      "network_card": "vmxnet3",
      "convert_to_template": true,
      "iso_paths": ["{{user `iso_url`}}"],
      "floppy_files": ["ks.cfg"],
      "boot_command": [
      "<esc><wait>",
      "linux ks=hd:fd0:/ks.cfg<enter>"
      ]
   }
],

The final stanza is the provisioners stanza. Remember, we mentioned that this stanza is where we configure the machine and deploy applications. Here we install cloud-init, a tool that allows customization via YAML files during deployment. This enables post-deployment customization: installing databases or web servers, configuring firewalls, and much more.

   "provisioners": [
   {
      "type": "shell",
      "inline": [
         "sudo yum install -y cloud-init"
         ]
      }
   ]
}
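The shell provisioner is not limited to a single inline command: it can also run local script files against the new machine, and multiple provisioners run in the order listed. A hedged sketch of an extended provisioners stanza (the script paths are hypothetical, not part of this build):

```json
"provisioners": [
   {
      "type": "shell",
      "inline": [
         "sudo yum install -y cloud-init"
      ]
   },
   {
      "type": "shell",
      "scripts": [
         "scripts/harden.sh",
         "scripts/cleanup.sh"
      ]
   }
]
```

Splitting work into script files keeps the JSON template short and lets the same scripts be reused across multiple templates.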

Next, let us have a look at the ks.cfg file.

The majority of this file is self-explanatory; for example, we are performing an install and using the cdrom as the installation source. Certain configuration in the ks.cfg file is mandatory, and it is here that we set the requirements for the base template installation.

install
cdrom
lang en_US.UTF-8
keyboard uk

We have set the build to obtain an IP address from the DHCP server. This is really a necessity: setting a static IP address in a template is self-defeating, as the second server deployed from it would fail due to an IP address conflict.

network --onboot yes --device ens192 --bootproto dhcp --noipv6 --hostname CentOS7Template

The next item of interest is the root password, which you will notice is stored as a hash rather than in plain text. A matching SHA-512 crypt hash (the $6$… format below) can be generated with openssl; note that the -6 option produces SHA-512, whereas the older -1 option would produce a weaker MD5 hash:

openssl passwd -6 "MyAwesomePassword"

And voilà, you have a suitably hashed password for your kickstart file:

rootpw --iscrypted $6$rhel6usgcb$aS6oPGXcPKp3OtFArSrhRwu6sN8q2.yEGY7AIwDOQd23YCtiz9c5mXbid1BzX9bmXTEZi.hCzTEXFosVBI5ng0
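The whole workflow can be reproduced end to end. In the sketch below, the salt examplesalt and the password are made up so the output is deterministic for illustration; in real use, omit -salt and openssl will pick a random one (this assumes OpenSSL 1.1.1 or later, which added the -6 option):

```shell
# Generate a SHA-512 crypt hash (the $6$... format used above).
# 'examplesalt' is a fixed, made-up salt so the output is reproducible;
# omit -salt in real use so openssl chooses a random salt.
HASH=$(openssl passwd -6 -salt examplesalt 'MyAwesomePassword')
echo "rootpw --iscrypted $HASH"   # this line can be pasted into ks.cfg
```

The resulting hash always begins with $6$ followed by the salt, which is how the crypt format records the algorithm used.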

NOTE: as this is just a test build, security is not as strict as it would be in production: we have disabled the firewall and set SELinux to permissive. In a production environment the firewall would obviously be enabled and SELinux set to enforcing.


firewall --disabled
selinux --permissive

This section configures authconfig, the system that sets up /etc/passwd and /etc/shadow, the files used for shadow password support.

authconfig --enableshadow --passalgo=sha512

The time zone is set to Central European Time:

timezone --utc Europe/Amsterdam

Bootloader configuration

bootloader --location=mbr --append="crashkernel=auto rhgb quiet" --password=$6$rhel6usgcb$kOzIfC4zLbuo3ECp1er99NRYikN419wxYMmons8Vm/37Qtg0T8aB9dKxHwqapz8wWAFuVkuI/UJqQBU92bA5C0
autopart --type=lvm

Delete any existing partitions

clearpart --linux --initlabel

And the relevant packages that need to be installed:

# Packages selection
%packages --ignoremissing
@Base
@core
sed
.....
ntp
man
#mysql
postfix
chkconfig
gzip
%end
# End of %packages section

This section is the post-installation configuration section; here we install custom applications that are not part of the core installation and configure services. One thing to notice is the installation of open-vm-tools, the open-source and now-recommended version of VMware Tools. Failure to install it will result in the build never completing and appearing to stall at “Waiting for IP”. Another annoyance is that quitting the Packer build process will fully rewind any previously completed actions.

%post
yum -y install open-vm-tools
systemctl enable vmtoolsd
systemctl start vmtoolsd
yum upgrade -y
chkconfig ntpd on
chkconfig sshd on
chkconfig ypbind on
chkconfig iptables off
chkconfig ip6tables off
chkconfig yum-updatesd off
chkconfig haldaemon off
chkconfig mcstrans off
chkconfig sysstat off
%end

The final command is for the server to reboot and also eject the CD-ROM drive so as not to restart the installation process.

reboot --eject

Now that we have run through the necessary files, and uploaded the CentOS 7 ISO to the desired location, we can move forward with the build process.

Building the Packer image

Deploying the Packer image is as simple as issuing a three-word command. (You can check the template for syntax errors first with packer validate centos.json.) From the directory where the JSON and ks.cfg files are stored, issue:

packer build centos.json

Once this has been issued, you will see a change in vCenter, from this:

[Screenshot: vCenter inventory before the install]

To this (the name of the device will obviously reflect the name you configured in your JSON file):

[Screenshot: the new machine appearing in the vCenter inventory]

Meanwhile, your Packer session will be sitting at this:

[Screenshot: Packer console output at the start of the install]

Notice that the install appears to stall at Waiting for IP. This is expected and will not change until the build has completed.

You can verify that the installation is continuing by monitoring the console of the virtual machine, where you will see the install progressing.

[Screenshot: the OS installation running in the VM console]

Once the installation has completed and the deployed machine has rebooted, the Packer session will continue; if successful, you will receive a result similar to the one below.

[Screenshot: Packer reporting a successful build]

This is due to the now-installed open-vm-tools correctly reporting the IP address of the machine back to Packer.

But there’s more. Remember the line in the JSON file that told Packer to create a template?

[Screenshot: the machine no longer listed under Hosts and Clusters]

Well, the newly created machine has gone from Hosts and Clusters. If you switch to the VMs and Templates view, you can now see the device there, with the template icon showing that the conversion was successful.

[Screenshot: the machine shown as a template in the VMs and Templates view]

Summary

Hopefully, this introduction to Packer has shown you the power of the product. It is not a difficult product to use, and the flexibility it brings to your CI/CD workflows is well worth the time taken to get familiar with it. The fact that you can build a fully configured machine for use in your Auto Scaling groups, instead of waiting for your Terraform builds to complete a full machine deployment and application installation, is golden. Read more on how Packer fits into the portfolio of HashiCorp products in this whitepaper: Increasing Developer Velocity in the Cloud Operating Model.
