In the third post of this series you will have noticed that we configured the Terraform environment to use a HashiCorp Vault server to provide one-time-use authentication credentials for your AWS environment and Terraform deployments. In this post we show you how to configure a basic Vault server running on AWS. This is not suitable for production, as it has no resilience built into the design, but it is suitable for Development and Staging environments, and is a great way to familiarize yourself with Vault.
How simple Terraform plans make hybrid and multi-cloud a reality: an introduction
Deploying a LAMP Stack with Terraform – AMIs, network & security
Deploying a LAMP Stack with Terraform – Databases & Webservers
How to create resilient Terraform code
What is Vault?
At its base level, Vault is a secrets keeper. It is an Open Source tool that securely manages secrets and can be used to encrypt data in transit. One of the key challenges for modern enterprises is that, with the growth of Cloud, IaaS, PaaS and even SaaS, the number of usernames, passwords, credentials for systems and keys for access to external services has exploded, and managing them in a secure manner is a lot of work. What Vault brings to the party is a tool that understands who is accessing what secrets, when, where from and where to, across multiple access points and authentication methods. For example, Vault supports 17 different secrets engines, including Active Directory, RabbitMQ, SSH, Azure, AliCloud, AWS and PKI certificates, and 16 different authentication methods, including AWS, Azure, GitHub, LDAP, RADIUS and basic username and password, to name a few.
All this is available in the free version of the product; if you upgrade to the Enterprise product you also get support for KMIP, HSMs, replication, Entropy Augmentation, FIPS 140-2 and several other additional features.
This post is just going to build a single instance of Vault in AWS to support Development and/or Staging environments. For production it is recommended that a more resilient clustered solution is deployed (this will be discussed in a later post). For now we will deploy our first Vault server and secrets engine, and integrate it into AWS for authentication of our Terraform users. There is nothing wrong with deploying this as a Terraform module, and the code to do so will be shown; however, remember that you will have to do some manual configuration when setting up Vault. Review the initial post on how to deploy a LAMP stack, examine the EC2 instance section, and remember that the provisioner section is your friend.
Deploying the Vault
The main reason for this post is that the vast majority of articles about HashiCorp Vault are either focused on development environments or assume a large amount of pre-existing knowledge, so even though some of the commands in this post may seem a little condescending, that is not the intent. That said, there is a working assumption that you know how to deploy an AMI image and allow SSH access to it using SSH keys.
Create a new VPC in your environment and deploy a new AMI image; we have used a hardened CentOS 7 image. Configure the environment to accept access from anywhere and remember to assign your keys to enable SSH access.
Once your environment has deployed, open your SSH client of choice and issue the following commands to update the libraries on the guest; this is important to make sure that your machine is secure. We also need to install a couple of new applications. Depending on the AMI used, or the build process utilized to deploy the CentOS image, the second command may or may not be necessary.
sudo yum -y update
sudo yum install -y wget curl unzip
Once completed, we need to obtain the relevant binaries, which are downloadable from HashiCorp. Create a new directory (we recommend /opt/vault/directory) and then use the wget command as shown below to download them. At the time of writing, 1.3.2 is the latest version.
wget https://releases.hashicorp.com/vault/VAULT_VERSION/vault_VAULT_VERSION_linux_amd64.zip
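Because the version string appears twice in the URL, it is easy to mistype. A small sketch that builds the URL from a single variable (assuming version 1.3.2) avoids that:

```shell
# Build the download URL from a single version variable so the two
# occurrences of the version string can never drift apart.
VAULT_VERSION="1.3.2"
VAULT_URL="https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip"
echo "${VAULT_URL}"
```

You can then pass `"${VAULT_URL}"` straight to wget.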
Finally, we need to unzip the download into the directory, copy the vault binary to /usr/local/bin, change its owner to the root user and verify the version.
sudo mkdir -p /opt/vault/directory
cd /opt/vault/directory
wget https://releases.hashicorp.com/vault/1.3.2/vault_1.3.2_linux_amd64.zip
sudo unzip vault_1.3.2_linux_amd64.zip
sudo mv vault /usr/local/bin/
sudo chown root:root /usr/local/bin/vault
vault --version
Next we need to create a system user so that the service can start automatically on reboot. Create the user's home directory with sudo mkdir /etc/vault.d, then create the user.
sudo useradd --system --home-dir /etc/vault.d --shell /bin/false vault
To confirm that your user has been created correctly, grep the passwd file:
grep '^vault:' /etc/passwd
Next we need to configure systemd. For the non-Linux natives, consider systemd the equivalent of Services on a Windows server. systemd uses sane defaults, so we only need to set non-default options in the configuration file. Open your editor of choice and create a vault.service file in the /etc/systemd/system/ directory.
Enter the following; this is a copy of the file on the HashiCorp website. What this file does is set the configuration parameters of the application. The first stanza configures some start-up constraints: for example, "ConditionFileNotEmpty" means the service will not start if the file shown there is not present, and "StartLimitIntervalSec" and "StartLimitBurst" constrain the number of start attempts within 60 seconds.
[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl
#StartLimitIntervalSec=60
#StartLimitBurst=3

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
#StartLimitInterval=60
#StartLimitIntervalSec=60
#StartLimitBurst=3
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
The second stanza, [Service], defines the capabilities of Vault from a security perspective. For example, only the user (and group) vault can run the application. "Restart=on-failure" states that the service will be restarted if it fails; note that a restarted Vault comes back up in a sealed state, which protects the environment where your secrets are stored. There is no method of entering this storage to obtain secrets while the vault is sealed. More on this later.
Next we need to create the vault.hcl file. Again, open your editor of choice pointing at /etc/vault.d/vault.hcl and enter the following. This file sets up the physical access to the vault and also the monitoring capability. If you remember, when you created the vault.service file we set a constraint in the [Unit] section that the vault application cannot start without the presence of a valid vault.hcl file.
storage "file" {
  path = "/vault-data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}

ui = true
This file is written in HCL (HashiCorp Configuration Language), and if it looks familiar that is because it is fully compatible with JSON; in fact, you can use a JSON-formatted file as valid input here. What this file is doing is setting up from where and what can access Vault. The first stanza says that the storage backend is a file store found at the path "/vault-data". The listener is set up to accept access from anywhere (0.0.0.0), but only on port 8200; in our example TLS is disabled.
The telemetry section sets up a listener on localhost port 8125 to collect various runtime metrics about the performance of the various libraries and subsystems used in running the Vault service, which can be used for debugging or just gaining a deeper understanding of what is happening. The final line defines whether Vault can be accessed via the web interface; in our example this is set to true.
As we are utilizing a hardened version of CentOS, we also need to modify the SELinux configuration to allow the custom port. To do this we use the semanage command.
sudo semanage port -a -t trivnet1_port_t -p tcp 8200
sudo semanage port -a -t trivnet1_port_t -p udp 8200
Starting the Service
The next stage is to start the service.
sudo systemctl enable vault.service
This command enables autostart of the service; you could reboot your vault guest now and it would start, but this time we will start the service manually.
sudo systemctl start vault
sudo systemctl status vault
The final command will report back the status of the service; you should receive something similar to the output below.
This finalizes the installation stage of building a vault service. Next we look at configuring the vault server.
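As an optional sanity check, you can also query Vault's standard health API from the guest itself. Because TLS is disabled in our listener configuration, the scheme is plain http; this is a sketch to run on the Vault server once the service is up:

```shell
# Query Vault's health endpoint on the local listener. It responds with JSON
# containing "initialized" and "sealed" fields, even before initialization.
curl -s http://127.0.0.1:8200/v1/sys/health
```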
Configuring Vault for First Time Usage
The first thing that you need to do is initialize the environment. From your SSH session, issue the command 'vault operator init'.
This is VERY IMPORTANT: make sure that you copy and store the unseal keys in a safe location. If you lose them and have to restart your Vault, YOU WILL NOT BE ABLE TO UNSEAL YOUR VAULT.
There is no method of getting your secrets out of the environment, and you will have to start again. That is not too much of an issue with a single-user environment like the one we are configuring here, but if this is a production system you run the risk of locking everybody out of their authenticated systems – not good for promotion prospects or employment status.
Also take note of the initial root token; you will need it to access the user interface later.
The final part of stage one configuration needed is to create two environment variables. To do this, from the command line issue the following commands:
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN=<vault token here>
Alternatively, you could do the initial configuration from the Web Interface.
Configuring Vault from the Web Interface.
From your AWS portal, copy either the DNS name or the IP address and point your browser to:
http://<Vault Server address>:8200
Assuming that you decided to do the initial configuration using the web interface you will be presented with the following form:
Note that the vault is currently sealed; this is because it has not yet been initialized. If you already initialized Vault from the CLI, enter the root token that you saved earlier.
The next form lets you decide the number of shares that your unseal key is split into, and the number of those shares that is needed to unseal the Vault server. It is recommended that you have at least three key shares and set a threshold of at least three to unseal; between three and five shares sets a reasonable level of complexity without making the unseal process too onerous.
You will now see the form shown above. DO NOT FORGET to download these keys. You will NEVER get another chance to do so.
Save them some place safe and do not lose them. Your environment can not be unsealed without these keys.
Once you have saved the keys press Continue to Unseal.
Finally, enter the root token and press Sign In.
Configuring Vault Secrets
You will now find yourself on the main screen, which at the moment shows the defaults. On first entry you will see that the cubbyhole engine is already enabled; this is of no real use to us at the moment. What we need to do is configure the AWS secrets engine.
In the top right of the screen you will notice the "Enable New Engine" link; click this to move to the next screen.
As you can see, there are several secrets engines that can be configured. Click the one titled AWS and the grey Next button will turn blue; click it.
Enter a path (the default is aws). Clicking the down arrow titled Method Options will display advanced options; these are beyond the scope of this particular article and are shown for completeness.
Close the pane by clicking the "Hide Method Options" link, then click the Enable Engine button to activate the engine.
Next we need to give the engine access to your AWS account. To do this click the Configuration tab and enter your AWS secret and access keys.
Press the "Configure" link and enter your keys. There are some more options here, but again these are beyond the scope of this post. Press Save to finalize the configuration. On successful configuration, click "View Backend" and then click "Create Role".
Enter a role name, in the Policy ARNs box enter a ARN that matches the desired permissions and finally enter the following into the “Policy documents” section.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1426528957000",
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "iam:*",
        "rds:*"
      ],
      "Resource": "*"
    }
  ]
}
This JSON document defines the policy attached to the role. Once entered, click "Create Role".
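Before pasting a policy document into the form, it is worth checking that it is well-formed JSON, since a stray smart quote or missing comma will cause the role creation to fail. A quick local sketch using python3's standard json.tool module (the /tmp path is just an example):

```shell
# Write the policy document to a temporary file, then ask python3 to parse it.
# json.tool pretty-prints valid JSON and exits non-zero on malformed input.
cat > /tmp/vault-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1426528957000",
      "Effect": "Allow",
      "Action": ["ec2:*", "iam:*", "rds:*"],
      "Resource": "*"
    }
  ]
}
EOF
python3 -m json.tool /tmp/vault-policy.json
```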
Finally, click "Generate credentials". If the role has been created successfully you will receive the following:
You have now successfully configured your Vault server to generate AWS credentials.
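For reference, the same secrets-engine setup can be driven from the Vault CLI instead of the web interface. This is a hedged sketch: the role name deploy and the policy.json file are assumptions for illustration, and the commands need VAULT_ADDR and VAULT_TOKEN exported as described earlier.

```shell
# Enable the AWS secrets engine at the default path "aws".
vault secrets enable -path=aws aws

# Give the engine credentials for your AWS account (placeholders shown).
vault write aws/config/root access_key=<access key> secret_key=<secret key>

# Create a role; policy.json holds the same policy document used in the UI.
vault write aws/roles/deploy credential_type=iam_user policy_document=@policy.json

# Generate a set of dynamic AWS credentials for that role.
vault read aws/creds/deploy
```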
Summary
HashiCorp Vault is a powerful tool once it has been configured, though that configuration is not an insignificant task. And we have only scratched the surface of its power; there are many more features still to investigate. That said, you should now have a working AWS secrets engine to use with your Terraform code. In a later post we will show how to deploy a resilient, highly available Vault cluster for production purposes.