
Hashicorp Vault as a CA – how to automate your Certificates part two

In the first article in this series, we spoke about Certificate Authorities (CAs) and certificates, their types and use-cases, and the operational pain points involved in keeping them up to date.  We also introduced the idea of using HashiCorp's Vault service to provide seamless auto-rotation of your certificates, preventing or at least reducing Sev-One incidents caused by expired or compromised certificates.   In this, our second article, we will move on to explain how to configure a Vault cluster running on the HashiCorp Cloud Platform (HCP) to operate as a Certificate Authority (CA).

OK, so where were we?

At the end of the last article, we left you with a working cluster and showed you how to log in to the cluster using a token generated from HCP.  So re-generate your token in HCP and log back into your cluster; you should be back at this form.

Not much to see yet, but believe me things will improve

Before we start, perhaps a quick overview of the Vault UI is in order.

We will start at the top

Clicking on the admin dropdown menu will reveal the following sub-menu:

It’s all in the name-space

Here we can see that we are currently in the “admin/” namespace.  Namespaces are the method HashiCorp Vault uses to enforce tenancy.  Everything in Vault is path-based, and the documentation often uses the terms path and namespace interchangeably.  As already mentioned, the namespace pattern is a helpful construct for providing Vault as a service, allowing customers to implement secure multi-tenancy within Vault to give isolation and ensure teams can self-manage their own environments.  Clicking the Manage Namespaces link opens the following page.

Oh it’s empty; that’s because we only have the default namespace

Namespaces are beyond the scope of this article, and we will discuss them more fully in a later one.  The next section is the Secrets engine; this should be familiar to you, as it is the default landing zone when logging into Vault as Admin.

You may recognise this space

You may notice that there is a path already enabled, titled “cubbyhole”; it is the private storage location scoped to the currently active token. For further information, read the HashiCorp documentation on the subject.  To add new engines, click the link titled “Enable new engine +”.

oh – so many choices

You will be forwarded to the selection page (we will return to this page later, as it is core to the creation of the PKI environment in Vault).  As can be seen, there are a number of generic, cloud, and infrastructure-related engines, including Nomad and Consul, two other HashiCorp products.

Clicking the Access link on the header menu will move you to authentication methods.  As expected, the token method of access is already enabled; this makes perfect sense, as you actually used token authentication to access this instance.

Currently there is only one

Clicking “Enable new Method +” will open the following form.  Here we can see the various methods of authentication that can be configured, from OIDC and traditional username and password to LDAP and Okta integration.

There are many-many ways to prove who you are.

What is interesting about this area is that, unlike the rest, the form has a submenu that enables further authentication configuration (this is outside the scope of this series of articles, but will be the subject of a more in-depth set of articles).

we have even more options

The next top menu item is Policies.  As everything in Vault is path-based, policies are no exception: they provide a declarative way to grant or forbid access to certain paths and operations in Vault.  All Vault policies deny access to a path by default, which means that an empty policy grants no permissions at all.

ACL: this is all about policy, not a sports injury.

Following a common theme, clicking “Create ACL policy +” will allow the creation of policies.

Write your own policy, well only the powerful can
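To make the deny-by-default behaviour concrete, here is a minimal sketch of an ACL policy.  The policy name pki-issuer and the path it grants are illustrative assumptions, not something configured in this environment; since deny is the default, only the grant needs to be written:

```shell
# Hypothetical policy: allow issuing certificates from a single role, nothing else.
cat > /tmp/pki-issuer.hcl <<'EOF'
path "pki_int/issue/planetvm-dot-net" {
  capabilities = ["create", "update"]
}
EOF

# It would then be loaded with the CLI (once the session variables described
# later in this article are set):
#   vault policy write pki-issuer /tmp/pki-issuer.hcl
```

The same HCL could equally be pasted straight into the “Create ACL policy +” form shown above.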

The final main top menu item is Tools, which opens the following form.  Wrapping is a method for securely transferring a secret, for example a private key, to a known entity.  For a more in-depth introduction to wrapping, please read the HashiCorp article on the subject.

shush, wrapping data, how you send things – secretly

On the opposite side of the top menu you will find the “client count” menu.  Clicking this leads to the “Vault Usage Metrics” form, where you will find details about your usage; it is this information that drives your monthly billed amount.

This is where you find out how much it is all going to cost you.

Next we have the Vault CLI web terminal; we will be using the CLI locally, so we will not be using this.

The UI web terminal

The final drop down is the user account dropdown.

The user account dropdown

Clicking the “Multi-factor authentication” link will take you to the following form, which enables you to enter the code for the provider that the Vault administrator configured for your user account.

Let’s be more secure and have two methods of authentication

Next, click “Restart Guide”; this enables a sidebar that grants access to a set of HowTo articles focused on your access capabilities.

You’re stuck? It’s OK, here are some HowTo guides

“Copy token” will, rather obviously, copy the current token so you can capture it.  The final item is your link for logging off.  Now that we have had a very high-level overview of the Vault UI, we will start to configure our environment to provide certificates.

There is one more piece of preparation left to do, and that is to install the Vault CLI locally.

Installing the Vault Client

We are going to install Vault into our WSL instance, which is running Ubuntu 20.04, but you can install onto native Windows 10/11, macOS, and a number of common Linux distributions.

To install Vault locally on Ubuntu, follow the instructions below. For other platforms, read the documentation stored here.

Install GPG for package signature verification.

$ sudo apt update && sudo apt install gpg

If the update and installation are correct, you will receive a response similar to that shown below.

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
gpg is already the newest version (2.2.27-3ubuntu2.1).
gpg set to manually installed.

Add the HashiCorp GPG key.

Next, we install HashiCorp’s GPG public key.  This key is used by apt to verify that the packages you download were genuinely signed by HashiCorp and have not been tampered with in transit.

$ wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null

If the command completes successfully, you will receive a response similar to that shown below.

Resolving apt.releases.hashicorp.com (apt.releases.hashicorp.com)... ...
Connecting to apt.releases.hashicorp.com (apt.releases.hashicorp.com)|...|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3195 (3.1K) [binary/octet-stream]
Saving to: ‘STDOUT’

100%[=================================>]   3.12K  --.-KB/s    in 0s      

2022-12-09 17:59:08 (35.8 MB/s) - written to stdout [3195/3195]

Verify the key’s fingerprint.

$ gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

The fingerprint must match E8A0 32E0 94D8 EB4E A189 D270 DA41 8C88 A321 9F7B, which can also be verified at https://www.hashicorp.com/security under “Linux Package Checksum Verification”.  The command will display a response as shown below.

gpg: /home/howartht/.gnupg/trustdb.gpg: trustdb created
pub   rsa4096 2020-05-07 [SC]
      E8A0 32E0 94D8 EB4E A189  D270 DA41 8C88 A321 9F7B
uid           [ unknown] HashiCorp Security (HashiCorp Package Signing) <security+packaging@hashicorp.com>
sub   rsa4096 2020-05-07 [E]

Add the official HashiCorp Linux repository.

The following command will add the official HashiCorp repository to the apt sources list.

$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

Update and install.

Finally we will install Vault.

$ sudo apt update && sudo apt install vault

To verify that the installation was successful, you can use the following command.

$ which vault

The final preparation step is to configure your terminal session to access your HCP-hosted Vault instance; to do this you need to set three environment variables.

$ export VAULT_ADDR=https://<your vault server address here>:8200
$ export VAULT_NAMESPACE=admin
$ export VAULT_TOKEN=<your vault token here>
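A missing variable tends to surface later as an unhelpful connection or permission error, so a quick sanity check before calling vault can save time.  The loop below is an illustrative addition, not part of the original setup:

```shell
# Warn about any of the three Vault session variables that is unset or empty
for v in VAULT_ADDR VAULT_NAMESPACE VAULT_TOKEN; do
  eval val=\$$v
  if [ -z "$val" ]; then
    echo "warning: $v is not set"
  fi
done
```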

Finally, let’s see if we can connect to our server:

$ vault secrets list
Path          Type            Accessor                 Description
----          ----            --------                 -----------
cubbyhole/    ns_cubbyhole    ns_cubbyhole_1a8ebba4    per-token private secret storage
identity/     ns_identity     ns_identity_487f5dce     identity store
sys/          ns_system       ns_system_ab5b08a0       system endpoints used for control, policy and debugging

Well I think the answer to that is a resounding yes!

Activating the PKI Certificate Engine

As we have already mentioned, HashiCorp Vault supports many differing secrets engine plugins.  These are enabled at a mount path.  In a production environment you would take a little time planning your path strategy, but as this is a demo environment, we will accept the default pki/ path.  From the terminal, enter the following command; if successful, you will receive a notification.

$ vault secrets enable pki

Success! Enabled the pki secrets engine at: pki/

Well, that was simple enough.  If we click the Secrets link again, we can now see the new pki/ path enabled.

Creating a Root CA

Next, we will configure the root CA and issue a root certificate for our domain.  Again, in a production environment you would have a root certificate signed by a trusted authority, or your root CA would live somewhere else, perhaps in an on-premises datacentre; for greater security, after the root certificate has been created and copied, the root CA machine would be powered off and its NICs disconnected.  Read the documentation on additional security considerations for greater insight.

By default, Vault limits certificates to a maximum of 720h, or 30 days. To extend this, we need to run the following command.

$ vault secrets tune -max-lease-ttl=87600h pki
Success! Tuned the secrets engine at: pki/
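As a quick sanity check on the number used above, 87600 hours works out to exactly ten years:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
echo "$((87600 / 24 / 365)) years"
# prints "10 years"
```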

Again, as this is a demo environment, we will create our root certificate on our vault instance.

$ vault write -field=certificate pki/root/generate/internal common_name="planetvm.net" issuer_name="root-2022" ttl=87600h > root-2022-ca.crt

The above command creates a certificate with a TTL (time to live) of 10 years via the pki/root/generate/internal endpoint.  As you may have noticed, we redirected the output of the command directly to the file root-2022-ca.crt, so there was no visible output on the terminal screen.  Next, let’s verify the issuer information.

$ vault list pki/issuers/

OK, let’s read the certificate, but not show the actual certificate data. Copy the GUID from the last command to use in the command below:

$ vault read pki/issuer/8fa21ed4-0958-877c-3b40-1dc0a4e8fd76 | tail -n 6
leaf_not_after_behavior           err
manual_chain                      <nil>
ocsp_servers                      []
revocation_signature_algorithm    SHA256WithRSA
revoked                           false
usage                             crl-signing,issuing-certificates,ocsp-signing,read-only
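You can also inspect the certificate file itself outside Vault.  The sketch below is self-contained, so it generates a throwaway self-signed certificate with openssl as a stand-in for root-2022-ca.crt; on your machine you would point the second command at the real file instead:

```shell
# Generate a throwaway 10-year self-signed certificate (a stand-in for root-2022-ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 3650 -subj "/CN=planetvm.net"

# Inspect the subject and validity window, as you would for the real root certificate:
#   openssl x509 -in root-2022-ca.crt -noout -subject -dates
openssl x509 -in /tmp/demo-ca.crt -noout -subject -dates
```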

Next, let’s create a role for the root CA; this will allow you to specify the issuer by name.

$  vault write pki/roles/2022-servers allow_any_name=true
Success! Data written to: pki/roles/2022-servers

The final part of configuring your root CA is to configure the CA and CRL distribution URLs.

$ vault write pki/config/urls issuing_certificates="$VAULT_ADDR/v1/pki/ca" crl_distribution_points="$VAULT_ADDR/v1/pki/crl"
Success! Data written to: pki/config/urls

Creating an Intermediate CA

The next stage is to create our intermediate CA; this is so that we do not have to expose our root CA directly.

The first command enables a second PKI secrets engine at a different path.

$ vault secrets enable -path=pki_int pki
Success! Enabled the pki secrets engine at: pki_int/

Next we will once again need to tune the maximum time-to-live of the pki_int secrets engine.  This time we will set it to 48300h, roughly five and a half years.

$ vault secrets tune -max-lease-ttl=48300h pki_int
Success! Tuned the secrets engine at: pki_int/

Next we will actually generate the intermediate certificate. First, we must create a certificate signing request and save it as pki_intermediate.csr.

$ vault write -format=json pki_int/intermediate/generate/internal common_name="planetvm.net Intermediate Authority" issuer_name="planetvm-dot-net-intermediate" | jq -r '.data.csr' > pki_intermediate.csr

Once this command has completed successfully, we need to have the root CA sign the certificate; we will save the signed certificate as planetvm-int.cert.pem.

$ vault write -format=json pki/root/sign-intermediate \
     issuer_ref="root-2022" \
     csr=@pki_intermediate.csr \
     format=pem_bundle ttl="43800h" \
     | jq -r '.data.certificate' > planetvm-int.cert.pem

The final stage in creating the certificate is to import the signed certificate back into Vault.

$ vault write pki_int/intermediate/set-signed certificate=@planetvm-int.cert.pem
Key                 Value
---                 -----
imported_issuers    [177a8ef9-0801-218d-0fa8-bfb174e14744 31790ca7-19d0-41b2-2026-cb82f13c6eb2]
imported_keys       <nil>
mapping             map[177a8ef9-0801-218d-0fa8-bfb174e14744:5e33ff4c-c869-5a1f-68dc-47b86698a059 31790ca7-19d0-41b2-2026-cb82f13c6eb2:]
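The root-signs-intermediate relationship we just built in Vault can be sketched end-to-end with openssl alone, which is a useful way to internalise what the three Vault commands above actually did.  All file names and common names below are throwaway illustrations, not the Vault-generated files:

```shell
# Create a throwaway root CA (the equivalent of pki/root/generate/internal)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/root.key \
  -out /tmp/root.crt -days 3650 -subj "/CN=demo root"

# Create a CSR for the intermediate (mirrors pki_int/intermediate/generate/internal)
openssl req -newkey rsa:2048 -nodes -keyout /tmp/int.key \
  -out /tmp/int.csr -subj "/CN=demo intermediate"

# Sign the CSR with the root (mirrors pki/root/sign-intermediate)
openssl x509 -req -in /tmp/int.csr -CA /tmp/root.crt -CAkey /tmp/root.key \
  -CAcreateserial -days 1825 -out /tmp/int.crt

# Verify the chain; prints "/tmp/int.crt: OK" on success
openssl verify -CAfile /tmp/root.crt /tmp/int.crt
```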

Creating a role

The final stage in creating your Certificate Authority (CA) environment is to configure a role; this is a logical name that maps to a policy used to generate credentials, in our case certificates. Use the following command to create our role.

$ vault write pki_int/roles/planetvm-dot-net issuer_ref="$(vault read -field=default pki_int/config/issuers)" allowed_domains="planetvm.net" allow_subdomains=true max_ttl="720h"
Success! Data written to: pki_int/roles/planetvm-dot-net

OK, final test: let’s ask Vault to issue a certificate from our new authority.

$ vault write pki_int/issue/planetvm-dot-net common_name="test.planetvm.net" ttl="24h"
Key                 Value
---                 -----
certificate         -----BEGIN CERTIFICATE-----
expiration          1670765673
issuing_ca          -----BEGIN CERTIFICATE-----
private_key         -----BEGIN RSA PRIVATE KEY-----
private_key_type    rsa
serial_number       1f:15:b9:1d:a9:f1:db:15:17:de:3a:0f:20:f9:c2:ad:fb:8a:84:f4
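In practice you will usually want the certificate and private key in separate files.  A sketch of splitting the response with jq, which the article already uses, follows; the sample payload is a trimmed stand-in for the real response, which you would obtain by adding -format=json to the issue command above:

```shell
# Stand-in for the JSON returned by:
#   vault write -format=json pki_int/issue/planetvm-dot-net common_name="test.planetvm.net" ttl="24h"
cat > /tmp/issue.json <<'EOF'
{"data":{"certificate":"-----BEGIN CERTIFICATE-----","private_key":"-----BEGIN RSA PRIVATE KEY-----"}}
EOF

# Split the response into a certificate file and a key file
jq -r '.data.certificate' /tmp/issue.json > /tmp/test.planetvm.net.crt
jq -r '.data.private_key' /tmp/issue.json > /tmp/test.planetvm.net.key
```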

There you go, a successful test with a working certificate.  Let us now verify that the certificate is in the correct location on the Vault cluster in the UI.  Log on to your Vault instance and click on the pki_int/ link as shown below.

It is starting to look a bit more utilised

If everything is working as expected, your certificate should show up.

Yep there is something there – that’s good

To verify everything is as expected, click on the link.

Yes, there is our Certificate in all its glory.


Once again we have covered a lot of ground, but we now have a working Certificate Authority (CA) installed on our Vault instance and have verified that it is issuing certificates.  In our next article we will set up the auto-renewal capability, after which your operational colleagues will be toasting you for the next decade for removing the pain of certificate renewal from their day-to-day roles.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts; mail to sales@amazic.com.

