
Announcing General Availability of HashiCorp Nomad 1.1

Nomad is one of HashiCorp's products that I feel more people should take a look at. It offers a viable alternative, or even a supplement, to Kubernetes: it orchestrates the deployment and management of containers, but it can also be used to manage non-containerized applications. Before we move on to what's new in Nomad 1.1, let's refresh our minds as to what Nomad is, exactly. We all know about Terraform and Packer; we have also heard about Vault, and are starting to understand Consul. But Nomad is one of HashiCorp's children that does not get the time in the limelight it truly deserves.


What does Nomad do?

According to Armon Dadgar, the co-founder and CTO of HashiCorp, it is an open-source utility that greatly reduces the complexity of automating, scheduling, and rescheduling application deployments. That sounds great as a piece of marketing fluff, but what exactly does it mean? Nomad sits between the OS and the application to provide a layer of abstraction between developer processes and operational processes. The speed a developer needs to work at is often in direct conflict with the requirements operations teams work under. Developers have little to no interest in things like patching, capacity, and keeping the lights on; they care about getting their code to production as quickly as possible.

"The primary goal of Nomad is to sit in between here and mediate, really provide a layer where we have a southbound API focused on the operator and a northbound API focused on the developer," says Armon Dadgar (here).

How does it do this?

At its base level, a Nomad deployment is a cluster of servers (a minimum of three and no more than seven nodes) plus a number of client agents. Nomad's architecture is similar to Consul's, but Nomad divides its infrastructure into regions, each served by a single cluster of servers, and a region can span multiple data centers or availability zones. Regions are federated via WAN Gossip, a protocol based on Serf, which in turn uses SWIM to maintain membership and consistency. The result is a vastly scalable architecture with a common interface for your developers to consume and your operations teams to manage. Pictured below is a high-level architecture of a multi-region Nomad cluster.

Nomad Architecture
multi-Region Nomad Deployment
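
To make the region and datacenter concepts concrete, here is a minimal sketch of a server agent configuration for one region of such a deployment; the region and datacenter names are illustrative, not taken from HashiCorp's documentation:

```hcl
# Illustrative server agent configuration for one region of a
# multi-region Nomad deployment.
region     = "eu"
datacenter = "eu-west-1"

server {
  enabled          = true
  bootstrap_expect = 3  # the recommended minimum server count per region
}
```

A second region would run its own set of servers with a different `region` value, and the two can then be federated, for example with `nomad server join <remote-server-address>`.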

Back in January 2021, we spoke about Nomad reaching the venerable milestone of a 1.0 release. On the 18th of May, HashiCorp released version 1.1, which further expands the capabilities of the product with a number of significant new features in both the free and Enterprise versions. Let's first look at what's new in the Enterprise version.

  • Consul namespace support (Enterprise): You can now run Nomad-defined services in HashiCorp Consul namespaces by setting a dedicated namespace value in a consul stanza in Nomad Enterprise.
Example consul namespace definition stanza
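A minimal sketch of a job using the new stanza; the namespace name and job details are hypothetical:

```hcl
job "web" {
  datacenters = ["dc1"]

  group "api" {
    # Register this group's services in a Consul Enterprise namespace
    # (requires Nomad Enterprise); "team-platform" is a made-up name.
    consul {
      namespace = "team-platform"
    }

    network {
      port "http" {}
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      service {
        name = "web"
        port = "http"
      }
    }
  }
}
```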
  • License autoloading (Enterprise): Due to changes in the way licensing is handled for Nomad cluster members, each server must now validate its license during startup; with Nomad Enterprise this validation can happen automatically when the server agent starts. For a deeper understanding of this necessary process, read the guide to learn how to enable your enterprise license.
Nomad Auto-Load License
Example auto load license stanza
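As a sketch, autoloading is a matter of pointing the server stanza at the license file; the path below is an assumption:

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  # Load the Enterprise license from disk at startup;
  # the path is illustrative.
  license_path = "/etc/nomad.d/license.hclic"
}
```

The license can alternatively be supplied through the NOMAD_LICENSE or NOMAD_LICENSE_PATH environment variables.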

Now let’s move on to those features that are common to both platforms; the next three features relate to performance and utilization enhancements.

  • Memory over-subscription: Improve cluster efficiency by allowing applications, whether containerized or non-containerized, to use memory in excess of their scheduled amount. Read our blog post on Managing Resources for Workloads with Nomad 1.1 to learn more.
Memory Over-Subscription
Memory Over-Subscription in Nomad 1.1
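In the job specification this surfaces as a new memory_max value alongside the scheduled memory. A minimal sketch, with illustrative task and values:

```hcl
task "cache" {
  driver = "docker"

  config {
    image = "redis:6-alpine"
  }

  resources {
    memory     = 256  # MB reserved for scheduling decisions
    memory_max = 512  # MB the task may burst to if the client has headroom
  }
}
```

Note that over-subscription must first be enabled cluster-wide through the scheduler configuration API before memory_max takes effect.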
  • Reserved CPU cores: You can now improve the performance of Nomad-deployed applications by giving tasks exclusive use of client CPU cores, similar to pinning a CPU to a VM in vSphere. Read the blog to learn more.
Nomad CPU Reservation
Example CPU Resource Reservation Stanza
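This is expressed with a new cores value in the resources stanza; a brief sketch:

```hcl
resources {
  # Reserve two whole client CPU cores for exclusive use by this task;
  # cores and the older MHz-based cpu value are mutually exclusive.
  cores = 2
}
```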
  • Autoscaling improvements: Scale your applications more precisely with new strategies: a pass-through strategy that defers the scaling logic to your APM of choice, a fixed-value strategy that maintains a fixed count, and a threshold strategy that toggles different scaling actions based on where a tracked metric sits within a defined range. To gain a deeper understanding of this new feature, read HashiCorp's post on New Auto Scaling Strategy with HashiCorp Nomad.
Auto-Scaling Improvements
New Auto-Scaling Application options
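As a sketch, a threshold strategy in a Nomad Autoscaler scaling policy might look like the following; the metric query, bounds, and limits are hypothetical:

```hcl
scaling {
  min     = 1
  max     = 10
  enabled = true

  policy {
    check "requests_in_flight" {
      source = "prometheus"
      query  = "avg(app_requests_in_flight)"  # hypothetical metric

      # Scale out by one allocation while the metric sits in this band.
      strategy "threshold" {
        lower_bound = 50
        upper_bound = 100
        delta       = 1
      }
    }
  }
}
```

Multiple check blocks with different bands can be combined to step capacity up or down as the metric moves through the defined ranges.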
  • CSI enhancements: Nomad is known as an alternative or underlay to Kubernetes when deploying Container workloads. This release has expanded the set of CSI (Container Storage Interface) plugins to include Ceph. For greater detail read HashiCorp’s Storage Plugin Documentation.
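For flavor, a volume backed by a Ceph CSI plugin might be described with a specification along these lines; the IDs and plugin name are hypothetical and depend entirely on your Ceph CSI deployment:

```hcl
# Illustrative volume specification for `nomad volume register`.
id          = "ceph-vol0"
name        = "ceph-vol0"
type        = "csi"
plugin_id   = "ceph-csi"
external_id = "<ceph-rbd-volume-id>"  # placeholder, supplied by your Ceph setup

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
```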
  • UI improvements: This release sees several important enhancements to the UI, including a fuzzy search capability. Namespaces are now a filterable property, so you can view jobs across all namespaces or within an individual namespace.

Enhanced resource monitoring integrates with the new memory over-subscription and CPU reservation capabilities, and allocation metrics now report resource consumption for individual tasks.

Nomad Monitoring Improvements
Example Image showing new UI metrics

And finally, authentication is improved with a new -authenticate flag on the nomad ui command, which opens the UI pre-authenticated with a one-time token generated from the NOMAD_TOKEN environment variable.
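
In practice that looks something like this; the token value is obviously a placeholder:

```shell
# Export an ACL token, then open the UI pre-authenticated with a
# one-time token derived from it.
export NOMAD_TOKEN="<your-acl-token>"
nomad ui -authenticate
```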

  • Readiness checks: With this release, you can now differentiate between application liveness (Nomad application health checks) and traffic health and routing readiness (Consul cluster checks) with a new on_update option for task health checks. A deployment will fail if on_update is set to require_healthy (the default) and a check is failing; setting it to ignore or ignore_warnings allows an allocation to succeed where it would previously have failed to deploy.
Nomad Readiness Checks
Example Stanza showing the new readiness checks syntax
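A sketch of a service check using the new option; the service name and health path are illustrative:

```hcl
service {
  name = "web"
  port = "http"

  check {
    type     = "http"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"

    # require_healthy (the default) fails the deployment on a failing
    # check; ignore_warnings or ignore relax that behaviour.
    on_update = "ignore_warnings"
  }
}
```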

The final improvement we will look at is not yet production-ready, so it is not advised to use it in a production environment, but it is stable enough to deploy in your development or staging cluster to test the new functionality it will deliver.

  • Remote task drivers (technical preview): This feature enables Nomad to deploy and manage workloads on a much wider variety of platforms, such as AWS Lambda or Amazon ECS.


Nomad 1.1 is yet another feature-rich minor release for a HashiCorp product. This one is focused more on new features than on stability, which is to be expected, as the coming-of-age 1.0 release was on the whole stability-focused. HashiCorp needs to spend some marketing time on Nomad: most people think of it as a Kubernetes alternative, but as this update shows, it is much more than that. The ability to manage non-containerized and containerized applications from a single interface is a major functional win. Couple this with greater Consul integration and the potential for a greater footprint with the new remote task drivers tech preview, and it deserves a bit more coverage. To download it and kick the tires, point your favorite browser here.

