
What a typical GitOps pipeline on AWS would look like

The arrival of GitOps has significantly changed how the world handles infrastructure, because the approach can be integrated with virtually any infrastructure configuration requirement. Furthermore, in a world where Kubernetes has been in the spotlight for a while now, Git and the GitOps methodology make it simple to operate and manage Kubernetes clusters. This has made GitOps popular and widely adopted, and all of the prominent cloud vendors have embraced it by integrating it into their services.

Today, we pick AWS. This article will focus on what a typical GitOps pipeline on AWS would look like. So, let’s dig in!

The four principles of GitOps – a rundown

GitOps is an operating model that helps you configure, deploy, update, and manage Kubernetes clusters and all of their components – networking, security, and more – as code. The four governing principles of GitOps are:

  • The entire system, including cluster specifications, components, and workloads, is declarative.
  • The canonical desired system state is versioned with Git.
  • Approved changes to the desired state are automatically applied to the system.
  • Software agents ensure correctness and alert on divergence.

At the heart of a GitOps system is a repository that contains declarative descriptions of all the elements currently required in a production environment. The desired state of the entire system is versioned in Git, thus enabling complete configuration management. GitOps allows you to approve changes or to apply them to the system automatically once they pass automated tests.
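For example, the repository might contain nothing more exotic than plain Kubernetes manifests. Below is a minimal sketch of one such declarative description; the application name, namespace, and image are purely illustrative.

```yaml
# deploy/podinfo-deployment.yaml -- an illustrative workload description kept in Git
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: demo
spec:
  replicas: 3                      # desired state: always three replicas
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:6.5.0   # pinned version, so every change is auditable
          ports:
            - containerPort: 9898
```

Because the entire description lives in Git, rolling back is nothing more than reverting the commit that changed it.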

When a developer approves a change, it is automatically applied to the system and becomes the new desired state, and a GitOps agent kicks in to implement the change. A GitOps agent like Flux is vital to GitOps, as it constantly checks for new updates to deploy and for any divergence from the canonical version in Git. This ensures the production system does not stray from the desired state.

Why is GitOps important?

  • GitOps makes rollbacks trivial.
  • It provides exceptional auditing and attribution as long as the Git repos and processes are properly secured.
  • It allows for a suite of well-integrated best-of-breed software and better team collaboration around these tools.
  • It equips the system with the ability to self-heal.
  • GitOps enables automation, so changes can be deployed rapidly without any human intervention.

GitOps delivery pipeline on AWS

Let’s look at a general GitOps pipeline for AWS EKS and/or AWS ECS.

  • Source code – This is where all of the declarative infrastructure code is stored and managed with Git.
  • Building/testing – The container will be built, validated, and tested.
  • Publish – The container will be published to a container registry before deployment.
  • Deployment – Infrastructure changes will be implemented and containers will be deployed to the production EKS/ECS cluster.

Your code and configuration are stored in a repository. You push that through a pipeline that runs your tests, then builds and signs your artifact. Next, you push the artifact to Elastic Container Registry (ECR). Once the change is ready for release, GitOps comes into play. With GitOps and a GitOps operator like Flux, you can validate that everything is running the way it is supposed to. So, GitOps essentially comes in at the deployment part of the CI/CD pipeline and provides different ways to enforce release policies.
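As an illustration of the build and publish stages described above, here is a minimal sketch of a CodeBuild buildspec that tests the code, builds the container, and pushes it to ECR. The account ID, region, repository name, and the `make test` command are placeholders, not part of any official example.

```yaml
# buildspec.yml -- sketch of the build, test, and publish stage on AWS CodeBuild
version: 0.2

env:
  variables:
    ECR_REGISTRY: "123456789012.dkr.ecr.eu-west-1.amazonaws.com"   # placeholder account/region
    IMAGE_NAME: "my-app"                                           # placeholder repository name

phases:
  pre_build:
    commands:
      # Authenticate the Docker client against ECR before pushing
      - aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "$ECR_REGISTRY"
  build:
    commands:
      # Run the test suite (placeholder command), then build and tag the image with the commit hash
      - make test
      - docker build -t "$ECR_REGISTRY/$IMAGE_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION" .
  post_build:
    commands:
      # Publish the artifact so the GitOps deployment stage can reference it by tag
      - docker push "$ECR_REGISTRY/$IMAGE_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION"
```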

The AWS GitOps delivery workflow

There are quite a few very useful tutorials around implementing GitOps on AWS using open source tooling like Crossplane and Flux or Argo. Here, however, we will just look at an overview of the AWS tooling used to set up a basic GitOps pipeline.

To begin with, you will need a place to store your configuration, such as a Git repo. On AWS, the managed Git service – CodeCommit – is used to store that configuration.

Ops creates CloudFormation templates ahead of time. During deployment, a CloudFormation template gets committed to the AWS CodeCommit repo.
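Such a pre-authored template can be very small. The sketch below, with illustrative resource and repository names, declares nothing more than an ECR repository, but the same mechanism works for any CloudFormation-managed resource.

```yaml
# template.yml -- an illustrative CloudFormation template committed to CodeCommit
AWSTemplateFormatVersion: "2010-09-09"
Description: Container registry managed declaratively through the GitOps pipeline

Resources:
  AppRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app            # placeholder name
      ImageScanningConfiguration:
        ScanOnPush: true                # scan every image pushed to the registry
```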

When a new commit is observed, the pipeline is triggered.
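The wiring for that trigger is typically an AWS CodePipeline whose source stage watches the CodeCommit repository. A rough sketch of such a pipeline, with placeholder role, bucket, repository, and project names, might look like this fragment:

```yaml
# pipeline.yml -- rough sketch of a CodePipeline watching CodeCommit and handing off to CodeBuild
# (fragment: assumes the usual template header and an existing IAM role, S3 bucket, and CodeBuild project)
Resources:
  GitOpsPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: arn:aws:iam::123456789012:role/pipeline-role    # placeholder IAM role
      ArtifactStore:
        Type: S3
        Location: my-pipeline-artifacts                        # placeholder S3 bucket
      Stages:
        - Name: Source
          Actions:
            - Name: CheckoutConfig
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: CodeCommit
                Version: "1"
              Configuration:
                RepositoryName: app-config                     # placeholder CodeCommit repo
                BranchName: main
              OutputArtifacts:
                - Name: SourceOutput
        - Name: Deploy
          Actions:
            - Name: ApplyTemplate
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: "1"
              Configuration:
                ProjectName: gitops-deploy                     # placeholder CodeBuild project
              InputArtifacts:
                - Name: SourceOutput
```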

Then, the CloudFormation template is pulled from the repository for AWS CodeBuild to execute.

AWS CodeBuild runs the template and applies it to the relevant resources.

As a final step, the pipeline verifies if the changes are applied correctly.
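Put together, the apply-and-verify step that CodeBuild runs could look roughly like the buildspec below; the stack name, template file, and required capabilities are assumptions made for the sake of the sketch.

```yaml
# deployspec.yml -- sketch of the CodeBuild step that applies the template and verifies the result
version: 0.2

phases:
  build:
    commands:
      # 'deploy' creates or updates the stack from the committed template and waits for completion
      - aws cloudformation deploy --template-file template.yml --stack-name gitops-demo-stack --capabilities CAPABILITY_NAMED_IAM
  post_build:
    commands:
      # Verify the stack reached a healthy state before the pipeline reports success
      - aws cloudformation describe-stacks --stack-name gitops-demo-stack --query "Stacks[0].StackStatus" --output text
```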

GitOps and CI/CD

GitOps is a component of CI/CD that creates a clear boundary between continuous integration and continuous deployment, as each stage has a different objective.

Continuous integration is all about producing a reliable, well-performing artifact, while continuous deployment is all about safely delivering and exposing that artifact to your users. So, you operate these two stages in different environments. In an integration environment, you are pulling packages from the outside world, running tests, and building and signing artifacts. Then, you have multiple runtimes that run a release and validate that it can be promoted to the next stage. Because you are essentially creating a boundary between CI and CD, you need a workflow mechanism that lets you declare the sequence of actions necessary to produce a release. This is called a pipeline service.

On the continuous deployment side of things, GitOps tooling plays a significant role. Two of these GitOps tools are Flux and Flagger.

Flux

Flux is part of the Cloud Native Computing Foundation (CNCF) ecosystem and is intended to monitor Git repos and do GitOps around Kubernetes. It is a daemon that continually applies the Kubernetes configuration stored in Git to a cluster, extending the declaration of the desired state into Git. Flux can be used to bootstrap clusters and drive cluster operations. Flux is, essentially, a Kubernetes-native continuous delivery tool.
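To make this concrete, here is a minimal sketch of how Flux v2 is pointed at a repository and told to reconcile a path from it. The repository URL, branch, path, and names are placeholders; on AWS the source could just as well be the CodeCommit repo mentioned earlier.

```yaml
# flux-sync.yaml -- minimal Flux v2 sketch: watch a Git repo and continually apply a path from it
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m                 # how often Flux checks the repository for new commits
  url: https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/app-config   # placeholder repo URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 10m                # reconcile periodically even without new commits, to correct drift
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./deploy               # directory inside the repo that holds the manifests
  prune: true                  # remove cluster objects that were deleted from Git
```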

Flagger

Flagger is a Kubernetes operator that automates canary deployments and routes traffic in a systematic and reliable way. In combination with a service mesh, Flagger can orchestrate canary releases with great precision. It is essentially an open-source tool that helps you make reliable deployments and reduce outages and failures during releases.
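For illustration, a Flagger canary for the hypothetical podinfo deployment from earlier could be sketched like this, assuming a service mesh or ingress provider is already configured for traffic shifting; the weights and thresholds are arbitrary example values.

```yaml
# canary.yaml -- sketch of a Flagger canary definition for a gradual, metric-gated rollout
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: demo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo              # the workload whose new versions are released as canaries
  service:
    port: 9898
  analysis:
    interval: 1m               # how often the canary is evaluated
    threshold: 5               # failed checks allowed before automatic rollback
    maxWeight: 50              # stop shifting once the canary receives 50% of traffic
    stepWeight: 10             # increase canary traffic in 10% increments
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99              # roll back if the success rate drops below 99%
        interval: 1m
```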

Conclusion

GitOps was conceptualized and created with Kubernetes in mind. However, it is expanding beyond Kubernetes into other domains where it has proven to be quite useful and powerful. AWS is a big supporter of GitOps, and its tooling is capable of creating either a simple or a complex GitOps pipeline. You can use just AWS tools, or augment them with capable open source tooling like Flux and Flagger for more advanced capabilities. As GitOps matures and more tools are built to follow the GitOps principles, you will see GitOps shift further left over time.

If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, or email sales@amazic.com.
