Taking a software application from idea to production involves many steps. You need to wire every process together, from the initial design to actually monitoring your application in production. Developers need to understand so many aspects nowadays that they heavily rely on, or even completely depend on, other teams to get their applications running smoothly. There is a need to simplify the whole process. Especially in the era of DevOps, things work differently than in the siloed past. AWS Copilot helps to ease the path from source code to deployment. In this article we will explore what AWS Copilot is and how developers benefit from it.
DevSecOps is quickly becoming mainstream in every software application factory. Shift-left principles push nearly everything to the developers. They become responsible for every stage of the software development lifecycle (SDLC). This includes security-related aspects and the deployment to production. Even on-call schedules become something to take into account.
At the same time, developers need to put themselves in the shoes of business representatives to translate business features into working software. Furthermore, they need to react to feedback from the end users who actually use their applications. All of this is complex and requires a constant balance between time, quality, cost and stakeholder management.
Technology and CI/CD automation tools can help ease the effort of turning source code into running systems. AWS Copilot is one of the tools available to bridge that gap.
What exactly is it?
Developers can run Copilot from their console. It is a CLI to build, release and operate container-based microservices and single-service applications. More specifically, it uses Amazon ECS and AWS Fargate as the underlying cloud services. Copilot helps to set up the desired infrastructure to run these applications by provisioning CI/CD pipelines and target environments, including raw infrastructure resources such as Virtual Private Clouds and load balancers. All of this happens within the context of an application, which contains one or more services.
AWS uses best practices to set these services up, and they can also be customized according to the needs of the developers. Besides this, Copilot offers an option to reverse engineer whatever has been created in the cloud into a CloudFormation template. Starting from that template, you can fully customize your cloud infrastructure. In fact, Copilot acts as a kick-starter for the complete SDLC.
Business representatives and developers benefit in multiple ways. Speed is number one. Copilot speeds up the process since it simplifies a lot of steps: less thinking about how to deploy and configure cloud infrastructure resources, and no need to worry about provisioning and maintaining environments.
Developers no longer need to create complex IaC templates using Terraform, Pulumi or CloudFormation. Instead, they can deploy their applications right from their laptops. Copilot translates simple commands into ready-to-use building blocks that enable their applications to run on production-grade cloud environments. Since developers are not network (and security) experts by default, they do not need to know the ins and outs of Access Control Lists, Load Balancers, exposed ports, Security Groups, Web Application Firewalls, TLS, etc.
Perhaps the biggest advantage is the way Copilot sets things up. Since Copilot applies best practices derived from the industry and from AWS itself, developers do not need to worry about (critical) security issues that would otherwise need to be fixed sooner rather than later.
You only pay for the cloud infrastructure resources that you use. Since the tool itself is released under an open source license, there are no up-front costs or license fees.
Before we dive a little deeper into how Copilot works in an organization, let's explore the common operating models that exist today to release and deploy software applications.
- Decentralized deployments. Developers build and deploy their own applications from A to Z. If they use Copilot, they all need to understand and operate it in the same manner. Besides this, they also need to communicate with each other when pushing new versions; otherwise conflicting changes might break their systems.
- Centralized deployments. Developers do not deploy their applications themselves; they rely on a centralized team to do it for them. If that centralized team uses Copilot, developers face big delays since they depend on this team. In addition, they never become truly responsible for their (running) application since they don't have any control over the actual production systems. It's Dev without Ops.
- Meet in the middle: a centralized pipeline with (security) guardrails. Every developer can push their application deployments through a centralized pipeline. This pipeline checks the application change on various aspects before it is permitted to go through.
The last operating model provides the best of both worlds. Developers can create a CI/CD pipeline using Copilot and then largely forget about it. From there on, they can use ordinary Git commands to push their changes to the Git repository. On push, the pipeline triggers the deployment to a Test or Production environment; it's up to the developer. Since the pipeline contains (integration) tests, it helps to make sure no broken applications are deployed.
Suppose you use Copilot with the last operating model described above. You need to understand the following main concepts before you get your hands dirty.
- Applications. These act as a grouping mechanism for everything you want to manage and release. Separate teams can create one application for their services / components or multiple ones.
- Environments. They follow the typical stages of an application life-cycle. Think of Test, QA and Production.
- Services. A microservice architecture is based on one or more isolated, long-running services that run inside containers. Applications consist of multiple services such as website front-ends, back-end systems and internal APIs.
With these main concepts in mind, you can actually start installing Copilot and explore the most important features.
Copilot works on all major systems: Linux (x64, ARM), macOS and even Windows. Please see the installation webpage for instructions on how to install it. Besides the Copilot binary, you need the AWS CLI, Docker Desktop and your own AWS credentials.
Once these components are installed and configured, you need to execute the following steps:
- Make sure to use the default AWS profile by running the aws configure command.
- Use a sample application. This can be a simple static website that includes a Dockerfile.
- Run copilot init to set it up and prepare your application to run on AWS ECS.
- Answer four questions such as the name of the application, a description and point to your Dockerfile. Copilot now starts to provision the infrastructure in AWS.
- Deployment of your service. This takes a couple of minutes and at the end, Copilot presents a link to access your application.
- Clean up when you’re done (use copilot app delete). This deletes every resource you’ve created earlier.
After you have set up your (demo) application, you can explore the various features in more detail.
As explained earlier, the main concepts of Copilot are Applications, Environments and Services. Besides these three, there are Jobs and Pipelines.
Applications are at the heart of everything. You can set up, configure and get an overview of the components of your application. The following sub-commands are useful. copilot init defines the underlying services the application uses. It also creates a so-called "application account" in which it stores the various parameters that are used. The cloud resources it creates are tagged with the "copilot-app" tag to distinguish them from other resources. Use copilot app ls to browse the list of applications in your account, and copilot app show to get a summary of your applications, environments and services.
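As a quick illustration, assuming a hypothetical application called my-app:

```shell
copilot app ls                  # list all applications in the current account and region
copilot app show --name my-app  # summarize the application's environments and services
```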
Services tie the application and infrastructure together. During the copilot init step you can select the type of service you require: internet-facing services for websites, or back-end services for business features that should not be exposed to the entire world. It's also possible to create so-called "worker services" to implement service-to-service communication, such as publish and subscribe architectures. Every service has a so-called "manifest" file that describes the service (as code) in YAML format. And last but not least: copilot svc show presents the AWS resources that are part of the service. It's all a bit like using kubectl to control your Kubernetes clusters.
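A manifest for an internet-facing service might look roughly like the following sketch; the service name and resource values are illustrative:

```yaml
# copilot/frontend/manifest.yml -- describes one service as code
name: frontend
type: Load Balanced Web Service  # internet facing; back-end and worker services use other types

http:
  path: '/'          # route traffic on this path to the service

image:
  build: Dockerfile  # build the container image from this Dockerfile
  port: 80           # port the container listens on

cpu: 256             # CPU units per task
memory: 512          # memory in MiB per task
count: 1             # number of running tasks
```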
Running copilot env init creates a new environment for you. You need to specify the name of the desired environment as well as the region to host it. AWS uses named profiles, which are associated with your account and region. Deploying a service to an environment is done with copilot deploy.
Every environment uses it’s own VPC and multiple Availability Zones. This helps to make sure you get a High Availability setup. Copilot sets up an Application Load Balancer and optionally a Route 53 component to access your application using a domain name. Copilot env show lets you view your environment.
Your CI/CD pipeline is extremely valuable to build and release software in a controlled manner. Even for relatively simple applications, you benefit from having an automated process. copilot pipeline init lets you specify the release order (from Test, via Acceptance, to Production) and the source code repository. The result is a manifest file (in YAML format) that can be changed later on. All pipeline instructions are stored in a buildspec.yaml file.
Check those files into your source code repository, and after you run copilot pipeline update you will see your pipeline running in AWS itself. The steps mentioned above free you from setting up your own AWS pipeline using IaC templates.
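The generated pipeline manifest itself is a small YAML file. A sketch, with a hypothetical repository and stage names:

```yaml
# copilot/pipeline.yml -- source repository and release order for the pipeline
name: demo-pipeline
version: 1

source:
  provider: GitHub
  properties:
    branch: main
    repository: https://github.com/example-org/demo-app

stages:         # deployments run through these environments in order
  - name: test
  - name: production
```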
Amazon ECS uses events to trigger so-called tasks. Copilot uses Scheduled Jobs to run these tasks on a fixed schedule or periodically. Scheduled jobs are created as part of the copilot init command. Once done, your job definition resides in a manifest file. It contains information such as the number of containers to use, the container size, the timeout of the specified task, etc. It also contains the Event Rule that determines which events are fired when a job fails to execute.
In summary, Jobs help to make your application work correctly and react to unexpected behavior.
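A scheduled job is also described in a manifest file. A rough sketch with illustrative names and values:

```yaml
# copilot/report-job/manifest.yml -- a job that runs on a fixed schedule
name: report-job
type: Scheduled Job

on:
  schedule: "@daily"  # or a cron expression such as cron(0 2 * * ? *)

image:
  build: Dockerfile

cpu: 256
memory: 512
retries: 3            # retry a failed run up to three times
timeout: 1h           # stop the task if it runs longer than this
```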
Developers are under extreme pressure to release features fast. They don't always have the knowledge and time to learn about cloud deployment patterns for their applications. On top of that, they won't always apply the available security best practices. If you want to run containers in AWS and free yourself from having to learn the ins and outs, you can give Copilot a try. It will help you set up your target environments, services and pipelines in an easy fashion. Practitioners can use a simple CLI to run everything they need and keep an eye on their running application (both the infrastructure-related aspects and the service itself). Reverse engineering of whatever has been created automatically helps to craft alternative scenarios.
If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, mail to email@example.com.