Benefits when applying the 12 Factor App principles (part 3)
The story of the benefits of the 12 Factor App principles continues in part 3 of our series. In this last article, we zoom in on disposability, runtime environments, logs, and admin processes.
Factor 9: Disposability
Cloud-native applications should only run when they are needed. To save costs and resources, and from an environmental perspective, you should get rid of them as soon as possible. All of these justifications assume a controlled situation, but there are also cases in which you do not control exactly what happens: for example, a sudden decline in traffic or an unexpected failure of a (network or database) connection. What do you do when this happens?
The answer is to make your application more robust. To make this a reality, you need to strive for the following aspects:
- Keep startup times as short as possible. This ensures you can quickly switch the infrastructure layer without much impact on the end user. This principle also helps when you scale out your application: fast and transparent scaling is not possible if your application takes 10 minutes to come up. That is too slow to respond to external changes that require immediate action.
- Shut down gracefully. There are two aspects here: one for a web application shutting down cleanly and one for a worker process that handles long-running tasks.
Two approaches and benefits
- First: A web application that needs to be gracefully shut down should stop accepting new requests but finish handling the current requests before it shuts down. Besides actually stopping the service, it should respond to connecting clients with a clear message indicating that the service is no longer available.
- Second: Worker processes can return the current job to the work queue to be handled later. This can happen automatically when a worker node stops operating. Work can be wrapped in a single transaction to prevent requests from being lost or partly processed (which would cause data corruption). Another option is to create operations that can be executed multiple times without affecting the actual result (idempotent operations). This is also a form of robustness.
Besides these aspects, your application should also survive a sudden failure of the underlying hardware. While all cloud providers offer many managed services (such as Kubernetes clusters, fail-over solutions, redundant hardware, etc.) to handle this, it's also wise to architect your application so it does not depend on those solutions. To survive a sudden death of the infrastructure layer, keep a record of your current transactions and requests, and architect your solution so the application picks them up again once it becomes available.
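The idempotent operations mentioned above are one concrete way to make requeued or replayed work harmless. The sketch below assumes a hypothetical payment handler keyed by a unique message id; in a real system the set of processed ids would live in durable storage, not in memory.

```python
# Hypothetical idempotent handler: processing the same message twice
# leaves the result unchanged, so a requeued or replayed job causes no harm.
processed_ids = set()  # in production: a durable store (database table, Redis set, ...)

def process_payment(message):
    """Apply a payment exactly once, keyed by a unique message id."""
    if message["id"] in processed_ids:
        return "already-processed"
    # ... perform the actual side effect (charge, booking, ...) here ...
    processed_ids.add(message["id"])
    return "processed"
```

Because the second delivery of the same message is a no-op, a worker that died mid-batch can simply reprocess the whole batch after restart.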
Factor 10: All environments equal
To build, ship, and deploy applications fast and with as few errors as possible is the goal of CI/CD and DevOps. Ideally, every environment from development to production is the same. This is not always the case, since developers sometimes need to be very patient with new releases: not only from a time perspective (it can take weeks or even months to release a new version), but also from a tools or collaboration perspective.
Organizations that do not practice the 12 Factor App principles have a hard time bridging the gap between developers who write code and Ops engineers who actually run that code in various environments. Sometimes there is a high concrete wall between Development, Test and Acceptance, and Production environments. Tiny differences in those environments can cause an application to fail. Don't underestimate the human factor: even if you have proper deployment scripts, things can go wrong since personnel in both roles can make mistakes. Communication is king here, but it would be much better to have the same people run the code (application) as the people who wrote it.
Invest in a great developer workstation
Besides the human aspects, (slightly) different tools for all the environments also pose a problem. It’s very tempting to run a lightweight version of an application server on a developer’s laptop whereas the production environment uses a fully-fledged application server that has many more bells and whistles. When this is the case, small differences creep in, and in the end, this leads to numerous problems.
So it's well advised to invest in a well-equipped developer workstation. Don't buy laptops with 8 GB of RAM; give developers the power they need to simulate production-grade systems.
IaC on localhost
To keep all environments the same and to ease the setup for developers running their software locally, you can apply the industry-standard IaC principles on your local machines as well. Virtualization and container tools such as Docker and Vagrant are some of the solutions here. Provision them with tools like Ansible, Puppet, or Chef and you're ready to go in your local environment.
All of the aspects mentioned above provide the following benefits:
- Less rework (in production) since errors are detected early on in the SDLC.
- Less overhead in keeping the environments the same everywhere.
- Fewer discussions and less communication overhead between developers and Ops engineers.
- Visibility: the actual status of applications running in production can be traced back to the exact source code from which they were built.
Together, these benefits make up a more positive business case for your next business application.
Factor 11: Logs
Every application should log specific actions so operators or developers can trace its behavior. Typically, logs are collected from every piece of the application or the infrastructure on which it runs. They consist of events that also include a timestamp to be presented in chronological order.
In traditional applications, logs are written to so-called “log files” on a filesystem. These log files are “plain text” files that do not have any formatting. Sometimes the log files are in structured formats such as JSON so tools can parse them to quickly find the right information.
Storing log files on a filesystem has the following disadvantages over more modern ways such as streaming logs:
- High traffic peaks and unexpected errors can create a massive amount of log lines, thus creating larger log files. Disks can become full.
- Developers and operators face a delay between the moment the application emits a log line and the moment it is actually written to, and available from, the log file.
- Logs need to be archived for specific purposes and log file rotation (split log files for a specific time period such as a day or a week) needs to be implemented. This adds more work to the backlog of the development team.
- Since log files can contain sensitive information, they need to be protected, so the security of the underlying infrastructure also plays an important role.
The 12 Factor App principle for handling log files is different. Logs should be treated as "event streams" that flow continuously to another system dedicated to log handling. A modern application never handles these event streams itself, to "segregate the duties" and to "focus on the business logic". This also connects to the principles of "loose coupling" and "high cohesion". There are specialized solutions that handle log streams coming from the stdout and stderr streams of the application.
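A minimal way to emit logs as a structured event stream on stdout, using only the Python standard library, could look like the sketch below. The logger name and JSON fields are illustrative assumptions; a log router such as Fluentd or Logstash would pick up these lines from stdout without the application knowing about it.

```python
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render every log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Write to stdout, never to a file: routing and storage are the platform's job.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")
```

Each line is an independent, timestamped event in chronological order, which is exactly what downstream log systems expect to ingest.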
Having a separate logging router in place brings the following benefits:
- Segregation of duties and focus on the business logic of the application.
- Re-usability of the log router and storage mechanisms across multiple DevOps teams.
- Once logs are stored in the cloud, you never face a "full disk".
- Logs can be protected much better using IAM roles and permissions tied to individual components.
- It’s much easier to find patterns such as recurring problems if all logs over time are stored in a single system instead of a bunch of log files.
- Auditing and evidence collecting in case of trouble become simpler tasks that do not depend on someone who has access to the infrastructure storage.
All of these benefits help organizations to spend more time on development and to strip away unneeded processes, thus giving them more flexibility.
Factor 12: Admin processes
Every now and then, developers need to run an admin process against the application. Often, these actions are too infrequent for the benefits of an automated pipeline to outweigh the effort of building one. Therefore these one-time actions need to behave predictably, to make sure they won't disrupt the continuous flow of delivering features.
One-time admin processes include the following:
- Data-related actions such as cleaning up records in a database.
- Watching the behavior of an application to trace problems. A bit similar to debugging an application while you code.
- Database migrations or upgrades which are not executed through a CI/CD pipeline.
Connected to principle 10 described above, these admin processes should run in an environment identical to production. This prevents misbehavior and unexpected (configuration and data-related) errors. The same is true for dependencies: these should be handled as described in principle 2. It's not enough to use the same versions of the dependencies; the way those dependencies are set up and configured (including the paths) should also be the same.
Many programming languages, such as Python, Perl, and Ruby, provide an easy-to-use interactive shell to execute those processes. In essence, you can run those shells using the same commands you also use to run your application. This lowers the barrier to applying this principle.
Furthermore, those admin scripts should reside in the same code-base as the application. They should be versioned along with the main application and packaged together. This ensures the correct script is executed, since script and application versions can be correlated. Those scripts should be executed inside the working directory (checkout directory) of the application itself. This keeps execution fast and as reliable as possible.
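As a hypothetical example of such a script: a one-off cleanup task that lives in the application repository and reads its configuration from the environment, just like the main application does. The table name, the environment variable, and the SQLite backend are illustrative assumptions, not a prescribed setup.

```python
# scripts/cleanup_stale_records.py -- hypothetical one-off admin task.
# It lives in the application repository, is versioned with the app,
# and reuses the app's own (environment-based) configuration.
import os
import sqlite3

def cleanup_stale_records(db_path, cutoff_days=30):
    """Delete session records older than the cutoff; return the number removed."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM sessions WHERE created_at < datetime('now', ?)",
            (f"-{cutoff_days} days",),
        )
        conn.commit()
        return cur.rowcount
    finally:
        conn.close()

if __name__ == "__main__":
    # Same configuration source (environment variables) as the main application.
    removed = cleanup_stale_records(os.environ.get("DATABASE_PATH", "app.db"))
    print(f"removed {removed} stale sessions")
```

Because the script ships with the application and reads the same configuration, running it in production is no different from running it against a local checkout.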
As seen in this series of articles, the 12 Factor App principles bring a lot of benefits to organizations as well as to individual developers and system operators. The 12 factors outlined here follow development patterns from different perspectives and disciplines. Topics range from treating logs as event streams to handling dependencies and configuring environments, as well as backing services and port binding.
Please refer back to articles one and two to read the details about all of the principles which are not outlined in this last part of the series. I hope everyone in your organization will find the time to read about the benefits and push forward to support the principles of their day-to-day work.
If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, mail to email@example.com.