In our previous article, you can read about the first principles. In this second part, we'll continue our journey and explore the most appealing advantages of 12 Factor Apps over traditional applications: the benefits of applying the 12 Factor App principles.
Factor 5: Build, release, run
Every application that follows a DevOps-oriented SDLC needs strictly separated build, release and run stages. Together, these stages ensure you can build, ship and run application changes with speed and confidence. In short, the stages have the following main purposes:
- Build stage: converts the code-base of a repository, together with its dependencies and binaries, into an executable package. It stores the package in an artifact management system such as Nexus, Artifactory or Azure Artifacts.
- Release stage: picks up the executable package and fetches the deployment configuration. Together they form the release, which can be deployed to the desired target environment right away.
- Run stage: runs the actual release in a runtime environment.
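As an illustration, the three stages can be sketched in a few lines of Python. This is a simplified, hypothetical model: the artifact store and release registry are in-memory stand-ins for real systems such as Nexus or Artifactory.

```python
import hashlib

# In-memory stand-ins for an artifact management system and a release registry.
ARTIFACT_STORE = {}  # package id -> package contents (e.g. Nexus, Artifactory)
RELEASES = {}        # release id -> (package id, deployment configuration)

def build(source_code: str) -> str:
    """Build stage: turn the code-base into an immutable, addressable package."""
    package_id = hashlib.sha256(source_code.encode()).hexdigest()[:12]
    ARTIFACT_STORE[package_id] = source_code.encode()
    return package_id

def release(package_id: str, config: dict) -> str:
    """Release stage: combine the package with deployment configuration."""
    release_id = f"{package_id}-r{len(RELEASES) + 1}"
    RELEASES[release_id] = (package_id, config)
    return release_id

def run(release_id: str) -> str:
    """Run stage: execute a specific, already-assembled release."""
    package_id, config = RELEASES[release_id]
    return f"running {package_id} with DB={config['db_url']}"

pkg = build("print('hello')")
rel = release(pkg, {"db_url": "postgres://prod"})
print(run(rel))
```

Note that `run` never touches the source code: it only needs a release ID, which is exactly what makes the stages independently repeatable.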
Keeping these stages as decoupled as possible helps to break the entire chain of SDLC processes into manageable proportions. Consider a build stage which compiles your Java application and performs a set of unit tests. This can take a long time, so when the stage finishes you want to save the package that was (successfully) created. If all of the stages were chained together and each depended on the others, you might lose a lot of time re-building your application when something fails. This is the build-stage perspective.
The same is true from the runtime perspective. Imagine you have rolled out a new version of an application. All of a sudden, your database performs very slowly, even though you did not change the database system itself. The root cause is very likely the application itself. It would be too time-consuming to completely build, release and redeploy your application. Instead, you pick the last known "well-working version" from the artifact management system and deploy it to your target environment.
These examples share the following benefits:
- Considerable speed of changes: no need to rerun every stage when an error occurs in one of them while the others remain intact. Pick up your work where you left off.
- Better visibility of changes: since the actual source code is strictly separated from the release configuration, you can focus on these areas when you troubleshoot issues.
- Rollbacks are much easier and faster. Don't search for the right branch or other part of the code-base and rebuild your application. Every release has a release ID, and based on this ID (which can also be traced back to the original commit) you can quickly roll back to a previously working version. If this is all done by the one team that creates and maintains an application from A to Z, less communication (overhead) is needed.
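A rollback then boils down to looking up a release ID rather than rebuilding anything. A minimal sketch, using hypothetical release records with a health status, could look like this:

```python
# Hypothetical release history: each release ID points to an immutable package
# in the artifact management system, so rolling back means deploying an older
# ID instead of rebuilding from source.
releases = [
    {"id": "v41", "commit": "a1b2c3", "status": "healthy"},
    {"id": "v42", "commit": "d4e5f6", "status": "failing"},
]

def rollback(history):
    """Return the newest release that was known to be healthy."""
    for entry in reversed(history):
        if entry["status"] == "healthy":
            return entry["id"]
    raise RuntimeError("no healthy release to roll back to")

print(rollback(releases))  # deploy this release ID instead of rebuilding
```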
No human intervention needed
- Deployments can happen without any human interference. If releases are simple, no manual intervention is needed. This also allows systems to trigger a (new) deployment when needed. Consider an application that crashes in production due to an error that occurs three times in an hour. Your system can decide to redeploy a previous version in the middle of the night.
- Immutable infrastructure and applications. Simply put: there is no way, except through the standard release process, to change an application that already runs on an environment. Only your "run pipeline" can change anything on the target environment, by picking up a specific release. This avoids configuration drift and manual tampering with the runtime environment. It also leads to more stable systems.
DevOps teams as well as dedicated pipeline teams might need to rethink the way they create, process and run software using these principles, but the benefits are extremely powerful. It takes a significant up-front investment but it’s well worth the effort.
Factor 6: Processes
Every time you run your application, you execute one or more scripts. These scripts fire up the runtime environment, which is ideally the same for every stage in the SDLC: running it on your local developer laptop works the same as in production. In fact, the developer should be able to "fire and forget" everything that the process does to run the actual application. This does not mean you won't take any responsibility; it means that you can run the process independently of the underlying infrastructure or environment.
This principle has several benefits:
- Your application does not depend on (is not tightly coupled to) the target environment in which it runs. It becomes highly portable across different types of environments. Furthermore, you do not need to store configuration-specific information which can change for every run.
- Don’t share anything between (application) requests or CI/CD jobs. This removes a hard dependency and makes your application more flexible.
- Stateless is the norm. Your application needs to be stateless for every request, because you can never assume the next request will be handled in the exact same environment. Remember: environments are disposable. Just one example: their IP addresses change (frequently), so you can't rely on them. Store data that needs to be available all the time in a database or (independent) caching system.
These principles apply to every process that runs the application. There could be more than one, and this underscores the need to make every process independent of its underlying infrastructure and the data it needs.
It's also good to remember that "sticky sessions" are a no-go here. You should never assume that your previous session will continue on the next process if the previous session ended (unexpectedly). Processes can vary across environments, and keeping sessions open makes your application less predictable and flexible.
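A minimal sketch of a stateless handler, assuming the durable state lives in an external store; here a plain dict stands in for a backing service such as Redis or a database:

```python
# All durable state lives in an external store, never inside the process.
backing_store = {}  # stand-in for a shared cache or database (e.g. Redis)

def handle_request(store, user_id, item):
    """Append an item to the user's cart; any process instance can serve this,
    because everything the handler needs is fetched from the backing store."""
    cart = store.get(user_id, [])
    cart.append(item)
    store[user_id] = cart
    return cart

# Two consecutive requests: each could be served by a different process, since
# neither relies on in-memory state left behind by the other.
handle_request(backing_store, "u1", "book")
cart = handle_request(backing_store, "u1", "pen")
print(cart)
```

Because the handler holds nothing between requests, a "sticky session" is never needed: any instance that can reach the backing store gives the same answer.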
Factor 7: Port binding
Besides the processes of the runtime environments that actually run the application, you also need to consider the way you run them. It’s vital that your application becomes completely self-contained and independent of your runtime environment. In short this means that your application packages everything that is needed to run in the environment it supports.
Developers are used to running an application inside a "runtime container" such as Tomcat or JBoss for Java applications, or Nginx for web applications written in NodeJS, PHP or Angular. This paradigm changes now, since your application should no longer depend on external systems and runtime containers. Having the configuration of your "runtime container" packed into your application increases portability and makes your solution much more isolated.
More benefits of this principle include the following:
- Fewer chances of configuration discrepancies between your application and its runtime environment. This is especially true when the runtime environment differs slightly depending on the Operating System or low-level dependencies.
- It's easier to create an SBOM (Software Bill of Materials) of your complete system. There is no need to group the runtime environment/container dependencies together with the dependencies of the application itself. You have better visibility into the versions of the dependencies as well as their issues (for example, security-related ones).
- Spot problems and potential conflicts earlier. It is possible to examine the runtime environment together with the application in one go. Perhaps your IDE can reveal problems since it combines both aspects in one analysis. It prevents problems later in the process and thus saves time to fix things early on.
- No separate runtime/container configuration is needed. With this principle in mind, you use unique ports per application/component that runs in your environment. Every (application) service becomes isolated from the others, no port conflicts arise, and you don't need to tweak the runtime environment to support multiple applications or different versions at the same time.
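As a sketch of this principle, the Python standard library alone is enough to build a self-contained service that binds to its own port, with no external runtime container involved:

```python
# A self-contained web service: the application itself binds to a port using
# only the standard library, with no Tomcat/Nginx-style container required.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a self-contained service"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the example output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

response = urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
print(response.decode())
```

Binding to port 0 here simply asks the OS for any free port; in practice each service would be assigned its own unique port, which is exactly what keeps services isolated from one another.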
Not only HTTP
The principle of "port binding" has another benefit: one service can act as the backing service for another one. It's possible to serve every kind of software through port binding: web applications based on HTTP, but also applications that rely on streaming data on a lower layer, such as secure WebSockets (WSS) or Redis, which uses the Redis protocol.
One final thing to remember: expose your services through a publicly accessible hostname once you run your application on Development, Test, Acceptance or Production environments. This is not needed on your local developer laptop.
Factor 8: Concurrency
Processes are not "an afterthought" anymore. Developers should treat them as first-class citizens. The way they treat processes differs from a traditional approach, mainly to support concurrency. Scaling out and scaling in becomes the de facto standard in the application runtime landscape.
In essence, processes become much more visible to developers, since they are no longer "hidden" in runtime environments which developers did not control (before). Instead, developers should think from the perspective of "share nothing and be able to scale horizontally". Scaling in and out should be made as easy as possible to handle peak loads and to avoid time-consuming configuration changes, which are pretty risky.
When applying this principle, your application needs to support multiple process types. Think of long-running processes such as backups or reporting. On the other hand, also support short-running processes such as handling vast volumes of transactions.
Both types of these processes need to be handled differently. Doing so brings the following benefits:
- Only for Unix-based systems: processes should no longer write a PID file (a file on the machine's filesystem containing the process ID). Such a file is a piece of persistent data, tied to the machine on which it is created; when scaling out an application across multiple Virtual Machines, it is no longer valid. Instead, run the process via the Operating System's process manager, which handles scaling as needed. This also ensures your service recovers from (unexpected) reboots or other disruptions.
- Being able to scale out and back in becomes simple and reliable, thus eating away fewer costly development hours. Loose coupling is key here again.
- There is less or even no need for vertical scaling. This means: no need to change the CPU or memory specifications of your underlying runtime environment. If that requires new Virtual Machines, you can expect a certain amount of downtime, which can be avoided by focusing on horizontal scaling.
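The share-nothing scaling idea can be sketched as follows. To keep the example runnable anywhere, a thread pool stands in for the OS processes that a process manager would normally supervise; the scaling model is the same.

```python
# Share-nothing process model sketch: N identical workers, each handling
# requests independently. Threads stand in here for OS processes that would
# normally be supervised by the Operating System's process manager.
from concurrent.futures import ThreadPoolExecutor

def handle(request_id: int) -> str:
    # Stateless worker: everything it needs arrives with the request itself.
    return f"request {request_id} handled"

workers = 4  # "scaling out" = raising this number, not resizing the machine
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(handle, range(8)))

print(len(results))
```

Because the workers share nothing, handling a peak load is a matter of raising the worker count, with no change to the workers themselves.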
The above-mentioned principles and benefits are called the "process model", which is extremely important when running on scalable cloud infrastructure.
More to come
This is the end of part two. Read about the last benefits in part three of this series.
If you have questions related to this topic, feel free to book a meeting with one of our solutions experts, mail to email@example.com.