
New trends and buzzwords in the DevOps era: are you ready for 2021?

As 2020 comes to an end, it’s nice to look back at the trends and hypes we have seen so far. It is even more exciting to consider which trends might become reality in 2021. The new year will surely bring new and exciting developments that we barely think about today. In this article, I have collected a number of them to give you an overview of the trends you are likely to hear more about.

Value stream management

Nearly every enterprise organization deals with a large number of processes to deliver great software applications. The flow of artifacts, user stories, metadata, documents, etc. is huge. Delivering value for end-users is a constant challenge, especially since “everything moves” all the time. All of this needs to be managed to link everything together. The website of Tasktop has a great definition of value stream management which puts it into the perspective of the Software Development Life Cycle:

A Value Stream Management solution connects the network of best-of-breed tools and teams for planning, building and delivering software at an enterprise-level.

Since value stream management is about value, the following questions are important:

  • Which products (applications) do your customers need?
  • Why do they need them?
  • What makes customers happy about them?

In DevOps, there is a major focus on the missing links between Dev and Ops, but the questions that really matter address the business needs that make a difference for the (perceived) value for customers. What to deliver, and why, is what matters most from a DevOps perspective. It’s like a beautiful rainbow with bright colors that make everyone smile.

New trends and buzzwords for 2021
Source: https://stocksnap.io/

Business rationales steer the discussion. Tools and improved processes help to deliver fast and efficiently, but what if the feature you are focusing on is not used at all? Perhaps the customer does not even need it. Even worse, it can hinder the performance of another part of the application. And what about vulnerabilities in a feature that is not even used? They broaden the attack surface without providing any benefit to the end-user.

Teams that build the wrong features cannot spend their time on features that actually matter to customers. That is why value stream management is important: you need to constantly verify what you are building, and for which purpose.

Continuous Verification

Given these considerations, a new phrase gains more traction: Continuous Verification (CV). From a DevOps perspective, CV helps to answer two fundamental questions:

  • How do you make sure business requirements are met?
  • How do you avoid breaking (critical) things, while staying fast at the same time?

Observability and Site Reliability Engineering play a key role in this area. In the end, the goal is to shorten the feedback loop which provides insights into the true value. This is where reversibility comes into play.
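To make the feedback loop concrete, here is a minimal sketch of an automated verification check. All names and thresholds are illustrative, not from any particular tool: it compares a canary release’s error rate against the stable baseline and decides whether the release is safe to keep.

```python
# Minimal continuous-verification sketch (names and threshold are
# illustrative): compare a canary's error rate against the stable
# baseline and decide whether the release can stay.

def verify_canary(baseline_errors, baseline_requests,
                  canary_errors, canary_requests,
                  max_ratio=1.5):
    """Return True if the canary's error rate stays within
    max_ratio times the baseline error rate."""
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / max(canary_requests, 1)
    if baseline_rate == 0:
        # A zero-error baseline only accepts a zero-error canary.
        return canary_rate == 0
    return canary_rate / baseline_rate <= max_ratio

print(verify_canary(10, 10_000, 1, 1_000))  # True: same error rate
print(verify_canary(10, 10_000, 2, 1_000))  # False: canary errors doubled
```

A real setup would pull these numbers from an observability stack instead of passing them by hand, but the decision logic stays this small: a shorter feedback loop is mostly about feeding such checks fresh data quickly.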


Developers sometimes have to “reverse engineer” software applications or other systems: they take an existing (compiled) application and try to figure out how it was constructed. A compiled package needs to be decompiled to understand the original (or intermediate) source code.

From the DevOps angle: work your way back from a deployed application or component to the original request which triggered the deployment. Techniques like canary builds, blue-green deployments, and feature flags can help here. More on this topic can be found in the article about deployment patterns.
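Of these techniques, feature flags are the simplest illustration of reversibility. The sketch below is purely illustrative (the flag name and checkout functions are made up): the new code path can be switched off again through configuration, without a redeploy.

```python
# Illustrative feature-flag sketch: flipping the flag rolls the new
# behavior back instantly, with no redeploy. All names are invented.

def legacy_checkout_flow(cart):
    return sum(cart)  # known-good behavior

def new_checkout_flow(cart):
    return round(sum(cart) * 0.95, 2)  # hypothetical new behavior

FLAGS = {"new_checkout": False}  # toggled via configuration, not code

def checkout(cart, flags=FLAGS):
    # The flag decides which path runs at request time.
    if flags.get("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout([10, 20]))                          # 30
print(checkout([10, 20], {"new_checkout": True}))  # 28.5
```

In practice the flag values would come from a flag service or config store rather than a module-level dict, but the reversibility property is the same: the old path stays deployed and reachable.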

Less focus on scripted pipelines

Verification is all about business features and requirements for end-users. End-users are not interested in how they get what they pay for; for DevOps teams, however, the “how” of delivery is important. There are so many CI/CD and DevOps oriented tools today that it is hard to choose the one which best fits your needs. Whatever tool you use, it’s vital to build and maintain (custom) pipeline scripts to enable CI/CD.

Pipelines quickly become a commodity. They should be “just there” and do “whatever is needed”. DevOps team members should not spend so much of their time building, maintaining and improving their pipelines. Low code is already on the rise for traditional application development practices; for pipelines, this may very well become the case too.

The less time spent on constructing pipelines, the more time can be spent on business value. More and more companies are starting to see this and are seeking solutions to overcome the burden of creating and maintaining pipelines.
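The shift away from scripted pipelines can be pictured as a move from custom code to declarative data. The sketch below is a toy illustration (stage names and the runner are invented): teams describe *what* should run, and a generic, reusable runner decides *how*.

```python
# Toy illustration of a declarative pipeline: the pipeline itself is
# plain data; a generic runner executes it. Stage names are invented.

PIPELINE = [
    {"name": "build",  "cmd": "compile"},
    {"name": "test",   "cmd": "unit-tests"},
    {"name": "deploy", "cmd": "rollout"},
]

def run_pipeline(pipeline, execute):
    """Run each stage via the injected 'execute' callable; stop at
    the first failure and return the names of completed stages."""
    completed = []
    for stage in pipeline:
        if not execute(stage["cmd"]):
            break
        completed.append(stage["name"])
    return completed

# Fake executor where everything except 'rollout' succeeds:
print(run_pipeline(PIPELINE, lambda cmd: cmd != "rollout"))
# ['build', 'test']
```

Commercial low-code pipeline tools do far more (caching, parallelism, approvals), but the core idea is the same: the team-specific part shrinks to the data, which is cheap to maintain.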

AIOps and MLOps

Over the last couple of years, nearly everyone focused on the “shift left” principle for almost everything related to DevOps. As a result, Developers learn the tools and techniques from Operators. Knowledge spreads and team autonomy thrives. But there is a negative side effect: Developers can spend less time building actual business features. That is a (potential) loss for the business and, subsequently, for the end-users. And the more time developers spend on the automation of deployments, the more frequently things might, and will, break.

Besides this, the cloud systems in which the applications run become more complex every day. As a result, deployments fail and/or applications malfunction. Artificial Intelligence Operations (AIOps) and Machine Learning Ops (MLOps) can help here. With these techniques, typical deployment and operations tasks become much more robust.

AIops and MLops
Source: https://pixabay.com/

AI and ML help to enhance IT operations like automation, monitoring and reporting. Given the “intelligence” of these systems, there is less room for human error. Consider the following generic steps in which AI and ML work together to further automate things:

  • Collect data points with the appropriate data and tools
  • Build the model using AI and ML
  • Use the output to define actions (e.g. generate the build scripts)
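The three steps above can be sketched in a toy example. Everything here is invented for illustration (the durations, the threshold, the “model” being simple baseline statistics rather than real ML): collect data points, build a model from them, then act on the model’s output.

```python
import statistics

# Toy version of the three AIOps steps (all numbers are invented):
# 1. collect data points, 2. build a simple "model", 3. act on it.

durations = [42, 40, 44, 41, 43, 40, 42, 95]  # 1. collected build times (s)

baseline = durations[:-1]
mean = statistics.mean(baseline)              # 2. "model": baseline stats
stdev = statistics.stdev(baseline)

latest = durations[-1]
is_anomaly = abs(latest - mean) > 3 * stdev   # 3. use the output
if is_anomaly:
    print(f"anomalous build time {latest}s -> flag for investigation")
else:
    print("build time within normal range")
```

A real AIOps platform would replace the z-score style check with a trained model and the `print` with an automated action (paging, rollback, scaling), but the collect / model / act loop is the same.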

It’s interesting to see that manually scripted pipelines are in fact an enemy of AIOps, since they are “dumb” and based entirely on static configuration. Such scripts cannot be used in an AI-driven environment.

Moving away from human intervention helps to scale DevOps processes. Another big topic for DevOps engineers.

Security and chaos engineering

Security remains a major topic for all organizations which build and run advanced software applications. Especially in a hostile environment like the public cloud, security is more important than ever. Every year, cloud providers offer more advanced security solutions for their cloud native services. These encompass both raw infrastructure and cloud native services to store data. At the same time, responsibility for protecting whatever runs in the cloud shifts to the consumers of these services.

Proper usage by DevOps teams (for example when using S3) becomes critical to keep your application(s) and data secure. Reports increasingly indicate that the vast majority of security breaches stems from human error. Strong discipline and technical knowledge within the DevOps team help to keep the number of misconfigurations and vulnerabilities low. However, more is needed to keep the delivery speed high.
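As a small illustration of catching such a misconfiguration, the sketch below flags S3-style ACL grants that open a bucket to everyone. It works on a plain dict shaped like the ACL structure AWS returns, so it runs without any AWS calls; the sample grants are invented.

```python
# Illustrative misconfiguration check: flag ACL grants that expose a
# bucket to the public groups. The ACL dict mirrors the structure AWS
# returns for a bucket ACL; no AWS calls are made here.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the permissions granted to the public groups."""
    return [
        grant["Permission"]
        for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]

acl = {"Grants": [
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "example-owner"},
     "Permission": "FULL_CONTROL"},
]}
print(public_grants(acl))  # ['READ'] -> bucket is world-readable
```

Running a check like this in the pipeline, against every bucket on every deploy, is exactly the kind of discipline that keeps misconfigurations from reaching production.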

The primary focus of chaos engineering used to be testing the stability and resilience of a system. It might shift more towards discovering the kinds of issues mentioned above. This approach helps to uncover unexpected vulnerabilities and errors which pose a risk.
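The core mechanic of chaos engineering, injecting controlled faults to see how the system copes, can be sketched in a few lines. The decorator below is purely illustrative (its name and the failure rate are made up), with the randomness injectable so the behavior is deterministic in tests.

```python
import random

# Minimal fault-injection sketch (decorator name and rate invented):
# fail a fraction of calls so callers' error handling gets exercised.

def chaos(failure_rate=0.2, rng=random.random):
    def wrap(func):
        def inner(*args, **kwargs):
            if rng() < failure_rate:
                raise RuntimeError("injected fault")
            return func(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.5, rng=lambda: 0.1)  # deterministic: always fails
def fetch_user(user_id):
    return {"id": user_id}

try:
    fetch_user(7)
except RuntimeError as exc:
    print(exc)  # injected fault
```

Real chaos tooling injects faults at the infrastructure level (killed pods, dropped packets, latency) rather than in application code, but the principle is the same: deliberately trigger the failure before an attacker or an outage does.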

DevOps certifications

Although not new, DevOps certifications are gaining more interest. There is a huge demand for DevOps Engineers who master both the Dev and the Ops related tools, processes and ways of working. Organizations increasingly require certificates from their DevOps engineers.

DevOps certifications are demanding, since they require you to master a broad set of key areas:

  • Source code control
  • DevOps tools and processes
  • One or two main programming languages
  • Familiarity with continuous integration and continuous delivery
  • Site Reliability Engineering
  • Specific cloud provider knowledge (e.g. Azure Pipelines)
DevOps Certifications
Source: https://pixabay.com/

Common DevOps certifications include Amazon’s AWS Certified DevOps Engineer – Professional, Google’s Professional Cloud DevOps Engineer and Microsoft’s AZ-400 DevOps Engineer Expert.

Besides these, certificates from IaC vendors, container platforms and CI/CD tools also play a role here: Jenkins Enterprise, HashiCorp Terraform or Certified Kubernetes Administrator. Classroom training or self-study helps you to prepare.

Most DevOps engineers spend at least a couple of months preparing for the above-mentioned certifications, which underlines how demanding they are. However, the (financial) reward is high, since certification boosts their chances of landing well-paid jobs.

More trends

As always, there are many more trends to put the spotlight on. The following upcoming trends are also relevant:

  • It’s not enough to just be “in the cloud”; the type of services you use matters. Pure cloud native services become a top priority, releasing you from the burden of setting up and maintaining lower-level services.
  • Connected to the previous point: serverless becomes even more popular. Focus on business outcomes instead of infrastructure-related scripts and configurations which do not contribute directly to them.
  • Multi-cloud becomes more popular, so it also becomes more important to develop cloud-agnostic applications. An exit strategy has always been an important topic, but portability even more so.

And last but not least: Kubernetes usage is still and will be on the rise.

Closing words

Trends come and go. In a DevOps world, the cycle is extremely fast. As described in this article, it looks like 2021 will be an exciting year with new trends and technologies. I’m curious to see where we are going, and I hope to read more stories which put these topics into practice.

