Shifting right: testing in production

Every day, more companies shift to the DevOps way of working. Applications are delivered more frequently and with higher quality. But how well do these applications and their subcomponents perform? That is the domain of performance testing. You also need to know how the system behaves when it's put under high pressure, so load testing matters as well. For both types of tests, executing them in the middle of the software development lifecycle increases the time to production. Shifting right: testing in production is the answer.

This article focuses specifically on performance testing, as that is usually part of the software development lifecycle. Load testing generally falls to the operations team rather than the development teams.

Benefits of performance testing

Before we dive deeper into the principles of shifting right, it is necessary to understand the benefits of performance testing. You need a clear business justification for the time and effort it takes to put this into practice.

Applications and components are often made up of many different moving parts, even in monolithic applications. A single change in one component can introduce a breaking change or a drop in performance in another component.

Your end users are more demanding than ever: your application needs to look good and perform well on a desktop as well as on a mobile phone. Performance tests also help you understand the behavior of your end users, and that information is valuable when you extend or improve existing features.

All of these arguments contribute to the need for proper performance tests. Not just for every component in isolation, but also for the system as a whole. And not just in the test or staging environment, but also in production.

As more developer activities shift left, applications make their way to production faster and faster. It's vital to verify that your new version keeps performing as expected. Improving the performance of your application also opens up new business opportunities.

A suitable test environment

A common first task when starting with performance testing is to set up a dedicated test environment that mimics production. This sounds like a simple task, but in reality it's not. A couple of reasons why:

  • First: the test environment must have the exact same infrastructure configuration as the production environment. From a technical point of view this is easily done when practicing Infrastructure as Code. However, it can be pretty expensive, since production normally runs on the biggest machines, so the cost adds up quickly.
  • Second: the test environment should contain the same data as the production environment. This sounds easy, but it's not. For a reliable performance test, the same amount of data needs to be available. A database query executed against a table with 1,000 records is not the same as running it against a table with 10,000,000 records. Maintaining representative datasets costs extra time.
  • Third: you are no longer allowed to simply clone the production database and use it for your test environment. Customer data must be protected to comply with privacy regulations, especially in Europe. Analyzing test results becomes more complex when you have to anonymize the data first, and the anonymization process itself also eats up valuable time.
  • And last: keeping the hardware and software configuration of the test and production environments identical is hard. Infrastructure as Code makes this easier, but the configuration settings of the software applications can still easily drift between environments, and it's very time consuming to keep them aligned. Keeping data sets in sync is harder still.

Huge impact

One small error or change in any of these environments can have a huge impact on performance. When this occurs, a lot of time is needed to find the root cause and re-run everything. This undermines the advantage of getting software into production faster.

Considering all of these factors, it would be much more useful to execute the performance tests in production itself. But how do we test in production without making customers unhappy?

Moving closer to production

DevOps is all about shifting left, automating things and bringing operations closer to development. It is also about shifting right to gather faster feedback after each change. Since there are so many moving parts, continuous feedback is essential to judge whether the most recent change does what it is intended to do.

So, where to start from here?

Traces from your customers

Your customers are your first starting point. Every customer who moves through your application leaves traces of their behavior: click actions, transactions, and page visits are all recorded.

For each of these, a log entry is written with a proper timestamp, the name of the request, and the time it took. If a transaction fails, an error is logged as well.
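
As a minimal sketch of what such an entry can look like (the field names and the wrapper function are hypothetical, not taken from any specific product), you could emit structured log lines like this:

```python
import json
import logging
import time

logger = logging.getLogger("requests")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_request(name, handler):
    """Run a request handler and write a structured log entry with its duration."""
    start = time.monotonic()
    status = "ok"
    try:
        return handler()
    except Exception:
        status = "error"          # failed transactions are logged as errors too
        raise
    finally:
        logger.info(json.dumps({
            "timestamp": time.time(),                                 # when it happened
            "request": name,                                          # e.g. "checkout"
            "duration_ms": round((time.monotonic() - start) * 1000, 1),
            "status": status,
        }))
```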

All of this leaves free traces of evidence you can use to get an indication of how your components perform. Analyzing these log entries manually is hard and time-consuming; it is far better to use specialized tools like Splunk or Tableau. These tools help you visualize bottlenecks that you could easily overlook when analyzing the data by hand.
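
Even before reaching for such a tool, a small script can already surface the slowest requests. The sketch below assumes log lines in the JSON format from the previous example (the file name is made up):

```python
import json
from collections import defaultdict
from statistics import mean, quantiles

durations = defaultdict(list)
with open("requests.log") as log_file:            # hypothetical log file
    for line in log_file:
        entry = json.loads(line)
        durations[entry["request"]].append(entry["duration_ms"])

# Print average and (roughly) 95th-percentile duration per request type.
for request, values in sorted(durations.items()):
    p95 = quantiles(values, n=20)[-1] if len(values) > 1 else values[0]
    print(f"{request}: count={len(values)} avg={mean(values):.1f}ms p95={p95:.1f}ms")
```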

APM tools

The tools mentioned above are just a start. An even better solution is to use an Application Performance Management (APM) tool. One of the biggest benefits is that these tools concentrate on the actual user experience instead of raw logs. An APM tool monitors user transactions from start to finish: from the initial request on a customer device, all the way through every infrastructure component needed to handle the transaction.

This gives you a better overview of what actually happens under the hood. When hunting down bottlenecks, the software can tell you in which component, or even in which line of code, the problem occurs. In short: it can pinpoint the exact cause, so developers or performance engineers can immediately concentrate on a fix in the next sprint.

Popular and well-known APM tools are Dynatrace, AppDynamics and New Relic. For containers and Kubernetes, Apache SkyWalking can be a good choice.
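
None of these products is shown here, but the idea behind them (following one transaction across components) can be illustrated in a vendor-neutral way with the OpenTelemetry Python library, whose data many APM backends can ingest. A minimal sketch; the span names, attribute and console exporter are only for demonstration:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export spans to the console for this sketch; a real setup would export to an APM backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# One user transaction, traced from the incoming request down to a dependent call.
with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("customer.device", "mobile")
    with tracer.start_as_current_span("payment-service call"):
        pass  # a slow dependency would show up here as a long child span
```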

APM tools not only give you valuable information about potential bottlenecks, they also give you feedback about the actual usage of a feature. Suppose you build a new feature based on feedback from one of your customer success managers. You can release an MVP of this feature to see if and how it is used by your customers. If you see it being used heavily, you know it's in high demand, which justifies and accelerates the next iteration.

Operational requirements

In one of my previous articles, I highlighted the concept of operational requirements. When executing performance tests, the operational requirements are critically important.

These requirements should be defined by your development teams and performance experts. Don't just google some fancy numbers, since these lack the context of your situation; there is no predefined magic number. Use the SMART methodology to define the requirements, then write them down and prioritize them.

Every iteration, you can include a task to improve the performance of your application by X percent. Performance tests become part of the development work (a shift left here ;-)), which also makes sure they are not an afterthought.
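
One way to make that concrete is to turn an operational requirement into an automated check in your pipeline. A minimal sketch, assuming a hypothetical p95 target for a checkout request and a placeholder data source:

```python
# Hypothetical requirement: 95% of checkout requests finish within 800 ms.
P95_CHECKOUT_MS = 800

def load_recent_checkout_durations():
    # Placeholder: in a real pipeline these values come from your APM tool or request logs.
    return [320, 410, 390, 640, 710, 505, 780, 450, 600, 530]

def p95(values):
    ordered = sorted(values)
    return ordered[int(0.95 * (len(ordered) - 1))]

def test_checkout_meets_operational_requirement():
    assert p95(load_recent_checkout_durations()) <= P95_CHECKOUT_MS
```

Such a check can run with pytest as a regular pipeline step, failing the build when the requirement is no longer met.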

Synthetic monitoring

Synthetic monitoring is an addition to APM tools. You add dummy accounts to your production system, and a robot executes a script at set intervals, using those dummy accounts to simulate the behavior of real users.

The APM tool itself measures the performance of the steps being executed. It is important to focus on the key pages and common flows that real customers follow, so you get realistic information about potential bottlenecks. There is no point in defining click paths for features your customers don't use.
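
As an illustration, a synthetic check can be as simple as a scheduled script walking through a key flow with a dummy account. Everything below (URL, paths, account) is hypothetical, and in practice the APM tool usually records the timings for you:

```python
import time
import requests

BASE_URL = "https://shop.example.com"              # hypothetical application URL
DUMMY_USER = {"username": "synthetic-user-01",     # dummy account used only for monitoring
              "password": "not-a-real-secret"}

def run_synthetic_check():
    """Walk through a key flow (login -> product page -> add to cart) and report timings."""
    steps = [
        ("login", "POST", "/api/login", DUMMY_USER),
        ("product page", "GET", "/api/products/example-sku", None),
        ("add to cart", "POST", "/api/cart", {"sku": "example-sku", "qty": 1}),
    ]
    with requests.Session() as session:
        for name, method, path, payload in steps:
            start = time.monotonic()
            response = session.request(method, BASE_URL + path, json=payload, timeout=10)
            duration_ms = (time.monotonic() - start) * 1000
            print(f"{name}: status={response.status_code} duration={duration_ms:.0f}ms")

if __name__ == "__main__":
    while True:                     # in practice a scheduler or the APM tool triggers the script
        run_synthetic_check()
        time.sleep(300)             # repeat every five minutes
```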

Needed requirements

One of the key requirements for performance testing in production is a mature DevOps team and an application that supports it. None of this makes sense when your organization is still organized in silos. If you have completely separate development teams, performance experts and system operator teams, you can never run performance tests in production without taking on too much risk. You will also not reap the benefits, because the processes will be too slow.

To be successful in this area, the operator teams need to be involved and become automation experts as well. They need to become skilled at describing, isolating and replicating environment configuration and variables. Once they notice performance bottlenecks or other problems, they need to put useful user stories on the backlog.

Smaller is better

Your application should not take hours to deploy: performance tests in production do not work when you have a big monolith that can't be fine-tuned and redeployed easily. The performance tests need to be broken down into smaller chunks so multiple tests can run in parallel without interfering with each other. One test should focus on a single goal, just like a microservice, and every test should have a “what-if” scenario.

Mock services are needed to fulfill these “what-if” scenarios. With mocks, your tests don't require the dependent services to be up, which makes the tests faster and keeps them from being affected by the (lack of) performance of a dependent service.
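
As a sketch of how small such a mock can be, here is a stand-in for a dependent payment service; the endpoint and response are invented for the example:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockPaymentService(BaseHTTPRequestHandler):
    """A tiny stand-in for a dependent service, always answering quickly and predictably."""

    def do_POST(self):
        if self.path == "/payments":
            body = json.dumps({"status": "approved", "id": "mock-123"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the test output clean

if __name__ == "__main__":
    # The component under test talks to this mock instead of the real payment service.
    HTTPServer(("127.0.0.1", 8081), MockPaymentService).serve_forever()
```

Point the component under test at the mock's address during the test run, so the result only reflects the performance of the part you are actually measuring.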

The requirements mentioned above help you run a series of small tests every time you deliver to production. You are able to gather real-time feedback and quickly make small improvements. In the end you might even be able to predict potential bottlenecks and proactively fix issues before they become a real problem.

Load testing

If you have come this far, you might consider doing load tests in production as well. The Guru99 website describes load testing as follows:

LOAD TESTING determines a system’s performance under a specific expected load. The purpose of a load test is to determine how the application behaves when multiple users access it simultaneously.

Be very careful here since errors in production affect your customers directly.

Risk assessment

Before executing any load test in production, consider the following risks, since you cannot test everything you want or need to. These risks are examples of what should be acknowledged and prioritized:

  • Load tests which have a high impact on the user experience. For example: tests that put too much stress on resources or cause delays or even fatal time-outs.
  • Tests which touch weak areas of your application, such as poorly built components. If these fail, what does that mean for the rest of the application?

Once you have defined and prioritized the risks, you can set your test goals or stop here.

State performance goals

Load test goals should be set by people in your own organization. Simple examples are:

  • Response times per component: what are realistic response times when the system is under heavy load?
  • Guaranteed uptime: the system should be kept online all the time. Degrading gracefully under high load can be an answer; the system should not fail catastrophically.
  • The number of users the system can serve simultaneously. Of course you can scale up the number of containers you run, but what about scaling the number of VMs that host these containers? You need to be very careful here.

Overall, the system must perform reliably over a certain period of time and perhaps also be able to handle expected peak loads (e.g. the Christmas season).
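
To make the idea concrete, here is a minimal sketch of a load test that simulates such an expected load with Locust, a Python load testing tool; the host, paths and numbers are illustrative, not a recommendation:

```python
# locustfile.py -- run with, for example:
#   locust -f locustfile.py --host https://shop.example.com --users 500 --spawn-rate 10
from locust import HttpUser, task, between

class ShopVisitor(HttpUser):
    wait_time = between(1, 5)        # each simulated user pauses 1-5 s between actions

    @task(3)
    def view_product(self):
        self.client.get("/api/products/example-sku")

    @task(1)
    def add_to_cart(self):
        self.client.post("/api/cart", json={"sku": "example-sku", "qty": 1})
```

Start with a small number of simulated users and ramp up carefully, keeping the risks and performance goals above in mind.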

From here you can create your load test strategy. A future article might dive deeper into this topic.

Conclusion

This article focused on performance tests in a DevOps world: shifting right to run performance tests in production. When done well, it brings great benefits to your DevOps teams through fast discovery of performance bottlenecks and other real-time feedback, such as the actual usage of your application components. The key requirement to put this into practice is that your teams and applications are well prepared.

I hope this has inspired you to start performance tests as the next step in your DevOps journey.
