Performance testing trends – from testing to performance engineering

Performance testing is hard, yet vital. Knowing the current trends can be eye-opening and help you deal with the performance testing challenges out there.

Digital products must be functional and, even more importantly, enjoyable to use: from thoughtful UX design and the safety and privacy of user data to the actual performance.

The performance of digital products is critical for the user experience, and it translates directly into revenue streams.

The human attention span has shrunk from tens of seconds to mere seconds. Your users want to jump in, do what they need, and switch to other activities. Anything else is considered a waste of time and a lesson in frustration.

Performance testing, and even more importantly, performance engineering, is an integral part of the software development lifecycle, ensuring that your digital product’s users won’t be negatively affected by performance issues and that your blink-of-an-eye fast digital experience will win your clients over.

Known hardships of performance testing

The meta challenge of performance testing can be expressed as “performance testing is hard.”

What does it mean in practice?

Performance testing requires proper load scripts, the correct test data, and a proper configuration of agents hitting the APIs. If anything goes wrong, the performance measured by the agents will be far from the actual performance in the real world.

There are these additional challenges as well:

  • Test scripts and test data involve a lot of code that has to be maintained, which takes effort and can be challenging.
  • Application versions change constantly and require matching changes to the tests.
  • Performance testing automation needed years to catch up with functional testing automation.
  • Too little, too late: performance testing is often treated as a separate sub-project after development instead of a critical part of the process.

The list can go on and on. However, the challenges mentioned above are enough to understand that performance testing must be done correctly to bring the results you expect. To see how to deal with these hardships, it is worth looking at some notable performance testing trends.

The performance testing trends

Multiple performance testing trends help you deal with performance issues and improve your overall performance practice. When it comes to selecting the trends that stand out, these come to mind:

Performance engineering

Performance testing is transforming from a traditional, separate activity into performance engineering. This aligns with the global trend of full-cycle development and full-cycle developers. The current meta tendency is to apply the best software engineering practices everywhere possible, and in this case, to performance testing. All the parts of the system come together: metrics, test data, and scenarios are becoming inseparable parts of the digital product.

It is never too early to take care of your performance

Testing late is too risky for digital products, as it quickly becomes too little, too late. Changes to the system architecture may be too deep to make later on, so it is better to address performance at a very early stage and build the required performance characteristics into the design and implementation of every component.

Who is responsible for performance?

In the true full-cycle development spirit, it is the entire team. Any quality concern is every team member’s concern. Reminding all team members about the importance of application performance and testing as part of daily conversations is one of the social techniques that help avoid surprises later.

Tools we all love

Performance testing inside the SDLC means developers use, from the start, the same tools and frameworks they use for creating the applications. Less switching of context, technology, and language makes it easier to involve experienced developers, as performance tests tend to be even more complex than application logic.

Developers often reject the big, distant, unfriendly testing platforms of the past as strange, complex, and unfamiliar. For instance, Python devs should use a Python-based performance testing tool, ideally inside the PyCharm Integrated Development Environment (IDE) they already work in.
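As an illustration, a minimal load test in Locust (one popular Python-based load testing framework; the tool choice, endpoints, and credentials below are assumptions for the sketch, not taken from any particular product) could look like this:

```python
# A minimal Locust load test sketch. Locust is one example of a
# Python-native load testing framework; the endpoints and credentials
# below are hypothetical placeholders.
from locust import HttpUser, task, between


class ShopUser(HttpUser):
    # Simulated users pause 1-3 seconds between actions.
    wait_time = between(1, 3)

    def on_start(self):
        # Log in once per simulated user (hypothetical endpoint).
        self.client.post("/login", json={"username": "demo", "password": "demo"})

    @task(3)
    def view_dashboard(self):
        # Weighted 3x: the most common user action in this sketch.
        self.client.get("/dashboard")

    @task(1)
    def view_profile(self):
        self.client.get("/profile")
```

Assuming the file is saved as loadtest.py, it can be run with `locust -f loadtest.py --host https://staging.example.com`, and developers can edit, debug, and version the script like any other Python code, right inside their IDE.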

Every day is a performance day

All developers should use performance tools regularly as part of their everyday developer life; for instance, Lighthouse for Chrome in the case of front-end developers. In addition, developers should be continuously involved in performance engineering.
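As a sketch of how this could fit into a developer’s daily routine, the snippet below runs the Lighthouse CLI (assuming it is installed, e.g. via npm, with Chrome available) against a hypothetical URL and reads the performance score from the JSON report:

```python
# Sketch: run Lighthouse from Python and read the performance score.
# Assumes the Lighthouse CLI is installed (npm install -g lighthouse)
# and Chrome is available; the URL is a hypothetical placeholder.
import json
import subprocess

URL = "https://staging.example.com"
REPORT = "lighthouse-report.json"

subprocess.run(
    [
        "lighthouse", URL,
        "--only-categories=performance",
        "--output=json",
        f"--output-path={REPORT}",
        "--chrome-flags=--headless",
        "--quiet",
    ],
    check=True,
)

with open(REPORT) as f:
    report = json.load(f)

# Lighthouse reports category scores in the 0..1 range.
score = report["categories"]["performance"]["score"]
print(f"Lighthouse performance score: {score * 100:.0f}/100")
```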

CI/CD pipelines

Performance goals and practices should be baked into the process, which nowadays must include Continuous Integration and Continuous Deployment (CI/CD) pipelines.

The current trend is to include performance testing results as critical acceptance criteria for builds. Performance regression monitoring is also essential for avoiding costly performance-related bugs by providing fast feedback that results in corrective actions, such as withdrawing the service version that caused the performance problems.
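One common way to wire this into a pipeline is a small gate script that compares the latest test results with an agreed performance budget and fails the build on regression. The results file, metric names, and thresholds below are illustrative assumptions, not a specific tool’s format:

```python
# Sketch of a CI/CD performance gate: fail the build when key metrics
# exceed the agreed budget. The results file and thresholds are
# hypothetical; adapt them to your load testing tool's output format.
import json
import sys

# Performance budget agreed with the team (assumed values).
BUDGET = {
    "p95_latency_ms": 800,
    "error_rate": 0.01,
}

with open("perf-results.json") as f:
    results = json.load(f)  # e.g. {"p95_latency_ms": 742, "error_rate": 0.003}

failures = [
    f"{metric}: {results[metric]} > {limit}"
    for metric, limit in BUDGET.items()
    if results.get(metric, float("inf")) > limit
]

if failures:
    print("Performance gate FAILED:", *failures, sep="\n  ")
    sys.exit(1)  # a non-zero exit code fails the pipeline step

print("Performance gate passed.")
```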

Protocol-based testing vs. user performance testing

Performance testing scripts have been traditionally based on a protocol such as HTTP. Unfortunately, protocol-level testing is arduous with modern, complex front-end frameworks like React or Angular.

Protocol-level tools are usually based on capturing messages and then editing and maintaining them. A lot of effort is required to keep them current and relevant.

This does not mean that protocol-level testing frameworks are obsolete, as they are still helpful for testing the performance of HTTP-based APIs. However, they are not good enough for testing real-world performance from the user’s perspective.
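For the API side, a protocol-level check can be as simple as timing raw HTTP calls. A minimal sketch follows; the endpoint is a hypothetical placeholder, and the requests library stands in for a full load testing suite:

```python
# Minimal protocol-level (HTTP) timing sketch using the requests library.
# The endpoint is a hypothetical placeholder; real API tests would also
# cover authentication, payload variations, and concurrency.
import statistics
import requests

URL = "https://api.example.com/v1/products"
samples = []

for _ in range(50):
    response = requests.get(URL, timeout=10)
    response.raise_for_status()
    # elapsed measures time from sending the request until the response headers arrive.
    samples.append(response.elapsed.total_seconds() * 1000)

samples.sort()
p95 = samples[int(len(samples) * 0.95) - 1]
print(f"median: {statistics.median(samples):.1f} ms, p95: {p95:.1f} ms")
```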

Human perception

People interact with actual web browsers or mobile applications, not with networking protocols. So the scripts running in the user space in the browser that simulate user behavior are the best way of measuring real-world performance as perceived by the actual users.
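A browser-level measurement of exactly this kind could be sketched, for instance, with Playwright for Python (one possible browser automation tool; the URL below is a hypothetical placeholder):

```python
# Sketch: measure user-perceived load timings in a real browser with
# Playwright (pip install playwright, then `playwright install chromium`).
# The URL is a hypothetical placeholder.
from playwright.sync_api import sync_playwright

URL = "https://staging.example.com"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(URL, wait_until="load")

    # Read the browser's own Navigation Timing entry for the page load.
    timing = page.evaluate(
        "() => performance.getEntriesByType('navigation')[0].toJSON()"
    )
    browser.close()

print(f"Time to first byte: {timing['responseStart']:.0f} ms")
print(f"DOM content loaded: {timing['domContentLoadedEventEnd']:.0f} ms")
print(f"Full page load:     {timing['loadEventEnd']:.0f} ms")
```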

It is the users who will complain first about the performance. They will switch from your SaaS or digital sales product to your competitors’ if they suffer from a terrible performance experience.

Users tend to conflate performance with responsiveness and even scrolling fluidity; they expect at least 60 frames per second. The role of the engineering team is to address those needs and avoid pointless discussions about the performance being good while some UI elements stutter a little bit.

There is no doubt about the need for these types of tests. But what you experience on the ‘surface’ as the end user is just the tip of the iceberg; below it, protocol-level testing still shines.

Hybrid testing

The prevailing opinion is that there’s no single silver bullet to solve all the known limitations of the techniques from the past.

Therefore, hybrid testing, a combination of protocol-level API testing and client-side (web browser, mobile app) testing from the user’s perspective, is the best approach available today.

Functional test as a performance test?

The functional tests already at hand, executed for a single user, can be used as a basic performance test. Still, they merely verify the performance experienced by that single user.

The timer is always there, and the time it takes to execute functional tests can be measured. This makes it possible to catch extreme performance problems, such as the system being unable to perform adequately even for a single user.
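A minimal sketch of this idea: wrap an existing functional check with a timer and a generous budget, so a gross single-user slowdown fails the test. The endpoint and the 2-second budget are assumptions for illustration:

```python
# Sketch: reuse a functional test as a coarse single-user performance check.
# The endpoint and the 2-second budget are hypothetical placeholders.
import time
import requests


def test_search_returns_results_fast_enough():
    start = time.perf_counter()

    # The functional part: the search endpoint responds and returns data.
    response = requests.get(
        "https://api.example.com/v1/search", params={"q": "laptop"}, timeout=10
    )
    response.raise_for_status()
    assert response.json()["items"], "expected at least one search result"

    # The performance part: a generous single-user budget.
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, f"single-user search took {elapsed:.2f}s (budget 2.0s)"
```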

In the old days, functional testing was used for performance testing. Then came the capability to use protocol-level (HTTP, TCP) performance tools and suites.

There is the temptation to treat it as a two-in-one solution, and it is very cost-effective to do both things simultaneously. Still, it is all too easy to turn an excellent idea of simplified testing (as one of many performance tests) into an antipattern (no other performance tests).

AI to find the patterns

AI dominates the IT world, so why not use it for performance testing?

Analyzing detailed test results to find trends and complex dependencies between thousands of services (i.e., which service impacts which service’s performance) is a role that can be offloaded to advanced algorithms. These algorithms combine rule-based systems for known rules with machine learning for undiscovered patterns.

Current-generation AI technologies are excellent at finding patterns, including the usage patterns of applications. For example: what are users doing and in what order, and how long did they have to wait for pages to load or for business transactions to complete?

Automating the creation of test models based on observing the actual system running in production is a new trend. The result is the creation of test scripts and flows which represent the actual usage pattern of the system.

This is much better than the traditional approach, in which test engineers try to anticipate the usage patterns of the applications and the real users always manage to surprise them. The conventional approach makes performance test results less relevant to how the live system actually performs.

Pattern analysis can also be applied to resource utilization metrics (CPU, GPU, network IO, disk IO) to find the weak performance spots of the application as well as resource overutilization. Again, adequately implemented AI can find patterns humans cannot predict, especially within complex systems containing tons of data and large numbers of concurrent users.
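As a toy illustration of this kind of pattern analysis, an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest can flag unusual combinations of resource metrics. The data below is synthetic, and the model choice is just one of many possibilities:

```python
# Toy sketch: flag anomalous resource-utilization samples with an
# IsolationForest (scikit-learn). The metrics are synthetic; in practice
# they would come from your monitoring system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: CPU %, memory %, network IO MB/s, disk IO MB/s (normal behavior).
normal = rng.normal(loc=[40, 55, 120, 30], scale=[8, 10, 25, 6], size=(500, 4))
# A few abnormal samples, e.g. CPU saturation with collapsing network IO.
anomalies = np.array([[97, 60, 5, 28], [95, 92, 10, 80]])

samples = np.vstack([normal, anomalies])

model = IsolationForest(contamination=0.01, random_state=0).fit(samples)
flags = model.predict(samples)  # -1 marks an anomaly, 1 marks normal

for row in samples[flags == -1]:
    print("anomalous sample (cpu, mem, net, disk):", np.round(row, 1))
```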

Elements of predictive analysis can be applied here to forecast the system’s performance in the coming weeks and months and how it will cope with spikes in user demand (for instance, Black Friday in e-commerce systems). Performance issues can then be fixed before they become a production problem, similar to AI applications in manufacturing (predictive maintenance).
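Staying with toy examples, even a simple trend extrapolation over historical traffic gives a first approximation of upcoming load. The daily request counts below are synthetic; real forecasting would also model seasonality and known demand spikes:

```python
# Toy sketch: extrapolate a linear traffic trend to estimate load a few
# weeks ahead. The daily request counts are synthetic; real predictive
# analysis would model seasonality and known events (e.g. Black Friday).
import numpy as np

days = np.arange(90)  # the last 90 days
noise = np.random.default_rng(1).normal(0, 20_000, 90)
requests_per_day = 1_000_000 + 4_000 * days + noise

slope, intercept = np.polyfit(days, requests_per_day, deg=1)

for horizon in (7, 30, 60):  # days into the future
    forecast = slope * (days[-1] + horizon) + intercept
    print(f"in {horizon:2d} days: ~{forecast / 1e6:.2f}M requests/day")
```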

Resilient software

Chaos engineering and other techniques prepare the system for strange behavior. Unfortunately, reality shows that unexpected events will happen. It is no longer ‘fashionable’ to be surprised by them; instead, there is a solid trend toward being better prepared for the unknown. There are no unexpected errors, just untested scenarios.

The tools generate unusual traffic spikes and API calls to expose the bottlenecks of the systems. Techniques such as circuit breakers are here to prevent the domino effect, which would otherwise result in total performance degradation and the loss of the system’s availability for users.
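A minimal circuit breaker sketch in Python shows the core idea: after a number of consecutive failures, calls are short-circuited for a cooldown period instead of piling more load onto a struggling dependency. The thresholds are illustrative, not production-tuned:

```python
# Minimal circuit breaker sketch. After max_failures consecutive errors,
# calls fail fast for reset_after seconds instead of hammering a struggling
# downstream service. Thresholds are illustrative, not production-tuned.
import time


class CircuitOpenError(Exception):
    """Raised when the breaker is open and the call is short-circuited."""


class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open, failing fast")
            # Cooldown elapsed: allow a trial call (half-open state).
            self.opened_at = None

        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0
            return result
```

A call site would wrap its outbound request, for example breaker.call(requests.get, url, timeout=2), and treat CircuitOpenError as a signal to degrade gracefully rather than cascade the failure.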

There is no lack of examples, especially in these pandemic times. Sudden spikes in traffic, shifts in usage patterns, increased purchases of SaaS products, and sudden switches from traditional to digital products are everywhere.

Engineering teams should be aware that things will break; they should try to make individual components more resilient and, even more importantly, ensure that the critical parts of the system keep working. A typical optimization is that the parts related to purchasing services or products must be able to take orders from clients even when the more complex backend processing is overwhelmed by a spike in load.

With a current microservices architecture done appropriately, especially in the context of the public cloud, scaling and maintaining response times despite increased traffic is much easier to achieve than before, but it is still not simple. Expect the unexpected by proactively trying different scenarios and optimizing the system’s performance characteristics.

This also includes informing the end users about such issues as temporary performance degradation or unavailability of the services. Informed users, who are not surprised, react better to performance problems when they do occur.

Decline of APMs

The promise of Application Performance Management (APM) tools was an excellent sales pitch, but the reality of complex digital products requires a more custom approach. Automation is a ‘yes,’ but the promise of ‘just run it through our APM product, and now everybody can be a performance optimization expert’ is a definite ‘no’; those times are over.

The bottom line

The meta trends in IT are visible in the performance testing area. The full-cycle trend is represented here by the transformation of mindsets and organizations from traditional performance testing to performance engineering, and the ‘everything is code’ trend by using the same languages and similar environments as for development.

Another important meta trend is adaptability, represented here by a proactive and holistic approach to performance engineering as a part of everyday work rather than a separate, disconnected set of activities.

Performance engineering is complex, but it can be much easier and smoother with the right IT consulting partner with plenty of experience, a modern mindset, and the proper skill set. So contact us, and you get a vendor that embraces all these positive trends, along with a partner who pays close attention to the performance of the digital products we help envision, design, build, and maintain for our customers.
