Let’s ‘perform’ it right: Performance Testing, from planning to overcoming challenges

Performance testing is critical to customer satisfaction; if your application’s performance doesn’t meet the expectations of your customers, they will move on to your competitor. Performance testing can be complicated and requires specialized test planning. To plan a comprehensive test strategy, it is important to fully understand the challenges of performance testing as well as the process and tools available to create an effective performance test.

Building a fully functional software application is important, but how it performs is an equally important challenge. Mobile application product leads, managers, developers, and marketers have learned the difference between the two the hard way, when performance issues in their software threatened their companies' reputations. This breaks the illusion that 'good software serves it all and testing for performance is just an option'.

In fact, it is just the opposite. Without effective testing and metrics that clearly show how an application performs in real life, brands and app owners would never know what is truly driving their uninstalls.

It is important to understand that performance testing has its own set of challenges, such as an application behaving differently in scaled-down environments. Yet first we must understand why it is required in the first place.

Why is performance testing needed?

Here are some of the reasons why performance testing is important:

1. Experts believe that mobile application error rates are much higher than what has been reported. Mobile applications struggle with network issues, especially when the server is congested, and the difficulty grows when they run on unreliable mobile networks. Some of the problems that apps face in such a situation are:

  • Issues in downloading images or broken images
  • Giant black holes in content feeds
  • Booking or checkout errors
  • Frequent timeouts
  • Stalling and freezing
  • Failed uploads

2. Poor application experience means frustrated customers, which translates into lost revenue. Research shows that over 47% of respondents, when faced with a broken image, would exit the application and transact on a different platform.

3. Application speed varies by region, so an app should be adapted and tested country by country. Internal testing should cover the application's performance at various speeds and on different networks: some countries are predominantly on 2G connections, some on 3G, and others on 4G. It is important to check that users across the world can use the application conveniently, without network issues. An app stands a good chance of functioning at an optimal level in developed markets such as the US, UK, Germany, and Japan, yet the same app may be very slow in developing markets such as China, India, Brazil, and Southeast Asia.

4. Moreover, a system may run smoothly with only 1,000 concurrent users, but it might behave unpredictably if the user base increases to 10,000. Performance testing determines whether the system achieves the required speed, scalability, and stability under high demand.
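The comparison above can be sketched as a minimal concurrency test. This is an illustrative skeleton, not a real load-testing tool: `handle_request` here is a hypothetical stand-in that sleeps instead of calling an actual system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(user_id: int) -> float:
    """Stand-in for one user's request; returns elapsed seconds.
    In a real test this would call the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start


def run_load(concurrent_users: int) -> dict:
    """Fire one request per simulated user and summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(handle_request, range(concurrent_users)))
    latencies.sort()
    return {
        "users": concurrent_users,
        "p95_s": latencies[int(len(latencies) * 0.95) - 1],
        "max_s": latencies[-1],
    }


# Compare behaviour at two load levels; in a stable system the
# p95 latency should not blow up as concurrency grows.
baseline = run_load(10)
scaled = run_load(100)
```

In practice a dedicated tool (JMeter, Gatling, Locust, and the like) would ramp users gradually and hold the load, but the core idea is the same: measure latency percentiles at increasing levels of concurrency and watch for the point where they degrade.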

There are different tools to test the criteria above, and different processes to determine whether the system is functioning according to the set benchmark. It is also important to plan how performance testing should be done.


Planning performance testing

Performance testing has become an integral part of software application testing, more so because of the improved digital experience that clients expect. Testers have therefore been pushed to adopt a multi-layered testing approach beyond regular load-testing schedules.

Building a complete testing strategy is the first step. A detailed test strategy needs to be chalked out to determine the types of tests that need to be performed on the application. It is best to analyze user demand and how the components interact under a given stress scenario. The testing strategy should closely mirror the real-life environment to get the best outcome.

It is best to include think time in testing: the time a typical user takes to read the information on screen before acting. Users typically pause when they switch from one section to another, or when they decide how to proceed with a purchase. This lag often occurs, for example, while a customer verifies card details or a delivery address.

While creating test scripts, this think time can be fixed between two consecutive requests, or randomized between minimum and maximum values.
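A minimal sketch of the randomized variant follows. The bounds and the commented request paths are illustrative assumptions, not values from any standard:

```python
import random


def think_time(min_s: float = 1.0, max_s: float = 5.0) -> float:
    """Return a randomized pause between two consecutive requests,
    mimicking a user reading the page before acting.
    The 1-5 second default bounds are illustrative, not a standard."""
    return random.uniform(min_s, max_s)


# In a test script, sleep for think_time() between requests, e.g.:
#   send_request("/cart")
#   time.sleep(think_time())
#   send_request("/checkout")
```

Passing equal bounds (`think_time(2.0, 2.0)`) gives the fixed-delay variant, so one helper covers both approaches mentioned above.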

Experts believe it is best to test a system component by component. This eliminates the risk of issues suddenly cropping up at the time of testing. In such cases, it is best to learn from earlier experiences or bring experienced testers into the loop who can handle complex test situations. Baseline tests play an important role here: they help detect errors quickly and at the basic level. In fact, 85% of errors are easily identified at the basic level alone.
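At its simplest, a baseline test records a known-good measurement and flags any run that drifts too far from it. The helper and the 10% tolerance below are illustrative assumptions, not part of any tool:

```python
def check_against_baseline(latency_s: float, baseline_s: float,
                           tolerance: float = 0.10) -> bool:
    """Minimal baseline check: accept a measurement only if it does
    not exceed the recorded baseline by more than `tolerance`
    (10% here, an illustrative threshold)."""
    return latency_s <= baseline_s * (1 + tolerance)


# With a recorded baseline of 0.8 s for a login request:
#   check_against_baseline(0.85, 0.8)  ->  True  (within 10%)
#   check_against_baseline(1.00, 0.8)  ->  False (regression)
```

Running such a check per component makes regressions visible at the basic level, before components are combined into end-to-end scenarios.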

Challenges: Selecting the environment and testing tools

Over time, performance testing tools and environments have changed drastically, owing to the complexity of the applications and their development stages. Ensure that your tools and environment answer all these questions in a relevant scenario:

  1. What do I do when I have to script against one environment, but run my tests against another?
  2. What do I do when the IP/URL of the performance test environment is constantly changing?
  3. What do I do when my performance test environment doesn’t match production?
  4. What do I do when I’m asked to run a “quick test” against a different environment altogether?
  5. What do I do when I need to simulate different users going to different URIs, to simulate diversity in geographic location?
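Questions 1, 2, and 4 share one answer: never hard-code the target environment into the scripts. A minimal sketch, assuming hypothetical environment names and base URLs (in practice these would live in a config file next to the test scripts):

```python
import os
from urllib.parse import urljoin

# Hypothetical per-environment base URLs for illustration.
ENVIRONMENTS = {
    "perf": "https://perf.example.com/",
    "staging": "https://staging.example.com/",
    "prod": "https://www.example.com/",
}


def target_url(path: str) -> str:
    """Resolve a request path against whichever environment the
    TEST_ENV variable selects, so scripts never hard-code an IP/URL."""
    env = os.environ.get("TEST_ENV", "perf")
    return urljoin(ENVIRONMENTS[env], path)
```

With this in place, a "quick test against a different environment" becomes `TEST_ENV=staging` on the command line rather than an edit to every script, and a changing IP or URL is a one-line config change.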

Clients usually do not have dedicated environments for performance testing, which is a big challenge, as tests should be done in realistic environments. While some clients cite budgetary constraints, others claim a lack of resources to conduct tests in real environments. Thus, testers end up with minimal or none of the hardware required for performance testing.

Some of the challenges that testers must overcome while selecting the perfect performance testing tools include:

  • Budget & Licensing costs
  • Protocols
  • Hardware requirements
  • Technology and Platform
  • Browser and OS compatibility
  • Tool training and support
  • Result-generation options

Offering complete test coverage, spanning all the functionalities of an application, is a big hurdle for all performance testers. At best, all the scenarios of the key functionalities to be automated are identified, to ensure that most of the test cases are covered.

Adding to the challenge is the tester's ability to develop a test system that meets expectations. A system has two kinds of requirements: functional and non-functional. A performance tester should know where the system stands in terms of both.

Analyzing the performance test results is another challenge since it needs a keen eye for detail and a lot of practical experience on the part of the tester.

In an ideal situation, the performance testing environment should be sized with the same capacity as production, to avoid the risks involved in interpreting the system's performance characteristics for the production environment. This lets testers focus primarily on the performance and scalability analysis of the system, rather than on the environment in which the test is being conducted.

However, there are situations when clients cannot provide a comparable test environment. In such cases, there are other alternatives for carrying out the performance tests, but they come with added risks, and the tester has the added responsibility of informing the client about the risk factors. A few such alternatives are:

Use of the production environment: Here, clients are convinced that it is better to test in the real environment than in a scaled-down one, since the risk involved in the latter is higher than that of conducting performance tests in production with proper planning and control.

Use of a cloud-based, production-like environment: This alternative is preferred when the need to run performance tests on a production-like environment is very high and it is not feasible to set up a dedicated performance test environment in the client's on-premise data center. Though it addresses the major risks, letting teams set up the environment quickly and map test results to production, certain industries avoid this method for security reasons.

Use of a scaled-down environment: Although this is the most commonly chosen alternative, clients are rarely aware of the risks involved. Clients often assume production is simply bigger hardware and fail to appreciate the real differences between the TEST and PROD environments.