Exploratory Automated Testing

By: Ayal Zylberman and Nitzan Shenar

What is Exploratory Testing?

The most common definition of exploratory testing was provided by Cem Kaner and James Bach:

Exploratory testing is simultaneous learning, test design, and test execution
(Exploratory Testing Explained by James Bach, v.1.3)

Exploratory testing is distinguished from scripted testing, the traditional way of performing tests, in which the test process is divided into several activities such as study, test documentation and test execution that have to be performed sequentially (for example, test execution cannot start until all test documentation is ready).

Many people confuse exploratory testing with Ad-hoc testing. Ad-hoc testing is testing performed randomly, without following a structured test plan or scripts. Usually, organizations perform Ad-hoc testing because time and budget constraints prevent them from meeting the requirements of the scripted testing process, so shortcuts are taken. Completely unplanned random actions on an application are not exploratory testing but simply bad testing (Exploratory Testing by Peter Marshall).

Session Based Testing

The test session is the most common way of implementing exploratory testing. In session based testing, testing occurs in special time-boxes called test sessions. A test session is a period of uninterrupted time where exploration occurs, usually 60-120 minutes long. Each test session covers a specific portion of what needs to be tested: a business process of the AUT, a screen, a DB table or any other component that needs to be addressed as part of testing.
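
The structure of a test session described above can be sketched as a small data record. This is an illustrative sketch only; the field names are assumptions, not part of any specific session-based testing tool:

```python
from dataclasses import dataclass, field

# Sketch of a session-based testing record: a timeboxed charter naming the
# portion of the AUT to explore, plus the notes and bugs produced on the way.
@dataclass
class TestSession:
    charter: str            # portion of the AUT to explore, e.g. a screen or flow
    timebox_minutes: int    # typically 60-120 minutes of uninterrupted testing
    notes: list = field(default_factory=list)
    bugs: list = field(default_factory=list)

    def log(self, note):
        self.notes.append(note)

    def report_bug(self, summary):
        self.bugs.append(summary)

session = TestSession(charter="Checkout screen of the AUT", timebox_minutes=90)
session.log("Discount field accepts negative values")
session.report_bug("Negative discount increases order total")
```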

Pair Exploratory Testing

A common way of implementing pair exploratory testing is pair testing, in which exploratory testing is done by two testers sharing (typically) one machine. In one arrangement, one tester sits at the keyboard and explains the test ideas and hypotheses that emerge, while the other sits alongside, taking notes and suggesting additional ideas along the way. In another, a tester instructs the developer and takes notes.

Main problems and risks with Exploratory Testing

As a testing service provider, I found it hard to convince many of my clients to use exploratory testing; I faced a lot of resistance to the concept. Some of the arguments reflected an inherent resistance to change, at least among conservative people who have performed testing in a certain way for many years. However, I found some of the arguments reasonable, and I therefore addressed them when planning the test strategy proposed to those organizations. Although I am not sure the risks below should be considered when laying out an appropriate test strategy for an organization, I included them in this article because organizational and political constraints need to be addressed as part of the testing activity (I once defined organizational politics as any activity performed by an organization that does not comply with its own interests):

  • Coverage review – deep auditing and review of the test coverage (at least prior to test execution) is missing in exploratory testing, which increases the chance that parts of the AUT will not be tested.
  • Visibility and transparency – many organizations use the test documentation as a way to achieve visibility and transparency prior to test execution. Metrics (such as the number of test cases written each day) are used to monitor the performance and progress of the team.
  • Resource and time constraints during test execution – organizations usually tend not to allocate enough time for test execution. Even when appropriate time is allocated, delays in development cause the test execution period to shrink, sometimes by more than 50%. Since exploratory testing requires more time and resources than scripted testing, this might result in low test coverage.

Automating exploratory testing, as described later in this article, will help organizations mitigate the above risks.

What is Test Automation?

Test automation is a process in which a tool is used to support a certain test activity. The most common test automation tools are GUI test automation tools such as QTP, TestPartner, Robot, TestComplete and many more. These tools are mainly used to automate the test execution activity: the organization creates test scripts that perform actions on the GUI of the AUT, so that those scripts can later be executed in each test execution iteration instead of by the testers.

Keyword Driven Testing (KDT)

Keyword Driven Testing is a comprehensive, cross-organization test design approach that bridges the gap between the test automation team and the other roles involved in the testing process. It allows manual testers and other subject-matter experts to design, build and execute test automation scripts without any programming knowledge. The KDT approach separates the test automation infrastructure layer from the logical layer. The infrastructure layer contains a set of scripts that are triggered by calls made from the logical layer (i.e. keywords). The logical layer, developed by the manual testers, is a set of keywords that formalizes an automated test script.
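
The two-layer split can be sketched in a few lines. The keyword and function names below are hypothetical, assumed purely for illustration; they do not belong to any particular KDT framework:

```python
# Infrastructure layer: scripts written by the automation team.
# In a real tool these would drive the AUT's GUI; here they return strings.
def login(user, password):
    return f"logged in as {user}"

def open_screen(name):
    return f"opened {name}"

def verify_text(expected):
    return f"verified '{expected}'"

# Infrastructure layer exposed as a keyword table.
KEYWORDS = {"Login": login, "OpenScreen": open_screen, "VerifyText": verify_text}

# Logical layer: an automated test script a manual tester could author,
# written purely as (keyword, arguments) rows with no programming involved.
test_script = [
    ("Login", ["alice", "secret"]),
    ("OpenScreen", ["Orders"]),
    ("VerifyText", ["Order saved"]),
]

def run(script):
    """Execute a logical-layer script by dispatching each keyword."""
    return [KEYWORDS[kw](*args) for kw, args in script]

log = run(test_script)
```

The logical layer stays readable to non-programmers, while all GUI-driving complexity lives behind the keyword table.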


What is Exploratory Automated Testing?

Exploratory Automated Testing is a method that integrates test automation within the exploratory testing session, giving testers better bug reproduction, regression test execution and evidence gathering. In addition, Exploratory Automated Testing is a cheaper way to enjoy the benefits of automation.

I recently exercised this approach with two major clients. In the first case, no automation existed in the project, so we used a third-party tool that records all of the tester's actions during test execution; the log file can then be analyzed afterwards and even reused for future regression tests. This method is called Passive Exploratory Automated Testing (Passive EAT).

In the second case, a KDT approach was used: we used the KDT infrastructure to automate the test execution. This method is called Active Exploratory Automated Testing (Active EAT).

Passive Exploratory Automated Testing (Passive EAT)

Passive exploratory test automation is a method in which dedicated tools are installed on the testers' PCs to record each of the tester's actions performed during testing. This method can be applied in pair exploratory testing or by a single tester. The tests are executed in the same way as in manual exploratory testing. The only change is that at the end of the test session, an additional task is scheduled: designing the test results based on the recorded session, both for reporting purposes and to make the recorded session reusable.
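
The recording idea can be sketched as follows. This is a hypothetical illustration of the concept, not the API of any actual recording tool: every action is appended to a log that can later be re-designed into a reusable script or analyzed after the session.

```python
import json
import os
import tempfile

class ActionRecorder:
    """Appends each tester action to an in-memory log and saves it to disk."""

    def __init__(self):
        self.actions = []

    def record(self, action, target, value=None):
        self.actions.append({"action": action, "target": target, "value": value})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.actions, f, indent=2)

# During the session, each action the tester performs is recorded.
rec = ActionRecorder()
rec.record("click", "LoginButton")
rec.record("type", "UserField", "alice")

# After the session, the log is saved for re-design and future regression use.
path = os.path.join(tempfile.gettempdir(), "session_log.json")
rec.save(path)
with open(path) as f:
    replayable = json.load(f)
```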

To complete the passive automated exploratory approach, a video capturing tool is also used during the session, enabling the tester to save more time in the test documentation and bug reporting stage. Reporting bugs via video (possibly with supporting commentary) has several benefits:

  • A simple and clear step-by-step view of the scenario until the bug appears
  • Eliminates some of the irreproducible-bug debates between the developers and the testers
  • Enables old bugs to stay valid despite staff changeover during the project
  • Replaces the need for endless descriptions of system state
  • Captures the exact response time of the system at a specific moment – a great way to get performance data from a low-cost testing tool

By reducing the time and effort dedicated to reporting a bug, exploratory testing becomes a fully automated and powerful way of running tests, documenting them and reporting bugs.

The following figure shows the recorded automated test script.

Figure 1 – Passive EAT script

When implementing Passive Exploratory Automated Testing, consideration and effort should be given to the preparation of the tests. The following activities should take place prior to test execution:

  • It is highly recommended to perform a pilot prior to the full implementation of the tool. The pilot should include execution of a typical test session with the tool, followed by re-design of the test logs. The main purpose of the pilot is to measure the time needed for the re-design. If the re-design takes more time than the test execution itself, a decision has to be made: improve the tool's performance, change the tool, or even consider not using test automation for the specific project.
  • The test tool should be installed on each of the test machines. The developers should also have visibility into the test tool. This can be achieved either by installing the tool in the development environment or by giving the developers access to the test machines using tools like VNC or Remote Desktop Connection.
  • All GUI objects of the AUT should be organized in the test tool, so that when recording starts, all objects have meaningful names that allow the tester or any other domain expert to analyze the recording without having to execute the test.
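
The last point, giving GUI objects meaningful names up front, can be sketched as a simple object map. The identifiers below are invented for illustration; real tools store such maps in their own object repositories:

```python
# Object map prepared before recording starts: raw GUI control identifiers
# are given meaningful, domain-level names (all names here are hypothetical).
OBJECT_MAP = {
    "Edit_0x1A2B": "CustomerNameField",
    "Btn_0x3C4D": "SubmitOrderButton",
}

def readable(raw_log):
    """Translate raw recorded identifiers into meaningful names so the log
    can be reviewed by a domain expert without re-executing the test."""
    return [OBJECT_MAP.get(obj, obj) for obj in raw_log]

log = readable(["Edit_0x1A2B", "Btn_0x3C4D", "Unknown_0x9"])
```

Objects missing from the map pass through unchanged, which makes gaps in the preparation visible during log review.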


Active Exploratory Automated Testing (Active EAT)

Active Exploratory Automated Testing integrates the KDT approach with the session execution. Active EAT is recommended for pair testing: usually, the first tester is responsible for creating the automated test script while the second tester is responsible for the execution. The creation of automated test scripts differs from conventional testing. In EAT, the automated test scripts are created during the testing process, and their design is an outcome of what has been learned in the tests performed previously.

The following figure shows an example of an automated test script created using Active EAT:


The first activity of the test execution is to execute all existing regression automated test scripts (if this session has already been executed in the past).
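
This opening step can be sketched as follows. The session names, script format and placeholder runner are all assumptions made for illustration, not a specific tool's behavior:

```python
# Scripts attached to each session by earlier EAT runs (illustrative data).
regression_scripts = {
    "orders-screen": [
        ["Login", "OpenOrders", "VerifyTotal"],
        ["Login", "OpenOrders", "CancelOrder"],
    ],
}

def run_script(steps):
    # Placeholder execution: a real implementation would drive the AUT
    # through the KDT infrastructure layer; here we only count the steps.
    return {"steps": len(steps), "passed": True}

def start_session(session_id):
    """Before new exploration begins, replay every regression script
    already attached to this session."""
    return [run_script(s) for s in regression_scripts.get(session_id, [])]

results = start_session("orders-screen")
```

A session that has never been executed before simply has no attached scripts, so `start_session` returns an empty list and exploration starts immediately.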

At the end of each session, a closure phase should take place with the following tasks:

  • The first and second testers should switch roles, so that the first tester (who created the automated test scripts) re-executes all the tests to ensure robustness and reliability.
  • A decision on which of the automated test scripts should be used for regression tests. The main criteria are:
    • Coverage – does the automated test script simulate a situation that is generic to the application's behavior?
    • Uniqueness – does the automated test script simulate a situation that is less likely to be performed by an exploratory tester during the next test sessions?
    • Existence – does the same or a similar automated test script already exist in the automated test repository?
  • For each of the automated test scripts, a high-level description should be written (usually with the same syntax used to define the test log in a manual test script), together with the other characteristics used to define the test.
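
The regression-selection decision above can be sketched by turning the three criteria into simple predicates. The field names are assumptions chosen for illustration:

```python
def keep_for_regression(script, repository):
    """Apply the closure-phase criteria: keep a script for regression only if
    it covers a generic flow, is unlikely to be re-exercised by an exploratory
    tester, and is not already in the repository."""
    covers_generic_flow = script.get("generic", False)   # Coverage
    unlikely_to_recur = script.get("unique", False)      # Uniqueness
    already_stored = script["name"] in repository        # Existence
    return covers_generic_flow and unlikely_to_recur and not already_stored

repository = {"login-happy-path"}  # scripts already in the automated test repo
candidates = [
    {"name": "login-happy-path", "generic": True, "unique": True},
    {"name": "bulk-order-edge", "generic": True, "unique": True},
    {"name": "random-clicks", "generic": False, "unique": True},
]
selected = [s["name"] for s in candidates if keep_for_regression(s, repository)]
```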

Configuration Management of the Test Automation

During EAT, many automated scripts are produced. It is therefore highly important to create a configuration management platform that stores all scripts for later use in analysis, reproduction and regression tests.

Each automated test script should be attached to a specific test session, so that the next time the session is executed, the relevant automated test scripts are attached to it, and so that they are available when analysis and investigation are needed.
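
A minimal sketch of this session-to-scripts mapping, with invented session and script names, could look like this:

```python
from collections import defaultdict

class ScriptRepository:
    """Configuration management sketch: each automated test script is
    attached to the session that produced it, so re-running the session
    can pull its scripts for regression, reproduction or analysis."""

    def __init__(self):
        self._by_session = defaultdict(list)

    def attach(self, session_id, script_name):
        self._by_session[session_id].append(script_name)

    def scripts_for(self, session_id):
        return list(self._by_session[session_id])

repo = ScriptRepository()
repo.attach("checkout-2024-05", "negative-discount.kdt")
repo.attach("checkout-2024-05", "coupon-stacking.kdt")
scripts = repo.scripts_for("checkout-2024-05")
```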

Main benefits of EAT

The main benefits of EAT compared to manual exploratory testing are:

  • Test analysis – EAT allows better analysis of the tests.
  • Reproducing bugs – since in ET tests are not planned in advance, in many cases bugs are found but the tester is unable to reproduce the activities that led to the bug. Because the test steps are recorded in the KDT scripts, EAT enables better bug reproduction.
  • Enhanced test coverage – since the existing automated test scripts are executed at the beginning of each session, test coverage is increased.
  • Time savings – when using Passive EAT, tasks formerly performed by two testers can be performed by one tester, although the session duration may increase dramatically.
  • Assurance for the stakeholders – using EAT assures the stakeholders that all activities are documented at a level no lower than that used for scripted testing. For one of my clients, using EAT was the only way to convince them to use ET.