A few years ago someone explained the pesticide paradox of test automation to me. It was a term I had not heard before, and the explanation went something like this:
Imagine a farmer whose fields are attacked by locusts. The damage is huge and the entire crop is lost. The loss is so severe that his family struggles to survive for the year, and he knows that next year he must take precautions. He saves what little money he has and spends it on a pesticide to protect his crops against locusts. Sure enough the locusts attack, but they are systematically killed and most of the crop is saved. The farmer is delighted, but his plan has a flaw he does not see: one percent of the locusts survived the spraying because they carried a natural immunity to the pesticide. Each year the farmer sprays his crops and kills locusts, but each year more of the locusts are born with that immunity. Eventually he realizes he must change his pesticide or risk losing the crop again.
So what is the moral of the story and how does it relate to automation?
Automation is most often applied to tests that will be repeated, and its most natural home is the regression pack, which is akin to the farmer’s pesticide. Because automated scripts run far faster than manual ones, the savings are substantial and regression is an obvious place to invest. In a well-organized test function, maintenance should be a natural step: as the application changes, the automated scripts are modified and kept up to date. A second natural step is to automate new scripts to widen regression coverage and to cover new functionality. So we cover what is known, what has changed and what is new. But where an area or function is not altered, the automated scripts stay the same. Over time, bugs can develop in the areas the automated regression pack does not cover. If the automation pack is our pesticide, then relying on the same pesticide becomes less and less effective at detecting defects: the code is deemed to have passed testing while the bugs go undetected and are released into production, gradually accumulating until the functionality is corrupted and production issues result.
As test professionals, we must not rely on the same scripts in our regression pack indefinitely. The overhead of creating new scripts makes this especially relevant to automation, but it applies equally to manual scripts. We can avoid complacency by growing the regression pack and alternating which scripts make up each regression run, but more importantly we should take feeds from production, understand the issues that arise there, and extend the regression coverage to pick up those areas as well. There is also no harm in thinking outside the box: adding new scripts, asking a peer team to contribute, or introducing more negative testing, changing parameters or data to elicit different responses, as in the sketch below.
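To make the last two ideas concrete, here is a minimal sketch in a pytest-style Python suite. It assumes nothing about any particular product: the script names, the rotation scheme and the is_valid_quantity() check are hypothetical stand-ins for your own regression scripts and system under test.

```python
# A minimal sketch of two ways to keep a regression pack from going stale:
# rotating which scripts run each cycle, and varying the data fed to a test.
# All names here are hypothetical stand-ins, not a definitive implementation.

import itertools
import pytest


def rotate_regression_pack(all_scripts, core_scripts, cycle_number, extra_per_cycle=2):
    """Return the scripts to run this cycle: the core pack plus a rotating
    slice of the wider pool, so every script gets exercised eventually."""
    pool = [s for s in all_scripts if s not in core_scripts]
    if not pool:
        return list(core_scripts)
    start = (cycle_number * extra_per_cycle) % len(pool)
    # Walk the pool as a ring so the slice wraps around cleanly each cycle.
    rotating = list(itertools.islice(itertools.cycle(pool), start, start + extra_per_cycle))
    return list(core_scripts) + rotating


# Varying data to elicit different responses: the boundary and negative
# values below are illustrative, not a complete data set.
@pytest.mark.parametrize("quantity, expected_valid", [
    (1, True),       # typical happy path
    (0, False),      # boundary: nothing ordered
    (-5, False),     # negative test: invalid input
    (10_000, True),  # large but plausible order
])
def test_order_quantity_validation(quantity, expected_valid):
    def is_valid_quantity(q):  # stand-in for the real validation under test
        return q > 0
    assert is_valid_quantity(quantity) == expected_valid
```

Called with an incrementing cycle number, rotate_regression_pack() gives each regression run the core scripts plus a different slice of the wider pool, while the parametrized test feeds different data into the same functionality, so the "pesticide" is never quite the same two cycles running.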
Locusts are bugs! Don’t let them corrupt your output! Keep things new and alter your approach.