How did I miss that bug? As a tester, how many times have you asked yourself that question, or had your test lead or test manager ask it of you? How many bugs have you missed that were clearly easy to spot? As a software test lead, I have often asked myself and my teams to consider that question.

There is always a cost when a bug escapes into production; costs range from negative publicity and lost sales and customers to, possibly, loss of life. So, in many organizations, each time a bug, however tiny and insignificant, crawls into production, mayhem of monumental proportions ensues. We engage in root cause analysis, and sometimes the focus on finding out why it happened takes priority over the fix.

How or Why?

As I thought about missing bugs, I realized that the “how” is far more important than the “why”.  To understand how we miss bugs, we must critically examine what software testing is and how we test.  In its purest form, software testing is making judgements about the quality of the application under test.  It involves both objective comparisons of code to requirements and subjective assessments of functionality, fitness for purpose and usability.  So then, a missed bug is simply an error in judgement!  To miss fewer bugs, we need to understand how we think.

Daniel Kahneman and Amos Tversky developed the idea of cognitive bias from research showing that people often fail to think critically in complicated situations (Tversky & Kahneman, 1974). People tend to use heuristics, or rules of thumb, to make decisions when the subject matter is complicated or when time is limited. Heuristics usually focus on one aspect of a problem while ignoring others. Biases that especially impact testers include representativeness bias, the curse of knowledge, congruence bias, confirmation bias, the planning fallacy, and anchoring.

But what is going on when we miss the obvious bugs, the ones that are literally staring us in the face? This, too, is attributable to a bias: inattentional blindness. Christopher Chabris and Daniel Simons demonstrated it in their famous invisible gorilla test. As testers, we become so focused on executing our test cases that we sometimes fail to see the obvious bug.

Thought Process

How do we use the concepts of bias and preconceived notions to find more bugs? We must manage our thought processes not only as testers, but as test managers and as a professional community. Rather than allowing our biases to hamper our testing and cause us to miss bugs, we should plan and execute our testing with the explicit recognition that we hold these biases and preconceived notions. We should add additional time to our estimates to counter the planning fallacy, and, to counter the congruence and confirmation biases, we should include negative test scenarios in our test planning, as in the sketch below.
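As a concrete illustration, here is a minimal sketch in Python with pytest. The function validate_age and its rules are hypothetical, invented for this example; the point is that alongside the obvious "happy path" check, the negative scenarios assert that invalid input is rejected rather than silently accepted.

```python
import pytest


def validate_age(value):
    # Hypothetical function under test: accepts integer ages 0-130,
    # rejects everything else.
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("age must be an integer")
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value


def test_valid_age():
    # The "happy path" that congruence bias nudges us to stop at.
    assert validate_age(42) == 42


@pytest.mark.parametrize("bad_value", [-1, 131, "42", None, 3.5, True])
def test_invalid_age_is_rejected(bad_value):
    # Negative scenarios: confirm the code fails loudly on bad input
    # instead of quietly accepting it.
    with pytest.raises((TypeError, ValueError)):
        validate_age(bad_value)
```

A tester leaning on confirmation bias would write only the first test; the parametrized negative cases are where the missed bugs tend to hide.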

Exploratory Testing

In addition to executing our test cases, we should perform exploratory testing. Exploring the application before running scripted test cases is especially useful because we have not yet made any assessments of the product's quality, and so have not yet developed biases about it.

But how do we make sure we see the "Invisible Gorilla"? As testers, we need to approach our testing holistically: in addition to tracing our test cases to requirements and executing every one of them, we need to focus our attention on determining whether this is a quality product. As test managers, we should empower our testers to use their intuition and to test beyond the obvious.

Most importantly, as a professional community, we should encourage test frameworks in which scripted testing is balanced with exploratory testing, and foster a focus on providing valuable information about the quality of the applications under test.