When we discuss test analysis, we do so in reference to the test basis: that collection of documents, standards, and attitudes that we use to guide our testing. Unfortunately, for some testers the test basis seems to begin and end with the requirements as captured by a business analyst or a requirements analyst. Many testers seem reluctant to push back on the requirements, to look beyond what is written down. This is understandable: many project managers are even more reluctant to extend project scope, many developers are wary of what they see as tester-driven development, and we are not all used to having our opinions listened to and respected.
There are many places to look in order to flesh out the requirements and expose the implicit ones; this article looks at these additional sources of testing information. At the top of the list of considerations when it comes to testing for quality is the user group. There are two things you really need to understand: how many users there will be, and how varied they are.
If the SUT is to be used by 100,000 users, it will be stressed far more than if the user base consists of just 10,000 people. The more users there are, the greater the chance of a mistake: of pasting into an EditNum box only to find you had a sound file on the clipboard, or of hitting [Esc] because the printing did not start quickly enough. More users mean more trouble. More users should require us to be ever more creative in designing tests and ever more persuasive in presenting the defects to the team.
When you have a large user base, it makes a lot of sense to test for the unexpected. No matter what the designers argue, sooner or later a large user group will make every mistake it is possible to make, so you need to be sure the system will survive. Consider Robot Monkey Testing at this point.
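To make the idea concrete, here is a minimal monkey-testing sketch in Python. Everything in it is a stand-in: `system_under_test` represents whatever entry point your SUT exposes, and the input mix is only a starting point for the junk a real user base will supply.

```python
import random
import string

def system_under_test(field: str, value: str) -> None:
    # Hypothetical handler: a numeric field should reject junk gracefully.
    if field == "quantity":
        int(value)  # raises ValueError on non-numeric input

def random_value(rng: random.Random) -> str:
    # Mix plausible input with clipboard-style junk: control characters,
    # pathologically long strings, and mixed-direction text.
    choices = [
        "".join(rng.choices(string.digits, k=rng.randint(1, 10))),
        "".join(rng.choices(string.printable, k=rng.randint(0, 50))),
        "\x1b",          # a stray [Esc]
        "0" * 10_000,    # far longer than any designer expected
        "שלום 123",      # Hebrew and digits on the same line
    ]
    return rng.choice(choices)

def monkey_test(iterations: int = 10_000, seed: int = 42) -> None:
    rng = random.Random(seed)  # a fixed seed makes failures reproducible
    for i in range(iterations):
        value = random_value(rng)
        try:
            system_under_test("quantity", value)
        except ValueError:
            pass  # expected rejection of bad input
        except Exception as exc:
            # Anything else is the kind of crash a large user base *will* find.
            print(f"iteration {i}: input {value!r} caused {exc!r}")
            raise

if __name__ == "__main__":
    monkey_test()
```

A fixed seed matters here: when a monkey run only crashes once in ten thousand events, you need to be able to replay it.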
If all the users are pretty much the same, all trained in the same way, selected (or self-selected) from the same social group, then they will use the delivered system in pretty much the same way. If, however, the user group is diverse, you can expect them to interact with the system in a wide variety of ways. Think of NHS time-recording systems, which will be used both by cleaners, often with English as a second language, and by surgeons, often busy and frequently armed with a sense of empowerment that means they want tools to work their way.
A diverse user base makes it more likely that all of the features of the SUT will be used, that there will be a huge number of use cases describing end-to-end behavior, and that the SUT will be made to work as part of a process alongside other systems. Your system may well produce reports, but can you edit those reports in Excel? What about Lotus 1-2-3? What does Crystal Reports make of it? Can others import it into rival products? More on this in a minute.
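As a cheap proxy for that kind of interoperability question, you can at least check that your output round-trips through a common interchange format. The sketch below assumes a hypothetical `generate_report` function and uses Python's Excel CSV dialect; it proves nothing about Excel itself, but it catches the quoting and field-count defects that break imports.

```python
import csv
import io

def generate_report() -> str:
    # Hypothetical SUT output, deliberately including the awkward cases:
    # embedded commas, embedded quotes, and non-ASCII names.
    rows = [
        ["customer", "region", "total"],
        ['Smith, John', "Köln", "1,234.56"],
        ['O"Brien', "Düsseldorf", "99.00"],
    ]
    buffer = io.StringIO()
    csv.writer(buffer, dialect="excel").writerows(rows)
    return buffer.getvalue()

def test_report_round_trips() -> None:
    text = generate_report()
    rows = list(csv.reader(io.StringIO(text), dialect="excel"))
    assert rows[0] == ["customer", "region", "total"]
    assert all(len(row) == 3 for row in rows), "field count drifted"

if __name__ == "__main__":
    test_report_round_trips()
    print("report survives the Excel round trip")
```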
When a product is aimed at an international market, the need to understand diversity becomes paramount. Beyond the obvious things like character sets, Unicode, and right-to-left languages, people from different cultures work quite differently and use their tools differently; they may even have very different ideas about standards. Anyone who has been involved in selling systems into the German market will confirm this, as Germans challenge the Microsoft dominance at every turn. If the SUT is to be used internationally, then you had better understand date formats, decimal separators, number formats, and how the cursor keys should work when you mix Hebrew and English on the same line. When you are working in international markets, you will need to consider the most popular operating systems, report generators, word processors, and graphical viewers in all your target countries. You may need to look into standards for dates, shutter speeds, electrical connections, and documentation; German law takes a dim view of unsubstantiated product claims, something which can cause consternation in a British marketing department.
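To turn that into tests, the same raw string has to be checked against each target market's interpretation. The sketch below is illustrative only: `parse_date` and `parse_decimal` are hypothetical stand-ins for the SUT's real parsing, and the format table is an assumption about three markets.

```python
from datetime import datetime

DATE_FORMATS = {
    "en_GB": "%d/%m/%Y",   # day/month/year
    "en_US": "%m/%d/%Y",   # month/day/year
    "de_DE": "%d.%m.%Y",   # day.month.year
}

DATE_CASES = [
    ("en_GB", "03/04/2025", datetime(2025, 4, 3)),
    ("en_US", "03/04/2025", datetime(2025, 3, 4)),   # same string, different date
    ("de_DE", "03.04.2025", datetime(2025, 4, 3)),
]

def parse_date(locale: str, raw: str) -> datetime:
    # Stand-in for the SUT's parser; swap in the real call.
    return datetime.strptime(raw, DATE_FORMATS[locale])

DECIMAL_CASES = [
    ("en_GB", "1,234.56", 1234.56),  # comma groups digits, point separates decimals
    ("de_DE", "1.234,56", 1234.56),  # the conventions are reversed
]

def parse_decimal(locale: str, raw: str) -> float:
    # Stand-in: normalise German-style numbers before converting.
    if locale == "de_DE":
        raw = raw.replace(".", "").replace(",", ".")
    else:
        raw = raw.replace(",", "")
    return float(raw)

if __name__ == "__main__":
    for locale, raw, expected in DATE_CASES:
        assert parse_date(locale, raw) == expected, (locale, raw)
    for locale, raw, expected in DECIMAL_CASES:
        assert parse_decimal(locale, raw) == expected, (locale, raw)
    print("all locale cases pass")
```

The payoff is the second date case: "03/04/2025" is a perfectly valid input in both Britain and America, and it means a different day in each.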
If the test team is demographically different from the main user group, this can also cause problems. Young testers may need to cross a cultural divide to understand the needs of older users; texting, instant messaging, and tweeting are not used uniformly across generations.
Appreciating that these differences exist is half the battle; the next step is to account for them in all of your planning. Try this example: if you were to work on a cheap Freeview set-top box to be sold through the Co-op, what would be the likely age profile of the users? What if it was sold through Marks & Spencer? If it turns out that the average user is likely to be 65 years old, what differences will that make to your testing?
Users may be hostile or cooperative; they may have actively sought out your product and be “raving fans,” or they may despise everything you stand for and be using the system under sufferance. Your testing had better understand which they are. There are many reasons why an individual user might be hostile to your product, but we are more concerned with whole communities.
Leaving aside the Microsoft effect – where users resent the success of the software giant and whatever it took the company to get so big – the main reasons for hostility are these.
First, the users did not choose the system. This is often the case with bespoke systems or large enterprise systems, where the people who pay for the system are not the people who use it. Your system may be replacing a loved, or at least familiar, old system, and no matter how good it is it will always suffer from adverse comparisons. If the end users were not consulted, it may well be that the requirements do not capture their needs. Unfortunately, it is often too late to correct these problems during testing, and the best you can hope for is to document your concerns and make sure you are around for version 2.
Second, some users want the system to fail. I was once told of a problem Sun Microsystems had spent months tracking down at one of their installations, where servers failed for no good reason. A bit of skullduggery and a hidden video camera revealed the cause to be the systems operator, who would pull a plug and force a restart when he wanted to go home early. In the past, when we worked on systems used by customer service representatives (CSRs), we were briefed to consider that some CSRs want the system to fail so that they get a break from talking to irate customers. If a flaw causes such a failure, some CSRs will exploit it deliberately and repeatedly.
Third, some users are actively hostile. If you work for HMRC or Microsoft or GCHQ, expect your website to be attacked – not just misused, but actively assaulted, either with a view to causing harm or in the hope of gaining advantage. Financial systems will be targeted by would-be embezzlers, licensing systems will be hacked by thieves, and digital rights management will be undermined by pirates. It’s a tough world; your systems will need to be tough too.
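Toughness can be tested the same way any other quality can. Here is a minimal abuse-case sketch; `validate_username` and the payload list are hypothetical, and a real assessment would use far larger corpora and proper security tooling, but the shape of the test is the point: feed the system what an attacker would, and assert it is rejected.

```python
# Classic hostile inputs; a real suite would draw on published payload lists.
HOSTILE_INPUTS = [
    "admin'--",                          # SQL injection attempt
    "<script>alert(1)</script>",         # stored XSS attempt
    "../../etc/passwd",                  # path traversal attempt
    "A" * 1_000_000,                     # resource-exhaustion attempt
]

def validate_username(raw: str) -> bool:
    # Stand-in for the SUT's validation: short alphanumeric names only.
    return raw.isalnum() and len(raw) <= 32

def test_hostile_inputs_are_rejected() -> None:
    for payload in HOSTILE_INPUTS:
        assert not validate_username(payload), f"accepted: {payload[:40]!r}"

if __name__ == "__main__":
    test_hostile_inputs_are_rejected()
    print("all hostile inputs rejected")
```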
Wouldn’t you expect all this to be in the requirements? You would expect the business analyst to have collected, refined, reviewed, and presented all the users’ expectations and needs. If so, maybe you are also looking forward to the Easter Bunny.
Analysts are human too. They may not have much time on the project, or they may not be familiar with the problem domain; there are many reasons why the requirements may be incomplete or incorrect. Indeed, it has been estimated that imperfect requirements are responsible for 75–80% of all project rework costs (Leffingwell, Dean. 1997. “Calculating the Return on Investment from More Effective Requirements Management.” American Programmer 10(4): 13–16), so it is quite reasonable for testers to want to go further.
Establishing an improved understanding might involve site visits, attending user group meetings, reading the right books, shadowing users and, hardest of all, keeping an open mind.
The things to watch out for are those sketched above: the size of the user base, its diversity, its possible hostility, and the gaps in the written requirements.
Requirements will only take you so far. If you want to be sure your carefully crafted new computer system doesn’t fail the real user acceptance test, do what you can to understand the users and appreciate the diversity in the user community.