The Testing Show: In Search of Requirements

Cassandra Leung and Pete Walen join us today in a discussion about Requirements. What are they? Do we get enough of them? Do we understand the ones we do get? Can we make them better, and if so, how can we help that process? If you’ve ever struggled with trying to make sense of a story, or fear that programmers are just implementing things for the sake of implementing them, and there’s no rhyme or reason, you may not be alone. It’s possible that you really are dealing with a severe case of Requirements Deficiency. Fortunately, we are here to help, or at the very least, give it a spirited try.

Also, in the news: the Universal Windows Platform and the Software Verification Competition. Yes, apparently, these are both “things”, and we pontificate on both of them.

Panelists:

Michael Larsen
Matthew Heusser
Jessica Ingrasellino
Perze Ababa
Justin Rohrman
Pete Walen (guest)
Cassandra Leung (guest)

Any questions or suggested topics?
Let us know!


Transcript:

MICHAEL LARSEN: Hello, and welcome to The Testing Show. Thank you for joining us today. I’m Michael Larsen, your show producer, and we have a full house this go-around. Let’s give a welcome to our regulars. Jessica Ingrasellino?

JESSICA INGRASELLINO: Hey there!

MICHAEL LARSEN: Perze Ababa?

PERZE ABABA: Hello, everyone!

MICHAEL LARSEN: Justin Rohrman?

JUSTIN ROHRMAN: Good morning.

MICHAEL LARSEN: Matt Heusser?

MATTHEW HEUSSER: Welcome back.

MICHAEL LARSEN: And we have two guests today. Let’s welcome Pete Walen.

PETE WALEN: Hello.

MICHAEL LARSEN: And also, please welcome Cassandra Leung.

CASSANDRA LEUNG: Yay! Welcome, thank you [laughter].

MICHAEL LARSEN: And Matt, it’s your show. I’m turning it over to you.

MATTHEW HEUSSER: Thanks Michael. So, we know most of the regulars, but you might not know Cassandra, who I’ve met recently through Twitter, and Pete… well, let’s just say we go way back. Cassandra, I know you’re a quality specialist in the UK with an interest in usability. Tell us a little bit about how you think about testing and how you work.

CASSANDRA LEUNG: Yes, so I actually just finished up a role with the official job title of “UX Ninja”, and the way that came about is because there were a lot of different things that I did within my role, but always with a focus on user experience. As we’re recording, I’m about to start a new role in Germany next week. The official job title there is Test Engineer, so not quite so exciting, but I’m definitely still planning to keep that focus on User Experience. One of the things that I always think about is “just because it works doesn’t mean that it’s good”, that anyone is going to want to use it, or that it’s easy to use, etc. For me, User Experience is always a big focus in testing. I think it’s one of those “non-functional requirements”, but for me, you know, if people can’t use it, it’s not really functioning.

MATTHEW HEUSSER: All right. Now, Pete’s at Salesforce. He’s been there a couple of years now. You seem to be having fun.

PETE WALEN: Yeah. Doing loads of API testing, getting more into the underlying code. C#, Java, all kinds of cool stuff. Loads of SQL.

MATTHEW HEUSSER: Pete is a former board member of the Association for Software Testing. We do a lot of conference-y stuff together. He was active in helping keep Grand Rapids Testers alive and was an organizer for several years. You’ve kind of backed off from that. I guess you’re just having too much fun doing testing.

PETE WALEN: Well, and sometimes schedules collide.

MATTHEW HEUSSER: Also, you’re in a pipe band, right? You’re a drummer.

PETE WALEN: Yes. I hit things, yeah, currently I’m playing in a competition band from Windsor, Ontario and I’m also teaching the Muskegon Police Pipes and Drums from Muskegon, Michigan.

MATTHEW HEUSSER: That’s awesome. So, let’s talk about what’s new and interesting in the world of software testing. I’m going to start with Microsoft… has anyone here heard of the Universal Windows Platform? UWP?

JUSTIN ROHRMAN: No.

JESSICA INGRASELLINO: Can’t say that I have.

MATTHEW HEUSSER: I hadn’t either. Microsoft is apparently creating a Universal Windows Platform that is going to cover phones, tablets, laptops, desktops… and holographic lenses?

JESSICA INGRASELLINO: Whaaaat?!

MATTHEW HEUSSER: I didn’t realize that. Yeah [laughter].

PERZE ABABA: Well, they do have this product called the HoloLens, which is a really decent Augmented Reality tool. I visited a vendor/partner with a company at the beginning of this year, and they had a demo in one of their R&D labs that showed… I think it looked like a nuclear pipeline. You wear this thing on your head like an Oculus, and aside from having the ability to zoom in and out of what was outside and inside of the buildings, you can scroll up with your hand to dig deeper and look at what’s happening 200 feet below the ground. It’s really cool. It’s a pretty interesting way to either play games, if you like Minecraft, or just look into architecture. There’s a ton of applications in medicine when they improve that platform. Yeah, it’s a very interesting piece of technology.

MICHAEL LARSEN: So, one of the things that would probably be helpful with this is that Windows 10, at least at this point, determines things based on Classes of Devices, or what they call Device Families. The idea here is that they want to be able to say “you have one Windows platform”, and that Windows platform can be tailored for a number of Device Families. The Universal Device Family consists of desktop, mobile, Xbox, IoT, IoT headless, and HoloLens. That’s the full suite of Device Families that you can play with in the Windows 10 sphere.

MATTHEW HEUSSER: We’ll see how it turns out. Windows 8 was “let’s make one unified Windows Operating System”, and it made your laptop feel like a tablet, and it just didn’t work. Now they’re saying “we’re going to do it again!” I realize it is modularized, compartmentalized, customized, and supposed to be all different, but it’s sharing the same codebase.

MICHAEL LARSEN: It also looks like the Universal Device Family is special. It is not directly the foundation of any OS. Instead, the set of APIs in the Universal Device Family is inherited by the child Device Families. Universal Device Family APIs are thus guaranteed to be present in every OS and on every device, so take that for what it’s worth.

PERZE ABABA: So, what’s interesting to me is the availability of an Issues List that they are going to open up to the public. In the past year or so, I believe, they put out a public issues list for the browser that they are developing, which is Edge. It’s not just that; they actually give you access to the features that they’re developing for that particular technology, like the EdgeHTML Rendering Engine, for example, which is exactly the same way that the folks from WebKit or the folks from Google or Opera have done it: they push new versions every six weeks. It’s a pretty interesting shift from the old model, where you’re just looking at what bugs they are going to fix during “Patch Tuesday”, to seeing what’s happening: whether something is fixed, or whether it is something they decide not to fix. I think it’s good for the public.

JESSICA INGRASELLINO: Yeah, as a person who works on an open product that’s built on top of a closed product: we open source all of our code and we open source our automated tests; everything is open, people can go in, and we have a lot of community members that push issues. But then we have a community manager who is technical enough to understand the product; she knows the documentation, and she steps in to talk to the community and walk them through that stuff. So hopefully they’re doing their due diligence and saying “if we’re going to open up this bug tracker, then we’re going to make sure that we are engaging with people and only pushing actual issues into some kind of a workflow.”

MATTHEW HEUSSER: So, speaking of “complete, consistent, correct and unambiguous”, Perze pointed us to this… software verification competition?

PERZE ABABA: Right! So, apparently this happens yearly, and it’s held in Uppsala, Sweden. The key motivation that they have for this competition is the invention of new methods, technologies and tools. It looks into very specific areas of software development where they can perform verification of whether something within the software can happen or not. They’re actually looking at very distinct verification tasks. They’re looking at, if you have a function or a method, whether there is a way to show that the system would never hit a particular error statement. The other thing that they are looking into is whether they can model atomic execution of a sequence of statements, and whether it is a single-threaded or multi-threaded run. It’s pretty academic for me. A lot of it really goes over my head, but nevertheless, it was pretty interesting. The other thing that I found pretty interesting is that they actually declared what their assumptions are. A lot of these are using Java and C as the codebase for the competition, and they are assuming that the memory allocation command will always return a valid pointer. Based on my personal experience, that is never the case. There is always weird stuff happening there. The other assumption that they have is that the free command always deallocates any memory they want to deallocate, so it’s pretty interesting. Very heavy participation from the universities. The one that won this year, it’s a professor out of Queen Mary University of London.

JUSTIN ROHRMAN: That reminds me a little bit of the DeepSpec event I went to last summer. What they were trying to do there was create proofs in the mathematical sense. They would create these proofs that software does something. A lot of the time, what they were doing was taking a programming language like C, and then taking the specification for C and writing a proof in these very low-level other programming languages to see that the language was doing what the C specification said. So, like you said, it’s actually verification of a specification; it’s not testing. They’re not going around exploring what happens. They’re actually verifying the specification. It was also mostly academic practitioners and then some very low-level libraries. Embedded device companies. There were some people from Intel. There were people from Google working on SSH libraries and things like that. So if there was a very, very, very detailed spec, they could verify that the software was adhering to that specification.

MATTHEW HEUSSER: Yeah, like very straightforward input/output, text-based stuff that can be described in an algorithm.

JUSTIN ROHRMAN: That could be described in something that looks like a geometry proof.

MATTHEW HEUSSER: And then you say “does it correspond to that?”

JUSTIN ROHRMAN: Yep.

MATTHEW HEUSSER: Which, in theory, some dialog boxes could do that.

JUSTIN ROHRMAN: Yeah, yeah, it’s not going to be applicable for a web app. This is for like very low level software like the software that runs airplanes or cars or medical devices. Things like that. Stuff that’s small enough to be embedded on one chip.

MATTHEW HEUSSER: Yep. We have many problems with more complex, human-interface software. One being, there’s no way we’re going to get the spec written as a geometry proof. That’s just not going to happen. Instead, we’re going to get stories, which are a placeholder for a conversation, or use cases, or some companies still do specs, but they look more like a bunch of screenshots: “Make it look like this.” That puts the tester in the position of saying “this is a bug” and someone else saying “no, it’s not”. Both these news stories circle around that, whereas automated specification stuff simply doesn’t know it’s a problem if it exists outside the spec.

JUSTIN ROHRMAN: Right.

MATTHEW HEUSSER: And most of us who work with specs do that anyway.

JUSTIN ROHRMAN: Yeah, the people doing detailed specification work that I’m used to sort of worked their way around that. They were aware that specifications were inherently wrong and incomplete and easy to misinterpret and all of this good stuff, so they would do what they call “fuzz testing”, which is basically blasting an input variable with loads of data constantly, just hoping to magically find bugs.

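[Editor’s note: the “fuzz testing” Justin describes, blasting an input with lots of random data and watching for crashes, can be sketched in a few lines of Python. The target function below is a made-up, deliberately fragile example for illustration; it is not from any real tool or product discussed on the show.]

```python
import random
import string

def naive_parse_age(text):
    """Toy target: parse an age string. Deliberately fragile, purely illustrative."""
    value = int(text)  # raises ValueError on anything non-numeric
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, runs=1000, seed=42):
    """Blast `target` with random printable strings; collect inputs that crash it."""
    rng = random.Random(seed)  # fixed seed so a run is reproducible
    failures = []
    for _ in range(runs):
        length = rng.randint(0, 8)
        candidate = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(candidate)
        except Exception as exc:  # any uncaught exception counts as a finding
            failures.append((candidate, type(exc).__name__))
    return failures

findings = fuzz(naive_parse_age)
print(f"{len(findings)} crashing inputs out of 1000 runs")
```

Real fuzzers (AFL, libFuzzer, property-based tools like Hypothesis) are far smarter, mutating known-good inputs and tracking code coverage rather than firing purely random strings, but the “loads of data, hoping to find bugs” core is the same.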
MICHAEL LARSEN: Then, of course, you also run into the situation, whether or not this is an intended consequence of Agile (it tends to be a consequence of Agile), in which there really is no spec. You kind of get some hand-wavey “well, here’s what we’d like to do based on some output detail, and let’s go”. It’s good for incremental programming, but it can be very lousy for putting together test ideas and test approaches, because sometimes the functionality is so vaguely described that anything you do that runs counter to it can totally throw a project into pandemonium. I’ve had recent experience with that; that’s why I’m saying it [laughter].

MATTHEW HEUSSER: I think Cassie was the one who originally proposed this topic. Tell us about some of your adventures with not-quite-detailed-enough specs.

CASSANDRA LEUNG: Yeah, it’s what Michael was saying about how you don’t really get a lot of specification; the requirements are just kind of, you know, “put it together and see what you get”. Someone else mentioned this too: sometimes it’s even just a screenshot of “we want it to look like this”, without any more detailed information about “well, OK, you’ve got this button here. What is that supposed to do? What’s the intent behind that? How does that relate to the story?” I was kind of thinking, a story is almost always about trying to solve some sort of problem: ideally this is what we would end up with, but what are the pieces? What are the X, Y and Z that led you to get there? I think, from a testing perspective, when you only have the user story to test with, you know the problem you’re trying to solve, but there could be many possible solutions, and it might not be very clear which solution the business was intending to go with, or whether there was a particular solution, or even just a feature of a specific part, that they rejected, that they specifically decided not to do for a particular reason, and you don’t know what the background is. I guess, thinking of algebra in school, you get points for the right answer, but you also get points for showing your working. It’s that showing your working that I think really helps me as a tester: to try and understand the path of how you got there. Someone might see “X is a bug”, but someone else says “oh no, that’s fine”, or “why is it built that way? What benefit does that have to your user story?” So, what I often find myself doing when I don’t have so-called complete requirements (when are requirements ever complete?) is think about “what is the problem we’re trying to solve?”, and as a user, “what would I expect? What would I like?
What is annoying me about what I’ve got in front of me just now?” That comes from experience of your user base. Even as we always say, “you are not your user”, you can try hard to put yourself in their shoes, and I think that takes knowing what your user base is like currently and what the target market is that you’re looking for… maybe it’s not your current user base. Maybe this is a completely different product that you’re trying to work on, to reach a different part of the market that you haven’t tapped into yet. Just think of that: “if I was this person, if I was using this, would that be a solution for me? What’s this thing over here? Am I being distracted?” Or it may even be something like “oh, it was great once I worked out what on earth I was doing, but it was really, really hard to use.” We can ask: who was it that put together whatever it is that you’ve got in front of you, whether it’s a spec requirement, a user story, a screenshot? Who provided that to you? Do you have access to them? Have you got the time to go and speak to them, have a conversation about it? Have you got access to the developer, who maybe has a lot more information that’s not written down? I think there’s a lot that we can do to look outside of what has just been handed to us.

MICHAEL LARSEN: Very often, when I’ll have a conversation about requirements or lack thereof or vagueness about it, I will point out and say “here are some things we can do, here are some things that we can test, here’s the time that we have, now realize, I’m not in a position to give the ultimate decision of what we want to do here, but I want to give you as much information as I can, and then I will let you all decide how we want to work with this.” We’re not here to babysit your product. We’re here to ask questions, get information, and provide you with avenues to consider to make your decision. It’s also up to us to push back and say “here are some questions that I have, and from past experience, I think we have some area here that we need to investigate that we haven’t or we haven’t defined or we don’t even really realize that this is going to be a problem.” Based on other stories that we have dealt with, where we thought it was going to be so simple, a little introduction, a little new feature, and it took us months to test and, ultimately, a story broke down into twenty-four stories, for example, just to be able to get our head around everything. That’s a reality, and it happens to even the best and most diligent companies, so I think it’s important for us as testers to be on the lookout to say “I don’t think this is well defined” and “I’m not trying to be a jerk about this or get a defensive reaction. I want you to realize that, through experience, we’ve been here before and we don’t have to be here.”

CASSANDRA LEUNG: Yeah, I think you’re absolutely right. It’s important for us to provide the decision makers with as much information as is reasonable to make that informed decision, but it’s also important to keep in mind that they are the decision makers. As much as we want to work towards a good quality product, we don’t want to become what is known as the goalkeepers, the safety net. We don’t want people to just think “cobble together whatever and the testers will sort it out”. That’s not what we’re here for. What we are here for is to take a look at what we have in front of us and gather some information, and, if possible, what I think is really important for testers who want to go beyond the activity of testing is to provide some insight from that. What is the consequence of leaving something the way that it is just now, or what will we gain from fixing this particular issue? That’s the sort of information that, I think, decision makers will be able to do something with, as opposed to “I found X number of things in this part of the system”. What does that mean? Give me something more than that.

PETE WALEN: I think that’s a really important concept, and I think a lot of people aren’t as aware of it as they could be. The idea of asking questions to gain information for yourself, but also to help enlighten other people, the stakeholders, whoever may be involved, around “what happens? What’s the implication? What’s the result of doing this?” I think that a lot of stories get written with a great deal of tribal knowledge, where it’s implied but not necessarily explicitly stated. If you’re dealing with stuff that you as a tester may not be that familiar with, you’re going to look at it and go “there’s twenty words here, I’ve got no idea what they mean”, and so you may find yourself at a point where you need to dig and start asking questions and start asking “what if”, as you pointed out. The greater question that we have as testers is “how do we have those conversations?” It’s great to say “yeah, you need to have the conversation”, but I think the challenge for a lot of people, and sometimes myself included, is to say “how do we have the conversation? How do we start it? How do we begin that question?” Because that’s where a lot of people will almost automatically say “well, it’s in the story” or “it’s over here”. Getting beyond those first knee-jerk reactions is sometimes extremely hard for a lot of testers, especially junior testers.

CASSANDRA LEUNG: They are really tough conversations to have, and I don’t want to get too vague on this, but I think it’s one of these areas where your personal relationships really come into play: to be able to know the individual that you are needing to approach. What has the experience been like for them in the past? How do they tend to take these conversations a little bit better? Some people really like it if you just walk up to them at their desk and start having a conversation. For other people, that really knocks them out of their flow, and that’s a couple of hours lost while they’re trying to get back into the zone, if you like, and they’d prefer an email or a meeting or something like that. I think, if you get to know the people you’re working with and, over time, build those relationships and try to get personal with them, then it can be easier to know how to adapt your approach to each person.

MATTHEW HEUSSER: This thing keeps coming up with some of our clients, where some of the stories are just horribly defined. It’s hard for me not to get judgmental. Literally, I have asked a programmer “what is the software supposed to do? What is the purpose of the software?”, and usually I’m in sort of a tangential role, a consultant role, an account management role. It’s not my business, but it’s not going well, and the programmer has said “I don’t know, I just implement the stories.” That’s really bad, right? So [laughter]… maybe we could talk about two things. One, how does this happen? Is it just me? Is there some root cause? And then, what can testers do to help, even in that tough situation, to do better?

MICHAEL LARSEN: We have an interesting model in that we have a distributed organization, and in some ways I think that Socialtext struggles with the fact that we also have to fit into a larger corporate culture now that we’re no longer the masters of our own domain because of being acquired. It just seems like what made sense just a short while ago, that was easy to get your head around, is now wrapped up in dependencies and aspects: you think you understand your requirements, you think you understand what you’re programming, but you’re now dependent on feature sets for products you don’t even have control over. This is your own company and its peripheral units. You’re trying to talk to each other, and you’re using totally different languages. You’re using totally different tools. Totally different development techniques, but you have to interoperate and interact. And I think this is where sometimes you can get into that situation of “there is so much overhead just to do the most simple thing” that it can be a knee-jerk reaction to just say “I’m going to focus on what I have some control over, and that’s it”, and then later on, as we testers start to ask questions, it’s “uhh, that’s that group” or “you need to talk to this person. No, you need to talk to this person. No, there’s this other group that handles that” and [sigh]… it just becomes very tiresome and very frustrating. That’s a real question that I have: not just “how do you define your immediate product?”, but “how do you define the greater whole so that it makes sense?”

JESSICA INGRASELLINO: I’m going to jump in here and be my research social scientist self, because I really believe something that I’ve had success with as a tester is communicating with the rest of the organization. I’ve not experienced problems communicating with devs. I’m very strategic. To do that, I’ve read Crucial Conversations, which is a book about how to approach difficult situations, and Stephen Brookfield’s Discussion as a Way of Teaching, to look into how to address discussions with adults in a way that’s informative rather than instructive, and I’ve used other techniques which I’ve talked about before on the show. But I do think that there is… not necessarily a solution; that would imply something very pointed. I do think there are strategies that can come from the social sciences and psychology that, when applied in these situations, can yield results that are positive for everybody involved, as long as the parties are willing to talk.

MATTHEW HEUSSER: I agree that talking helps in getting to “who is the end customer?” I think there’s also a place for leadership. One thing I think testers can do in the face of a vague spec is say “this is what it should do” and there’s two things that will happen. Either that will be accepted, that will be fine, that’s what it does, or someone with authority will say “no, tester, you’re wrong. It should do this other thing” and you say “great, so now we have defined what it should do, so now I can know if things are bugs or not, now I can know what correct behavior is”. So the very act of speaking, “this is what the software should do” can force later decisions, and for some reason lots of people are afraid to do that, but that’s one way to deal with “is it a bug or not” problems.

JESSICA INGRASELLINO: I think people fear responsibility. You know, if you claim that’s what it should do, and somebody comes back and says “you’re totally wrong and that’s messed up”, people don’t really… they’re not dying to put themselves in that situation, even if it’s important to do so.

JUSTIN ROHRMAN: I think when you make those strong claims, though, that it forces other people to have these visceral reactions and then you find out the truth. Maybe you look kind of dumb in the moment because they think this person obviously doesn’t know what they are talking about, but you finally get to an answer by doing that, by saying something that’s just obviously wrong.

MATTHEW HEUSSER: Isn’t that part of the beauty of being a tester, though? You go “I’m just a tester. I don’t know. Look at me, I’m just the tester. I’m not very good… but I just thought it should be that. I guess you’re right. Gosh, you’re so smart!” I think you’ve got to do that in a non-ironic, non-sarcastic way, which, as you can tell, I’m not very good at, but that tends to be accepted. The other possibility is that you start a war. You lose by 49%, and some big, powerful people come in and say “no, it’s like this”, and most of the technical staff is with you, but for whatever reason, you waste some time fighting a stupid argument. But at least that demonstrates that this is a thing that reasonable people could disagree about, and you’re not crazy.

PERZE ABABA: I think you mentioned it earlier, Matt, specifically the idea of very subjective things that you can have in a given team. When you ask a question, it boils down to either ignorance, apathy, or they don’t care because they need to get their job done. I guess, from a subjective perspective, it depends on the culture of your team whether those types of conversations are going to be helpful, or whether they are going to bring harm to your relationship with the developer. That also depends on your product knowledge, your skill as a tester, among all of these things. I definitely agree with what everyone has been saying. You just don’t take a requirement at face value.

CASSANDRA LEUNG: I would agree with that. I’ve definitely had experiences with that: making a conscious effort to be really clear about what I’m trying to portray, but always having that bias of knowing what we’re trying to say. So then, when someone else, most likely the developer, comes and says “what is this? Do you understand what this is?”, or, alternatively, if you get somebody and it’s “gosh, why have they done that? That’s not what I asked for at all”, then, by having a conversation, they help me understand what it was that I did that caused a misunderstanding. You’re putting the onus on yourself, and you’re accepting that you have a part to play in something not quite going right, and then it’s also an opportunity for you to understand from someone else’s perspective: “oh yeah, what I said there was really ambiguous; I can really understand why that confused you”. And then you can also ask for feedback and say “OK, how could I have put that better for you?”, and that way hopefully sets you up in other people’s eyes as someone who can be given feedback, someone who can look at themselves critically.

MATTHEW HEUSSER: OK, thank you, Cassie. Unfortunately we’re running tight on time. I’ll start with Pete and Cassie. Where can people go to learn more about you and what you’re working on and what are you working on these days that’s public and sharable?

PETE WALEN: I’m on Twitter. You can follow me on Twitter, and I tweet a weird combination of things. Everything from software testing to facts, which are kind of rare, apparently. Things like “This Day in History”, because there always tends to be a cycle. Also pipe band stuff, so my pipe band followers get really confused when I’m tweeting about software testing, but anyway. You can find me on Twitter, @petewalen, that’s me. You can also find my blog, rhythmoftesting.blogspot.com, and of course at GR Testers. Come on by.

CASSANDRA LEUNG: So, I am pretty active on Twitter, @tweet_cassandra. I usually take part in discussions I see on there, but I also vet some of my own thoughts that sort of flutter around and see what other people think about them, and then sometimes I turn those ideas and thoughts into blogs as well. I blog at www.cassandrahl.com. Starting a new job soon as well. Also, I recently found out that I’ll be speaking at Agile Testing Days, and I believe Pete will be speaking there as well, so it will be great to meet more people from the community.

MATTHEW HEUSSER: What are you talking about, Cassie?

CASSANDRA LEUNG: I’m talking about “Five Ways Automation is Like Sex”… and it’s not because sex sells, I really do see a comparison there [laughter].

MATTHEW HEUSSER: OK, then, so… [laughter] My talk (I’m also going to be at Agile Testing Days) is “Do You Even (Need to) Automate (the GUI)?” I expect to be booed and have things thrown at me… ehhh, but we’ll see.

JESSICA INGRASELLINO: So, just quickly, I’m doing a virtual talk for PythonDay in Mexico. They’re working up toward having a full PyCon conference, but this year they’re just doing one day, so I will be talking about using Python along with Behave, which is a Cucumber-like tool, and how to do the things that Matt says we don’t need to do, so that should be interesting [laughter].

PERZE ABABA: Well, for me, you can find me at the local NYC Testers Meetup, so check us out. We try to have one every month.

JUSTIN ROHRMAN: I’m continuing to work on CAST 2017. That will be in Nashville, Tennessee, August 16, 17, and 18. Tickets are on sale now.

MATTHEW HEUSSER: And TestRetreat is the day after CAST.

JUSTIN ROHRMAN: TestRetreat is Saturday, August 19. It’s a full-day unconference.

MATTHEW HEUSSER: Questions? Want to talk to us? Email “thetestingshow at qualitestgroup dot com”, and we’ll be here again in two more weeks. Thanks for listening.

MICHAEL LARSEN: Thank you.

JUSTIN ROHRMAN: See ya.

JESSICA INGRASELLINO: Bye.

CASSANDRA LEUNG: Bye.

PERZE ABABA: Bye.

PETE WALEN: Bye.