March 30, 2016
In this episode, recorded February 10, 2016, The Testing Show talks Super Bowl aftermath, analytics in sports and how accurate they are (or not), expectations and wild guesses, and the benefits of bringing a beginner’s mind to a situation and how that may give a person a better chance at seeing how things will turn out (it also shows pretty clearly who on the panel actually pays attention to football). We talk a bit about legacy systems and what happens when critical systems go down (the IRS being a prime example), and the fact that organizations with legacy apps are reorganizing around Scrum and Agile, not so much for new software development, but to help maintain and continue development on legacy systems. Can we do better? We think “yes”!
- Why the Carolina Panthers Will Win Super Bowl 50
- Suzuki, Shunryu, Zen Mind, Beginner’s Mind
- Survivorship Bias
- IRS resumes tax return processing after computer outage
- Heusser, Matt & Gärtner, Markus, Save Our Scrum
- Applitools Eyes
MICHAEL LARSEN: This episode of The Testing Show is sponsored by QualiTest. “Deploy software that you and your customers trust with QualiTest Testing and Business Assurance Services.” Learn more about QualiTest at https://www.qualitestgroup.com.
Welcome to The Testing Show. I’m Michael Larsen, the show producer. We’d like to welcome Perze Ababa.
PERZE ABABA: Hi. Good morning, everyone.
MICHAEL LARSEN: Justin Rohrman.
JUSTIN ROHRMAN: Hey there. Good morning.
MICHAEL LARSEN: Brian Van Stone.
BRIAN VAN STONE: Hey, everybody.
MICHAEL LARSEN: And, of course, our host and moderator, Matthew Heusser.
MATTHEW HEUSSER: Welcome back, everybody. Thanks for coming. Let’s just dive into it. So, for us, on the show, right now, the Super Bowl just happened three days ago. Of course, if you’re in the audience, you know the outcome. I wanted to start out talking about analytics in sports for just a minute. I don’t know anything about sports. My family watched Downton Abbey during the Super Bowl, but I knew it was the Broncos versus the Panthers. Or, as I prefer to think of them, the “horses” and the “kittens.” I knew that Peyton Manning—I knew the Manning Family—was on the Broncos, so I actually bet Duane Green a nice meal at a conference that the Broncos would win, simply on that tiny bit of information, and I won the bet. So, my question that I want to start with is: Can being a little ignorant be helpful? Are there places where coming in as an outsider, as a tester, is good, and are there times when having too much knowledge of the process actually creates bias in us?
MICHAEL LARSEN: I think the answer is, “Yes.” I’m not much of a football fan myself. I watched a bit of the game and enjoyed watching a few of the plays. What was very interesting, of course, was that, if you listened to all the pundits and everybody who was talking about how “this game was going to go down,” everybody said, “Oh, this is going to be Carolina all the way.” So, it was really interesting to watch the Broncos come out ahead and keep it that way. It may help to just say, “I don’t really know all that’s going on here. I haven’t heard all of what’s happening. Yeah. I think I’m going to go with this team as the victor;” and, in your case, it turned out to be right. A chance interception at an opportune moment can completely turn the tide of the game. You know, when you get right down to it, unless the teams are completely mismatched, you honestly just don’t know how a game is going to turn out. I mean, you hear it all the time. The whole point of punditry is that everybody says that something’s going to happen, and then they spend the next two weeks afterwards explaining exactly why it didn’t happen the way they predicted.
JUSTIN ROHRMAN: Taking a look at a new product, like, if I walk in without any background information, it sort of reminds me of taking an old-school car road trip as a kid. Whenever you get in the car, on that first leg of the trip, when you’re leaving, you’re checking maps constantly, you’re looking at landmarks constantly, trying to figure out if you’re in the right place, and just sort of seeing what’s around you. And, it just feels like the trip is taking forever—right?—like hours seem like days. But, on the way back, after you’ve experienced it already, it just zooms by. You ignore all the stuff you were trying to pay attention to initially. All that stuff just doesn’t seem meaningful anymore. You kind of experience that in software, right? Once you’ve seen it, you become blind to all these things that jumped out at you at first. There’s this magic of newness.
PERZE ABABA: It seems like it’s a classic case of survivorship bias, where we only tend to focus on what survived or what succeeded instead of the failures that could happen. Take, for example, the regular season: Cam Newton was really good. In all the games that he played in, he was never hit more than like 6 times, or maybe, I think, 7 is the number, in any game; but, during the Super Bowl, he got hit 13 times. So, that must be, [LAUGHTER], something pretty new for him. It’s the same thing, too, if you’re looking at this from the outside in and you’re focused on the numbers. I think he was a pretty electrifying player on the field. Whatever he brought to the field during the regular season is something that he would bring during the Super Bowl, but, as it happened, Denver’s defense stepped up and that really changed everything.
MATTHEW HEUSSER: So, both you and Michael have touched on this idea that, one, there’s an element of randomness, that things are not like they used to be. And, two, change can blind you. This player, who was fantastic in the regular season, had never really been under this amount of pressure before, and it’s really, kind of, Monday-morning quarterbacking to say, “I’ve got it all figured out. He just cracked under the pressure.” Like, I’ve never been under that much pressure. I have no idea what I would do in that situation, but it seems like, for his first—and I’m sure he’ll have many more—opportunity at this magnitude, playing against a team with that skill level, for whatever reason, he buckled. That can happen to any of us when we’re on a project that is different in any way.
BRIAN VAN STONE: Yeah. I think we want to listen to those voices when somebody walks into a room and says, “You know, that seems silly to me. What if your users want to do this?” Or, you know, “I know I don’t know a lot about the product right now, but this just isn’t the way I expected it to behave.” Those are the voices we really want to listen to in these kind of scenarios, because I think there’s a definite bias that can be created by preexisting priorities or expectations or history that we have with something that we want to be very, very careful not to put the wrong kind of weight into. You know, the experience we have with the product is certainly valuable, but we want to be careful not to “keep those blinders on,” so to speak.
MATTHEW HEUSSER: Thanks, Brian. A couple of other bits of interesting news this week. I read that the IRS had a glitch and it was unable to process tax returns. That seems like a problem.
PERZE ABABA: Yeah. So, like, the funny thing was, I’m a very early tax filer. The day that I was, [LAUGHTER], able to confirm what little bit of refund I got was the same day the systems went down. So, it was a big moment for me, saying, “Good thing I submitted my, [LAUGHTER], tax returns last Friday,” but I think they were pretty fast in bouncing back. So, yeah, that was really a challenge. And, when things like this happen, you kind of want them to happen early on and not on April 15th, I guess, because that would really be a big problem.
JUSTIN ROHRMAN: Do we have information on what the outage actually was?
PERZE ABABA: What the news outlets were pretty much saying is that “there was a computer crash or a hardware problem,” but they never said anything beyond that.
MATTHEW HEUSSER: My impression is that the IRS is running a mainframe from IBM that they paid bajillions of dollars for, except maybe the hard drive does not have redundancy. It’s a massive piece of iron that is just supposed to never go down. So, a lot of companies pursue this very high mean time between failures. I interviewed at Perdue, and they had done the math, in the late 1990s, on every minute the assembly line was down and they couldn’t print barcode labels to wrap the chicken in. They lost so much money. So, they had a really big mainframe that was just supposed to never go down. I think that, if you’re the IRS and you’re processing tax returns 24 hours a day for 20, 30, 40 years, you’re eventually going to go down anyway. So, they might be better off trying to pursue a low mean time to recovery.
JUSTIN ROHRMAN: I worked in a hospital in the early 2000s. It was sort of my first step into the technology world. We were doing, like, desktop support and light network administration, and we had to support at least 20 Legacy Platforms that the doctors used, the nurses used, the admitters used. So, one of my first tasks was walking around to these computers in, like, 2003 and upgrading them to Windows 98 by hand. There was just no way to move beyond it, because we had to maintain these old Legacy Systems.
MATTHEW HEUSSER: When I was at Socialtext, we actually had a dashboard with widgets in it, and one of the widgets was the YouTube Widget. So, management could push out that, “Everybody needs to get this YouTube Widget that has this HR thing you’re supposed to watch.” It would be on your dashboard. We had to support old versions of Internet Explorer. People would say, “Our old versions of Internet Explorer aren’t working with your Dashboard Widget. It’s terrible. Fix it.” And, it was a third-party app plugin. We asked them to go to https://www.youtube.com, and https://www.youtube.com didn’t work for them either, because their versions of IE were that old. I’m like, “Yeah. If YouTube doesn’t work, then the widget’s not going to,” and people seemed to understand that. Well, it seems to me there are two pieces there. There’s, “We have this Legacy App that we need to keep running and maintaining”—most Legacy Apps still run because they generate money every single day. And then there’s, “We need to integrate with somebody else’s Legacy Apps.” I think those are both painful. But, it might be a good time to transition to talk about Testing in Scrum, because what I’m seeing is organizations with these Legacy Apps that are reorganizing not as test teams, not as test departments, not as development teams, not as delivery teams, but actually as integrated Scrum teams trying to deliver software on these Legacy Platforms. Most people that I see converting to Scrum these days, they’re not making new software. They’re trying to figure out how to support the old software, and testers need to figure out how to do our jobs. Have you seen a lot of that, Perze?
PERZE ABABA: A lot of the experience that I have recently has really been about shifting, in baby steps, from an older way of building software into what is now considered the Agile development approach. Scrum definitely comes to mind, because it has really made a name for itself as the de facto project management framework. It’s one of the easier first steps to take from Waterfall into a big Agile organization. One of the questions that we really had as an organization in jumping into this was, “How do we get out of the trap of still looking at this from a Waterfall perspective and testing a build after you have a development Sprint?” That’s one of the risks or traps that you don’t want to get into, and it does require a lot of hand-holding, especially for the testers and developers, to be able to own your own definition of Agile. And the challenge for us, really, has been in the roles that we’re playing. So, it’s not really just testers testing as a role, but it’s more of looking at, “What does the product owner do now,” especially when you’re switching from a domain expert or subject matter expert into product ownership, as well as your project managers who have shifted into the Scrum master role. For me, that’s been one of the biggest challenges we’ve had as a team. It’s not really just finding our authority as testers and how we play that part in, kind of, this new way to build software; everyone else is also having challenges with this.
MATTHEW HEUSSER: I totally agree. Markus and I are writing a book on it. I see at least three different threads there that I want to pull on, if you don’t mind. One is, testers test a Sprint behind, which is really just lots of little Waterfalls, which is probably better than what we were doing 10 years ago, when we delivered all of the software in one chunk and then testers tested for three months. And then there is, testers are in the Sprint, embedded in the team, but the software is still, if we’re lucky, built one story at a time and we test it one story at a time. And, in my experience, if you have high failure demand (you get it and you can’t log in, so you throw it back, and then you get it again, and now you can log in but your profile is all screwed up), you keep going back and forth. Right? You just can’t keep up, and eventually you’re going to fall a Sprint behind. It’s not really what Scrum is supposed to be. And, the third piece there that I heard was, project managers need to stop trying to manage the project. Development managers need to stop managing it as a project. Stand-up meetings are not supposed to be about status. Stories are not supposed to be feature descriptions that, if they are vague or incorrect, we have to nail down and get more correct before we hand them off from one person to another. The product owner is supposed to have authority to make decisions over what their product should be, not kind of be a runner for a Steering Committee. It’s a shift in mindset, and I don’t hear enough people talking about that. You hear it in certain very geeky Scrum corners. But, in the business, I worry that a lot of people think that Scrum is something you buy and install: you change titles and move on.
JUSTIN ROHRMAN: The perception is that Scrum is something that you do to somebody else. [LAUGHTER]. Right?
MATTHEW HEUSSER: Ouch.
JUSTIN ROHRMAN: Like, if you’re looking at it from management, they see this thing and you just plug it into a team and don’t realize that everything you do has to change. I mean, you can generalize and say that the culture changes, but you also have to change the shape of your teams. Maybe there’s not a fit on the team anymore for some people. Like, “Where does a traditional test manager go on a Scrum team, or is there actually a role for the project manager on the Scrum team?” The way people sit together changes. It’s just, it changes everything.
MICHAEL LARSEN: I’ve had the experience now with three organizations in the past 10 years. One of them was a traditional development shop that decided to go the route of Agile, and I actually left that organization while that transition was just starting. So, that’s one part. I was part of a small startup that was very strongly Scrum and had been Scrum from the very beginning. It was Scrum as far as the development side was concerned, and then test was a separate entity that existed on the side. So, sort of a “Scrummerfall” type of an environment, if you will, in the sense that very often I would love to have gone in and said, “Hey. I’d like to get involved in the testing here. What can we do?” “Oh. Well, let’s look at the story.” I’d start looking at it and I’d get shot back with, “Hey. I’m not done with that yet. Don’t test that.” We oftentimes felt like the testing was done afterwards, and it was throwing it over the hedge rather than over the wall. But still, there was a definite back-and-forth ping-pong game going on. The company I’m working at now, which is Socialtext, though we’re not Scrum, we do Kanban, which is a little bit different, but it’s still an Agile methodology, and it was an outgrowth over the course of 10 years now. And, I think we do it pretty well, but there is still definitely a sense that there is a lot of room where the testing can be done differently, earlier. There are different ways that we can get involved, and there are new things that we learn all the time, and we’re always adjusting and making changes to it. So, Justin, you’re exactly right. Sometimes I think we look at Agile or Scrum or Kanban as something we “do to others.” We plug it in and we say, “Here, we’re going to turn this crank and now everything’s different,” but people still work the way that they’re used to working and they think the way that they’re used to thinking.
BRIAN VAN STONE: I think one of the most important observations that you made there, and it kind of just snuck in under the radar, is that you guys are constantly adjusting, because one of the things that I see a lot of organizations struggling with when they make a move from, you know, Waterfall, let’s say, to something more Agile is that you have to always be willing to reevaluate how you measure test coverage and how you prioritize your testing activities. You need to be able to take in new information and change your plan from day to day. I mean, that’s something that more Legacy Organizations aren’t particularly used to. One of my theories is that there’s a bit of reluctance to become more iterative in the way that they build software, and then that kind of holds them back from reevaluating day to day how they need to change what they’re doing.
PERZE ABABA: I want to echo what Michael was saying earlier, and he touched on a very important key word: teams. And, it’s really something about growth. For me, the way I’ve noticed it, if you have a team that ends up being just focused on process and rituals and metrics that are measured for the sake of getting a particular number, it really becomes a challenge. But, when your team owns its own definition of how we actually build software, growth is there. There’s essentially the idea of your team as an organism that evolves over time as you deliver better software, with the various players within the group. Whether you’re a programmer or a tester or a Scrum master or a product owner, how you interact with each other and how you communicate tends to define the culture and, most of the time, the outcome of what you’re building.
MATTHEW HEUSSER: Yeah. Let’s talk about “service” for a minute, and this is going to be very judgmental. A couple of the companies that I worked with treated process improvement as “making my job easier by pushing work onto other people.” There was a lot of, “That’s your job. That’s not my job.” At an early company I worked with that was trying to do Scrum, someone would be blocked, and someone else would say, “Yeah. I can find some time to help you, maybe, on my lunch hour, for 15 minutes.” And then, they kind of lorded it over everyone the next day: “What did I do yesterday? Well, I helped Matt, because Matt needed help. Yep. That Matt guy.” Or they’d say something like, “I’ve got my work to do. I can’t help today.” Now, I totally understand that you don’t want to do someone’s job for them, but this idea of collaboration, that we will create the software together, that the product owner doesn’t really know what they need to build until they’ve talked to the whole team, and that is how we, as testers, get involved up front. That’s really exciting to me. Like I mentioned with the “no-status meetings,” this is kind of a breath of fresh air against command-and-control thinking, but I don’t know that it’s making headway. I worry that Justin’s right, that “Scrum is something that you just buy and go do,” not a real change. I’m not sure what to do about that. What should a tester do about that? If they’re on the job and they’re getting this, “No. It’s just your job to test it,” how can we respond to improve outcomes?
PERZE ABABA: I mean, personally, for me, it’s been more of a communication issue than anything, because it does evolve into something like that, especially if you’re very used to a hand-off type of culture. The challenge really is, as a tester, in order for you to get heard, “How do you establish your reputation, and how do you establish trust with the people whose work you’re either trying to critique or to help?” And, that seems to be a pretty big challenge.
MATTHEW HEUSSER: The other thing that I don’t think I’ve heard anywhere, except on The West Wing and Friday Night Lights, is that you should build coalitions. What we all want to do, in our human selfishness, is say, “I had this great idea. You should do my thing.” Instead, we’d be better off listening and saying, “What’s your thing? What’s your great idea? How can I help you do that?” First of all, if everybody did that, we’d be good. But then, three months, six months later, when we pitch our idea, instead of, “Oh, boy. Let’s roll our eyes. Matt’s the smartest guy in the room again. Yep,” it’s, “Okay. Matt helped me. Let’s help him.” No idea is going to work perfectly. The rest of the team can take your idea and make it work, or they can point out to everyone else how it doesn’t work, and they have a conscious choice in that. So, we want to win them over to our ideas.
So, let’s talk about Testing in Scrum. I mean, in my experience, we typically say, “In the next two weeks, we’re going to develop these 15 features. We’re going to break them up into 15 stories. We’ll pull them across a wall, through Development, In Test, and Ready to Go columns. During that time period, we’re testing each of these stories as they flow through, and we want to release every two weeks, or whatever iteration we’re doing in extreme programming.” So, there are two major types of testing, at least. One is, “I just tested this story. The story is fine.” And, the other is, “Now that we have tested all the stories, we want to check for regressions,” typically wider, typically an end-to-end risk management process right before deploy. I think “regression testing” is a fair term for that, and my experience is that teams really struggle with regression testing. My goal is to get regression testing smaller and smaller over time.
BRIAN VAN STONE: There are teams that will fall into the trap of doing full regression, running every test, every Sprint. That just piles up over time and you end up Sprints behind. So, I think focusing in on, “What’s the important regression suite that I really need to be running,” is big.
JUSTIN ROHRMAN: I do wonder if that’s one of these holdouts from more traditional environments that just never really got flipped whenever people started moving to Agile. People have just never realized that, instead of doing full-on regression, running all the tests until you’re done running them, and then release, that you can do it in a more focused manner, selecting, what the changes were and looking at the actual risks and seeing what you need to test.
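The change-based selection Justin describes can be sketched in a few lines of code. Everything in this sketch is hypothetical: the module names, the test names, and the mapping between them stand in for whatever a real team would derive from its own codebase, coverage data, and risk analysis.

```python
# Illustrative sketch of change-based regression test selection.
# All names here are made up for the example; a real team would build
# the module-to-test mapping from its own coverage and risk data.

# Hypothetical map from application modules to the regression tests
# that exercise them.
TESTS_BY_MODULE = {
    "login": ["test_login_basic", "test_login_lockout"],
    "profile": ["test_profile_edit", "test_profile_avatar"],
    "billing": ["test_invoice_totals", "test_payment_retry"],
}

# Tests considered high-risk enough to run on every release,
# regardless of what changed.
ALWAYS_RUN = ["test_smoke_end_to_end"]


def select_regression_tests(changed_modules):
    """Return a focused regression suite: the always-run tests plus
    the tests covering the modules touched in this change set."""
    selected = list(ALWAYS_RUN)
    for module in changed_modules:
        selected.extend(TESTS_BY_MODULE.get(module, []))
    # De-duplicate while preserving order.
    return list(dict.fromkeys(selected))


if __name__ == "__main__":
    # A change that only touched login code yields a small, focused
    # suite instead of the full regression run.
    print(select_regression_tests(["login"]))
```

The point of the sketch is the shape of the decision, not the data: instead of "run everything every Sprint," the suite grows and shrinks with the actual change set, which is what keeps regression from piling up Sprints behind.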
PERZE ABABA: For me, testing that happens right before a release is, technically, a completely different beast. We can call it “regression testing,” or that’s where we run a boatload of the, [LAUGHTER], automated suites that we were able to build. But, I think one of the challenges really is that, if you switch your mindset from the atomic way that you’re validating, where you’re just accepting the acceptance criteria, to “How can I refute the value of what was brought in, and what other issues can I find,” it takes on a pretty different character. If you’re one of these companies who can actually afford a completely separate team that does regression/integration, or just continues exploratory testing in a wider, more integrated fashion, then it’s just another step that you take, but it’s still important. Whatever it is that you were expected to deliver, in essence, was accepted by the product owner. But then, don’t stop there. Continue to look for other issues that might crop up.
MATTHEW HEUSSER: Yeah. So, I see a third category. I see regression. I see story testing. And I prefer to move regression to continuous. Regression is looking for problems introduced by change. So, if I look at the change log, if I’m talking to the product owner about their ideas about risk, if I have my own ideas about risk, if I’m talking to the developers about their ideas about risk, I could invest an hour a day just exploring the system, based on the highest risks. And, if I do that, then when it comes to regression time, we will have found a lot of the big ones. And, for a two-week Sprint, we could take the last two days. That’s two days out of ten; that’s 20 percent of the time. And try to do some sort of regression burn-down dance and aim to get that amount continuously shrinking; because, when that amount shrinks so that it’s only an hour, we can deploy all of the time. And, if we decouple Sprint from deployment, the thing that we’re deploying is very small; and so, in theory, if the change is small and we’ve covered the risks well and we can roll back quickly, the regression testing can be small. What I’m getting at is, Scrum can take us toward regression testing going away. Is that crazily sensible or sensibly crazy?
MICHAEL LARSEN: I think it goes both ways. It depends a lot on how your infrastructure works. If you think about build with automated test—push, deploy, and then see what happens—we’re doing that already. We look at continuous delivery, and continuous delivery includes continuous testing. The “Holy Grail,” so to speak, is being able to have all of those regression tests as part of your primary continuous integration component. Having said that, there is still, when you get right down to it, “Hey. You need to eyeball this, and you need to go through and look and see that it actually behaves the way you expect it to for real people.” No matter how much automated testing you do, there’s going to be some stuff that may slip under the radar because you just did not actually propose a test for it. You didn’t actually create a test that would define that particular condition. And, every once in a while you just find something that smells odd and you have to investigate it, and that smell leads to things. And you discover, “Oh. Hey. We didn’t consider what would happen if we tested on these multiple versions of IE. The CSS works fine for four of them, but for one of these examples we have an issue.” You don’t really notice it because if you’re running something that looks below the UI in your test, yeah, your document object is there, but it’s not rendering correctly in one of the browsers. That’s something you’ve got to actually look at.
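Michael’s IE/CSS example captures the gap between below-the-UI checks and what a human (or a visual-testing tool) actually sees. The sketch below simulates that gap with made-up data; the browser names and the per-browser render results are hypothetical stand-ins, not a real browser API.

```python
# Illustrative sketch: a DOM-level check passes everywhere while a
# rendering-level check catches a browser-specific CSS failure.
# The data is invented for the example, not pulled from real browsers.

# Simulated results for the same element across browser versions:
# the element exists in the DOM everywhere, but one old browser
# applies the CSS incorrectly.
RENDER_RESULTS = {
    "ie9":  {"in_dom": True, "css_applied": True},
    "ie10": {"in_dom": True, "css_applied": True},
    "ie11": {"in_dom": True, "css_applied": True},
    "ie8":  {"in_dom": True, "css_applied": False},  # renders wrong
}


def dom_check(result):
    """A typical below-the-UI automated check: is the element in the DOM?"""
    return result["in_dom"]


def visual_check(result):
    """A human-style check: is it there AND rendered correctly?"""
    return result["in_dom"] and result["css_applied"]


def browsers_passing(check):
    """Return the sorted list of browsers that pass a given check."""
    return sorted(b for b, r in RENDER_RESULTS.items() if check(r))


if __name__ == "__main__":
    print(browsers_passing(dom_check))     # every browser passes
    print(browsers_passing(visual_check))  # the broken renderer drops out
```

The automated DOM check reports green across the board; only the check that accounts for rendering flags the one broken browser, which is exactly the class of bug Michael says "you’ve got to actually look at."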
MATTHEW HEUSSER: I would really like to talk about—maybe, next time on the show—the kind of bugs that humans can find and the kind of bugs that computers can find, and I think that would actually require a little bit of research. Really talk about the bug databases for our clients. But, we can go back and do a little research and come in and say, “Hey. Based on empirical evidence, these are the kind of bugs that computers find. These are the kind of bugs that humans find. This is the difference,” and then we can talk about test strategy. I think that’d be a great show. Do you guys want to talk about it next time?
BRIAN VAN STONE: Yeah. And, I think what’s especially interesting about that topic is that where we draw that line is actively changing. Visual testing is becoming a real thing, even using automated tools. Perze brought up, a couple of shows ago, Applitools Eyes. Wonderful product that’s out in the market right now. So, where that line is drawn is kind of a moving target at this point. So, I think it’s a really interesting one.
MATTHEW HEUSSER: All right. So, we’ve all got jobs to get back to, sadly. We literally could do this all day, but we probably shouldn’t. So, on that note, I think I’m going to call it a day. Thanks, everybody.
MICHAEL LARSEN: Thank you for listening to The Testing Show sponsored by QualiTest. For more information, please see https://www.qualitestgroup.com. Do you have questions, or is there a topic or an idea you’d like to have covered? Please e‑mail us at firstname.lastname@example.org, and we will answer questions that we receive on air.