Testing When You Don’t Have Enough Testers

February 25, 19:57

In this episode, The Testing Show crew takes a trip to New York City to participate in James Bach’s presentation to the NYC Testers Meetup.

We discuss a bit about the GitHub site outage, and the ramifications of inevitable downtime. This leads into the main topic, which is “what do we do when we don’t have enough testers?” Is testing really a bottleneck, or is it set up in a way that delays are inevitable? What can we as organizations, and as testers, do to mitigate these issues, and what means do we have to change the process?


Panelists:

Matthew Heusser, Michael Larsen, Justin Rohrman

Transcript:

MICHAEL LARSEN: This episode of The Testing Show is sponsored by QualiTest. “Deploy software that you and your customers trust with QualiTest Testing and Business Assurance Services.” Learn more about QualiTest at https://www.qualitestgroup.com.
Hello, and welcome to The Testing Show. My name is Michael Larsen. I am your producer today, and we are joined by Perze Ababa.
PERZE ABABA: Hi, everyone. Good morning.
MICHAEL LARSEN: Justin Rohrman.
JUSTIN ROHRMAN: Hey, everyone.
MICHAEL LARSEN: Brian Van Stone.
BRIAN VAN STONE: Good morning, everybody.
MICHAEL LARSEN: And, of course, our host and moderator, Mr. Matt Heusser. Take it away, Matt.
MATTHEW HEUSSER: Thanks, Michael and everyone. Welcome to the show. I just got in from New York City. I flew out there earlier this week to see a client, and there was a user group afterwards where James Bach did a presentation on oracles. I'm still getting over it, and I thought we'd talk about that. I should've brought my equipment and just done the show from there, but, you know, we wanted to have Michael and Justin too. Perze, Brian, what did you think of the presentation? I know Brian came out from Connecticut. So, was it worth the train ride?
BRIAN VAN STONE: Yeah. Absolutely. This was actually the first time I got to meet James in person. So, that alone was probably worth the train ride. It was a really solid interactive session, which is something you don’t often get at these little Meetups. It was really nice to see something as interactive as this one was. James actually designed a new exercise that he had never done with anyone, except one of his testers in India that he was working with. So, it was a nice little treat for us.
PERZE ABABA: One of the things that’s pretty interesting with the exercise was, you’re not really looking for bugs for any given application, but you’re looking for the mechanism by which you can find bugs. It was a different kind of challenge for a lot of the folks that were there. The good thing was that people were pretty interactive. And, there were a lot of questions of, “Are we looking for bugs?” Or, “What do oracles mean?” And, it was a really good opening into a deeper way to look into software testing.
MATTHEW HEUSSER: And, to give just a little more background on that, I think it's fair to share that the exercise was to create an oracle for a piece of software. If you think about Microsoft Paint, an oracle answers, "How do you know there is a problem?" I was really surprised that a lot of people started listing things that we classically consider an oracle. The dictionary: things are spelled wrong. The help file: things don't behave the way we would expect. And, what James actually wanted was, "Given this setup, when I press this button, I would expect this outcome; and, if I don't see that outcome, then something's wrong," or "maybe something's wrong," which is our definition of an oracle. That liminal period of, "Well, I don't know where I am and what's going on," was valuable for learning too.
JUSTIN ROHRMAN: That's interesting. When you said, "he was looking for oracles," it sounded sort of like he was looking for expected results, and usually when I think of oracles I think of some sort of comparison tool. Like, you have experience in another product or another part of your own product, and then you're seeing an inconsistency between that experience and what you're looking at, instead of just saying, "I expected this behavior at this point."
PERZE ABABA: I think it's exactly what you're saying, Justin. I brought up one issue with James that I thought was a problem, and I really had difficulty in communicating what my oracle was. It turned out to be a good old inconsistency heuristic: how we declare it should work versus how it actually worked. That was essentially the meat of what he's trying to say. It's just difficult to communicate. Even me, I'm thinking, "Multiple BBST classes," [LAUGHTER], and I know we swim in conversations about oracles. But, I had a tough time differentiating between explaining the actual bug and the actual method, how I found the bug. I guess there needs to be some more clarity, and I agree with Matt. There should've been a clearer explanation or, at least, examples my brain could've latched onto better.
MATTHEW HEUSSER: Yeah. And, I think that's not a bad thing, right? This is a new exercise. It's new, and it was free. You get this kind of creative, contributing feeling when you're sort of beta testing an exercise. I'm okay with that. It felt nice to be part of that. Let's say you have vertical and horizontal margins in Microsoft Word or something, and they behave differently, and you're like, "Well, I could see how that one could be right," in the way it's thinking about how far the margins should be. And, "I could see how that one should be right." I don't know. One is a percentage of the page and one is in inches. But, "I think they should be consistent." That is an oracle of consistency that you could use to recognize a problem. And then, there's the kind of hard oracle, which is the expected result: "I recognize this problem because it doesn't do what I expect, which I had clearly defined up front." Those are sort of different when you think about them, and they're different in the way that you interact with them. They're confusing. So, that's my big news. It was actual tester news. I know that a couple other people saw some things out in the world they wanted to share. Maybe Justin wants to talk about GitHub? I know that's on your mind.
JUSTIN ROHRMAN: Yeah. Sure. GitHub is arguably one of the world's most used source code repositories now. It does not live on your own servers the way Subversion or Mercurial or anything like that would. So, they had a world-wide outage last week for a couple of hours, and any user of GitHub was not able to commit code to the distributed repository. And, right now, what they're saying is that "it was caused by a power outage," which just blows my mind. The news reporter said, "This power outage had cascading effects and caused the entire system to go down."
MICHAEL LARSEN: Yeah. I find that interesting, considering the whole point with distributed sites like GitHub was that you would never find yourself in a situation where, if 1 machine or 2 machines or even 10 or 20 machines were to go down, the whole service would come down. When we put too much stock in one company or too many eggs in one basket, we end up very vulnerable. We use a derivative product called "GitLab" at Socialtext. It's basically all the features of GitHub, except for the fact that it's behind a firewall. Of course, you could also say, "We have the same problem." If GitLab has a problem, then we're going to be toast as well.
MATTHEW HEUSSER: Well, wasn’t the Titanic supposed to be unsinkable? Wasn’t that the whole point of the Titanic?
MICHAEL LARSEN: That was the claim, yes, but I guess icebergs are a little bit more destructive than we would've ever imagined at that point in time.
MATTHEW HEUSSER: I imagine if we got behind the scenes of the GitHub outage, it'd be a complex failure. There wouldn't be a root cause. There would be root causes: "The guy who was supposed to be on lookout didn't have his glasses, and it was a particularly foggy night, and…and…and…" It's a cascading set of circumstances that led to this, and it was only two hours. It feels bad when things are out of your control. There's nothing you can do. GitHub is down. It's not your fault. You were paying them for this service. But, if you calculate GitHub's uptime over the past couple of years as a percentage of time, there are going to be a lot of nines.
MICHAEL LARSEN: Yes. Statistically speaking, two hours out of years of service is tremendous. I'm really not trying to disparage GitHub at all. I totally understand that things like this do happen. To reiterate, so many eggs are in the same basket that, if something does catastrophically fail, a lot of people can be affected, and we get the perception that it's far worse than it really is.
JUSTIN ROHRMAN: I think it's a great example of a massive Black-Swan type bug. I'm going to take a wild guess and say that a distributed system like this has automated load balancing all over the place. If one server goes down, it should automatically pick up and send that traffic to another server so nobody notices the downtime. But, somehow, in this particular case, there was a power outage at one facility, the load balancing just didn't work, and it made everyone unable to access the system. So, this seems like a case where they were just relying on these automated tools to handle business in a scenario that, maybe, they hadn't actually tried to simulate at any point. They just expected the automation to take over and handle it.
MATTHEW HEUSSER: That's probably true. I know that Netflix has all their various kinds of test infrastructure tools that randomly bring down things in production and then check to see whether production is still working. Because they have so much redundancy, they expect it to still work. And, yeah, I wonder if GitHub doesn't have those kinds of tools yet.
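To make that idea concrete, here is a minimal, hypothetical Python sketch of the kind of chaos-style check such tooling performs. The instance names, the terminate() stand-in, and the health URL are all made up for illustration; this is not Netflix's actual tooling, and real tooling would call a cloud provider's API to stop an instance and probe the real public endpoint.

    import random
    import urllib.request

    # Hypothetical inventory of redundant production instances (made up).
    INSTANCES = ["app-node-1", "app-node-2", "app-node-3"]
    PUBLIC_ENDPOINT = "https://example.com/health"  # placeholder URL

    def terminate(instance):
        # Stand-in for the cloud API call that would actually stop an instance.
        print("terminating %s (simulated)" % instance)

    def service_still_up(url, timeout=5.0):
        # Probe the public endpoint the way a user would.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        victim = random.choice(INSTANCES)
        terminate(victim)
        # With real redundancy, losing one node should be invisible to users.
        assert service_still_up(PUBLIC_ENDPOINT), "outage after losing " + victim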
PERZE ABABA: Yeah. And, the other thing, too, is that the architecture of their infrastructure might have paved the way for the delay as well. Looking at the official report that they gave, the status at https://github.com didn't turn red until eight minutes after the site originally went down. So, even if they had that page sitting on a CDN somewhere that just pulls data out of the origin server, asking, "Hey, are you still there? Are you still there?", it took eight minutes to relay that information to the status page. The company that I work for, we work with a lot of third-party companies; and, when there are downtimes, we have some automated, [LAUGHTER], checks that pretty much check for the availability of an API or just look at the status of the servers. Once we confirm that it's okay, then we continue with a barrage of tests. But, there are also times when that status page is not up-to-date, even for hours, which gives us a false sense of security. It really does help to get the information that "something went down" via a check of your own. You have to confirm it.
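As a rough illustration of the kind of pre-flight check Perze describes, here is a minimal Python sketch; the health and status URLs are hypothetical placeholders. The idea is to hit the API directly as the signal you trust, and treat the vendor's status page as a secondary signal, since, as noted, it can lag behind reality.

    import json
    import urllib.request

    # Hypothetical endpoints; substitute your vendor's real API and status URLs.
    API_HEALTH_URL = "https://api.example.com/health"
    STATUS_PAGE_URL = "https://status.example.com/api/status.json"

    def api_is_up(timeout=5.0):
        # Hit the API directly; this is the check you actually trust.
        try:
            with urllib.request.urlopen(API_HEALTH_URL, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def status_page_says_ok(timeout=5.0):
        # The vendor's status page, which can lag behind reality for hours.
        try:
            with urllib.request.urlopen(STATUS_PAGE_URL, timeout=timeout) as resp:
                return json.load(resp).get("status") == "good"
        except (OSError, ValueError, AttributeError):
            return False

    if __name__ == "__main__":
        if api_is_up():
            print("API reachable; safe to kick off the full test run.")
        else:
            # Report both signals so a stale status page doesn't lull anyone.
            print("API unreachable; status page says ok:", status_page_says_ok())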
MATTHEW HEUSSER: It just reminds me, Perze, we're probably the closest to this. I was talking to a test architect from a Fortune 500 company about how testing has changed, and he said, "Oh. It's the same as it's always been. It's all about vendor management." Increasingly, I think, especially at companies like that, like Johnson & Johnson, where you're at now, we've sliced all of the pieces in our testing very small, and for a lot of it, the application services, we're using vendors. We're outsourcing our servers to the cloud. I think that creates this dependency. The big difference, I think, is not reality but psychological: "It's out of my hands when something goes down. I can't go into the server room and sit behind the admin." I'm not quite sure how to feel about that. Speaking of that, in organizations there is increasing reliance on vendors and not actually having an admin to go to and see what's going on. That's one of the things we want to talk about today in our main segment, Testing When There Just Aren't Enough Testers. I often say, "Is testing really the bottleneck, or is testing just at the end? Is it actually that we don't have the capacity to do the work, if it was dribbled out to us in a reasonable order, or is it that all the work gets dumped on us at the end and that causes delays?" And, I want to talk about when testing actually is the bottleneck: there just aren't enough testers. The ratio is off. The work effort is off. You don't feel like you can do a reasonable job testing in the time allocated to you. One of the clients we're going to go visit said, "The tickets just add up. The tickets are added faster than we can take them away. We don't even have time for the consulting visit." Has anybody here ever been in that situation? What did you do, and what can you do?
MICHAEL LARSEN: For much of my career, with the exception of a few times when I've had the privilege of working on a test team, I've been a lone gun, either the only QA on a project or the only tester, period, and that always creates interesting challenges. Yes, you definitely run into the problem that you become the bottleneck because you're the only resource. And, if everything is pushed to you after coding is finished, you can have an Agile development team; but, if you're the only tester, unless you are being very creative or you are actively engaged in the process from the very beginning, yes, it is going to be "throw it over the wall, and now it's in your hands," and the backlog adds up. Even at the current company I'm at right now, where I have a test team, our test team is still significantly smaller than the number of programmers on the engineering side. At the moment, I would say it's probably a ratio of about 4-to-1. We have four stories to test for each one of us at any given time, if workflow is constant. So, yes, we do find periods of time where we get a backlog. The only thing that we can do is either say, "Just know that the backlog is going to build up until we can get through this," or we can try to get creative and engage the programmers earlier in the process. And, I'm personally a big proponent of engaging earlier wherever possible.
MATTHEW HEUSSER: Yeah. How do you work through this? Do you say, "The backlog is going to build up until we work through it"? Unless you get lucky and lots of programmers get sick or go on vacation, how are you ever going to get caught up?
MICHAEL LARSEN: In some cases, what we've been doing is we've actually had some of the programmers line up and give us a hand to clear some of the backlog. They actually take on some of the testing with us so that we can go through and focus on what's necessary. Every once in a while, we get something that's a bigger deal, not so much just functionality testing. What tends to get interesting is when you're introducing something new to the equation. You introduce a new service to a product: "What does that involve, and how many pieces does it touch?" It may seem like a very simple, little component, but I've actually had the experience where I've spent a fair amount of time just helping debug the install process and verifying that we're doing everything that we need to. Now, mind you, the development work had been done well in advance. It was all finished. But, most of the issues that we ran into weren't functionality-related. It's literally getting it incorporated with the product, making it something that we can do a regular install or upgrade with, and having to go around and fix those issues. Determining what the problem is or what's happening can sometimes take several iterations. It could take days. And, that aspect of it causes a big pileup, and the road just slows down. The nice thing is, once you figure those out, then everybody gets the benefit of it.
JUSTIN ROHRMAN: So, when you say, “backlog,” it makes me think that you have lots of features coming in and maybe you don’t have enough time to get to them all before the release and you just don’t release the things that didn’t get tested, is that right?
MICHAEL LARSEN: That’s absolutely right.
JUSTIN ROHRMAN: So, I think in a lot of shops that’s not possible. Either it goes out with this release or the release is late. Have you seen that before?
MICHAEL LARSEN: Yeah. We aim to make sure that we get a release out once a month, and typically we aim to get it out around the second week of the month. So, we try to be very regular with it so that they know when it's coming. That gives us a window so that we can work through any issues. And sometimes, yes, there is a priority that we have to make sure gets out, and sometimes we've said, you know, "If we only put out one feature with the release, but that one feature is this really key one that has got to be right, then that's it." And then, the next month, we may have a release that has 10 bug fixes or a number of smaller features that rolled in relatively smoothly. For us, we want to be regular and consistent, rather than saying, "Here's X number of stories. Now we can release."
MATTHEW HEUSSER: So, I've seen a couple of other ways to slice and dice this problem of not having enough testers. One I'm particularly fond of, for regression testing, is whole-team testing. This is a, [LAUGHTER], failure example on my part. I worked with a company not too far from me, so I could be there a lot for an extended period of time, and the regression test process was taking too long. We came up with some innovative ways around that. But, one thing I said was, "Take the whole team and turn them sideways. Take the developers, take everybody, and you could get regression testing done in one day, unless there are show stoppers. Just go do the testing. We could slice and dice it out so that everybody has pieces assigned, and they could report back in. Think of it like distributed or parallel computing. We slice up the test work, everybody gets pieces, they go do it, and they come back. And, we could do that in a way that isn't boring. If you're a developer, you get to learn a different part of the system that you don't usually work on. You get to see it, experience it, and it's valuable for you." I just didn't do a good job selling it. And, they were like, "Yeah. What other ideas do you have?" As a consultant, you never want to hear that.
JUSTIN ROHRMAN: I think it's hard to do a good job selling that, because other people are doing other things because that's what they're interested in. If they wanted to do testing, they would be testers. So, when you get to the end of the release and you say, "Hey, everybody, [LAUGHTER], guess what time it is?", nobody is excited about putting down development or product planning or whatever they're doing to start running through regression testing, especially if it's super-boring, old-school, traditional, detailed test scripts. Nobody wants to do that.
MATTHEW HEUSSER: I think the problem was they didn't understand that the team had transitioned to something more like, "Make a list of all the risks for this release, pull them off one at a time, and run through the risks. Each risk should take between 10 and 30 minutes to execute. Update the Low-Tech Testing Dashboard. Go until noon. Look at the board. See what hasn't been covered yet that's important. Make a new list. Test all afternoon. Get done." They thought it was still, "Pull out the Microsoft Word document with the 27 steps to follow so you can check that that Word document has passed," and we'd transitioned. But, they didn't know that yet. So, when I said, "Test," it was like, "Oh, my God. Shoot me now." I think that was a big part of the problem. So, that's one thing that teams can do, and I wish I had some way of explaining it, like, "This is the percentage of first-time quality. This is the percentage of regression," in a way that actually made any kind of sense. Then we could say, "The testers could absolutely do this if we'd stop introducing defects that are unanticipated, in pieces of the product that are separate from where the change is." But, because we've got these other defects in places we couldn't predict, we've got to do more general coverage at the end, and we just don't have the people for that. So, we'll continue to measure; and, as soon as we get to the point where we don't need help anymore, that's kind of an improved-development-quality thing. Because of poor development quality, we needed to do more mass inspection at the end. That consequence was being pushed onto the testers, and I really wish we could've found a way to share the pain. If you don't share the pain and it's pushed entirely onto test, then people don't see it and don't want to fix it. So, that's one thing.
MICHAEL LARSEN: I just went through this yesterday, as a matter of fact. I walked one of our programmers through exactly what it took to get everything up and ready to test. That's always been a very interesting experience, because when they see everything that goes into it, they're just nodding. There's an appreciation of, "Oh, wow. There's a lot of moving parts here that I wasn't aware of," and it's always great to have that conversation. Because, when we have that conversation, they tend to go back and, the next thing I know, "Oh, look. There's a couple of new jobs in Jenkins" that helped clear out some of the bottlenecks that we were experiencing or that we were handling. They basically found a solution to help us get past that problem or to speed up that process. When in doubt, you don't necessarily have to say, "Hey, programming team, we want you to stop everything that you're doing and come help us test." But, having them sit down with you and just go through the process of getting to where their input is needed can often open up many doors. And they've, like you said, "shared the pain." They've had a chance to really feel what that's like. And, I've been blessed in the sense that I have a number of programmers who have been very open to the idea of getting a jumpstart on testing. So, they'll sit with me to make sure that things are working correctly, or we'll navigate a process or two together and I'll learn some of their insights or some of the things that they're thinking, and then I can ask a few questions that will lead us down a different path.
BRIAN VAN STONE: In some ways, it's kind of an added, hidden benefit: continuing to socialize our craft as testers. I think that, over the past handful of years, a lot of really positive progress has been made in changing the perception of the tester in a really good way, and I think we need to keep focusing on that. It's part of the reason we're here doing this kind of podcast. Not just within our community, but within the larger community that is building software, and within our own organizations personally. The more we can raise awareness of "Who is the tester? What does he do? What kinds of challenges does he face, and how does he solve them?", the more I think we'll get these added perspectives and add a lot of value that way.
PERZE ABABA: That's a really great way to put it, Brian, and I do agree with what Mike is saying. Coming toward the end of the project, where there are still a lot of tickets to be tested, there is really that "hope and wish": "We wish we had more people to bring in alongside the existing members of the team to help us." But the challenge is that you're having people switch their mindset from programming to testing, which gives you a few extra challenges. Depending on the size of the team, there's also a certain degree of confirmation bias, especially if they're familiar with the ticket they've code reviewed or something like that. Communication when something in the process changes is definitely key. There's also a certain degree of trust that comes into play, because if I can talk to you in a very honest way without you being offended, then I can be more upfront with what I think the problem is without hurting your feelings. Yeah, it's definitely a challenge, which kind of makes me ask the question, "Do we actually know of a case where any company out there has had enough testers?"
MATTHEW HEUSSER: Testing is the kind of problem where you can always find more work to do. If you think of it like a dial, where 1 is, "I poked it and it didn't fall over," and 10 is, "Really genuine, exhaustive testing," we probably don't usually get to 10. I have worked with companies that have brought me in where part of the problem is that testing is what we call "the mathematical bottleneck": we don't have the bandwidth to do all of the testing as fast as the developers are writing code, so we can't get caught up. And, I have worked with teams where we have gotten caught up, so that we're no longer the bottleneck as long as the work comes to us in a timely fashion. If we're doing 2-week Sprints and you drop 15 stories on us Friday afternoon of the 2nd week, yeah, we're still going to look like the bottleneck, but it's actually just that we're getting stuff late. I've worked with teams that have improved capacity to the point where we could deliver, and a lot of that was because the team was doing sort of traditional, old-school, document-centric processes, and we got rid of so much of that that we could go a lot faster. But, you could always keep turning the dial up.
JUSTIN ROHRMAN: One strategy I like, and have had a lot of success with for cranking that dial up a little bit, is pairing a tester with a developer. I used to do that at the last company I worked at. We would just sit together for about an hour, a few times a week, work on whatever feature he was working on, and we'd find bugs on his machine. Maybe he'd be able to fix them right there, or maybe we'd write them up for later; but, either way, he was able to see it. And, we'd get that empathy going back and forth. I would understand what it was like for a developer to have holes poked in his work all the time, and he would understand what it's like to carry the cognitive load of trying to find problems all the time. We just developed a better working relationship, and we also got better first-time builds and first-time features. So, once we actually got to the part of doing regression testing, that work was much lighter. There was just less to do because the feature was already looking pretty good.
MATTHEW HEUSSER: You actually eliminated at least one, if not multiple, of those, “Wait for a build. Install it. See that it doesn’t work. Write it up. Send it back.” You eliminated cycles there.
JUSTIN ROHRMAN: Yep. Exactly.
MATTHEW HEUSSER: Because the first-time quality was higher. Another thing that I've done, since we talked about the "dials," is just test less: "We're just going to poke it, and if it doesn't fall over we can let it go." There's not much value added there. Stuff will be broken, and people will question the value of testing. So, that's not good. Another thought experiment I've talked about, but haven't actually gotten down to the numbers with people on, is, "Here are the 20 stories that we're supposed to test this week. We have 40 hours. Maybe you do 40 points. Maybe you do 100 points. I want you to write down how many points we should spend on each of the stories." At the end of the release, someone says, "[GASP]. There were three bugs in System Foo-Foo-Foo. I can't believe it. It went to production. How could that happen?" And, we've actually gone back and said, "Well, here's our priority list," sometimes with scores attached, "of what we were supposed to test this release. Foo-Foo-Foo was at the very bottom. So, the thing that had the bugs that we didn't find was the thing that you told us was not that important to test." That tends to go over pretty well. And, if they say, "Oh, no, no. Everything needs to be tested. Even the low-priority things are important," then either we can turn the dial back to 1 and not test anything very well, or we need to hire people. Everything needs to be tested. We made our priority list. The things at the bottom still need to be tested. We need more staff. I've actually had a conversation like that. It's part of our LST class that we teach, and we got more people hired.
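As a rough, hypothetical illustration of that thought experiment, here is a small Python sketch with made-up story names and point scores: it splits a fixed testing budget in proportion to the priorities the stakeholders assigned, and the sorted list doubles as the evidence you bring back after the release.

    # Hypothetical stories and stakeholder-assigned priority points,
    # illustrating the thought experiment above; the names and numbers are made up.
    HOURS_AVAILABLE = 40
    stories = {
        "checkout flow": 30,
        "login changes": 25,
        "reporting tweaks": 20,
        "admin settings": 15,
        "System Foo-Foo-Foo": 10,  # bottom of the list, by agreement
    }

    total_points = sum(stories.values())
    # Split the 40 hours in proportion to the agreed-upon points.
    plan = {
        name: round(HOURS_AVAILABLE * points / total_points, 1)
        for name, points in stories.items()
    }

    for name, hours in sorted(plan.items(), key=lambda kv: -kv[1]):
        print("%-20s %4.1f hours" % (name, hours))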
BRIAN VAN STONE: Yeah. I’ve actually dealt with a handful of organizations that have found themselves in the unfortunate situation that, “Due to political pressures within the organization or financial pressures or whatever it may be, things are going to have to go to production untested. Dates can’t be moved.” It happens. Communication and buy-in became so important to smoothing that process out so that they could clear these hurdles, identify what was really important, what the risks really were, and then get to a point where they could try and make a decision about actually hiring more testers. But, until they could clear that hurdle, they couldn’t even have that conversation. So, it was communication and buy-in. It sounds really simple; but, when you get everybody in alignment on, “What is the actual problem? What is the actual risk? What are the consequences,” then, you know, you can be a lot more productive in trying to find a solution.
MATTHEW HEUSSER: Here's a thought, and I'm interested in what you guys think about this. This could be a bad idea, and there are cultures where this would not work. But, you're in the big meeting, you're talking about what has to get done by one week from Friday. You identify these five subsystems, five features, five changes. They're low priority. And you say, "Okay. We, the testers, aren't going to have time to get to those. That's just the reality. Is everybody okay with that?" And management nods their heads and says, "We get it. You're not going to be able to test those. It's okay." "Even if we magically ate Ramen noodles at our desks and got up at 3:00 a.m., these other systems would get our attention instead." Then you could ask the programming staff, "Okay. These five things that aren't tested, are they going to work?"
JUSTIN ROHRMAN: I have to actually do that a lot, and I say, “What is your biggest concern with this feature?” And, usually, that will elicit a response.
MATTHEW HEUSSER: That’s a nice non-confrontational way to do it. I’ve said, “Do you think they’re going to work? What are the odds that there’s a problem?” If you can get the programmer to say, “It’s not going to work,” then you can say, “Okay. At this point, this is not a testing problem. Management and the developers, you guys need to go talk, because I’ve done my job. I’ve already figured out if it’s going to work. The answer is, ‘No.’” If I have more than two weeks, I have less confrontational ways to get to that.
MICHAEL LARSEN: As the release manager for Socialtext, I have a "burn-down chart" that I use. It covers big-block areas, and there are certain times where, if I find myself pressed for time, I will, in that morning standup, say, "Hey, you know what? I just want to say, here's where we're at. Here's the progress that we've made. Here are a couple of key areas that need my attention. Seeing as we haven't done much work around these areas with recent product development, what's the risk if we don't cover them?" And the answer might be, "Oh, no, no, no. We need to cover those. It's okay. Take the extra day." Or, "Hmm. That's a very good question. Maybe this time we can skip that." And, you know, at least that way, if I've had that conversation and we've made a decision as a team, then if it comes back and bites us later, at least they can say, "Well, we made the decision not to cover that this time. It came back to bite us. So, next time around, we know that the risk for this one is higher." And so, we prioritize it that way.
MATTHEW HEUSSER: Yeah. What’s the name of the Dashboard in Socialtext? Is it just the Dashboard where the widgets are?
MICHAEL LARSEN: Yeah. The Dashboard.
MATTHEW HEUSSER: When that came out, the CEO of Socialtext was the acting vice president of engineering, and we talked about performance testing and how much time that would take, and he said, "Skip performance testing." So, in the retro, when the performance was not good at scale, the CEO of the company said, "Yes. Matt informed me of that risk, and I made the decision, the calculated decision, to skip performance testing," and that was it. Asked and answered. Done. "Next item." And, if we hadn't had that level of transparency, it would've gone very differently, I think. Making it transparent and letting your customers look at the risk is clearly powerful. That's one piece. Another piece is that you often find stuff that's just really bizarre. We did a systems upgrade once for a major ERP system, for a pretty big company: "We're not going to make it. What can we cut?" We're going live January 1st. The monthly reports usually run about the 1st of the following month, so February 1st. The quarterly reports aren't going to run until April 1st. We can go live without ever testing those reports. We'll run the monthlies for December and the quarterlies for the fourth quarter on the old system right after the cutover. We can go live, and we've got a month before we have to worry about those other reports. And, that only came out because of this kind of asking, making things visible, listing all of the things, and looking at it transparently.
So, that’s about all the time we have today for the show. Thanks for listening. Anything else new and exciting in the world of testing before we say, “Goodbye?”
BRIAN VAN STONE: Quest is going to be happening in Chicago in April. April 18th through April 22nd. I’ll be there, and I’ll be giving a talk at that one also. So, hopefully, I will see some of you guys there.
MICHAEL LARSEN: The first week of April, the Software Test Professionals Conference (STPCon) is coming to the San Francisco Bay Area. It will be happening in the town of Millbrae, which is just south of the San Francisco International Airport. I will be presenting two talks there, one workshop and one track talk. If anybody wants to come to San Francisco and do stuff after hours during the week of STPCon, I can make some recommendations.
PERZE ABABA: There are a couple of testing conferences that Anna Royzman is bringing to New York. One is the Test Leadership Conference, scheduled for April 27th, and there's a Test Masters Academy conference, program chaired by James Bach. That's going to be around September of this year, I believe.
JUSTIN ROHRMAN: And, I've got one more for you: CAST, the Conference of the Association for Software Testing, will be held in Vancouver, British Columbia this year, August 8th through August 10th.
MATTHEW HEUSSER: I guess that’s all for today. Thanks for coming.
MICHAEL LARSEN: Thank you for listening to The Testing Show, sponsored by QualiTest. For more information, please see https://www.qualitestgroup.com. Do you have questions, or is there a topic or an idea you'd like to have covered? Please e-mail us at [email protected], and we will answer questions that we receive on air.
