April 13, 2016
In this episode, The Testing Show talks about auto makers stepping back from the robots and reconsidering the role of human beings. Do humans suffer a skill loss when automation is too heavily relied upon?
Maybe adaptability is important after all. We are also joined by Kate Falanga of Huge, Inc. to discuss Test Management, the changing role of test mentorship on the testing team, whether there is value in having a dedicated person to act as guide, mentor, and potential BS diverter, and what happens when that person goes away.
- Mercedes-Benz swaps robots for people on its assembly lines
- Humans Replacing Robots Herald Toyota’s Vision of Future
- Carr, Nicholas, The Glass Cage: Automation and Us
- This American Life, Episode 561: NUMMI 2015
- Quality Jam 2016, QASymphony User Conference
- Test Masters Academy Spring 2016
- Call for Volunteers, AST Technology Team
MICHAEL LARSEN: Welcome to The Testing Show. I’m Michael Larsen, the show producer. This week, we have Perze Ababa.
PERZE ABABA: Good morning, everyone.
MICHAEL LARSEN: Justin Rohrman.
JUSTIN ROHRMAN: Hey, there.
MICHAEL LARSEN: Our special guest, Kate Falanga.
KATE FALANGA: Hi, guys.
MICHAEL LARSEN: And, of course, our host and moderator, Mr. Matt Heusser. Take it away, Matt.
MATTHEW HEUSSER: Hello. Thank you, Michael. Welcome everyone, to the show. If you’re a long-time listener of the show, you know most of the players, but you might not have met Kate Falanga, whom I met at STPCon a couple of years ago and who is a test manager at Huge. So, before we get started, Kate, tell us a little bit about yourself. How did you get started in testing, and what do you think about this strange and ill-defined thing that we call “software testing?”
KATE FALANGA: Sure. I think, like a lot of people, I got into testing from an odd angle. This is my fourth career. I actually started in television broadcasting and made my way into testing through the support side of software. I managed a 24/7 support team at one point. Part of that involved UAT. I liked it. There was an opening within QA, and so I moved in that direction. And then, when I came to Huge, I did a lot more work educating myself around the craft. Through that, I was promoted, first to managing a team and then eventually to the director title and role.
MATTHEW HEUSSER: Tell us, maybe, a little bit about Huge.
KATE FALANGA: Sure. Huge is a digital agency. It’s an advertising agency, but when people hear that, most of the time they think about commercials and billboards. That’s actually something we’re getting into as well, but a lot of what we do is on the digital side of things. It might just be social. For example, we’re the people behind the Cap’n Crunch Twitter feed. But, what I’m actually more involved in is the software development side. A lot of big names will come to us, and we’ll do a lot of redesigns of existing corporate websites. We will do a lot of advertising. It might be just two-or-three page microsites. We’ve done in-store experiences. We’ve done vending machines. We do a lot of apps. It’s a lot of everything all over the place. So, there’s a lot of variety.
MATTHEW HEUSSER: Before we get to our main topic, which is Test Management, let’s talk about what’s going on in the world of software development. Does anybody have anything new and exciting that they saw on the Internet this week?
JUSTIN ROHRMAN: So, I recently saw two articles—one from Toyota and one from Mercedes-Benz—about switching their assembly lines from mostly robotics to adding more people, and this is a massive change for an industry that’s almost wholly based on robotic assembly lines. And, I think there are a lot of parallels there to what we see in software testing. I read an interesting book recently by a guy named Nicholas Carr called The Glass Cage, which is about the “deskilling effect” automation has on the workforce. One of the case studies he mentions in the book is airline pilots who tend to rely heavily on autopilot. They get used to the plane doing a lot of the actual flying work for them, and they become more or less a monitor of the system. They watch gauges and make sure things are doing okay; but, when an emergency happens and they actually have to fly the plane, things get really bad. They don’t know what to do in an emergency situation, because they haven’t been hands-on flying the plane in a long time. So, I’m wondering what effect this switch back to people is going to have for Toyota and Mercedes. I think it’s going to be relearning how to build cars.
MICHAEL LARSEN: One of the things that I’ve found interesting, especially on the side of Mercedes, their explanation for why they want more people in those manufacturing roles is that, “As customers are spending more for their vehicles, they are demanding more as well.” Specifically, in the way of customizations. Customization is something robots, [LAUGHTER], don’t do very well. They’re great at handling the same tasks over and over, but customization tends to slow them down dramatically. I can also liken this to my own work in software. There’s a lot we take for granted, because a lot gets stuffed into Jenkins. We schedule those tasks. When it all works, it’s great. But, when something goes wrong, it can take hours or even days to debug everything and get us back to green. To go back to what Justin is saying, when you take people out of the flow, again, when something goes wrong, it can be hard to debug and get back to normal.
MATTHEW HEUSSER: Yeah. And, the Socialtext build— I mean, it’s wonderful software, but— How can I say this? The Socialtext build has grown over time, and it’s classic legacy software. And, I know there’s been some refactoring involved in that, but there’s a lot going on under the hood. So, it’s super easy when it’s abstracted to a button push; but, when that button breaks, it’s broken. I think there are still a lot of human processes in auto assembly; and, when you insert a machine station, it’s only going to do what it’s programmed to do, and if it has to do something different, you have to stop everything and reprogram it. Certainly, the Toyota folks are saying, “Wow. The adaptability of humans is in and of itself valuable, and the computers aren’t adaptable.” So, on the testing side, if your user interface is changing radically all the time and you’re driving the user interface with automated checks, you’re going to have to do a lot of rework. So, maybe, we wait until the user interface is stable before we get super serious about recording those automated checks. But, there seems to be a 201-level conversation happening in the auto industry, which really pleases me, and it seems to me the testing community is finally starting to get to 201, where the people asking questions at conferences are not literally asking that every single talk be on automation.
And instead, we can now say, “Since you’ve tried it and you’ve had these consequences, let’s talk about how to develop a better strategy.” At least it seems that way to me at the bigger conferences.
PERZE ABABA: With the whole advent of open-source software, a lot more teams now realize that there’s actually a lot of work that needs to be done building code to test other code. Their automation software is software, and it also needs to be tested. I do agree with what Michael was saying earlier: we can rely on this when it works. But, when it doesn’t, [LAUGHTER], we have to have that fallback of asking, “What is this actually doing?” Because someone was doing that before the solution was in place, and the solution just made it a whole lot faster because it was automated. But, of course, when it breaks, we have to understand what that piece does. With regards to the news stories we’re, [LAUGHTER], talking about, I remember an episode from This American Life. It was talking about the NUMMI Plant in California and the collaboration Toyota and GM had on building cars. It was probably GM’s reach-out program so that they could learn from Toyota how to improve the quality and the efficiency of their cars. But, when push comes to shove, a human is in the middle of it all, because Toyota does look at their people as “craftsmen.” And, that’s something that we can really all learn from.
MATTHEW HEUSSER: Thanks, Perze. So, what I wanted to talk about today was, Test Management. We throw this term around like it means something. I went to my first STAR Conference 12 years ago and there was a track called, Test Management, and I thought it would be about leadership maybe. And, it was really mostly about “tracking the work of the workers.” If we had that track today, we’d call it, The JIRA Track.
KATE FALANGA: [LAUGHTER]. Probably.
MATTHEW HEUSSER: So, what is test management? Has it changed since we individually got started in this, and where is it going? Those kinds of questions. I think I’ll start with Kate, as she is our, sort of, special guest today.
KATE FALANGA: Sure. I think it’s management pretty much like any other field. There is the side where management itself is a skill: managing people’s career development, working with people, mentoring them, and bringing them up to the level that they want to be. There’s a lot of skill and a lot to know in that area, and certainly a lot of mistakes to be made. But, if you’re going to be in test management, there’s also the part of knowing your actual discipline, and there might be elements of internal thought leadership around that discipline if you’re the most senior person in that particular field. So, there might be an element of tracking work if that’s something that you need to do. I just don’t think it’s the only element. For me, it’s trying to drive what quality assurance actually means at Huge and working with some of the senior leadership on that, as well as actual people management, as well as paying attention to projects and making sure that we’re delivering quality.
MICHAEL LARSEN: I think out of all of us, I’ve been in this game the longest. So, I’ve seen test management morph over the years.
Almost always, test management has been, “How much did we get done? What areas didn’t get touched, and who gets the blame when something goes wrong?” Each company has done it a bit differently. Some have done it better than others. Some have focused more on developing their people rather than focusing on the process. But, whether we like it or not, the process is what we are judged on.
MATTHEW HEUSSER: Well, let me combine both of those perspectives. I heard Kate list three specific things. The first, what does QA mean at our company, I would call “alignment.” And I would say Michael’s point about work tracking and coverage tracking is partially traditional management and partially how we’re doing, but not the “alignment” piece. To a great extent, doesn’t Agile just push that work tracking right back into the team and say, “You guys figure it out”? It distributes the work tracking into the team so the team can tell management what’s going on. Do you really need a manager to do that anymore?
MICHAEL LARSEN: Speaking for Socialtext, we don’t have a test manager role any longer. We only have one manager, and that’s the director of engineering. All of us report up to that level. The programmers and testers are all peers on the same team. With that in mind, we tend to self-align, and we do a lot of information sharing with our stories. Not a lot of easily-defined metrics, though we do certainly report some, such as cycle time: you know, how long stories take from start to finish, etcetera. But, we tend to focus less on metrics and numbers; instead, we look at what information we are providing rather than trying to quantify everything. Again, I don’t know if that’s normal anywhere else.
MATTHEW HEUSSER: Right. And, I see a lot of organizations moving towards qualitative analysis of the project. How are we doing in terms of, “We found these three blockers, so it’s really hosed.” It could be much more valuable than, “We ran 175 of 215 pretty good test cases and we found 3 blockers, 7 criticals, 9 majors, and 15 minors.” The second example actually provides more data but less information. I’m curious. Perze has been in test management for a long time. He was test manager at Johnson & Johnson and he’s been at The New York Times and I think there was another company in the middle. So, he might have some insight into this: What is test management? How does it line up against what we’ve heard so far?
PERZE ABABA: I think it’s been pretty consistent with what Kate and what Michael have been talking about. We scoff at the notion of counting bugs, but the fact that someone actually spent time to look for that bug and pretty much wrote a report about the bug, that’s cost that somebody spent. It’s also time that’s not spent on something else from a cost-metric perspective. But, of course, it doesn’t stop there. There’s a whole lot more conversation that needs to happen, which is what Michael was saying and what Kate was saying, where this is still a group effort, and the testing team delivers a particular value to the team. It’s something that’s really affected by the context around it as well. It’s good to find a map of sorts of, “What is really affecting us from a team perspective on delivering?” And, looking in what areas where software testing can help improve how we deliver as a team, is probably a good way to look at it as well.
MATTHEW HEUSSER: Yeah. I would say that’s the alignment piece plus, sort of, a continuous-improvement piece. So, what I heard there, which I think is important, is, How can we use testing to accelerate the delivery process or, at the very least, tweak our testing to mitigate risks in the delivery process? One thing that I think is kind of implied is continually tweaking the test strategy to say, “These kind of testing activities provide this bang for the buck and we can cut them to create this risk, but it’ll save us this much time, which we can use to hit the deadline.” Do you see yourself, as a test manager, doing that? Do you have a mind map or a list of test ideas that you’re pruning over time?
PERZE ABABA: Yeah. That’s very true for me, especially coming from context where we have to deal with a lot of compliance. So, you’re not just dealing with the actual functionality of the product that you’re delivering. There’s other areas where you have to look at security or accessibility. There has to be a good balance on what you decide and what you do with the time that you have, and considering the people that you have makes it an even bigger challenge.
MATTHEW HEUSSER: For the most part, I hear you talking about tweaking the test strategy: we could stop doing this, and we could start doing this; if we do this, it’ll take this long. It’s the negotiation process at a high level. The measurements that I’m interested in right now are the amount of first-time quality; cycle time, the time it takes a delivery team to go from “Yeah, we could probably start working on this” (it’s defined well enough to do) to actually in production; the cost of bugs; and our percentage of touch time. That is, “How much time are we actually working on these stories, and how much are they sitting around waiting for someone to do something?” If we can actually get down and figure out what those things are, we can figure out what the opportunity is, the low-hanging fruit; and, if we can quantify that opportunity, we can figure out how much faster we can make the delivery team. And, if we have an economic model of the portfolio, where we know the cost of delay of this project is however much per week and we know how many more projects we can get done in a year by improving performance, we can actually predict how much more money, how much more value, the team is going to realize through software. I think all of that falls in the test-management bucket, but I don’t think most managers have the tools right now to measure those things or, frankly, the time. And, even if they could, the portfolio in most companies is not an economic model, so you can’t actually stand up and ask, “If we went this much faster, how many more dollars would that get me?” All you get at most companies I work with now is, “Here’s how much your velocity would go up,” which doesn’t actually tell you anything about the economics of the work. Perze has worked at quite a few big companies. Do you have any thoughts about that before we move on?
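[As an editorial aside: the cycle-time, touch-time, and cost-of-delay measurements Matt describes come down to simple arithmetic. The sketch below is hypothetical; the story dates, active-day counts, and the $10,000-per-week cost of delay are invented purely to illustrate the calculation.]

```python
from datetime import date

# Hypothetical story records: when the story was ready to work on,
# when it shipped, and how many days someone was actively working on it.
stories = [
    {"ready": date(2016, 3, 1), "done": date(2016, 3, 15), "active_days": 4},
    {"ready": date(2016, 3, 3), "done": date(2016, 3, 10), "active_days": 3},
    {"ready": date(2016, 3, 7), "done": date(2016, 3, 28), "active_days": 5},
]

def cycle_time_days(story):
    """Calendar days from 'ready to work on' to 'in production'."""
    return (story["done"] - story["ready"]).days

def touch_time_pct(story):
    """Share of the cycle spent actively working, versus waiting."""
    return story["active_days"] / cycle_time_days(story)

avg_cycle = sum(cycle_time_days(s) for s in stories) / len(stories)
avg_touch = sum(touch_time_pct(s) for s in stories) / len(stories)

# Cost of delay: if shipping a week earlier is worth $10,000 (assumed),
# then shaving 4 days off the average cycle is worth roughly:
cost_of_delay_per_week = 10_000
days_saved = 4
value = cost_of_delay_per_week * days_saved / 7

print(f"avg cycle time: {avg_cycle:.1f} days")   # 14.0 days
print(f"avg touch time: {avg_touch:.0%}")        # 32%
print(f"value of saving {days_saved} days: ${value:,.0f}")  # $5,714
```

With touch time around a third of the cycle, most of the opportunity is in the waiting, not in making people work faster, which is the point Matt is driving at.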
PERZE ABABA: So, if you look at specific cost metrics as a team, not just as a software testing team, ideally, the higher the velocity is, the lower the cost will be. But then, there’s a tension metric there, because you’re also dealing with maintenance costs and with the bugs that you need to fix. The challenge then is, “At what level are you looking at velocity?” If you’re looking at the Scrum, [LAUGHTER], team level, you might be looking at a two-week iteration. And then, from the more detached manager or even director levels, the velocity in delivering this piece of software that they asked for three months ago, that cycle becomes every three months. It’s definitely a challenge, [LAUGHTER], weaving this notion of velocity into cost.
When you’re jumping in as a test manager in this day and age, you’re not just looking at your little nook of software development; you’re now forced to participate on a much wider scale. So, you’re coming in more like a testing strategist or a quality strategist, looking at the bigger picture and asking, “Where can we introduce ways to be better at delivering what we need to deliver?”
MATTHEW HEUSSER: So, we kind of skipped over this. I read a book by the Harvard Business Review on traditional management. It looked at a very large sales organization where the jobs were very similar, and yet, when they interviewed the managers, they got substantially different answers to, “What is management?” So, what is traditional management, how much of that is still relevant for testing, and how should testers tweak it? How does testing make that different?
JUSTIN ROHRMAN: I have a hard time answering the question about what traditional management is. I’ve never actually held a management role in software testing, but I imagine it to be a lot of the things we’ve said so far—data collection, recreating realities for people up the chain who weren’t there to observe them, and reporting on the status of the project. But, one big thing I am noticing (and this might be an artifact of the fact that I’ve been at really small companies, 20 people or fewer, for half my career) is that the role of one manager presiding over a group of testers, giving career advice and doing reviews, discipline, skill development, and all of that stuff, is just going away. Now performance reviews and discipline roll up to the development managers, or somebody even higher up, while skill development and everything else rolls down onto the team, so they have to direct themselves. I don’t know if anybody else is noticing this trend.
MICHAEL LARSEN: With the exception of my first 2 years at Socialtext, the last test manager that I had was at Cisco Systems, and that was in 2001.
JUSTIN ROHRMAN: Yeah. The last actual manager I saw was in 2007. It’s been a while, and that was a really traditionally-structured organization.
KATE FALANGA: Do you guys find that actually having a manager who is in your discipline is valuable? Because, in our organization, we’ve found it valuable for career development and mentorship to have a direct manager who has a deep understanding of what your day-to-day life is like.
MICHAEL LARSEN: When Ken Pier asked if I would be willing to work for him at Socialtext, I jumped at the opportunity. Matt remembers Ken well and probably understands why I’m saying this. Notice I’ve said, “Worked for,” in the past tense, because Ken passed away last year. Ken was wonderful at mentoring and protecting his team. [LAUGHTER]. I remember when I accepted the offer to work for him, he told me I “should read Tom Wolfe’s Radical Chic & Mau-Mauing the Flak Catchers.” I was a little confused at that at first, as in, “Why read that?” I realized later that it had a lot to do with the dynamics of testing teams and the pressures we face. Ken decided his primary role was to be the “Flak Catcher” for the test team.
He had both a willingness and a personality where he could tell anyone that they were “full of it.” He would push back anytime anyone tried to apply unreasonable pressure on us or to have us cut corners, regardless of who it was—whether it was from an influential customer all the way up to our CEO. He also helped us grow in skill and develop opportunities to get into new areas. More to the point, he was one of us, as in he worked side-by-side with us, tested with us, and probably worked longer hours than all of us combined at times.
MATTHEW HEUSSER: So, there’s a lot in there, Michael. Thank you. First of all, Ken was fantastic. I used to say he was “insane,” as a compliment, given the number of hours that he worked. People say, “Software is a young man’s game.” He started at Xerox PARC in 1976 or something, and he consistently worked more hours than me. The amount that he got done was amazing. The first of the two pieces you mentioned was the alignment piece, both internal, “Hey guys, here’s how we do testing here” (and a good manager will take the best ideas of other people and accelerate them), and external. That is, “Hey management, hey customers, hey anybody who wants to know, here’s how we think about testing,” which prevents this: “Why didn’t you do that?” “Well, it’s because here’s how we think about testing, and that doesn’t fit in. Do you want to have a conversation?” And, it’s good to have someone who has that much time. They can be the face of testing, because most of us are heads down doing testing. The other piece is the BS umbrella, which I think is very valuable: “Let me go to the meeting so you don’t have to.” When the early Agile Movement came out and Scrum said, “Just produce working software and don’t worry about all the silly bits,” there was a lot of holdover from the olden days. People were asking for design documents and functional specifications and weekly meetings that were not a good use of your time, and there was a role called “the blocker.” That role was to just go to the ridiculous meetings and create the ridiculous documentation so the team could focus on delivering software. To the extent that that is combined with alignment, I think a test manager can be very valuable.
KATE FALANGA: I think that’s the switch when people move to Agile. My approach to management has switched to a service-oriented approach. As Michael mentioned, it’s to enable teams, to empower teams, and to mostly work in the background to make sure that they have everything they need to do good work—whether that’s the skills that they need, making sure that they aren’t bothered so they can focus on work when they need to, or making sure that they’re fed if they’re actually working late. That’s what management has become in an Agile world. It is to enable teams and not to worry as much about metrics and “Are you delivering?” Although that’s certainly a piece as well, it’s more of a team-oriented approach. It’s really about, “What’s the team producing, with QA as part of that team?”
MATTHEW HEUSSER: Thank you, Kate. And, where I see test management still around and thriving is organizations like Huge, like Hyland Software (with a couple-hundred testers), like TechSmith (with a couple-dozen testers), where there’s multiple teams of testers that are embedded and you need somebody with their head at a slightly-higher level to sort of communicate and collaborate. I want to talk about test management at Excelon for a minute. Justin and I have not used that term. In my role at Excelon, it’s very two-sided, but I want to accelerate our contractors, our technical staff, so they keep the customers happy and continuously add value and build relationships to generate more work.
So, we’re pretty transparent about rates. It’s not like an annual review where I’m trying to give you a one and you’re trying to get a five and it’s this weird back-and-forth. We know what the bill rate is; and, if that number goes up, the contractor gets more. So, the conversation is, “What can we do to provide more value?”, and I can be a bit of a coach. And, I think that goes two ways. I, as the coach, can say, “What are you reading? You should read this. What about that? I think you need to work on that.” But also, the technical staff can say, “Hey Matt, we found this really interesting article. Here’s this opportunity.” It’s a two-way conversation. So, I’m doing some of the account management, which is interacting with the customer sometimes, when it’s appropriate, and I’m doing this coaching, but I’m trying to shift it so the staff is doing more of the interacting with the client and I’m backing off from there. It’s my job to make them go faster.
JUSTIN ROHRMAN: In my role as an independent, the client has the ultimate vote about my performance. If they don’t like what I’m doing, they’re just not going to renew my contract. So, I like to make sure that they’re pretty happy. I talked to the clients every day and get feedback about that work. With Matt’s role as, I guess, somewhat of a test manager for this organization, we focus a lot on skill development—be that with him helping me get to a training class where an employer would normally just pay for all of that or by, like you mentioned, passing back and forth interesting articles or with writing articles. He’ll do a lot of content development, helping me improve my writing ability, and things like that. It’s just this never‑ending cycle of skill-development focus.
MATTHEW HEUSSER: Which almost sets up a podcast on: What is Coaching? I don’t think we came to an answer. I think test management is a lot of things to a lot of people. What your director thinks it means and what your staff thinks it means is probably a conversation worth having. We get in trouble when we’re not aligned on the meaning of those words, “test management.” Everybody expects different things, and we have shallow agreement, which leads to conflict later. That sounds like a good podcast, but unfortunately it’s for another podcast, and we’re out of time. A couple of things I wanted to mention that I think are interesting. I just got back from Columbus, Ohio for QA or the Highway, and we did a section on Making Testing Strategic, which was an hour long and really just a lot of fun. We’ve got the audio recording. So, if Michael can find enough good material, we’re going to make it a podcast, and possibly have those speakers back on the podcast with us to talk about it some more. Upcoming, I’ve got the Agile Conference in Atlanta, Georgia in July. I’ll be there the whole week, if you’re coming to that, and that’s about all the news that I have right now. Anybody else?
JUSTIN ROHRMAN: The Quality Jam is going to happen in April. That’s the QASymphony User Conference. Also in Atlanta, Georgia.
KATE FALANGA: I want to bring everyone’s attention, especially since we’re on the topic of test management, to two upcoming conferences, as well as Master Classes on this particular topic, that Anna Royzman has put together. One is in April, and another is coming up in the fall. So, I do encourage people to check out www.testmastersacademy.org, and I’m sure that link can be included in the show notes as well. If you want to dig down a little deeper into what test management means, attending these conferences is a great way to learn from experts in the field, specifically about this topic, as well as discuss it with peers. So, I encourage people to come take a look and join us.
PERZE ABABA: All right. For the past eight weeks or so, I’ve been a volunteer for The Association for Software Testing’s Technology Team, and we’re still looking for more people to form the core of this team. We’re dealing with the technologies that keep the operations of the AST site running, among other things. If you have knowledge of and interest in website development, or hosting e-mail, wikis, and e-learning systems, and this is up your alley, please reach out to me or Eric Proegler, or you can send an e-mail to: firstname.lastname@example.org.
MATTHEW HEUSSER: And, we’ll put all this contact info in the show notes just so they have a direct e-mail they can cut and paste. Thanks everybody, for coming. Thanks for listening.
MICHAEL LARSEN: Thanks for having us.
KATE FALANGA: Thanks guys.
PERZE ABABA: Thanks, Matt.