Episode 57: When Complete Automation Seems Impossible with Graeme Harvey

It seems that "automating everything" is an implicit, if not an explicit, goal in many organizations, and in the process of trying, many projects fail to accomplish it. Our guest Graeme Harvey gives a popular talk about this, taking an interesting journey through how to approach and discuss the problem via comparisons to the book and movie "The Martian". Graeme joins Perze Ababa, Matt Heusser, Michael Larsen, Gerie Owen and Peter Varhol to riff on ideas from the book and movie and how to use these principles in our everyday automating experiences.

Also, in our news segment, it seems Elon Musk is realizing that "automating all the things" can actually be a detriment, as it has been a major cause of the delayed delivery of the Tesla Model 3.


Any questions or suggested topics?
Let us know!


Transcript:

MICHAEL LARSEN: Hello and welcome to the Testing Show, Episode 57.
[Begin Intro Music]
This show is sponsored by QualiTest. QualiTest Software Testing and Business Assurance solutions offer an alternative by leveraging deep technology, business and industry-specific understanding to deliver solutions that align with clients’ business context. Clients also comment that QualiTest’s solutions have increased their trust in the software they release. Please visit QualiTestGroup.com to test beyond the obvious!
[End Intro]
MICHAEL LARSEN: Hello, and welcome to The Testing Show. We wish a happy June to everybody. Good to see you again. I’m Michael Larsen. I’m your show producer and somewhat timekeeper. Let’s welcome our regulars. Matt Heusser.
MATTHEW HEUSSER: Hey, good time zone. Welcome.
MICHAEL LARSEN: Returning to the show again, Peter Varhol.
PETER VARHOL: Thanks, Mike. This is Peter Varhol, in the Boston area, international speaker and writer on technology topics, looking forward to participating again.
MICHAEL LARSEN: Also, I guess now we can just basically say that she’s a regular on the show, welcome back Geri Owen.
GERI OWEN: Hi, everybody. I’m Geri Owen. I’m VP of Knowledge and Innovation at QualiTest Group for the U.S. I’m glad to be back again.
MICHAEL LARSEN: We do have a special guest today, maybe new to a few of you out there. Those who have been at various conferences are probably familiar with him, especially if you are from – well, I’ll let him talk about himself in that regard. I’d like to welcome Graeme Harvey to the show.
GRAEME HARVEY: Hey, thanks for having me. For anyone who doesn’t know me, I’m Graeme Harvey, formerly of the Kitchener-Waterloo area in Canada, but now coming to you from San Francisco.
MICHAEL LARSEN: Well, let’s go ahead and let’s get the show on the road, and Matt that means it’s time for you to do your thing.
MATTHEW HEUSSER: Thanks, Michael. We start with the News Segment. Today’s News Segment is about Tesla Motors, and it seemed very familiar to me. The argument is that, after first claiming they were going to automate everything, with robots for the future and factories in which there were no humans, Tesla seems to have shut their plant down, is retooling, and is injecting more people. I think we talked about a very similar story from Toyota on this show before. Am I right, Michael?
MICHAEL LARSEN: We had a chat about something akin to this, (I want to say early on in the show) about the NUMMI Plant. It does seem like history is rhyming a bit.
MATTHEW HEUSSER: Yeah, that’s a good way to put it. Right, it’s like poetry. But, I had wanted to get the reaction from the panel before I went further. Let’s start with Graeme, who is new to the show, but a long-time friend. How did that article speak to you?
GRAEME HARVEY: Yeah. I definitely found it interesting. I mean, when people think of Tesla, they think of them kind of being on the leading edge of, you know, new technologies and new processes and ways of approaching this stuff. So, in some ways, they are experimenting, but they’re experimenting with a production item. They’re very much automating in the way we do in software, kind of hand-in-hand with new developments. It’s just really interesting to hear Elon Musk, who is usually really ahead on this stuff, actually back-pedaling the whole automation thing.
MICHAEL LARSEN: Officially, the company is describing the week-long shutdown of the Model-3 line as “planned downtime to improve automation and systematically address bottlenecks in order to increase production rates.” What I found interesting about this on my end actually was, there are two levels of automation we’re talking about here. There’s the fabrication automation—the really small components and getting the components together and the welding and all that—but the area that they’re struggling with is later on down the line, physically putting the car together. That’s where it seems they’re having the biggest amount of problems, to where you need actual human intuition about, “Wait, something is not quite lining up here.” The errors that are baked into the robotics just become more compounded at that point because you’re dealing with something more abstract. You need more spatial reasoning for that.
PETER VARHOL: Yeah. I have two thoughts here. One of which concerns the relationship between Tesla and other auto companies. If we look at traditional auto companies such as General Motors, Ford, and so on, they also have experimented with automation over the past two or three decades, and they pretty much represent the state of the art in automation. Now, one of the things that Tesla is having problems with is the final welding of the frame. This is something that traditional automakers have had down for a number of years now and generally don’t have problems with. So, I’d say there’s a level of inexperience with Tesla here. The second thing is, as you pointed out, they’re trying to fully automate the factory and make it a workerless assembly line, so to speak. In doing so, they attempted a huge leap of faith, something even the traditional automakers have not attempted. It turns out that leap of faith probably wasn’t justified (A) on their experience level and (B) on the state of robotics and intelligent technology today.
MATTHEW HEUSSER: There’s an interesting corollary, for the folks who haven’t read the Toyota article. The robots worked well for Toyota. They had the whole thing working, but they needed to change it. So when you change up your assembly line with humans, you say, “Well, this part goes over here and you’re going to stand over there now,” and you’re good. With the robots, when you say, “We need to change the way the robots run this line because we’re doing the 2017 models and not the 2016,” you’ve got to reprogram them, and the reprogramming means your factory might be out for a week, or you might have to move things around, or you might need entirely different robots that are configured entirely differently. The downtime, the retraining time, actually exceeded the value that they got. That was the Toyota story. I didn’t know the little bits about Tesla.
PETER VARHOL: It seems to be that they’re having problems, as I said, in two separate areas, one that traditional automakers seem to have largely resolved years ago, and also because they’re trying to “automate too much,” so to speak. I will emphasize also, “too quickly.” Perhaps, if they took a more gradual approach toward automation, over a period of years, once again, they may find out it will work out better for them in the long run.
GERI OWEN: I think that’s a really good point, Peter. I almost wonder if they’ve put a lot of thought into their automation strategy. I think you have to plan a good strategy and it has to develop over time. It seems like maybe they did try to do too much too fast.
GRAEME HARVEY: Actually, I read a quote from Elon Musk saying something about they had too many complicated conveyor systems moving things around to too many different places, and that was one of the things causing problems for them. Maybe that was part of their not having fully planned and just putting things in as quickly as they could.
GERI OWEN: I think automation, whether it’s software test automation or automation of an auto plant, really needs a lot of up-front planning and strategy development.
PETER VARHOL: It’s funny that you mention that. I seem to remember about three decades ago, during the construction of the Denver International Airport, which was supposed to be the most modern automated airport in the world at the time, they had a completely automated system for collecting bags, distributing bags to various airplanes, collecting bags from the airplanes after they’ve landed, and distributing them to the proper carousels in the baggage claim area, and the whole thing didn’t work. The whole airport project was delayed for several months while they ripped this out and went back to a manual system.
MICHAEL LARSEN: That was one of the comments that I also saw referring to this, when this first broke and Elon Musk was referring to this, but he said something to the effect of, “People are underrated.” [LAUGHTER]. I thought that comment was just both shockingly obtuse and yet, at the same time, very brilliant.
MATTHEW HEUSSER: You have a Silicon Valley guy who made billions of dollars by writing code to automate stuff, right? That’s what PayPal is. PayPal is the best fraud detection engine. Anybody could send money. The problem was, you’d send money and those guys in Russia would figure out how to scam and hack your account. So, the power in PayPal was in its fraud detection engine. So, the solution to every problem is writing code. I think Geri’s comment about strategy is prescient. As a little anecdote, frequently when I work with companies and they want to automate, they get a tool. Selenium, SmartBear has got a tool. There are a bunch of tools out there. Grok has a tool. They write all this automation, and they can press a button and the screen flashes. Everybody says, “That’s really cool.” I say, “Well, wait a minute. Do you have to set up a database?” “Yes.” “Did you tear it down automatically?” “No.” “So, if you run it again, will it pass?” “No, because now it’s got bad data in it.” “So, you have to manually set up and tear down the database?” “Yes.” “Is this hooked into CI?” “Well, no.” “So, you’re going to have to get the code out. Are you going to have a server?” “Well, we have one test server.” “So, you have to get the code out, put it on the test server, get out your automation code, set up the database, run it one time to get test results?” “Yes.” “So, you pressed a button and it took 10 minutes, but there’s a lot more happening behind the scenes?” “Yes.” “Maybe you should automate all the other stuff first.” They look at me like I’m crazy and say, “It’s just a concept.” Eighteen months later, they call me up and say, “That guy that wrote all that junky automation is gone and we threw it all away and want to start all over again, can you help us?” So I think what I’m getting at in that little story is that the strategy was wrong. They were working on the wrong pieces to start out with and created something that was unsustainable.
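A minimal sketch of the setup-and-teardown piece Matt is describing, using Python's built-in unittest and an in-memory SQLite database. The table, data, and query here are invented for illustration; the point is that a fresh database is built and discarded around every run, so pressing the button twice passes twice with no manual resets:

```python
import sqlite3
import unittest


class OrderSearchTest(unittest.TestCase):
    """Hypothetical example suite; substitute your application's real fixtures."""

    def setUp(self):
        # Automated setup: build a fresh database for every run,
        # so a re-run never fails on stale data left by the last one.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
        self.db.execute("INSERT INTO orders VALUES (1, 'open')")
        self.db.commit()

    def tearDown(self):
        # Automated teardown: discard the state so nothing manual
        # is needed between runs.
        self.db.close()

    def test_open_orders_are_found(self):
        rows = self.db.execute(
            "SELECT id FROM orders WHERE status = 'open'").fetchall()
        self.assertEqual(rows, [(1,)])
```

Because setup and teardown live inside the suite itself, a CI server can run this on every commit without anyone staging data by hand, which is the "automate all the other stuff first" point.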
What we want ideally is, I think, to come up with a system that’s going to work. So Graeme Harvey has a talk on Automation Lessons From The Martian. Did I get that title right there? It’s close, right?
GRAEME HARVEY: Yeah. It’s close. The title changes all the time, so that’s good enough. [LAUGHTER].
MATTHEW HEUSSER: I thought maybe you could give us an intro to it, and we could kick around some of those ideas.
GRAEME HARVEY: Yeah. I’d be happy to. I think it does warrant a little bit of clarification. I think in a conversation with Michael actually I was explaining what the talk was about and he was like, “Oh, I thought it was like looking at automation from the view of like coming from a different planet,” which I think is another great angle to take.
MICHAEL LARSEN: [LAUGHTER].
GRAEME HARVEY: But, not this particular one. [LAUGHTER].
MICHAEL LARSEN: Yeah. That’s right, because when I first saw it, I wasn’t thinking the movie. I was thinking of it from, “What if you were looking at automation completely removed from even being part of the company?” The interesting thing was, though that was not what you were originally looking to say, it still held: if you were somebody coming into this equation not knowing anything about the history of the company, not knowing anything about the context, not knowing anything about what had gone on before, and you were asked to automate all the things, what would you do?
GRAEME HARVEY: Yeah. I’m definitely going to take some of that, [LAUGHTER], and run with it later. But, for this particular one, I guess, I was watching the movie and I also read the book by Andy Weir, for anyone who knows the story. If you don’t know the story, it doesn’t matter, the lessons actually still translate quite well. The story boils down to an astronaut being stuck on Mars, alive and trying to get home. He has a really big problem to tackle and he has to tackle it very practically and very pragmatically. As I was watching the movie, there were a lot of good quotes and themes where he’s learning lessons. He doesn’t really describe the lessons, but you see them. They’re really good examples of just kind of iterative problem solving. Since my domain is automation and testing, I kind of applied it out. I yanked out nine lessons from the movie that I thought were interesting and could be applied, and they’re all about this kind of iterative, practical approach to automation. Instead of trying to solve the problem in one big flashy way, you’ve got to do what you’ve got to do to survive.
PETER VARHOL: I’d be interested in hearing those lessons.
GRAEME HARVEY: I’ll just capture the nine that I talk about and then we can discuss any of them that we want. When we go through the story of The Martian, you know, he’s got a lot of little problems to solve with a big goal of getting home. When you think automation, I have many lessons learned about times that we went ahead and implemented a solution with this end goal in mind, and we never delivered the steps along the way to other teams or put them into the CI system because we weren’t thinking of them as done or complete, and therefore we weren’t thinking of the system as being able to provide feedback when really it could have. In the case of Mark Watney, the astronaut on Mars, the first thing he does is start in an area that he knows very well. He actually has enough food to survive for some number of weeks or months, not enough obviously to solve his problem, but he doesn’t need food right away. He starts by figuring out how to grow food, because he’s a botanist. It’s interesting that of all the problems he’s facing stuck on this planet, the one that he starts solving is not necessarily the one that appears most needed, but it was the one where he could get the quickest wins because that’s where his knowledge lay. For me, that was a pretty big lesson, because I’m always thinking about automation like, “Oh, how do we solve the biggest problems facing the company right now,” and always kind of trying to look at what that means and “What do I have to go learn” to tackle those. But actually, starting in my area of expertise, building something, and accruing some value is a very practical approach.
PETER VARHOL: Graeme, since you start with that particular problem, I’d also like to say that problems can be layered. For example, he had the soil to be able to grow crops, but that soil was not fertilized. He had to figure out a way on how to fertilize that soil before he could actually use that soil to grow. So, there were multiple layered problems there.
GRAEME HARVEY: Yeah. This actually comes up a lot. The quote from the story, he’s talking about how he got home, and he says, “You’re going to be sitting there thinking, this is the end, but you can either accept it or you can get to work.” The way he describes it is he says, “You solve one and then you solve the next problem and you solve the next, and you solve enough problems you get home.” This is kind of that common theme through the story that I’m always trying to point people at. The one example I use is when he’s got his rover, which is not designed for long distance, and he is trying to experiment with how he can get further and further away from his basecamp to where he needs to get to, and he starts trying different things. He’s experimenting. One of the experiments he does is he turns off the heater in his rover and that allows him to get a significantly farther distance out of the machine, but he’s too cold. But, in his mind, “Okay. I’ve figured out the distance problem, and now I have to solve the heating problem. In the same way, I know how to grow the plants, but I have to figure out the fertilization problem, the water problem. Well, let’s start with what we know and solve that area, and then that will clearly uncover other problems which will become the next pressing problem.” Well, you couldn’t have known that if you just started tackling it from that big-picture high level.
MICHAEL LARSEN: I see parallels to this. As testers, especially with automating all the things, very often I think we tend to have this big high-level picture and we have to put together this crazy abstraction layer to do certain things. It’s the paradigm that we’ve been given. You need to get a bunch of users into the system, and it’s kind of natural to just say, “Well, okay. I’ll go ahead and I’ll make a modification to a script that I have, and I’ll log in multiple times. I’ll go to the control panel, and I’ll add users that way.” It’s horribly inefficient, but if that’s what you know how to do, I guess you can start with that. As you do that and you realize how much time it takes, you say, “Oh, come on. There’s got to be a better way to do this,” which will lead you down looking at another avenue. For me personally, one of the avenues I look at is to say, “How can I work with the system without actually having to touch the GUI at all?” So, I want to look at those areas first, because those can actually be automated with a Bash shell, if need be. They don’t look as cool, but for just genuinely getting stuff done, they go a long way.
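A sketch of the below-the-GUI idea Michael describes, here as Python for consistency with the other examples. The field names and the batch shape are assumptions for illustration; the point is that one generated document handed to an API replaces fifty trips through login and the control panel:

```python
import json


def make_user_payloads(count, role="tester"):
    """Build request bodies for a batch of test users.

    The field names (username, email, role) are hypothetical;
    match them to whatever your application's user API expects.
    """
    return [
        {"username": "load_user_%03d" % i,
         "email": "load_user_%03d@example.test" % i,
         "role": role}
        for i in range(count)
    ]


# One JSON document, handed to the API however you like (curl, a
# Python HTTP client, a shell loop) -- no logins, no control panel.
batch = json.dumps(make_user_payloads(50))
```

It doesn't look as cool as a browser clicking through screens, but it sets up fifty users in one request instead of fifty GUI sessions.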
GERI OWEN: I think that goes to, like Matt said, you go buy a nice, beautiful, shiny new tool and then you’re planning, “Okay. So, now we want to use this tool,” rather than look at, “What can we get for quick wins, and what’s the best way to approach them?”
MATTHEW HEUSSER: Well, I don’t think many people actually do any kind of activity analysis. Of the things that they do a whole lot, how many of those things can be defined by an algorithm that’s almost as good as or comparable to what the human does? How would the computer know that those things are wrong? If you were to have those things done much more quickly, how would that free up the rest of the organization because they’re getting what they need sooner? If you do that sort of analysis, with most of the customers I’ve done that with, we look at setup. [LAUGHTER]. We look at CI. We look at unit tests. Usually, you start there. Maybe it’s just some weird, junky FTP process that runs all the time, that you need to write a batch script for. That’s usually what I find, and yet if you Google, “Automation, what should I do,” that’s not what you’re going to find. You’re going to find “test execution.”
GRAEME HARVEY: From what I have experienced, people want to jump to automating what they think testers do. It’s actually very interesting. I think that’s one of the reasons why UI automation is so prevalent, and big automation tools help automate at the UI layer, because it’s an easy mental transition for a tester to think, “I click through these steps or I do these workflows, and we can just automate that.” It’s easy for other people to see that and go, “Yep. I know testers click through workflows, let’s get a tool that clicks their workflows.” But what’s often forgotten is all the time testers spend on setup and tedious tasks that aren’t necessarily the checking, but they’re part of testing and they take a lot of time. It’s not where we mentally go to start.
PETER VARHOL: I can’t help but think back to the keynote speaker at the CAST Conference from 2016, two years ago, Nicholas Carr, where he said, very emphatically, “You can’t automate what you can’t do yourself.”
MATTHEW HEUSSER: One thing that I wanted to pick up on, Graeme makes a good point, he mentioned that the hero character says, “I’ll do this, and that creates another problem. I’ll do that, and then I’ll try to solve that other problem.” I think there’s an interesting fine line between breaking a problem down into its component parts and solving each of them, and yak shaving, where you’re just never going to get done because you keep finding there’s one more reason there’s a hole in your bucket. It seems pretty intuitive to me that when you realize you’re yak shaving, it’s often in hindsight. I don’t know if Graeme has any insights into the difference between those two and how to recognize which one you’re in.
GRAEME HARVEY: Well, I wish I had a magical solution to it for sure. [LAUGHTER]. One of the things that I talk about when I say these things is, you have to constantly be reassessing. All things in moderation. I propose people continually bolt things on and solve problem after problem. For example, when you solve a problem but that creates another problem that you have to solve, I don’t necessarily believe the newly-created problem has to be the most important thing. It might be a new problem, but you really don’t care about it right now. I think it’s important at least for people to remember that so that they’re not doing what you’re talking about where they’re chasing holes in the bucket and really not getting to that final solution. The new things that come up have to be reassessed, and this is why most of the teams I’ve worked on in an automation service capacity don’t often work in Scrum because we can’t do sprints because we’re always, [LAUGHTER], reprioritizing the things that are coming up.
MATTHEW HEUSSER: So, do you not do sprints because the feedback loop for a sprint is too wide, is too long?
GRAEME HARVEY: Yeah. It takes a little bit of understanding of what my team looks like. My team is purely a service team providing tools and automation to other people. We balance a backlog of two different types of work. There’s our roadmap of work that we’re trying to accomplish, that we think people need or people have told us that they need. Then, there’s the very urgent, “Something’s broken or something’s blocking. We can’t release because automation is failing,” either for legitimate reasons or flaky reasons. I struggle with doing sprints because I struggle with telling people, “It’s going to take at least a sprint to get to your work once it’s been prioritized,” as opposed to being able to say, “Well, that does seem like the next priority and our cycle time is known to be this long, so your ticket will be started in the next X number of days or hours or whatever that cycle time looks like.”
MICHAEL LARSEN: Just wanted to jump in here real fast and welcome Perze Ababa to the call. Glad you could join us Perze.
PERZE ABABA: Hi, everyone. Thank you for having me. All right. I’m going to open with the biggest ones that I’ve kind of seen so far. Just like any automation program, I think context is definitely king: “What are the things that actually define the project, that can make or break it?” When I jumped into an automation program, the goal that was given to me was, “Automate everything,” and I was able to talk them down to changing that goal into “automate what we can automate.” If there’s a problem that you’ve encountered and solving that problem opens up other new problems, where does it essentially end? These new problems that we’re trying to solve are still connected to the initial one. For me, that’s going to be a pretty key thing. When we first started, it used to take us roughly 29 days to perform an upgrade. That’s really just to test that particular upgrade before we make the decision to push it out. Three years later, we’re able to bring that down, for a single site, to like 6.8 hours. But the challenge that we have now is, “Are we actually automating so much of this that we’re missing the important part?” Automation will definitely make your build faster, as long as you don’t introduce any more changes into your system. How do we then define effective automation, especially when it’s really a lagging indicator of what the automation problem is?
GRAEME HARVEY: That’s interesting. One of the projects that I worked on that taught me a lot of these lessons, it wasn’t necessarily a failure, but it didn’t go very smoothly. I think we were trying to automate the testing of all of the APIs in the company I was in at the time, supposedly to save people time testing all these APIs, testing regressions, I guess. We were thinking about that framework we were providing. We were thinking of it as complete only when the framework existed and was running and was testing all of the APIs. We had this end goal, much like the Martian trying to be back on Earth. If that was the only problem you were trying to tackle, it would be a little bit absurd. For us, we were trying to automate the testing of a very, very high number of APIs. Had we thought, “Let’s pick the most important ones, like user signup or user login,” and built a framework that gave us the ability to do just that, I don’t think we would’ve hit as many roadblocks up front, and we would’ve been able to provide some tests that were running and prove some value. One of the things I talk about in my presentation is, in my experience, proving some value in a short period of time wins you more time and resources from management because they see the value. But, when you’re creating this nebulous framework, “We’re doing automation and someday we’ll provide it to you,” and that goes on for extended amounts of time, they start to wonder, “What are you doing, and is it providing value?” The second lesson I picked up on was bolting things on. I take the rover that he has and I show, by the end of the story, all the extra things he’s added to it. The nuclear reactor as a heater. The water purifier. Solar panels. An additional kind of room he’s built on top. Just all things where he took something from its original purpose and built on it to be able to help him solve other problems. Then there’s a really big lesson here about breaking the rules.
I talk about this one with a pretty close-to-home example. I say, “Sometimes you just have to do things that people tell you are anti-patterns, if you’re trying to get to some end goal.” The example from the movie is he’s digging up a nuclear reactor, and he says, “There’s a section of the manual dedicated to ‘do not dig up the nuclear reactor.’” But he has to do it to survive in his case. So, I relate that to when we’re doing UI automation and we actually add hard sleeps of 1 second or 2 seconds. Everybody who has ever done automation reels in their seat like, “That is something you don’t do.” I always say, “Well, we had some tests that were failing and adding that in allowed the tests to pass.” Now, if we stop there, we’ve done something stupid. But if we were able to unblock the tests knowing full well that our next problem to solve is, “Okay. What’s causing the issue that only the sleeps are presumably solving,” then we dig into it a bit deeper. That first step is unblocking, even if it means breaking the rules.
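The usual follow-up to a hard sleep, once it has unblocked the tests, is a bounded poll: it returns as soon as the condition holds and fails loudly when it never does, instead of silently padding every run. A minimal sketch of the idea (Selenium's WebDriverWait follows the same pattern; the names here are our own):

```python
import time


def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse.

    Unlike a hard sleep, this unblocks the moment the app is ready,
    and raises (rather than masking slowness) when it never is.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

A test would call something like `wait_until(lambda: page.find("Save"))` in place of `time.sleep(2)`; when the underlying timing bug is finally fixed, the wait costs nothing.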
PETER VARHOL: Graeme, I don’t know if this is one of your lessons. One of the big lessons that I took away from The Martian was, “You are not alone.”
GRAEME HARVEY: Yeah. That’s a real good one. Maybe I should move up to 10 lessons. [LAUGHTER].
PETER VARHOL: [LAUGHTER]. Applying that to software automation, you always find somebody who has advice, expertise, and help, and it may not be in the place you’re necessarily looking for it. In his case, he had to go way out into the Martian landscape to find a long-lost crashed probe to salvage for a communications approach.
GRAEME HARVEY: Right. Related to this topic of communication, I do also touch on the communication between the people implementing and the people asking for the implementation. I refer to them as the “execs,” though it’s not always the executives. But one of the things that I find interesting about this story is that back on Earth they’re talking about rescuing Mark from Mars and coming up with their plans for how they’re going to do this. It’s funny, because their plans are very much made in isolation from him. They kind of try to keep him shielded. The people who were ultimately affected by the decision, both Mark and the other astronauts that were already in space as part of his crew, weren’t given the option; the choice was made from the ground. When the alternate, riskier option was presented to the astronauts in space, they chose that one. It’s interesting to me: the people who are knee deep in the problem know what’s important. They know what needs to be solved. They know the biggest issues they’re facing and where automation might help them, and sometimes what you see is organizations deciding in isolation, “We need to do automation. We’ve heard it’s good or helpful or fast.” They say, “We should do automation,” and no one really knows what that means. Just, “We should just do it.” So, they go about telling people what to do, when those testers on the ground doing testing every day might know better.
PETER VARHOL: So Graeme, what are the rest of your lessons? [LAUGHTER].
GRAEME HARVEY: Yeah. I’ll dive into the last ones. I think we’ve touched on a lot of them anyways. It’s important to not abandon things too quickly. So, we see where Mark is making water and he blows himself up. He admits he made a mistake, but he still needs the water. So, he’s not going to abandon it. There’s actually a quote in the book that Andy wrote where it says, “They say no plan survives first contact with implementation.” I think that’s—
PETER VARHOL: [LAUGHTER].
GRAEME HARVEY: —really important, [LAUGHTER], for people to remember. Celebrating small wins is kind of what I was touching on earlier, about not trying to build the entire solution before we present it to people, but showing when you’re able to accomplish things. You know, “Hey, we found this ability to log into the API, and now we’ve saved one minute’s worth of login time on all our tests,” or whatever those small wins are. Then, finally, the last one I touch on, I say how, “We might have to fly like Iron Man.” If you remember the story, everything goes to hell at the end, and they’re going to miss each other by so close but just far enough that they can’t make it. All is for naught if he can’t make it back to the spaceship. It needed kind of a last-minute heroic solution to solve the problem. We talk often in engineering about not rewarding heroes for going and doing heroic things all the time, but there are times where you just need someone to see a problem and just kind of fly like Iron Man, [LAUGHTER], and do what they’ve got to do to see it through to completion.
MATTHEW HEUSSER: So, if I can, in writing, I get into this problem. We’ve realized, “Oh, man. We are being hired by someone who doesn’t understand what their software does in order to try to influence someone who also doesn’t understand what the software does.” It’s like a marketer talking to an executive. So, they hire us because we actually understand what the thing does, and that entire interaction is wonky and hard to navigate, and sometimes we’ve just turned the work down on the writing side. It feels similar. So, the role of product owner for automation is one that I think is critical and often doesn’t exist. No one even thinks about it. The other option that I have preferred is the craft approach where we say, “You don’t know what you want. That’s okay. I’m going to come in and ask you a bunch of questions and then figure out what you need and reflect it back to you,” and you’ll say, “Nope. That’s not it.” Then, we do it again and do it again. Each time it gets more refined until we have some idea of what to build. I’ve seen more people be more successful with that model. You’ve got to get a model that’s in [UNINTELLIGIBLE] to your toolsmith. You can’t just say, “Here is the manual of tests. We need you to automate them using a tool.” It’s going to be a lot more nuanced than that. So, with that, final thoughts, I will shut up and let Peter have his.
PETER VARHOL: Thanks, Matt. I got an awful lot out of both the book and the movie, The Martian. I got out of it the ingenuity of the human spirit. The fact that Mark Watney did not give up no matter how dreadful it looked from his point of view. He just kept plugging away, and I think there’s a lot to be said for “life is a marathon, it’s not a sprint.” You’re not going to get from one end to the other really quickly. You need to measure yourself. You need to solve problems as they arise, and you need to make sure that you have enough energy left to solve the next problem too.
MICHAEL LARSEN: OK, so at this point, we would like to give everyone a chance to do a little bit of “Shameless Self Promotion”. Graeme, of course, you’re our special guest, so if you want people to be able to get in touch with you, where you’re doing your next speaking gig, or some other thing that’s happening, or, oh, I don’t know, maybe telling people that you’re looking for people?

GRAEME HARVEY: Yeah, I’ll take that opportunity, for sure! I’ll start by saying I’m always happy to talk about this stuff and hear people’s opinions and learn more. I can be reached a couple of ways. The easiest, and testers seem to love the platform, is Twitter, so people can find me there. It’s “graemeRharvey”, all one word. Always happy to chat there, or by email or LinkedIn or any of the traditional things. The company I work for currently in San Francisco is called PlanGrid. We make construction software, and we are definitely staffing up on the testing side and the automation side, so I personally am managing an automation team and we are always looking for people who can contribute. We have multiple platforms we are trying to write automation for: iOS, Android, web, Windows. It’s a lot of the stuff we talked about today. It’s not just writing end-to-end testing platforms, but it’s solving testing problems in CI and in scripting, in building frameworks and all that stuff. So if anybody’s up for doing that, definitely get in contact with me. We can talk more about that.

MICHAEL LARSEN: Awesome! Perze, you got anything going on?

PERZE ABABA: You know, I actually have nothing going on right now. There’s a bunch of internal stuff that I’m participating in, but none of it is public. If there’s anyone out there who wants to reach out to me, I’m on Twitter, I’m on LinkedIn. My first name is unique enough for you to search and you can definitely find me. In relation to the topic that we’ve talked about today, thank you, Graeme, for bringing that in. Maybe I should dig deeper. Hopefully I can add some more stuff to your list. For me, it’s a very interesting place to be able to add your lessons learned from that. I can tell you right now that part of my role is actually automation framework product owner. If there’s one piece of advice that I can give, you know what, just get started. It’s nothing scary at all; it’s just something you need to do and learn from. That’s it!

MICHAEL LARSEN: All right. Gerie, how about you?

GERIE OWEN: I really enjoyed this topic. I think the lessons from “The Martian” are absolutely wonderful, and Graeme, that’s a great presentation. I’d love to actually see that presentation. I think it so applies to automation and automation strategies. And, just a reminder that at Qualitest, we have some great solution architects as well as our automation experts for any kind of projects you might need. It was great working with everyone today.

MICHAEL LARSEN: Excellent! And Matt, what are you up to?

MATTHEW HEUSSER: Oh, just a lot of work. I’m planning on being at the Conference for the Association for Software Testing in August. Doing a lot of consulting work for a client in the western Midwest. A lot of that is quality gap improvement and agile adoption improvement for larger organizations. Got some papers that will be coming out in the next few months. A lot of writing. I’ll be in Germany in November for Agile Testing Days. I’m kind of slowing down the conference circuit as other types of travel for work are heating up. Working with a lot of companies, working with a lot of the difference between the way the system is supposed to be and the way it actually is. So that’s enough for me.

MICHAEL LARSEN: All right, so I will close this out on my end. Graeme has signed on with me to be co-sponsor of the Bay Area Software Testers meetup. We’ve had a lot of interaction, and that’s how we got to the point of him presenting this talk and me asking him to be on the podcast to begin with. I definitely want to give a plug for Bay Area Software Testers because we’ve got grand plans and we want to be doing a lot more. PlanGrid has been very gracious and has been willing to step up and be our location sponsor, at least recently. I hope we don’t wear out our welcome. Also, I want to definitely make a quick mention: I will be doing two sessions at the Pacific Northwest Software Quality Conference. I’ve been asked to be one of their invited speakers this year, and I’ll be talking about “Future Proofing Your Software”, which actually has more to do with designing your software as you age, not so much as platforms age, in keeping with my Accessibility and Inclusive Design focus. I’m also working with my friend Bill Opsal, and we are putting together a workshop, kind of a “choose your own adventure” as far as developing a testing framework, and it’s really a coverage of all of the things you don’t think about when you are trying to put together a testing framework. And, that’s all I’ve got.

MATTHEW HEUSSER: Thanks, everybody, for coming to this episode of “The Testing Show”. Let’s keep in touch and keep things rolling positive.

MICHAEL LARSEN: Excellent!

MATTHEW HEUSSER: Appreciate it!

PETER VARHOL: OK! Thanks, Mike. Thanks Graeme!

GRAEME HARVEY: Yeah, thanks for having me today.

GERIE OWEN: Thanks, everybody!

MICHAEL LARSEN: Thanks much!

MICHAEL LARSEN: That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack Channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at TheTestingShow(at)QualitestGroup(dot)com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen and moderated by Matt Heusser, with frequent contributions from Perze Ababa, Jessica Ingrassellino, and Justin Rohrman, as well as our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to BE a guest on the podcast, please email us at TheTestingShow(at)qualitestgroup(dot)com.

Thanks for listening and we will see you again in July 2018.

[End Outro]

[END OF TRANSCRIPT]