Tooling and Automation

February 25, 04:04 AM

From the idea of automated trucking to the notion that testing will all be automated “at some point in time”, we thought it would make sense to bring in someone who has been part of this challenge for many years.

Paul Grizzaffi joins us to give us his take on the promise of automation, the realities of tooling that go into those processes, and what the future might hold for the testing role as well as the possibility of “automated everything”.

Panelists:

Matthew Heusser
Michael Larsen
Justin Rohrman

Special Guest:

Paul Grizzaffi

References:

James Bach and Michael Bolton's paper on test tooling (free PDF)
The QualiTest DevOps webinar (free video, about an hour)
Transcript:

MICHAEL LARSEN: Welcome to The Testing Show.  I’m Michael Larsen, the show producer.  This week, we are joined by Justin Rohrman.

JUSTIN ROHRMAN: Good morning.

MICHAEL LARSEN: Our special guest, Paul Grizzaffi.

MICHAEL LARSEN: And, of course, our master of ceremonies, Mr. Matt Heusser.  Take it away, Matt.

MATTHEW HEUSSER: Thanks, Michael.  It's great to be here.  When we started thinking about this episode, we wanted to talk about Tooling and Automation at, preferably, a tutorial level—not introductory, not 101, but really years of experience using these tools and a wider perspective on it.  So, we had to bring in Paul, who I met a couple of years ago now in San Diego.  I think, actually, we've been to San Diego twice—he impressed me every time.  I thought, "Automation, we've got to bring him on the show," but our audience should get to know you.  So, Paul, tell us a little bit about your background and what you've worked on.

PAUL GRIZZAFFI: Sure.  Currently, my title, at a company called MedAssets-Precyse, is Director of QA and Automation Program Architect.  When I'm asked what it is that I really do: I lead a team, a small business inside of MedAssets, where what we're doing is providing tooling, and coaching around tooling, for judicious use of automation and technology to help the testers out when they do their job.  In my new role, I actually have the Source Control Management Team under me as well as the Tech Doc Writers.  So, it's all rolled up into this thing called "Development Shared Services."  As far as automation, prior to this recent job change, I had spent my entire career in automation, including 16 years with a company called Nortel.  They're a telecommunications company, where I really stumbled into testing and automation, building proprietary tools for telecommunications testing.  I also did a couple of years at GameStop with their Digital and E-Commerce Team, and now I'm with MedAssets-Precyse.  MedAssets-Precyse is a healthcare performance improvement company, where what we do is help large hospital chains become more revenue centric and increase their revenue by providing tools and services to help them automate a certain amount of their work.  I'm also an advisor to Software Test Professionals and STPCon.  They don't really have a track chair for automation, but the term they like to throw around is "automation track designer."  So, I guess I'm the automation track designer for that.  I'm also a member of the Advisory Board for Industry with the Advanced Research Center for Software Testing and Quality Assurance at the University of Texas at Dallas under Dr. Wong.  It's kind of a fancy title, but what it really means is that I go in once a semester and give a guest lecture to his graduate-level students on what's going on in the automation world—the real world, outside of academia.

MATTHEW HEUSSER: That's fantastic, Paul, and it's impressive.  So, we'd like to get into the details and talk about the long-term impacts of some of these tooling strategies on companies.  I like to talk to long-term employees.  Sometimes the consultants just don't have the long-term viewpoint.  Before we get there, let's talk about what's new and exciting in the world of software testing—our news segment—and Justin sent this story about "the driverless truck."  It's going to automate millions of jobs.  Specifically, there are 1.6 million truck drivers in the United States today; and, if this works, it's going to displace a lot of people who might not have other specific skills.  So, tell us about that article.  Is it right?  Does it have an impact on testing?  Should that worry us?

JUSTIN ROHRMAN: I think it really hits us from an emotional point of view.  In the U.S. especially, trucking is one of the largest employers—just hundreds of thousands of truck drivers crossing the United States all the time.  If large-scale, self-driving trucks are to become a thing, that would put all of these people out of work immediately, and, at the least, they would have to find a new job.  In software testing, there is always this sort of driving background voice that says, "Automation and code and programming tools are going to just completely decimate the need for software testers, and all of these people will have to find other roles in the organization or just other types of work in general."  I think it is very striking as an emotional claim.  It was Alberto Savoia who did his "Test is Dead" conference talk.  Test still isn't dead.  It's alive for sure.  These automated tools are certainly changing the role significantly.  What do you guys think about that?

MICHAEL LARSEN: I can see the simple argument on this, in the sense that you could take a vehicle, start at point A, and get it to point B without an individual's involvement.  Theoretically, sure, but "the devil is always in the details."  How well do these trucks do in rain and snow and adverse road conditions?  If you have a convoy of trucks, linking those vehicles together and being able to use the drafting capabilities of each for fuel efficiency is great, but what happens when the comm link breaks?  Who's going to be driving those vehicles, and what level of redundancy are you going to need to carry for this?  Suppose a police officer wants a convoy to stop: who's going to stop it?  Where?  Under what circumstance?  Can they even stop it?  So, on one side, I find it intriguing that we have these opportunities, and I find it interesting that we've had all sorts of situations throughout history where jobs were done by humans.  Let's think about mill work.  Let's think about lumber.  Let's think about agriculture.  Are we going to basically say, "We don't want to have International Harvesters out there, because that takes away people's jobs"?  I think most people would not argue that that was a "bad thing."  Of course, there is going to be disruption.  It's going to be problematic for certain people, in the sense that I don't think anybody wants to be thrown out of work, but I don't think that throwing people out of work is in the immediate future, because there's a whole lot of stuff that needs to be ironed out on this.  But, considering how many road fatalities there are, I do think there's a benefit to being able to automate some of that.  How much of it really does get automated over the next—I don't know—10, 20, 30 years, I'll be curious to see.

JUSTIN ROHRMAN: Would you say that there's a parallel between that and what's happening in software testing right now, in the sense that maybe in 30 years there will be no testers, only programmers who write test code and test a little bit using unit tests and stuff like that?

MICHAEL LARSEN: It seems that that's been the promise of computer programming since its inception, and it hasn't happened yet.  So, I'm not 100 percent convinced that "everything's going to go to code" and that "testing is all going to be coded and automated."  I believe that there's going to be a human element to all of it; and, generally speaking, that is what we've seen when automation takes a foothold: the humans don't necessarily disappear.  It's that the humans focus their attention on more interesting problems.  Now, it is entirely possible that a dedicated tester, in the classic way that we're used to thinking about a tester, isn't going to be the same role in as many organizations.  In fact, that's the case right now, but it doesn't mean that the tester goes away.  It means that the tester focuses their attention on different things, or they pick up different jobs or different skills and responsibilities, and I'm seeing that right now.  I think I spend more time testing Jenkins than I do my own products sometimes.

MATTHEW HEUSSER: There are a couple of things that I really want to key in on here.  We've covered this ground before, and yet it keeps coming back up.  We did a whole podcast on "Test is Dead," and yet it comes back with automation.  The same kinds of themes, and I think Yaron was the one who said, "Look, man, test, as an industry, is growing.  It's just not growing as fast as general IT," which I thought was pretty insightful.  I wonder if there's a parallel here.  In the early 20th century, you had the invention of the vacuum cleaner and the air conditioner.  You have the furnace, so you don't have a fire.  You have electricity.  You have the dishwasher and clothes washers.  So, all these kinds of laundry, scullery-maid types of jobs were literally automated away.  It's happened before, and it did disrupt large segments of society, but we kind of got over it.  People had plenty of time to adjust.  The only people that really got disrupted were the die-hards who said, "No.  No.  No.  This is what I've always done, and I'm always going to do it."  My lesson from that is, "I don't want to be one of those guys."  I think Michael's advice on "expanding your role to cover new things" is relevant and something that we should dive into today.

JUSTIN ROHRMAN: In 20 years, we’ll have truck programmers instead of truck drivers.

MICHAEL LARSEN: It's entirely possible.  Think about where we are right now with drone aircraft.  We have drones that are flown for surveillance purposes all the time.  We still have pilots.  The pilots just aren't in the cockpits.

MATTHEW HEUSSER: I've actually got a client that does logistics that has a huge number of people who are basically configuration analysts, and their job is to figure out how to route the truck from A to B, and that's probably not going to go away even if we had autonomous trucks.  Then, someone's going to write code to do that job.  It's just up the abstraction chain.  Speaking of which—and I think this is where we segue from news to conversation—James Bach and Michael Bolton have a paper on test tooling, which we'll link to, that covers a comprehensive philosophy of tooling in a way that is different from the sort of loudest voices in the room.  It's hard to misunderstand, and they preemptively address the various objections—whereas, with a blog post, it's easy to misunderstand or get it wrong or say, "What about this or what about that?"  And, it's available as a free PDF, if you want to dive into it.  Maybe Justin wants to summarize it.  I've been talking a lot.

JUSTIN ROHRMAN: I think the article summarizes a lot of points that Bach and Bolton have been making over the past 10 years about automation.  There is a significant impression out there that the testing community—specifically the context-driven testing community—is anti-automation and anti-tooling, but I think it would be more appropriate to say that we choose our tools and automation techniques carefully to fit the scenario, instead of just blasting every product with "Test Pro" or something like that.  So, they approach the topic from that angle, and then they also hit it with a little bit of social science.  If you're familiar with the work of Harry Collins: they talk about the specific things that automation is good for and the specific things that people are good for, how those overlap, and how you braid them into a better testing approach, rather than just picking people or automation.

PAUL GRIZZAFFI: As sort of a casual follower of context-driven testing and of James and Michael, it's easy to get the impression that context-driven testing is anti-tooling and anti-automation from some of the statements that get made on Twitter—it being such a lossy medium—and from some of the questions; but, if you read the blogs and you dig a little deeper, their position is very evident.  I am glad that they've distilled it down into this digestible paper that really explains the position; and, in my view, based on what I have done throughout my almost 25-year career in automation, it's a really healthy way to look at it.  It's really back to the right tool for the job.  If you don't have a nail, then don't use a hammer.  It sets a much better impression of what these guys actually believe, as opposed to some of the lossy things you get on Twitter.

MATTHEW HEUSSER: Yeah, and that brings up an interesting challenge in summarizing these ideas: boil them down any more and you're going to lose something.  To use a metaphor, a lot of the automation philosophy is, "We're going to build a literal army of robots that look just like humans to go and cut down all the wood and then reassemble it and stack it.  There's no human thought involved.  We're going to code it all up, just because it's going to give us a green bar."  A lot of the examples and ideas in the paper are more like, "Uh.  What if we just started with a chainsaw?"  So, the human is still in control and the human is still making decisions, but he can move a lot faster.

JUSTIN ROHRMAN: That "no human thought" aspect is really interesting to me.  I just struggle to understand where it comes from; because, even if you do have a massive automation suite that's running with every build and you just click the button and get test results, all wrapped around that is people.  You can't get that without an army of people designing tests and writing code and analyzing the results and figuring out if they're important or not.  Even with automation, there's still a thought-and-judgment-heavy human process.

PAUL GRIZZAFFI: Unfortunately, that’s not the way things are often marketed.  Things are often marketed as, “Hey.  All you need to do is drag and drop, copy and paste, click‑click‑click.  Now you have this entire suite that’s going to do your testing for you.”  That’s not true.  The focus there, in my opinion, number one, is wrong, because the bulk of the effort in an automation initiative is not in script creation.  It’s in maintenance.  So, if you put something together that is easier to maintain, even if it’s a little slower for you to actually create the initial script, that’s going to pay dividends down the line; because, when your next release comes along and you have to twiddle 80 percent of those 5,000 test cases that you created, that’s not going to scale well and they’re going to ride the automation manager out on a rail.
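To make Paul's maintenance point concrete, here is a minimal page-object sketch in Python with Selenium: the locators live in one class, so a UI change means updating one place instead of twiddling thousands of scripts.  The URL and element IDs are hypothetical.

```python
# A minimal page-object sketch: locators live in one class, so when the
# UI changes you update LoginPage, not every script that logs in.
# The URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://staging.example.com/login"  # hypothetical environment

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login_reaches_dashboard():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("qa_user", "not-a-real-password")
        assert "Dashboard" in driver.title  # hypothetical success signal
    finally:
        driver.quit()
```

When the login form changes, only LoginPage changes; the scripts that call it stay put.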

MICHAEL LARSEN: And also, it’s important to realize that, when we talk about “automated tests” of any kind, they don’t really “test” per se.  They’re not alerting you to new bugs.  They focus on the tried-and-true things that you’ve already determined that you want to take a look at and what they’re helpful with is determining if something has changed.  That’s important to know.

PAUL GRIZZAFFI: And, that is a problem.  Well, it's a problem, or it's not.  Now, you have a red light that has come on that says, "I think something that you told me to check for has changed.  Is it an important change?"  Maybe the right answer is to change the script.  Maybe the right answer is to change the product back, because you did something that was unaccounted for.  But, that's where your people come in.  The human beings have to come in and assess that situation.

MATTHEW HEUSSER: You can't cut the human out of the conversation.  I think, to restate Michael's point, if we're talking about the traditional "Does the software work?" question answering—software testing, automation—all that does is this: after the story testing is done and we've already got all our story tests to pass, so we already think it's good, we can record it, because we can't record it until it works.  Then, we lock it up in a repository and periodically run it to say, "It still does what we knew that it did the first time that I recorded this test."  That's all it does.  So, it can catch regression, where this thing that used to be very predictable now does something different.  It can also catch things like: we added a required text field which your script didn't fill in, because you recorded it before the text field existed, and now we get an error message instead of the next page and have to go fix it up.  The expression that I use is, even if that was magically free—you had this button you could push and, within 10 seconds, you got a green bar that gives you fully-documented test results, and bugs would just automatically get filed in the system magically, they're in JIRA, and we can just go work them—that might not be the long pole in the tent.  Because we might have to wait for fixes, or argue about whether or not they should be fixed, or need decisions, or the bugs might expose design problems and we may need to re-architect the system.  That might actually be a very small part of the find, fix, retest loop.  So, if we've got other work to do that is very expensive, our time might be better spent automating other parts of the process.
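As an illustration of the "it still does what it did when I recorded it" check Matt describes, here is a tiny golden-master sketch in Python; the file path and response are hypothetical.

```python
# A sketch of a recorded expectation: compare today's output against a
# stored golden copy. A mismatch only means "something changed" -- a human
# still decides whether the product regressed or the expectation is stale.
# File path is hypothetical.
import json

def matches_golden(current, golden_path="golden/checkout_response.json"):
    with open(golden_path) as f:
        golden = json.load(f)
    return current == golden

if __name__ == "__main__":
    response = {"status": "ok", "total": 42}  # stand-in for a real API call
    if not matches_golden(response):
        print("Changed since recording -- needs a human to assess.")
```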

PAUL GRIZZAFFI: And, that's what I like right there, what you just said.  It's looking at automation not as, "I have a test case and I must automate that."  It's really more about, "Where can I judiciously apply technology to help a human out?  Where can I get some value?"  Sometimes value might be writing a little smoke suite of 10 or 15 or 20 scripts that are going to do some checking for me on every check-in, or maybe it's writing some sort of proprietary, context-sensitive DIFF that can take two weird log files that are difficult for humans to match up and give you some heuristics around, "Eh.  They're probably the same," or, "You might want to go and look at these particular sections of the file."  The value for that might be a lot higher, so you probably should spend a little time doing that first.
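A minimal sketch of the context-sensitive DIFF idea Paul mentions, in Python with only the standard library: strip the parts of each log line that are expected to vary before comparing, then point a human at whatever still differs.  The volatile patterns and threshold are hypothetical and would be tuned to your own logs.

```python
# Normalize away expected noise (timestamps, runtime ids), then compare.
import difflib
import re

VOLATILE = [
    re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(\.\d+)?"),  # timestamps
    re.compile(r"\b(pid|thread|session)=\d+\b"),                    # runtime ids
]

def normalize(line):
    for pattern in VOLATILE:
        line = pattern.sub("<VOLATILE>", line)
    return line

def probably_same(log_a, log_b, threshold=0.97):
    a = [normalize(line) for line in log_a.splitlines()]
    b = [normalize(line) for line in log_b.splitlines()]
    ratio = difflib.SequenceMatcher(None, a, b).ratio()
    if ratio >= threshold:
        return True, []  # "Eh, they're probably the same."
    # Otherwise, hand a human only the sections worth looking at.
    return False, list(difflib.unified_diff(a, b, lineterm=""))
```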

MATTHEW HEUSSER: My particular favorite example is DevOps.  When people talk about DevOps, they usually start by saying, "Let's write these GUI-driving tests.  It's the first thing we should do."  I worked with a company, not too long ago, that did that and brought me in to do it.  I kind of pushed back against that a little bit, but not hard enough.  They said, "Yeah.  We'd love to have you help us with strategy as we go down this road we've already determined up front."  So, we couldn't set up the server automatically.  You'd have to get the server into the right condition, with the right build, and then you could kick off the automation.  That had to be done manually.  So, even if we finished everything and said, "Great.  Run the automation.  Let's put this in production," it would be, "Yeah.  We need to get the build on staging.  Yeah.  We need to press the button.  Okay.  We'll let you know in 5 hours."  I call that "automation delay."  It's not what people think of when they were sold on this idea of, "Let's automate the process."  There's something funny about that, and it gets worse over time.

MICHAEL LARSEN: There are some interesting elements to this that I think are important and need to be addressed when we discuss what DevOps actually means and how often we do it.  Right now, I think there's a big push to talk about continuous delivery.  I believe there are some companies that can actively say, "Yes.  We push to production ten times a day."  But, I think very often, if you were to actually ask people, "In reality, how much of what you do is continuous delivery?  How often do you really push to production?", they'd probably hem and haw a bit, because what they really mean by pushing to production—and this is the reality in my environment—is pushing to staging.  Our staging environment, at least as far as my immediate work is concerned, is our production environment, and we push to staging two-to-three times a day.  Basically, every time a story is done and we've signed it off, we push it to staging and we generate a build, but we still have to have a person at this stage go in and push that build to staging, because we don't want to just inadvertently throw people off the server and go, "Oh, hey.  Sorry.  Hope you're not doing anything important.  Bye."  Somebody has to be there to, at least, alert, "We're going to do a push.  So, save your work and get off the system."  Then, we do our push.  We check it out and make sure everything is okay.  String enough of those together and then put a final release out the door—is that continuous delivery?

MATTHEW HEUSSER: Yeah.  So, what people mean when they say "continuous delivery" varies hugely, and the question is, "How do you get to continuous delivery if you're already in flight?"  I think automating the setup of the virtual servers and the database cleans and refreshes is usually the way to start.  Not with downloading the record/playback tool with the 30-day trial, which is the easy way to get started, because you don't have to go have a conversation with the architects and you don't have to go talk about how things work.  You can just do it as a tester on your lonesome.  In the long term, that can get you in trouble.
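Here is a sketch of that "start with the environment" advice in Python: tear down and rebuild a clean server and database before any UI scripts run.  The commands, compose file, and seed script are hypothetical placeholders for whatever your stack uses.

```python
# Rebuild a clean test environment so "run the automation" doesn't start
# with five hours of manual server preparation.
import subprocess

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)  # fail loudly if any step breaks

def refresh_environment(build_id):
    run(["docker", "compose", "down", "--volumes"])            # drop old state
    run(["docker", "compose", "up", "-d", "--wait"])           # fresh app + db
    run(["python", "load_fixtures.py", "--build", build_id])   # hypothetical seed script

if __name__ == "__main__":
    refresh_environment("build-1234")  # hypothetical build identifier
```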

PAUL GRIZZAFFI: I also think you have to balance these things.  Automation is very much a "What have you done for me lately?" kind of endeavor, particularly when you have a separate team like I do.  So, you have to balance the hidden-infrastructure, non-sexy piece of this with some stuff that actually is visible, where people can go, "Oh, okay.  I see what that's doing.  That's cool."  And then, the third part of that is, "Okay.  Where's the value?  Where can we start showing value early?"  Because, at some point, some money guy is paying for this.  Some product owner says, "I have an amount of money.  I'm going to give some of it to automate some of this work, because we determined that there's some value in this."  If they don't see some amount of value being put back into the coffers early enough, they're going to get disillusioned with this whole process.  So, it's really this three-pronged initiative.  Dorothy Graham has a quote, and I don't have the exact quote memorized, but the paraphrase is, "The less you know about test automation, the more excited you are about it."

MATTHEW HEUSSER: [LAUGHTER]

PAUL GRIZZAFFI: Because you look at GUIs and mice moving and stuff running across the screen, and it’s exciting.  It’s flashy.  It’s slick.  But then, you have to put together this DevOps-related environment where you’ve got a server that you can bring up and run all this stuff.  That’s the non-exciting part.  That’s the stuff that people go, “Oh.  I didn’t know I had to do that,” and I’ve seen this before.  People get very excited about it; and then, the first time you have to do some maintenance on your framework or you have to add another server in there, you have to do it off of your desktop, people go, “Oh.  Yeah.  I’m going to go back to what I was doing before,” and we start losing momentum.  So, we have to do this balancing act, where some stuff happens behind the scenes and other stuff is still pretty and shiny that people can go, “Oh.  Yeah.  Yeah.  Yeah.  I like that.  I want to keep doing that.”

JUSTIN ROHRMAN: I think there's a really interesting point embedded in there, and maybe it's a completely different podcast, but it's the idea that we need to learn how to communicate the value of these things to nontechnical people and, in general, talk about testing at different levels across the organization.  The value of automation isn't completely apparent sometimes, because it just shows up as a green bar in the build system, and talking about that is really difficult whenever you're talking to the people who make decisions about where the money goes.

PAUL GRIZZAFFI: That’s a good point.  I’m actually doing, what I’m calling, “A road show” to the development VPs here at MedAssets-Precyse, starting today, where it’s not a demo.  It’s, “Here’s what you’ve been paying.  Here’s what you’ve been getting back for this.  Do we need to do something different?”  “Oh, you know what?  That’s an expectation we didn’t know about.  We’re going to bake that into our going-forward plan.”  “Oh, that expectation, yeah, that’s not realistic, and let me explain to you why.”  So, we’re working with them at their level, using the vocabulary that they understand, which is typically money, value, effort saved, risk avoided, risk lowered, anything along those lines that helps them feel better about, “We’re going to put something out that’s not going to embarrass us.”  [LAUGHTER].  Those are the things that appeal to them, and we need to speak to them in that particular language.

MATTHEW HEUSSER: I think that's fantastic.  Years ago, I automated Microsoft Paint for a proof-of-concept, and all I was doing was model-driven testing.  So, I was doing "file new" and then "one-one" and then "two-two" and then "three-three," so you could see the white space of the actual main image expand and contract, and then I would save it and compare it to the previous one and make sure that they were the same size, same image.  But the capture was capturing bitmaps.  There must have been a date timestamp embedded in it, or something.  Maybe it was JPEG with lossy compression, and it wasn't predictable.  Maybe 1 out of every 30 of those checks would fail in a way that wasn't predictable, and a little popup would come up on the screen, "You failed the test," or whatever.  Or, I had a counter.  I could do it as a counter.  You get 20 fails and, "Why did it fail?"  I don't know.  If you DIFF it, it's just a binary file.  If you bring it up in Paint, it looks right.  So, I just turned the checking off.  My wife walks by and she sees the screen go, "Ba-buh-ba-buh-ba-buh," and she goes, "Oh.  That's awesome.  It's great.  This is amazing.  You're going to do really well with this," and the whole thing was a lie.  I think that is a fundamental problem.  Our society has this bias for automation and flashy things—maybe the flashy things go back to the "lion in the jungle"—and it's a problem.  What I've had some success with is doing an analysis and coming in with, "Gee.  We're spending this much time actually doing setup—manual setup."  They're actually clicking on all of the different things to set up the test scenario and then doing the run—80 percent of the time was in the setup.  If we could save that setup to a tarball and then bring it in between runs, we could probably save you, for one tester, two-to-three business days a week.  Not only that, it's work that isn't very fun, that isn't really very value-add.  Paul, do you think people will respond if you can actually quantify the value of the automation?  Does that help people make better decisions about how to do tooling?
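(For illustration, a minimal sketch of the save-the-setup-to-a-tarball idea Matt just described, in Python; the paths are hypothetical.)

```python
# Capture a manually-built test setup once, then restore it between runs
# instead of re-clicking through 80 percent of the session.
import os
import shutil
import tarfile

STATE_DIR = "test_env/data"        # hypothetical app state directory
SNAPSHOT = "setup_snapshot.tar.gz"

def save_setup():
    with tarfile.open(SNAPSHOT, "w:gz") as tar:
        tar.add(STATE_DIR, arcname="data")

def restore_setup():
    shutil.rmtree(STATE_DIR, ignore_errors=True)
    with tarfile.open(SNAPSHOT, "r:gz") as tar:
        tar.extractall("test_env")

if __name__ == "__main__":
    if os.path.exists(SNAPSHOT):
        restore_setup()  # cheap reset before each run
    else:
        save_setup()     # run once after a manual setup session
```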

PAUL GRIZZAFFI: It can.  It's something that needs to happen, whether it helps them make better decisions by itself or it's a piece of the bigger decision-making process.  That explanation of value really needs to happen.  The explanation of opportunity cost needs to happen, and the opportunity cost goes multiple different ways: "Hey.  If we have something that fails 1 in 30, well then, you're going to have to spend time, 1 run in 30, looking at a failure that might be a real failure or might be some sort of weird tooling or automation artifact.  Is that a good candidate for some unattended operation?"  Perhaps it is.  Perhaps it's not.  "How much is the value of that particular thing, and how much value are we losing by spending time on that as opposed to spending time on something else?"

MATTHEW HEUSSER: Yeah.  That’s a good question, “How much effort would it take for us to do that tooling?  How do we calculate the ROI?”

PAUL GRIZZAFFI: And, the other interesting thing there is, "Now I don't have any metrics for you.  I don't have a number of test cases I automated.  I don't have percent pass/fail," if I'm doing some of that "other" stuff.  With traditional test-case automation, we get, "I just automated 10 test cases.  Hooray.  T-shirts for everyone."  But, the things that are more like a context-sensitive DIFF, or some of these high-volume automated testing approaches that Harry Robinson and Cem Kaner have out there—which I'm very interested in—don't map to how much of your regression suite has been automated.  And, it breaks some of these metrics guys' brains, and we have to work with them, going, "No.  We're not going to deliver that type of information to you.  We're going to deliver value information to you.  How much have we reduced risk?  How many additional pieces of coverage do we have now that we weren't able to get to before?"  And then, let them make better decisions on how and when to make product releases, based on the risk—the information that we're delivering to them.

MATTHEW HEUSSER: You bring up an interesting point about the metrics weenies—and I try not to be too overly critical on the show, but companies that are run by metrics weenies are unlikely to hire me, so I don't run into that problem much.  I'm curious if you have advice for people in that situation, where "the metrics" are actually blinding them to the productivity opportunities in the system.  How do you overcome that socially?

PAUL GRIZZAFFI: It is difficult, and it takes time.  You sometimes have to do the less-valuable metrics to buy yourself some credibility to get to the actually valuable ones.  The valuable ones are around, "What have I saved?  What have I added?"  Really around value.  "Yeah.  You know what?  We've only got 10 percent of our regression suite automated, whatever that even means, but look at what we no longer have to do."  Now, this person that was spending four hours once a week doing this thing where they type a bunch of stuff in—we have a script that does that for them now.  They just got a half-a-day back every week to test your product.  How do you quantify that?  You can put money on that.  You can point to additional coverage.  Start bringing in the other types of metrics, things that they're into, but it really does go to that whole, "The number by itself doesn't mean that much."  There need to be some words, some stories, behind it to actually convey the message in the right way.  Because when people say, "How many test scripts do you have?", I ask, "Why do you care?  Who cares?  I've got one.  I've got 1,000."  I can make both of those true if that's the right answer, but the reality is that's not what they really want to know.  So, we need to peel that onion and get to the heart of, "What is it that you're really trying to do?"  And, most of the time, these people are interested in something specific around the risk, around the value: "Am I going to deliver on time?  Is this release going to be risky when I roll it out to my customers?  What is it we don't think we tested very well, and why?"
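To put numbers behind Paul's "half-a-day back every week" argument, here is a back-of-the-envelope calculation in Python; every figure is a hypothetical input to replace with your own.

```python
# Year-one value of automating one weekly manual task, in plain arithmetic.
hours_saved_per_week = 4         # the four-hour weekly task the script replaced
tester_hourly_cost = 50          # hypothetical fully-loaded cost, dollars
build_effort_hours = 40          # one-time effort to build the script
maintenance_hours_per_month = 2  # ongoing upkeep

saved = hours_saved_per_week * 52 * tester_hourly_cost
spent = (build_effort_hours + maintenance_hours_per_month * 12) * tester_hourly_cost
print(f"saved ${saved:,}, spent ${spent:,}, net ${saved - spent:,}")
# -> saved $10,400, spent $3,200, net $7,200 in year one
```

As Paul says, the number alone isn't the story; it's the opener for a conversation about risk and coverage.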

MATTHEW HEUSSER: Yeah.  I'm totally with you there.  The metrics that I'm interested in are things like, "How long does it take to go from the last story to on-staging?  How long does it take to go from staging to production?  How often do we release to production?"  And, to the extent that it's possible to measure, "How much of our time are we spending on test-related and fix-related activities versus actual forward progress?"  There are other activities, like setup and teardown and documenting.  So, we actually get to something closer to productivity measures.  So, when I say "metrics weenies," I mean people trapped in a certain kind of test metrics.  I love metrics.  I think we have more tools now than we did 20 years ago to measure those things.  Unfortunately, we're getting kind of tight on time.  A reference I want to send out:  There's a QualiTest DevOps webinar, which is about 1 hour long.  It's a video.  It's free.  There's no registration required, and it covers some of these other DevOps ideas we've talked about—things like automating the continuous delivery process, what all the different pieces of that are, and how source code control and CI fit in.  It's just a pretty good tutorial.  Final thoughts.  Where can people go to hear more about you, and what's coming up new for you?  Maybe we should start with Paul?

PAUL GRIZZAFFI: For me, personally, you can find me on Twitter.  My handle is:  @pgrizzaffi.  You can find me on LinkedIn, and my e-mail is out there on my LinkedIn as well.  As far as what I’m up to, right now I’m getting acclimated to my, [LAUGHTER], new role as director, and I’m spending a lot of time learning, sort of, the ropes there and learning the new, sort of, Judo moves to actually be successful at that job.  I am speculating that I’m going to be at STPCon, since it will be in the Dallas area, which is where I live.  So, if you’re out there, I will probably be there to see you.

MATTHEW HEUSSER: Great.  Thanks.  I’m going to try to make it.  Justin?

JUSTIN ROHRMAN: If anybody’s planning to come to Nashville around mid-August, I will be speaking at Music City Agile.

MATTHEW HEUSSER: That’s awesome.  Any final thoughts on, Tooling and Automation?

PAUL GRIZZAFFI: Be smart about it.  Look at the value.  Don’t start with, “I have this Smoke Suite.  I have this Regression Suite.”  Start at, “What part of my job sucks because I just have to turn this crank or keep pressing this red button or the reactor blows up?”  And then, figure out something that frees you up from that mundane automatable activity so you can go and find something else to do that is more appropriate for a human mind.

MICHAEL LARSEN: And, to build on that, the favorite phrase I always like to bring up anytime a conversation starts around automation is, "Forget the tool, start with the problem," which I think Paul also just, [LAUGHTER], basically said.

PAUL GRIZZAFFI: Amen.

MATTHEW HEUSSER: Yeah.  Yeah.  So, one of the things we’re starting to do at Excelon is more activity-based cost analysis, which is a super-fancy way of saying, “Find the expensive, painful part of the process that makes sense to automate.”

PAUL GRIZZAFFI: Right.

MATTHEW HEUSSER: [LAUGHTER].  So, I think we’re all on the same page there.  I’m going to be at Test Retreat and CAST in August, and I think I’m going to let Michael close out the episode.

[END OF TRANSCRIPT]
