Testing in the Broader Picture

As part of a follow-on to the “Making Testing Strategic” discussion that happened at QA or the Highway, Jared Small joins us to talk about ways that software testing can add value to the software development process, how we can extend the strategy conversation, and how we can make sure we are both helpful and impactful to the organization.

Additionally, we talk about the claim that Scrum can get us 250% better quality (by some definition) and the persuasion tactics of Donald Trump, though we promise, this is not a political show.


Panelists:

Matthew Heusser

Michael Larsen

Justin Rohrman

Perze Ababa

Jared Small


References:


Any questions or suggested topics?
Let us know!


Transcript:

MICHAEL LARSEN: Welcome to The Testing Show.  I’m Michael Larsen, your show producer, and today, we are joined by Perze Ababa.

PERZE ABABA: Good morning, everyone.

MICHAEL LARSEN: Justin Rohrman.

JUSTIN ROHRMAN: Good morning.

MICHAEL LARSEN: Our special guest, Jared Small.

JARED SMALL: Hi.

MICHAEL LARSEN: And, of course, our ringmaster and leader, Matt Heusser.  Take it away, Matt.

MATTHEW HEUSSER: Good morning, Michael.  Thank you.  Good to have everybody here.  For those who don’t know Jared Small, I met Jared at QA or the Highway two-or-three years ago.  Jared’s a test manager at Blackberry and has quietly been doing excellent work up in Canada for 10‑to‑15 years doing software testing while the company grew from a couple‑hundred employees to tens-of-thousands.  So, Jared, tell us a little bit about yourself.

JARED SMALL: Hi.  Thanks, Matt.  It’s great to be here with you today.  My role, as you mentioned, is a test manager with Blackberry.  I’ve been there for almost ten years now.  I haven’t been a test manager for that long.  I’ve been a team lead there for the last six-or-seven years and recently became a test manager.  Obviously, mobile testing is kind of my area.  I test on the software level, on the device itself.  So, less on the software side, mobile device management, and more on the on-device experience for end-users.  I’m part of the Kitchener Waterloo Software Quality Association.  I’m on the Board there as well.  We host a yearly conference, Targeting Quality, which you may have heard of, and we try to be as involved in the testing community as possible, reaching out to local conferences like QA or the Highway and trying to be involved in the community and in podcasts, like this one today.  So, very happy to be here.

MATTHEW HEUSSER: Thanks, Jared.  Super excited that you could be here.  This year at QA or the Highway, we did a panel on, Making QA Strategic:

How QA can communicate its value.

How we can add more value.

How we can have a seat at the strategy table.

It kind of ties into the trusted advisor work that we’ve done on this podcast a few episodes back, but it’s also about defining and communicating the value of testing strategically.  I want to talk about that today.  Before we do that, though, why not, sort of, open the news bag for what’s going on in the industry?  I think I should start with this little Twitter kerfuffle over the weekend where Martin Burns was talking about the differences between No Estimates and Scrum, and he said that, “Teams that did Scrum had 250 percent higher quality.”  I don’t know if Jared caught that.  I’m kind of curious.  If you heard that in a meeting, what do you think that means?

JARED SMALL: Sorry.  I missed the conversation that Martin was having.  What was the gist?

MATTHEW HEUSSER: Exactly.  That’s why I’m doing it on purpose.  So, you’re in a meeting and someone throws that out, how do you respond?  Like, what does that even mean?  What do you think it means?

JARED SMALL: It’s inconceivable to make blanket statements like that for starters.  I mean, unless you like to cause controversy, which I’m not sure what Martin’s objective is.  So, if he’s on social media, he may be wanting to engage in a conversation, but certainly, to say that there’s a one-size-fits-all for any methodology, just to indicate that there’s a percentage like that, is a risky road to go down for sure.  So, it seems like to me he’s obviously wanting to initiate conversation.  But, do I agree with that?  No.  Definitely not.

MATTHEW HEUSSER: So, it turns out that he linked to a study by Rally that we’re going to put in the show notes.  It’s not really a survey; they studied their own data, and they found that teams doing Scrum, which basically meant using all the Rally features and tools, including story pointing and breaking work into tasks, had significantly fewer bugs reported in production per person-day of the project.  So, for a team of 10 people, on a project that’s 1 year long, you have 2,200 person-days, and then they divided the number of bugs by the number of person-days, and they found that the Scrum teams came out significantly lower.  My first thought on strategy, obviously, is, “If we don’t provide information to decision makers, someone else will, and we might not like what they have to say.”  But, I should let you guys do a little bit of analysis first.  So, thoughts on that?
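[Show note: the person-day arithmetic Matt walks through can be sketched as follows.  This is a rough illustration; the 220 working days per year and the bug count are our own assumptions, not figures from the Rally study.]

```python
# Sketch of the "bugs per person-day" normalization discussed above.
# 220 working days/year and the bug count are illustrative assumptions,
# not figures from the Rally study.

def person_days(team_size: int, working_days_per_year: int = 220, years: int = 1) -> int:
    """Total effort: a 10-person, 1-year project comes to 2,200 person-days."""
    return team_size * working_days_per_year * years

def production_bug_rate(bugs: int, effort_person_days: int) -> float:
    """Bugs reported in production, normalized per person-day of effort."""
    return bugs / effort_person_days

effort = person_days(team_size=10)            # 2200, matching the episode's example
rate = production_bug_rate(44, effort)        # hypothetical bug count: 0.02 bugs/person-day
print(effort, rate)
```

The normalization is the whole trick: dividing raw production bug counts by effort is what lets the study compare teams of different sizes and project lengths, and it is also where Justin’s objection lands, since undocumented bugs never make it into the numerator.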

JUSTIN ROHRMAN: My first reaction, when I saw the quote that, “You could improve quality by 250 percent,” was just, “Oh, my gosh.  This is just plain dumb.”  Right?  “Who would make that claim?”  So, ignoring the measurement side of it, you have to ask the question, “How are they defining these teams?  What is the Scrum team?  What is a No Estimates team, and how are they actually working?”  A lot of the time on Scrum teams, you’ll see people not even document bugs.  The definition of quality in this white paper was, “Bugs found downstream.”  So, if people aren’t documenting bugs, customers aren’t submitting bugs, maybe stuff gets passed over e-mail and just fixed.  What are you actually measuring?  Like, half of your measurement of quality is just being swept under the rug, you know?

MICHAEL LARSEN: I think I kind of like this comment.  So, I’m borrowing a bit from Justin’s answer about the idea of “downstream bugs being a quality measure.”  I think your key point here, though, is that measurements are used to construct a reality for people who were not there to observe it.  Whose reality are you constructing?  I think that’s very instructive.  [LAUGHTER].

MATTHEW HEUSSER: Wait.  Didn’t Leonardo DiCaprio say that in Inception?

JUSTIN ROHRMAN: [LAUGHTER].

MICHAEL LARSEN: Probably.  [LAUGHTER].  But that’s—

JUSTIN ROHRMAN: No.  That was from a man with a very unfortunate name, Edwin Boring.  He wrote a very dense textbook, A History of Experimental Psychology, and it just so happens there were a lot of failings they went through about “measurement” that I’ve used to learn about how software testers measure stuff.

MATTHEW HEUSSER: I do want to do a podcast on Measurement.  I think that’s going to be valuable.  For now, I think we can all agree, “This is kind of a silly claim,” but it’s taken seriously by people who are looking for answers.  So, what kind of better answers can testing provide?  That’s what I want to talk about during the “strategy” part of the show.  The other thing I wanted to mention before we get on to strategy is this whole Donald Trump invasion.  It seems almost like the British Invasion with the Beatles.  It just came out of nowhere, super popular, and there’s a lot of reasons for that.  This is not a political show.  People are disappointed with the establishment.  They want a political outsider.  It doesn’t matter.  Scott Adams has been doing some commentary on Trump, specifically that Donald Trump is using classic persuasion tactics where he finds an emotional trigger and then he tells people, “It’s not your fault.  Other people have inflicted it on you.  They are your enemies.  I’m on your side.”  And then, he says, “It’s okay to be able to say, these people are your enemies.”  So, he’s doing this empathy building, where he’s basically telling people, “I’m one of you,” and the political issues melt away as we just sort of greet him as this person who’s one of us against the other guy.  I think, to some extent, that actually is happening.  Whether you agree or disagree with him on the issues, he’s effective at persuasion.  So, first question:  Am I wrong about that?

PERZE ABABA: I don’t think you’re wrong at that, Matt.  I think it was Aristotle that coined the three artistic proofs in persuasion, which are:

Logos.

Pathos.

Ethos.

And, I think, from the “pathos” perspective, where you are appealing to a person’s emotions, I think that’s where he’s really good.  I don’t agree with his ethical appeal, which is the “ethos,” as well as the “logos” side of things.  I think the way he presents things from logic and reasoning is a very shallow argument; you can easily push over his logic with a “proverbial” finger.  So, if it’s a three-legged bench, it’s only one leg that’s standing.  That’s what he’s really standing on, and that’s really the problem that I have with his platform.

MICHAEL LARSEN: One of the things I did find interesting was just a message that somebody had posted on a board a while back—this was directed primarily to the Republican Establishment.  It said, “Rather than being nervous or anxious about how you are going to defeat Trump, perhaps the bigger question should be:  How has the leadership so failed its constituents that Trump is seen as a viable option?”  What is it that makes it so that so many people are inspired by rhetoric and by this emotional appeal?  Are we so intimidated, are we so anxious about our future—and I’m using a collective “we” here—that this appeals to a broad number of people?  And, the real question is, “How real is this broad appeal?”  I know this is not a political podcast.  That’s why I’m going to shut up about this right now.

MATTHEW HEUSSER: I think there’s a direct tie back into testing, and I think that Perze nailed it in that you have this logos, pathos, and ethos.  Historically, testers are pretty darn bad at connecting at the emotional, pathos, level.  I mean, we’re really bad at it.  We come in the room and we say, “You’re screwing up.  It’s all crappy.  It’s terrible, and you did a crappy job.  I hate you.”  We don’t actually say, “I hate you,” but what the other person hears is, “I hate you.”  And now, you’ve created a situation where, in order for them to feel a sense of self-worth and respect and feel good about themselves, we have to be wrong, and then we, surprisingly, get overruled, exaggerating for effect.  But, for some testers, that interaction is literally true, and for others it’s just not quite that bad.  Understanding how emotions impact decisions, shouldn’t that be part of our job, and why are we so bad at it?  Or, maybe I’m wrong.

JUSTIN ROHRMAN: I think it’s a massive part.  If I can lean on the classes that I’ve taught for AST for a long time, in the BBST class series, we have a class called Bug Advocacy.  What that basically is, is rhetoric for testers.

PERZE ABABA: Justin, you just reminded me of the R-I-M-G-E-A mnemonic, which is what we teach in that class as part of, The Six Approaches to Bug Reporting.  The funny thing is, the first five connect directly to fear, because you’re not just “replicating it,” you’re trying to “isolate it.”  You’re “maximizing it.”  You’re “generalizing it.”  You’re “externalizing it.”  And then, the sixth one, which is the A part, is, “And, say it clearly and dispassionately,” which kind of tones things down.  But, I must agree, fear is there, but I don’t think we, as testers, use that primarily.  We are actually using logic and even defining the culture of, “Why will this particular issue show up?”  But then, we have to be very clear about, “How will this actually affect the value of the product that we’re delivering?”

MATTHEW HEUSSER: That’s fantastic.  I did not know that mnemonic.  One more piece of news.  The folks at QualiTest have a couple of articles up, and we’ll link to them.  One is on how to test a laser gun.  In our classes, we try to cover this kind of high-level strategy on physical objects, and it’s a pretty good example.  The other one is this idea of quantifying the cost of technical debt.  The example they use is the dollar cost to fix bugs and saying, “If we continue to defer these bugs and not fix them, these are the costs that we are incurring.”  There’s a strategy table somewhere where people are planning the next project, and testing isn’t there.  What can we do to explain our value more clearly to get in the room; and, if you were in the room, what would you say?

MICHAEL LARSEN: I’ve been trying to discuss this with my team in a bunch of different ways.  It’s the whole idea of shifting testing leftward, getting testing involved earlier in the process, making it so that it’s not this back-and-forth.  Trying to convince an organization that you’re doing something and getting buy-in up front on it is very difficult.  Even with smaller teams, it can be difficult, because you have to get everybody on the same page.  My best approach is dealing with individual programmers one-on-one.  We have this current set of stories that we’re working on that are all tied together.  I just sat down with this one programmer and suggested, “Maybe a better approach would be for both of us to just sit down and do some back‑and‑forth.  Show me your concept.  Show me some of the things you want to work on, and let me ask questions before we even get to the point of doing the kickoff.”  This particular programmer was, “I’m all for it.”  We sat down.  We started working through some of the examples, and we realized the holes we found in just a 20-minute discussion would’ve taken us 5-to-6 times longer to find later.  So, there was that one chance to persuade and show that, by having this discussion in a way that might not be natural or normal, we were able to save a considerable amount of time and, I would say, affect the overall quality of those features significantly.

JUSTIN ROHRMAN: I think it’s interesting that you say that type of interaction may “not be natural.”  There have been phases, I like to think of it, in the way we’ve shaped software development teams over time.  Back in the day of Jerry Weinberg, when he was actively programming and writing production code, there were no “pieces of a team.”  There were programmers, and they tested each other’s code.  They were effectively shifted left.  They were looking at requirements, testing, programming, all within the same cycle.  At some point, during the 1970s or 1980s, the groups got split apart.  Testing became testing, development became development, and they started happening at different points in time.  So, it became controversial for the two to mix and for testers to sit with development and pair test, and I think it did a really big disservice to the industry as a whole.

MATTHEW HEUSSER: I think you’re talking about Bill Hetzel, 1973, at the Chapel Hill Conference.

JUSTIN ROHRMAN: That’s the one.

MATTHEW HEUSSER: The book that came out of that was, Program Test Methods.  So Weinberg, 1958, Project Mercury: he had the first independent test team, the first known, documented one.  It was a time-sharing system.  I think they were using punch cards, and the only way to test it was to write code.  So, their testers were senior programmer dudes to the extent that such a thing existed, because there were no CS programs.  Then came 1973.  The outcome of that conference was, “Testing should be a simple, straightforward, predictable, repeatable business process that anybody can do if we just define everything up front and sort of run all the Things, and then test automation is taking all of the Things that we have just written down and having the computer do exactly the same thing.”  After that, you still see Jerry Weinberg and James Bach saying, “Hey.  You know, we don’t have to do it that way.  We can be humans,” but the voice of Modernism was just much louder than them for the next 20 years or so.  But, I don’t think we have to do it that way.

JARED SMALL: I think that what we’re seeing is the convergence back to testers becoming a greater part of the team, and I think Agile and Scrum are helping to facilitate that in general.  If testers see themselves as part of the team, then they’ll be more apt to be involved in other activities that are going on within an Agile Team or within any project team.  We need to stop defining ourselves simply as “testers,” and we need to expand and grow beyond that.

JUSTIN ROHRMAN: What?  What do you mean by that, “Stop defining ourselves as testers?”

JARED SMALL: “As testers only,” is what I should say.  I do define myself as a tester, [LAUGHTER], in almost every conversation I have about my career, but certainly going beyond just the role of critiquing software and providing input on defects or bugs in the system, we have a much greater role to play beyond that.  That’s what we’re talking about today as well.  We can insert ourselves into the greater process of software development, and I think developers can be surprised sometimes by our inputs and by volunteering to do simple things like running project meetings, understanding the code that you’re testing, and things like that as well.  So, definitely add more to the process overall and become more strategic.

MATTHEW HEUSSER: Let’s get specific about “strategic.”  I’m interested in getting in the room for the strategy discussion.  When I was a project manager, I would frequently have these conversations about shifting left, and the developers would say, “We need to be in the room when senior management makes their decisions.”  And, I would say, “You don’t want to be in that room.  I’m in that room, and they’re throwing darts to create the pile of Jell-O with deadlines attached that they throw to you.  You’re complaining that stuff is a mess now.  If you were involved earlier, you would see that it is even messier.  You don’t want to be there.”  And often, testers feel the same way—that a project is inflicted on them by the time they’re involved.  But, let’s say we get in earlier in the story.  We get in the room when they’re having this conversation about what to build, how long it will take, and how much money it’ll make.  What value can testing add to that conversation?

JUSTIN ROHRMAN: I think it would be the same value that testing always adds, which is shining a light on the project, exposing things that people didn’t already know.  Testers have a way of finding out where the project might be late, what bugs customers might care about that aren’t being addressed.  It’s information that people at that higher level might not have access to because they’re not dealing with the day-to-day aspects of the project.

MATTHEW HEUSSER: I’ll give you an example and you tell me how I should’ve done it.  I was in a meeting eight years ago as the technical representative.  It was an insurance company, and they wanted to move to real time claims adjudication, which is a really fancy way to say, “You swipe my membership card in the health plan, it sends a data file back to the insurance company.  It calculates what the adjusted payment amount would be, calculates whether or not it fit my deductible and my copay and tells me how much to pay at the counter for my doctor’s visit, and then I’m done.”

It’s done.  I give them my $20.00.  I walk away, and all the paperwork is processed.  There’s no letter in the mail.  There’s no billing and collections agency involved.  It’s all done right there, because we could do the claim right there.  Instead, what happens at most U.S. insurance agencies is that, they electronically submit the claim and then overnight a batch runs and then you say, “Oh.  They owe $20.00,” and you send the e-mail or a letter to the doctor, and then he puts a balance.  And, the doctor then has to then send to their collections agency, who has to send you a letter in the mail, so you can write a check for $20.00 that they get 2 months later.  It’s just much better.  We’re in the meeting.  We’re talking about how, “It’s going to be fantastic, and everything’s great.”

And, I say, “You do know that the claims processing system that we’re in the middle of converting to, the whole conversion probably cost around $20 million and probably took 50-to-75 percent of IT’s focus for 4 years to convert to.  You do know that, that claims processing system only processes claims in batch, right?”  “What do you mean?”  “Well, if you want to process a claim, you have to take customer service offline.  That’s why we do it overnight, because we don’t have customer service run overnight.  If you want to process a claim, then you couldn’t answer the phone and do a lookup on anyone else, because it’s trying to do transaction processing.  It locks the database up.  You can’t do select all.”  I didn’t quite say it that articulately, and the meeting was abruptly ended.

It had three directors in it, maybe a vice president, me, and the project manager, and we were meeting weekly to talk about the ongoing work.  In that one meeting, I saved the company a ton of wasted time.  They would have talked about this forever, until eventually someone asked for an estimate, and the answer would have come back, “We can’t do it.”  The next day, the project manager came to my desk and said, “Matt, you don’t need to come to the next meeting.”  “Well, of course I don’t.  It’s great that I’m an optional invitee and I get to decide if I’m coming to the meetings.”  “No, Matt.  We’re not sure it’s the best use of your time.”  Slowly it dawned on me.  I’m like, “You’re uninviting me from the meeting?”  And, she said, “Yep.  That’s what we’re doing.”  The next week, that group met, she came by my cube and said, “Senior management has decided not to pursue real time claims adjudication at this time,” and I never went to those meetings anymore.  I was trying to be helpful.  [LAUGHTER].  I thought I was doing my job.  So, give me some examples of what test can say that don’t get us kicked out.

MICHAEL LARSEN: Well, that certainly does open up a bigger question.  If we want to be part of the discussion, we have to be able to discuss it in a way where we’re not always the bearers of bad news.  Otherwise, they don’t see us as adding value, or we’re asking too many questions that stall what people really want to do.

MATTHEW HEUSSER: Yeah.  Totally.  We do that.  Yeah.

MICHAEL LARSEN: I don’t know that there is really a good answer there.  If somebody really wants to do something and we’re seen as the killjoy making sure their pet project doesn’t get done, then I don’t know that we necessarily provide a value there, other than to say we might be saving the organization a lot of time and wasted energy because we’re pointing out the obvious details that they don’t want to consider.

It’s like when somebody says, “Hey.  We’ve got a new tool we want to put into place,” and somebody then has to bring up, “Have you really considered all the ramifications of what putting that tool in place means?”

JUSTIN ROHRMAN: It’s a good reminder of something Jared said just a little bit ago, “extending a little bit past the role of a tester.”  My personal view of the role of a tester is to explore and report discoveries that are important to the customers.  In this meeting, that’s exactly what you were doing.  You found a problem, and you said, “Hey.  This isn’t going to work, and it’s going to cost the company lots of money because we’re going to spend time working on it.  It’s just not going to happen.”  So, if you extend that role a little bit, you could’ve added a “but” to the end, with some other potential approaches that might work.  Like, “But, we can do this.”  Adding potential alternatives and speaking to the financial aspects of the business would move that conversation a little bit further than dropping bombs about how “this will never work” and “we’re just going to fail.”

MATTHEW HEUSSER: That’s a great point.  Same company, I was in a meeting where they were inflicting process on us, “How could we possibly make this ridiculous deadline?”  And, I said, “If you can get me out of the process-process, the template-template, and the document-document, I could hit your deadline.”  People were like, “Is it going to work?”  What was my track record?  Would I say that if I couldn’t?  I don’t think I was quite that brave.  I mean, it was a long time ago.  But, we were allowed an exception.  The vice president of IS sent an e-mail saying, “This project is excepted from our standard practice,” which no one had ever done before.  We had no idea if it would work, but it had become our standard practice the day before yesterday, and I think I was really providing a lot of good information there.

MICHAEL LARSEN: This is an interesting side conversation, and I actually had this talk a few weeks ago with, of all people, the guitarist in my band, who also happens to be a director of security for a company.  That’s one of the neat things about being older now: you can actually be in a rock and roll band and have real jobs too.  [LAUGHTER].  We had this discussion that sometimes we get processes inflicted upon us, and there are different ways to deal with those inflicted processes and to offer to be of help; sometimes those opportunities can go into very strange situations.  I was describing some of the ways that I had dealt with these situations in the past, and he started laughing.  He says, “You know what?  You are this classic type.  Bless you for being it, but you’re going to drive a lot of people crazy.  You are what we call ‘the overly-compliant guy.’”  Sometimes you’re going to find people who are going to subvert processes completely, and they’re going to be vocal about it.  You’re going to find people who are not going to subvert the processes outwardly, but under the surface, they’re going to do what they’re going to do.  And then, there is the type of person who says, “Okay.  Here’s the strategy that we’ve put into place.  This is what we’re going to use.  I’m going to follow it to the point that I’m going to demonstrate to you how ridiculous it is that you’ve put us in this situation.”  And Matt, your whole, “Yeah.  We’re not going to have you in the room anymore,” is because you’re pointing out the most basic of tenets.  You’re being overly compliant; and, by being overly compliant, you’re showing them how obnoxious and how odious the tasks are, [LAUGHTER], that they’re asking you to do.
The problem is that it doesn’t make friends, because they look like idiots; we’ve basically put up front every bad thing that they’ve done, and we’re doing it in a way that highlights the idiocy for everybody.  [LAUGHTER].

MATTHEW HEUSSER: You know, that actually reminds me, I have a friend who’s a behavioral psychologist, and he was brought in because, “Help Desk is screwing up, and Help Desk is terrible.  It’s a big company.  A bunch of idiots,” and all this kind of stuff.  And, he went to two of the vice presidents and he said, “Can you sit down with me at the Help Desk for just half a day and watch what they do?  Let’s look at the system they are put into and the tasks that we ask them to do and observe it.”  Of course, they got down there and saw that there were 50 different categories, each category has 50 different queues, and many of the queues are literally redundant or not watched.  So, when they take the call, they can’t figure out who to route it to, and then the person doesn’t get it.  And, the person who calls doesn’t get service, and they have to call back, and they have to find the ticket and reroute it, or it’s routed to the wrong place.  The software is so painful, the user interface is bad, and you can’t do your job.  It’s not that “the Help Desk people are unintelligent” or “we’re hiring the wrong quality of people.”  It’s the system that they’re put into, but the executives never knew that until they actually sat down and saw the work.

PERZE ABABA: The funny thing is:  So, Matt was invited to a meeting because they thought he was a project manager, and then when Matt started talking, he started talking as if he’s a tester, and then he got kicked out.  [LAUGHTER].  It seems pretty interesting to me, when you look at specific mindsets, that it’s really a question of, “Who are we talking to, and what is it that they want to hear?”  And, “Is it the right time to tell them that what they’re thinking about might be problematic as a solution?”  The tricky thing is when you’re now part of a strategic review or an architectural review, for example, where you have to take a look and understand, “Why did they even think that this might work in the first place?”  Is it, “We think this is cool and awesome, someone else did it,” or is there actually a reason behind it?  For me, right now, my main context is, “We’re trying to apply, as best as we can, how to turn around a Waterfall organization into an Agile organization.”  The challenge for me, for a tester, is not just shifting left, because there’s so much more that we can provide an impact for, like conversations on automation, test data management, and continuous delivery.  I think you won’t achieve that by just shifting left.  I call this, “The Konami approach to software testing”: we need a shift up-up, down-down, left-left, right-right, select-start, B-A, [LAUGHTER], type of approach, but that requires a lot of skill, which ties back to what we were talking about in the beginning, Matt, where the expectation of testers nowadays is just operational.  And, as testers, we are so much more beyond that.  My encouragement to organizations or test managers who are trying to look at, “How can we be better at our testing practice,” is really to look at, “Where else can we contribute, aside from what we’re doing now?” or at least to look into our internal processes and how to be better at what we do.

MICHAEL LARSEN: I am just going to have to, at this point in time, interject and give a high five to Perze for throwing in the Konami Code—30 lives in Contra.  High five, Perze.  Thank you for that.  [LAUGHTER].

MATTHEW HEUSSER: And, I think that it feels like we just barely scratched the surface.  We’re just starting to get going.  We want to respect the audience’s time.  I want to respect the show members’ time.  I should probably cut it here, but it seems like we’ve got a lot of work to do.  So, thanks for taking this journey with us.

MICHAEL LARSEN: Thanks for having us.

JUSTIN ROHRMAN: Thanks a lot.

PERZE ABABA:  Thanks, everyone.

[END OF TRANSCRIPT]