We all know that what we measure is something we can improve, right? We can measure anything and everything, and way too often, organizations attempt to do exactly that. The net result is we measure stuff that is not important to try to inform us of things that absolutely are. Mike Lyles joins us for a spirited talk about measurement and metrics. We can’t escape metrics completely, but we can be a lot smarter about the metrics we do use.
Also, how would you feel if your software update destroyed the product you were working on? What if it was a multi-million dollar satellite? Yep, that happened, and The Testing Show panel gets into it!
Panelists: Matthew Heusser, Michael Larsen, Justin Rohrman, and Perze Ababa, with special guest Mike Lyles.
References:
- Software Update Destroys $286 Million Japanese Satellite
- Soap Opera Tests
- Hawthorne Effect
- Perverse Incentive
- Michael G. Vann, “Of Rats, Rice, and Race: The Great Hanoi Rat Massacre, an Episode in French Colonial History,” French Colonial History Society, May 2003
- Note to Self: Quantified Bodies
- Model-based testing
- Cem Kaner, Nawwar Khabbani: Software Metrics, Threats to Validity
- Freddy Ballé, Michael Ballé: The Gold Mine: A Novel of Lean Turnaround
- Justin Rohrman: “Looking into Social Science for help with Measurement”
- Gustav Fechner
- CAST 2016, Vancouver, BC
- Agile 2016
- Kitchener-Waterloo Software Quality Conference 2016
- Reinventing Testers Week: New Testing Conference and Masterclasses, September 26th-27th, 2016
- Qualitest Webinar: Roaming Assurance – Guarantee customer call usage and experience
Transcript:
Michael Larsen: Welcome to The Testing Show. I’m Michael Larsen, the show producer, and today we have on our panel Justin Rohrman.
Justin Rohrman: Hello.
Michael Larsen: Perze Ababa.
Perze Ababa: Hi, everyone.
Michael Larsen: Our special guest, Mike Lyles.
Mike Lyles: Hello, everyone!
Michael Larsen: and, of course, our M.C., Mr. Matt Heusser. Take it away, Matt.
Matthew Heusser: Thanks, Michael. It’s a very special “Testing Show” tonight, which we are actually recording at about 9 o’clock at night, just so we could get Mike Lyles. Now Mike is not only a QE Program Manager for Belk (before that, he was at Lowe’s), he’s also the Director of Strategy for the Software Test Professionals conference. So Mike, tell us a little bit more about how you fell into QA and where your interests are.
Mike Lyles: Thank you so much, Matt. I got into software development long, long ago, starting in ’93, went to Lowe’s, and was there for several years. I just happened upon a software testing course in the early 2000s. As I’m sitting there, I’m the only development person in the class, and they’re talking about development people. I’m thinking “Man, I took the wrong class”, but it was a five-day class, I learned a lot about software testing, and got really interested in it. I went back to Lowe’s, where I was working at the time; we didn’t have a testing group there, and I started building some plans around “how do we do testing better within our development team, knowing we don’t have a QA group?” A couple of years later Lowe’s said “we need that”, and I got to be part of an organization building that company up as far as building their testing group. Then I got to work through all facets of testing, whether it be functional testing, regression, automation, performance, or data management, so I got to look at it from every angle.
Matthew Heusser: Thanks, Mike. The topic that we wanted to pick this week was Metrics, but before we get there, it’s time for our “In the News” segment. There’s a Japanese satellite that crashed almost immediately after… and by crash I mean disintegrated… after receiving a software update. Let’s talk about that.
Michael Larsen: The best way to describe what happened is that a software package was pushed after an instrument probe had been extended from the rear of the satellite, and either something happened during the update itself, or some of the data it contained had bad coordinates, and it literally put the satellite into a position where it lost orbit.
Perze Ababa: There are some aerospace engineers commenting that when that data package, when the software update was pushed, something happened with the attitude control system. The attitude control system is what gives your satellite a certain frame of reference. Pretty much what they are saying is that it could possibly be a missing or an extra minus sign on a thrust vector specification. So, wherever you think you are, when you’re trying to correct your position towards a given celestial object, it just gave an extra push and the satellite started spinning out of orbit, which is pretty much consistent with what Michael was saying.
Michael Larsen: They were passing what’s referred to as the South Atlantic Anomaly, which, as far as the Japanese ground control was concerned, was a communications blackout. There was no active monitoring of the situation. Human intervention could possibly have prevented the problem, but there was no way for human interaction to happen at that particular point in time.
Perze Ababa: It’s a question of risk, too. You’re passing through a place where you have a complete communication blackout. Question then is “Why push the data then?”
Justin Rohrman: So what was the cause of this failure? Was it a testing problem? Did they think they were totally done testing and just missed a scenario? Was it a development problem, was it missed functionality? Where did this failure come from?
Matthew Heusser: It’s really hard. A lot of these things… I actually wrote an article about Knight Capital a few years ago as a journalist trying to figure out what the heck happened, and it was just too early; there was a lot of fear and uncertainty, and people just didn’t want to comment or speculate. This article seems like the journalist is trying to wade through mud to figure out what the heck happened, and, as outsiders, we’re going to have even less insight. I’m seeing a comment that says the sun sensors hadn’t been working correctly since they extended the optical bench, so it fell back to using that data, and that data was bad. That gets to Justin’s question: it’s a complex adaptive system. If there’s ever a testing problem where “oops, we didn’t test it, and it failed”, something else had to fail also. In this case, the main system failed, so they figured they’d use a failsafe, that also failed, and the update is what bingo’d the failsafe.
Justin Rohrman: It’s reminding me a little bit of Google Maps at the moment, taking you to a completely different location because there’s a detour in the road.
Matthew Heusser: The system is designed with multiple levels of fail-safes; just like software systems, testing is the fail-safe for messed-up development and, sometimes, things go wrong. I think the question is “how much money are you going to spend making sure that things don’t go wrong?” and for a two hundred and eighty million dollar satellite that can only work once, you probably want to spend a lot.
Michael Larsen: I can only assume that they have lots of specific mock systems and individual hardware pieces that allow you to test this, including going into failsafe mode and such, or as what they refer to as “limp mode”. The problem is, one part of an emergency system wasn’t tested right. It’s just like one of the more subtle errors in programming; who hasn’t made the mistake of not checking that error handling works properly?
Matthew Heusser: When I was working at the insurance company in the early 2000s, we used requirements documents, and we were told “just code to the requirements document”, and I would say “show me a requirements document that tells me what to do when the database connection is down”. Oh, right!
Justin Rohrman: Yeah, they failed to realize that the requirements document isn’t the requirements.
Matthew Heusser: What was it that Michael just said, that “we’ve all made the mistake of not handling the exception condition”? In that particular case, there was nothing. No one made the right choice to cover the exception condition. So you’ve got the main failure, you’ve got the secondary system failure, you’ve got the “a human can just override with the joystick… oh, they can’t” failure, you’ve got the testing failure… there’s four layers of failure there.
Perze Ababa: Yeah, that really begs the question for me… “How do you test for Murphy’s Law?”
Justin Rohrman: It’s a matter of accounting for risk, and to some degree, figuring out what’s more important.
Matthew Heusser: That sounds like the next podcast. Maybe we should gather heuristics here. When I was at the survey company, we had incredibly good automation coverage; 95% of our apps had no GUI. Here’s a green bar, it works, and it would be a real pain to set up some of these things manually. Every time I said to myself “man, it’s a pain, and you know what? They already got a green bar on it, they already wrote automation against it, it already works”, every time, I would test it anyway, and I would find a bug. If it was really, really, really hard to test manually, with all the real systems, the programmer would just mock everything and wouldn’t actually be testing it. So, when I had that little voice in my head saying “this is really hard, it’s a lot of work, I don’t know if I wanna’ do it”, nobody else did it, either, and that, actually, was Murphy’s Law bait. I need to do the exact opposite of that little voice; I need to test it anyway. I would really like to explore that, how we set up things like “soap opera tests”.
Mike Lyles: I did the same when I used to have more spare time in my life. I used to be a beta tester for the new iOS versions for iPhone and I was going down the road, using GPS, and just put it in airplane mode, assuming they didn’t think of that in the beta version. You’re using GPS, you’re traveling somewhere, why would anyone want to put it in airplane mode… but I did, and I took it out of airplane mode, and everything went berserk. It was really interesting, I could recall it and repeat it and cause it to happen, logged it in their system and they said “could you send us pictures? We want to see how you did this”, and I think you’re right, not thinking about those “what if” conditions really puts you in a bind.
Matthew Heusser: One of the things I think about, when we start looking at what happened with that Japanese satellite, is that it’s got a lot of components that are clearly playing with each other. They’re not isolated, managed by a manager that only has to deal with two or three at a time, layer upon layer. So cyclomatic complexity, for instance, is a measure of the relative complexity of components of code. If you try to pursue a very low cyclomatic complexity, you separate components and they become very, very small little pieces. It’s possible that if someone were looking at that code, measuring with cyclomatic complexity, they might say “I see a problem here”. We use metrics in software every day. We use “how old are you? How many years have you been with the company?” The objection to metrics is that, when they are used for control, we want productivity, we can’t really figure out productivity, so we use some proxy like “lines of code”, and the results are ridiculous.
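[Editor’s note: To make the cyclomatic complexity idea a bit more concrete, here is a minimal Python sketch. It is an illustration only, not something from the show, and not how any particular tool necessarily computes the number; it simply approximates a snippet’s complexity as one plus its count of decision points.]

```python
import ast

# Node types treated as decision points in this rough approximation.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def approximate_complexity(source: str) -> int:
    """Return 1 + the number of decision points found in the given source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return 1 + decisions

# Hypothetical command-routing snippet with three branches.
snippet = """
def route(command, fallback_ok):
    if command == "thrust":
        fire()
    elif command == "rotate":
        spin()
    elif fallback_ok:
        use_failsafe()
"""

print(approximate_complexity(snippet))  # prints 4: three branches plus the base path
```

[A number like this can flag a tangle of interacting branches, but, as Matt notes, it only helps if someone actually looks at the code it points to.]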
Mike Lyles: I just spoke at STP-Con a couple of weeks back on “Metrics: Tell a Story, Not a Number”, and I know there’s some debate over whether numbers are good or stories are good or neither are good.
Matthew Heusser: Is anybody saying numbers are bad? Like “how old are you? Stop! [Laughter] Don’t answer that, it’s got a number in it!”
Mike Lyles: I think one of the things we’ll talk about today is Kaner’s presentation, and one of the things he mentions is how measuring people and their productivity will sometimes change behavior in a good way, but usually it can cause people to do some really crazy things. So, if you want to talk about metrics causing people to behave differently, most of you probably know the Hawthorne Effect. Factory workers were measured on productivity as lighting was increased, and they were told that was going to be done, and productivity increased as the lighting increased. Then the researchers said “OK, that’s good, so why don’t we decrease it and see what happens there?”, and people’s productivity continued to increase. So, I think it’s a matter of people changing their behavior when they think you’re watching. We were told “you need to write X number of test cases per day” and have those built, and then “you need to execute this many per day”, and what we found was that people wanted to meet that quota. Things were going along pretty well, and then all of a sudden you’d see a situation where someone is testing something that needs an external third-party application to work, that application’s offline, they run the test, it fails, they know it’s going to fail every time they run every other test, but they need to get those twenty-five test cases done for the day, so they run them all and log the defect that this third-party system was down. Horrible experience, where you’ve got people wasting time knowing they’re wasting time.
Matthew Heusser: Yeah, going back to what I was doing earlier, nobody told me to do those really complex, hard setup things. They occurred to me, and they took a long time, and if I was being measured by the number of scenarios that I ran, that would not be rewarding me for coming up with those test ideas. This is the bad news. So before we get to the good news, and we’ve got some really neat ideas to talk about, does anyone else have a story they want to share that’s still in this space?
Michael Larsen: For anyone who has listened to this podcast a few times, you know that I can’t go into this show without once or twice throwing in one of my mentor’s comments or some of the things he stood for. Matt, I’m throwing another Ken Pier lob in here and you’re just going to have to deal with it [laughter]. If we were gonna’ really care about a metric related to a tester, specifically, the one metric that he ever felt mattered was not how many bugs a tester filed, or how many bugs a tester found. His idea of a successful tester was somebody who found bugs, and then worked and turned those bugs around to get them fixed.
Mike Lyles: Right!
Michael Larsen: That was, in his world view, the metric that mattered.
Justin Rohrman: It’s like he was summoning the spirit of Cem Kaner in there [laughter] in the BBST Bug Advocacy class. He says “the best tester is not the one that reports the most bugs, but the one that gets the most bugs fixed.”
Michael Larsen: Exactly! And that’s the importance and one of the things we want to get to with this, and I’m probably stealing somebody’s thunder, so I apologize ahead of time, but by focusing on how many bugs get fixed, you’re actually focusing on finding issues that are related to genuine risks, risks that are important enough that they matter to key people and stakeholders who are going to genuinely care about them enough to make sure that they get fixed. If you can convince enough people that the problem you found meets that criteria, then you are going to improve the odds that it gets fixed. If you give me a tester that reports fifty bugs, and five of them get fixed, and then you give me another tester that reports ten bugs, and all ten of them get fixed, hey, it’s really easy for me to figure out who’s the more successful and effective tester here.
Matthew Heusser: I think that’s a pretty good model. There certainly might be exceptions to that. I might be a fantastic advocate for my bugs and make people feel guilty and argue the company into fixing bugs that don’t matter. I might not be fantastic at finding bugs, but be particularly good at collaborating with devs so that we find process changes to sincerely prevent them and not just run around with documentation but actually entire categories of bugs disappear when I walk onto the project, so my numbers are not as good as yours, but we’re developing more software faster with less bugs. Absent other, sort of plus one-ing kind of behaviors, I think that’s a pretty good model, Michael, thank you.
Perze Ababa: There’s still a big tendency for metric manipulation, though. It also depends on the relationship I have with the developers. If I’m really friendly with a developer, and I know he’s going to get rewarded if he fixes more bugs, and I get rewarded for that, too, because I found the bugs, I harvest all the easy, minor bugs that someone can fix and give them to the people I like, and assign the really difficult ones to the people I don’t like. There are definitely a lot of challenges when it comes to metric manipulation. You know, they call this “The Rat Effect”. When Vietnam was under French colonial rule, they supposedly had a rat problem. They tried to solve it by rewarding people for killing rats. Thing is, how do you prove that you killed a rat? It becomes completely unscalable if everybody just brings a dead rat as proof that they killed one. So what they did was say “just show us the tail and we’ll pay you for it.” What really happened was that people ended up just cutting the tails off, and started breeding the tail-less rats. The government was reporting record progress in rat extermination, and people were also getting paid more, but did that actually solve the problem? Not really. It can be manipulated, and I think that’s one of the biggest problems with regard to metrics: it’s really manipulation.
Michael Larsen: That’s a great segue. There’s a podcast called “Note to Self”, hosted by Manoush Zomorodi; it’s all about using tech in your life, and one of the series she’s been doing has been on “The Quantified Self”. Mike Lyles and I have actually been talking about this over the past few months because I have been actively involved in losing weight, and one of the things that I did was start getting into the nuts and bolts of measuring everything about myself. I wear a Fitbit, and that Fitbit calculates a lot about me. I discovered that, once you start quantifying yourself, you start looking at ways that you can manipulate that data to the best of your ability. You start playing games with yourself that you are not even aware that you are doing. One of the things I noticed that became this weird ritual for me, to the point that it was almost absurd, is the daily weigh-in. That daily weigh-in became so critical to me that I started doing all sorts of weird things that, really, weren’t going to give me better results, or if they did, they were setting me up for bad results later because I had this great run, like “Oh, my gosh, this is amazing, I lost, like, five or six pounds over the course of a few days”, and it looked awesome, like “hey man, look at me, I lost this weight”. What would assuredly happen two days later is I would ricochet right back… and I’m using this just as an example, it’s a little microcosm of how you can get caught up with the numbers, how those numbers can give you a false sense of your progress, and how all sorts of little things make your numbers go haywire!
Mike Lyles: That is so true! I’ve done the exact same thing. It’s funny you say that because you’ll do that and, I used to say “I’m only going to do the scale first thing in the morning” because once I’ve had anything in me, you know, the numbers are skewed. And you don’t know where it’s going, so you’re exactly right, and when you start thinking of all those factors, the way it is with testing and metrics, I think what you find is just because something works this way today, doesn’t mean it’s going to work that way tomorrow. Your metrics have to be spot on when you’re doing things like that because you have to understand why those things are fluctuating and changing.
Matthew Heusser: In my company, we have a lot of work that is invoiced at the end of the month, and some of that impacts things like annual reporting of expenses, and I will flip out… “Oh, no! This month looks really bad! Oh my goodness, arrr, we’re down, what are we gonna’ do, it’s… it’s the 29th of April and ohhh, arrrr….” It’s because there are six different assignments that just didn’t get their paperwork in, and it’s Friday and they’re all going to turn it in on Monday, and we’re going to go through it all and it’s going to be fine, but this month is going to look down… and that’s just a complete waste of time. I could have gone and done something productive. Instead I had a huge amount of angst and anxiety, and probably made some bad decisions, based on data that was wrong. We’ve got a video we’re going to link to from Dr. Kaner, where he talks about… we complain that our metrics are bad and give us bad incentives, but when you look at Wall Street, all their metrics are bad. The CEOs are incented for short-term results. Price-to-earnings ratios can be manipulated six ways from Sunday and don’t mean anything. There are all kinds of problems with stocks. We don’t really know what the future returns are gonna’ be. With historical returns, it’s hard to get out past a year, really, for any metric. But we do it. We figure out what stocks to buy, somehow. And there are some people, like Warren Buffett, who are pretty darn good at figuring out a company’s future growth prospects. Hopefully some of those metrics: net present value, dividend yield, have some value… so what are those for testing?
Justin Rohrman: I have a pretty good story, but it’s from a book. The book is called “The Gold Mine” and it’s about a Lean transformation. It’s a novel about a manufacturing plant that recently acquired another company. They’re struggling to meet their bills, and they’re about a month from shutting down because they can’t get enough work out of the door. So they’re measuring how long it takes for each part of the plant to do their job and productivity of each person, and they’re not getting anywhere. The main point of the book is that they’re measuring the wrong thing. What they should actually be doing is hunting for The Gold Mine, finding the value of the plant. And for this, it was shipping materials out at a certain interval every day. Tying that back into the story, I think, rather than spending our time measuring things like bug counts or test cases or whatever, maybe we should take a step back and actually look at the value we are trying to give, which is sending software out of the door.
Perze Ababa: One of the success stories that we’ve seen within my team… I have a caveat, it’s a success story for now… as of September of last year, it used to take us two hours and fifty-five minutes to run our checking suite with ten parallel threads. This is 100% of the URLs that we need to cover. We validate based on the objects that are on the page and have full functional coverage. That’s good for up to ten sites, maybe, but the problem now presented before us is that we need to scale this from ten to fifty, and then by the end of the year we will be up to two hundred sites that we need to monitor for certain upgrades. One of the things that we started looking at was “hey, let’s look at user behavior from our analytics data”. So that’s definitely a metric that we can pull. When we looked at this, we realized that, from the initial two hours and fifty-five minutes to run a single site, if we rely completely on the analytics data and provide decent enough coverage (our definition of decent is around 85%), we brought that down to just twenty-three minutes. So imagine being able to hit nine times the number of sites. From the twenty sites, we’re getting pretty close to the two hundred sites that we want to aim for, you know, for the rest of the year, but… these are now just assumptions. This is an example where we were able to look at a very particular behavior, gather that metric, and apply that to how we actually execute against an upgrade strategy.
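[Editor’s note: As a rough illustration of the approach Perze describes (the function, URLs, and traffic numbers below are hypothetical, not his team’s actual data or tooling), here is a small Python sketch that picks the most-visited pages until a target share of real user traffic is covered.]

```python
from typing import Dict, List

def select_urls(page_views: Dict[str, int], target_coverage: float = 0.85) -> List[str]:
    """Pick the most-visited URLs until their combined traffic meets the target share."""
    total = sum(page_views.values())
    selected: List[str] = []
    covered = 0
    for url, views in sorted(page_views.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(url)
        covered += views
        if covered / total >= target_coverage:
            break
    return selected

# Hypothetical analytics export: URL -> page views over the last 30 days.
views = {"/home": 60_000, "/search": 25_000, "/product": 10_000,
         "/account": 4_000, "/legacy-faq": 1_000}

print(select_urls(views))  # ['/home', '/search'] already covers 85% of traffic
```

[The trade-off is the one the panel goes on to discuss: the suite gets dramatically shorter, but anything non-negotiable that falls outside the high-traffic set has to be added back in explicitly.]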
Matthew Heusser: So you’re saying that you instrumented the logs and came up with the happiest of the paths, and found out that the bell curve was actually pretty fat. It reminds me of a story from when Lee Henson keynoted STP-Con; Michael was at the table. He had been coaching at Amazon, and their goal was 100% automation of the checkout process, because you’ve gotta’ automate the checkout process because, money! The maintenance of the tests was killing them, so they ran a similar process and came up with, like, 99.7% of checkouts at Amazon following a very specific set of flows, and there were just not that many. If you ever touch money in a way where you think you might need to check these other scenarios, then check them. By making that policy change, they were able to keep up with new development.
Perze Ababa: It’s very good that you mentioned that, because there are definitely some points that are really non-negotiable. For us, registration is very important, so we provide full coverage of everything we know about. Everything else is where we can cut down scope.
Mike Lyles: I like your situation on coverage. I think that became an area I really wanted to focus on in the places that I’ve worked. When I became a manager over automation, performance, and service virtualization, I was in an area where I knew the automation side. I was learning virtualization and how that could support our testing group, but performance was a new area for me. It’s something I wanted to learn, but what I found was that these guys had metrics that I felt really drove what we did. They were metrics we used more internally with our group than we shared with our stakeholders, but the stakeholders wanted to know as well. We caught situations in performance testing, and mostly in performance engineering, where someone would say “I’m seeing a response time here, or a lag in the software here, and I’m seeing some responsiveness that’s not going very well”, and they would make one tweak. What they found was an estimation of savings of hundreds of thousands of dollars per year on just seek time and search time and customer wait time, based on one simple metric that said “this isn’t right”. I thought that was really good from a performance standpoint.
Matthew Heusser: When you say “savings”, what exactly were the savings? Does that mean that people got faster responses to the web site, therefore, sales went up? How did they make the money?
Mike Lyles: Yes, there were calculations on, if a customer has to wait so many seconds, how many were we losing, how long will a customer wait before they give up and go to a competitor? What we saw was, when we started measuring how many customers stayed, and retention, and staying online, there were just major delays that were addressed by some small tweak. To answer your question, it’s more about “how many customers would we lose if this had so much lag?” As the application grew and the database grew and the data grew, you saw longer and longer wait times. There was a calculation of “how many hours and hours of savings did that give us in responsiveness and just overall customer satisfaction?”
Matthew Heusser: When I was at the insurance company, Joel Muhlenburg was the data warehouse manager, and I went to him and said “hey, couldn’t we just do a query on the price difference between generic and name-brand meds, find where that difference is very high, and then find where you had a lot of people buying the name brand, and we send them a letter in the mail that says ‘hey, did you know if you buy the generic, you could save, like, a hundred bucks a month?’ Oh, by the way, of course, it’s the insurance company, we’re gonna’ save a hundred bucks a month, too.” He took my idea and ran with it, and became the Director of Medical Cost Reduction programs at Spectrum Health. Joel’s brilliant, and he actually did the hard part, which is actually doing the work. That’s just a great example of using numbers and common sense in a way that we usually wouldn’t call metrics.
Perze Ababa: Mike brought up a really cool thing about performance. Performance is one of those metrics that I consider a “tension metric”. Tension metrics are essentially a group of metrics that are in opposition to each other, but they work in tension, and both of them together bring better value to the customer as a whole. For example, if I am working on a new product and I add a new feature to the website, let’s say I want to add commenting and rate-and-review, and let’s say I look at page load time as a performance metric; if I add this feature, and that increases my page load time by five seconds, these two are now in tension with each other. There’s a bunch of tension metrics that we can actually use, so that way we’re not in danger of manipulating metrics; we have multiple metrics in the system that are in tension with each other, and that, in effect, gives the complete opposite of metric manipulation. They do end up helping each other.
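[Editor’s note: Here is a tiny Python sketch of that pairing; the names, fields, and the 0.5-second budget are made up for illustration. The point is simply that a change is judged by both metrics together, so gaming one at the expense of the other does not register as a win.]

```python
from dataclasses import dataclass

@dataclass
class ReleaseSnapshot:
    features_shipped: int      # value delivered to users
    page_load_seconds: float   # performance cost users feel

def passes_tension_check(before: ReleaseSnapshot, after: ReleaseSnapshot,
                         max_load_regression: float = 0.5) -> bool:
    """Accept a change only if it adds value without blowing the performance budget."""
    added_value = after.features_shipped > before.features_shipped
    load_regression = after.page_load_seconds - before.page_load_seconds
    return added_value and load_regression <= max_load_regression

before = ReleaseSnapshot(features_shipped=12, page_load_seconds=2.0)
after = ReleaseSnapshot(features_shipped=13, page_load_seconds=7.0)  # +5s, as in Perze's example

print(passes_tension_check(before, after))  # False: the new feature costs too much load time
```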
Mike Lyles: Yeah, I think combining them… and if you go back to Cem’s presentation, he talks about the price-to-book ratio, looking at assets per share versus the price per share for the stockholders, and he was joking that X bank should shut down because its stockholders would be wealthier that way than if it continued in existence. I think it takes a Warren Buffett type, or someone who really understands how to take those metrics and combine them together, to understand what they’re doing.
Matthew Heusser: Justin, what was your talk called at CAST 2012?
Justin Rohrman: It was about “Looking into Social Science for help with Measurement”, a bit about the history of measurement problems in social science and a little bit about how Lean and Lean measurement have helped us move forward on that. So if you’ll allow me to geek out just real briefly here, there’s an experimental psychologist, one of the founders of experimental psychology, named Gustav Fechner. He ran this experiment trying to measure people’s perception: he would poke them on the skin with calipers and ask them if it was one or two points. They’d report back, and basically he was just measuring how accurately people could tell whether they were being poked with one or two points. So he ran this experiment a bunch of times, and after a couple of years of doing this he learned that he wasn’t actually measuring perception, he was measuring people’s inferences about perception, and even more than that, he was getting measurements back from the subjects that they thought the experimenter wanted to hear, which is exactly what we see when testers start being measured on bug reports, because they’ll just jack the number of reports up. They modify their behavior to satisfy what they think people want to see. So the measurement is never the measurement; it’s partly what the measurer wants the story to tell, and partly what the person being measured wants the story to tell. There’s a weird combination of things going on in there.
Matthew Heusser: Does anybody have anything they want to talk about? What’s new and exciting in your neck of the woods that you want to share? This is our “Commercial Break” moment.
Justin Rohrman: The schedule for CAST 2016 in Vancouver this summer was published today. If you’re on the fence about whether or not you should book your tickets now, you can take a look at the schedule and then, of course, immediately get your ticket.
Matthew Heusser: And what about, I think, the Fall STP-Con has dates and location, right?
Mike Lyles: We do, it’s gonna’ be September 19-22 in Dallas, Texas, and just now we’re doing the Call for Papers.
Matthew Heusser: I’m gonna’ be at Agile 2016, I’m gonna’ be at CAST. I’m also going to be at the Kitchener-Waterloo Software Quality Conference, which is going to be in late September. And there’s Anna Royzman’s QA conference, which I think Perze spoke at, right?
Perze Ababa: Yes, that was last Wednesday.
Matthew Heusser: Last Wednesday. The next one’s going to be in September, the exact same days as KWSQA. We’ll have more information in the show notes about these conferences so you guys can check them out.
Michael Larsen: Since Qualitest is sponsoring The Testing Show, I think it’s sporting that we mention that they will be hosting a webinar on Tuesday, June 14th, 2016 along with BlueGem on the topic of “Roaming Assurance” based around guaranteeing call usage and experience. If the topic of roaming risks interests you, and you’d like to hear more about how to develop a testing solution to address it, details are in the show notes.
Matthew Heusser: So, on a lighter note… in the United States it’s election season, and Ted Cruz has named his vice presidential partner, who would be… Carly Fiorina, and when they do the thing where they grab hands and lift them up together, there’s a very, very awkward moment where they are fighting each other to see whose hand gets to be on top. It’s hilarious… and I think that’s all about power. It’s possible that they are both used to being the most powerful person in the room. It’s funny because so much of the advice about how humans interact has nothing to do with measurements or formalizations. Some people, and I’m going to say it’s usually men, when they shake your hand, they’ll put their palm on top or they’ll slide their hand over yours. This can be taken as a symbol of dominance. So, I’m not saying get into the silly dominance game, but if you notice it happening, what you can do is slide your hand so that it’s straight up and down. You can sort of even it up, and that can be a nice way of saying “we’re all adults here”. I dunno’. Any comments? Or should we just say good night?
Mike Lyles: One of the strangest hand holdings I’ve ever seen. It’s a very good way to end it [laughter].
Matthew Heusser: Some things just can’t be explained by logic. There’s something going on there!
Mike Lyles: Hey, well, thanks for having me, Matt, I enjoyed it. I think a lot of all of you folks, and it’s always great to be on this panel with all of you.
Michael Larsen: Thanks, everybody! Have a good night!
Matthew Heusser: All right, thanks!
Perze Ababa: Thank you!
[End of Transcript]