The Testing Show: Episode 45: Machine Learning, Part 2

February 25, 12:14 PM
Transcript

We are back with Peter Varhol for Part 2 of our discussion on Machine Learning and AI. In this episode, we pick up with Peter and discuss the pitfalls of machine learning algorithms, where they can help us and where they often fail us.

Also, what is the role of the tester and testing in machine learning and AI? How intimately involved with the problem domain do testers need to be? How can they learn to understand the analytical parts and what they need to accomplish within that problem domain?

 


Panelists:

Matthew Heusser, Michael Larsen, Jessica Ingrassellino, and Perze Ababa, with special guest Peter Varhol.


Transcript:

MICHAEL LARSEN: Hello everyone, Michael here. What you are about to hear is Part 2 of our Episode on Machine Learning with Peter Varhol. Part 1 was in Episode 43, so if you have not heard that episode, we strongly encourage you to go back and listen to it. If, however, you do not necessarily care about order or you like things to be somewhat random, then by all means please feel free to listen to Episode 45 of The Testing Show, Machine Learning with Peter Varhol, Part 2.

MATTHEW HEUSSER: I’m sure that some of the best, smartest stuff that’s out there is Amazon’s Recommendation Engine, which brings us to the question of how to test it.

PETER VARHOL: It really all depends on your problem domain there, Matt.  I mean, you test differently depending upon your problem domain, depending on how accurate you need your results.  So, you can’t go into this blind and say, “This is right.  This is not right.”

MATTHEW HEUSSER: You can under some conditions.  I’m going to go from the simplistic to the hard.  The easiest one of these—I want to be really careful here—is that we can’t predict the future.  So, when you start doing a regression analysis—people have been trying to do that for the stock market for years and it doesn’t work, because you can’t predict the future.  My crystal ball is broken.  It doesn’t work.  Some people on Wall Street that are very, very advanced quantitative analysis folks would argue that, “You can,” but those companies tend to go out of business.  Let’s say that you’re trying to predict something that is more predictable.  The shelf demand for your aspirin.  Your company sells aspirin.  You sell a whole lot of it.  Historically, people buy a lot of aspirin on New Year’s and around Christmas because they have hangovers.  Whatever.  But, there is some predictable demand.  You can combine the trend of demand long term with seasonal trends to figure out how much stuff to put on the shelf.
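
To make that trend-plus-seasonality idea concrete, here is a minimal sketch in Python; the demand numbers and the holiday bump are invented for illustration, not anything from a real retailer.

import numpy as np
# A rough sketch of "long-term trend plus seasonal lift." The demand numbers below
# are invented for illustration; a real system would use actual sales history.
rng = np.random.default_rng(0)
months = np.arange(36)
holiday_bump = np.where(np.isin(months % 12, [0, 11]), 400, 0)   # New Year's / Christmas spike
demand = 1000 + 15 * months + holiday_bump + rng.normal(0, 50, 36)
# 1. Fit the long-term trend with a straight line.
slope, intercept = np.polyfit(months, demand, 1)
trend = slope * months + intercept
# 2. Estimate the seasonal effect as the average residual for each calendar month.
residuals = demand - trend
seasonal_effect = np.array([residuals[months % 12 == m].mean() for m in range(12)])
# 3. Forecast next month: trend continuation plus that month's seasonal lift.
next_month = 36
forecast = slope * next_month + intercept + seasonal_effect[next_month % 12]
print(f"Forecast demand for month {next_month}: {forecast:.0f} units")

A tester could then compare the forecast against held-out months and ask whether the error is small enough for the inventory decision at hand.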

That’s actually a real thing you can do; and, if you do it well with ML, you can reduce your inventory without running out of stock.  Someone is writing an algorithm to do this.  Realistically, you’re an ERP company that’s going to sell the software to a bunch of manufacturers, and you’re the tester on the project.  What you’ve got is a couple of variables that you can make a graph from.  So, one thing you actually could do is make a couple of different datasets.  You could make one that is straight linear.  You could make one that is a quadratic function, like a curve, and you could make one that is a bunch of random data, and you plug all that in and you see what the ML comes out with.  It should come up with a straight line for the linear data.  It should come up with a curve for the quadratic.  For completely random data—like you write a little program that just generates a bunch of random dots—I would think it would come up with a flat line.  So with those three, then from there, I would plug in real data, and I would see how accurate it was and I would say, “Is it good enough?”  So, for Supervised Machine Learning, some of these things can be tested.  “So that’s great, super easy, Matt.  What do you do when you’re looking at a program that actually looks at human click-throughs?”  You’re combining variables like, “Added to shopping cart.  Rated highly.  The number of views that you had.  The number of views your friends have.  The keywords and SEO.”  It’s going to look at all five of those things and do some fancy math and say, “I think we should recommend Machine Learning in Python.  Not Advanced Machine Learning in C++.”  It shows up on your screen.  You’re a tester, looking at the site, and it pops up.  Is it right?  How do you know?
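
As a sketch of that three-dataset check, the following is roughly what it could look like in Python with scikit-learn; the model choice and the R-squared thresholds are illustrative assumptions rather than anything prescribed here.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
# Feed the learner data whose shape we already know (a line, a curve, pure noise)
# and check that the quality of its fit matches our expectations for each shape.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 200).reshape(-1, 1)
datasets = {
    "linear":    3 * x.ravel() + 2 + rng.normal(0, 1, 200),   # should fit very well
    "quadratic": x.ravel() ** 2 + rng.normal(0, 1, 200),      # should also fit well
    "random":    rng.normal(0, 1, 200),                       # should fit poorly
}
for name, y in datasets.items():
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    model.fit(x, y)
    score = r2_score(y, model.predict(x))
    expect_good_fit = name != "random"
    passed = score > 0.9 if expect_good_fit else score < 0.1
    print(f"{name:>9}: R^2 = {score:.3f} -> {'PASS' if passed else 'FAIL'}")

From there, real data goes in and the question becomes whether the accuracy is good enough.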

PETER VARHOL: That’s always the rub.  We look at the answer, and that’s why I really enjoyed Nicholas Carr’s talk at CAST last year.  He said that, “You cannot automate something that you don’t understand to begin with.”

MICHAEL LARSEN: I’m going to borrow a little bit from this.  This was also in Carina’s original talk.  It was something to the effect of—it’s a quote that’s attributed to Alan Turing—that, “As long as you are 100-percent infallible, you cannot actually be intelligent,” because it’s basically saying that you can’t really learn if you’re not exercising judgment.  So the point is, no matter what we do here, a human being, somebody with discernment, intelligence, and the ability to realize that something might actually be wrong or might be contextually incorrect for the purpose, somebody who can make a judgment call, needs to be able to come in and actually determine if what’s happening is correct.

MATTHEW HEUSSER: I would argue, “You can test these things.”  It is possible to find some information to help you decide if this algorithm is performing better or worse.  Before I lay down my bomb, does anybody have thoughts or ideas?

PETER VARHOL: I’ll pull for Autonomous Vehicles, Self-Driving Cars, and say that that’s a problem domain where we need very precise results from our Machine Learning algorithms.  Recommendation Engines, not so much.  I think that someone who’s testing these things needs to have an appreciation and an intimate understanding of the problem domain to understand just what kind of error would be acceptable, then work towards trying to quantify the kind of error that would be acceptable, and then look at the results.  Not caring so much how the result was achieved as, “Does this match what I need in my particular problem domain?”

MATTHEW HEUSSER:  Yeah.  I think that’s true.  There are a couple of things that we can do—right?—and it’s all black box-y.  For your vehicle, you can construct an obstacle course, and you run it through it a bunch of times.  If it uses that as training data, it should get better.  The problem, of course, is when you’ve got bugs in your algorithm and it can’t get better, and you figure that out because it runs into stuff.  Then you’ve got to go figure out why it ran into stuff, and then you’ve got the problem of figuring out why it made the decisions it made.  It’s really hard; it’s an oversimplification, but it is as if the code that it’s running is changing every time, because it’s based on a neural network that is evolving.

PETER VARHOL:  You’re right, Matt.  At some point, rather than delving into the algorithm—I should say, “set of algorithms,” because usually there’s multiple algorithms involved, multiple layered algorithms in particular—the tester may have to go back to the data scientist and say, “You know, I’m sorry.  You have to start over again from scratch.  This just doesn’t achieve the level of accuracy and precision that we need.”

JESSICA INGRASSELLINO: So, hello everybody.

MATTHEW HEUSSER: Hey, Jess.

JESSICA INGRASSELLINO: Hey, [LAUGHTER].  This reminds me of when I worked for a company that did this kind of analytics prediction for retail stores, and they used a combination of InfoSources in order to achieve the Prediction Model.  We used the Bayesian Formula or Algorithm.

PETER VARHOL: Sure.

JESSICA INGRASSELLINO: I had some instruction on it but wasn’t 100-percent familiar, and I am not a mathematician.  So I understood a little bit about how it works, just what I needed to test it.  But what is kind of interesting about this particular thing was that there was an argument between some of our customers and us, because what we were getting back looked correct and they were saying, “There’s no way that this is correct.”  For about three weeks or a month, it was my job to actually set up tests where I was either physically counting people or watching the overhead cameras.  You know, which is how stores keep track of who’s coming in and going out.  Door Count Technology.  I had to see why our specific technology, which was a third-party technology, was not counting people accurately.  It turned out that because of certain camera angles, shadows, and setups at certain stores, those stores were not sending accurate information, and we had no way to know that without a human eye on the product.  Because we had set it up, we trusted that the product would work.  People had done some testing, but not enough to give a broad enough sample over time to show whether this was indeed accurate or not accurate based on a number of conditions.  Like sunlight, shadows, time of day, people walking outside but within the range of the camera.  There are so many factors that one has to consider when testing the information coming in.  That is where I kind of think the intelligence piece does come in, because all of those software pieces believed that they were working properly.  A person looking at them for 10 minutes or 15 minutes after setup believed that they had enough information to make sure they were working properly, and it turns out there were things that were not anticipated.

PETER VARHOL: Jessica, you’ve raised a very good point.  You weren’t here when I was discussing the concept of Bias in AI and Machine Learning.  If your data is bad, or if your data is dirty, or you’re measuring the wrong things, then you’re going to find that your results are biased, just as people’s decisions can be biased based on poor data, incorrect data, or looking at the wrong data.

JESSICA INGRASSELLINO: Indeed.

MATTHEW HEUSSER: So, let’s talk for a minute about how I would test that Recommendation Engine.  It seems to me one way is to just look at it.  So, go to the website as a user that has some history and look at the recommendations and see if they make any sense at all.  The second thing that I would do is test it on a small sample.  Say, all the employees of Amazon.  That’s 50,000 users.  Track whether sales go up; and, if sales go up by the amount that I care about, we throw it out there.  Two ways to test it.  Are there more?
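
A back-of-the-envelope version of that second check might look like the sketch below; all of the user counts, order counts, and thresholds are made up for illustration.

import math
# Expose the new recommendation engine to a sample group, compare conversion
# against a control group, and ask two questions: is the lift distinguishable
# from noise, and is it bigger than the lift the business actually cares about?
control_users, control_orders = 50_000, 2_100   # old engine
variant_users, variant_orders = 50_000, 2_260   # new engine
minimum_lift_we_care_about = 0.05               # 5% relative lift, a placeholder target
p_control = control_orders / control_users
p_variant = variant_orders / variant_users
relative_lift = (p_variant - p_control) / p_control
# Two-proportion z-test, computed by hand.
p_pool = (control_orders + variant_orders) / (control_users + variant_users)
std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / control_users + 1 / variant_users))
z = (p_variant - p_control) / std_err
print(f"relative lift = {relative_lift:.1%}, z = {z:.2f}")
print("statistically significant at ~95%:", abs(z) > 1.96)
print("big enough to ship:", relative_lift >= minimum_lift_we_care_about)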

PETER VARHOL: So, once again, I think, Matt, it depends on the problem domain.  It’s funny that you bring up the Recommendation Engines, and I kind of have a funny story there.  For my running shoes, I use the Brooks Ghost brand, and so I went to Amazon and looked for Brooks Ghost Running Shoes.  For the recommendation, I was shown things like, “ultraviolet lights” and “ghost hoods” and things like that, because it keyworded on the word “ghost.”

MATTHEW HEUSSER: Yep.

PETER VARHOL: [LAUGHTER].

MATTHEW HEUSSER: I don’t know how much semantic intelligence we should expect from these things, right?

PETER VARHOL: In Amazon’s case, if they’re increasing sales by a couple of percent a year, why not do it?

MATTHEW HEUSSER: I’ve said this on the show before that people are like, “They want your data.  They’re stealing your data.  They’re going to predict things that you don’t even know that you want yet in just the right time because they’re so smart.”  I’m like, “Look, man.  Most of the items that get injected into my Facebook are stuff I already have.”

PETER VARHOL: [LAUGHTER].

MATTHEW HEUSSER: They are for the vacation I already planned and bought the tickets for.  They are for the credit card I already have.

PETER VARHOL: True.

MATTHEW HEUSSER: They’re for these things.  I went to that website one time, I’m not interested in it, and it keeps popping back up.  They’re not that smart.

MICHAEL LARSEN: It definitely comes down to the fact that all that they can do, realistically speaking, is they can use information for things you’ve either searched for, purchased, or inquired about.  Based off of that, it really comes down to, “Hey.  You know, you showed interest in this at one point in time.  We think that these things might be relevant.”  Now, under that circumstance, I don’t really have a problem with it.  I get that.  They’re spelling it out.  Amazon does this to me all the time.  I will admit, every once in a while it feels a little bit weird and creepy when I go on to a site that has absolutely nothing to do with what I searched on.  Just because they happen to have a little Amazon Widget plugged in, I’m researching something and suddenly I’m noticing that the Boss VE-20 Vocal Processor is showing up on the side of my screen.  I’m like, “Hey.  Wait a minute.  What’s that doing here?”  [LAUGHTER].  But, of course, it makes sense.  Nothing nefarious about it.  I recently purchased one.  It’s a pretty durable unit.  Unless I’m going to be outfitting an entire choral group to be doing self‑harmonizing, I don’t have a need for another one.  So, advertising to me that they “have a special deal” on this device is kind of dumb.  I don’t need the device, and there’s no real purpose for you telling me about it.  All that it knows is, “Hey.  You purchased this at one point in time.  So if something happens to come up about it, we’ll remind you about it,” which brings it back to the whole point of, “It’s not as smart as people are putting it out there to be.”

MATTHEW HEUSSER: There is, I think, some potential there that is good.  Facebook, for instance, I think is the best at this.  To say, “Hey.  Wait a minute.  You’re a 40-year-old male and you’ve been looking at these things and your friends are these people.  So, we’re going to recommend this thing.”  So I used to be in Civil Air Patrol, and I’m still a member.  I go to Civil Air Patrol-y Facebook-y Groups.  If they sliced and diced the data based on demographics, on who are members of the Civil Air Patrol and who are getting to the level where they’re earning their master rating in cadet programs or their lieutenant colonel rank, they could say, “We’re going to show you some Vanguard Lieutenant Colonel Rank Insignia, because we think you’re going to want this.”  That’d be kind of cool.  The only things that I’ve seen that are kind of in that direction, which maybe just everybody gets these ads for, are the ads for the new “Smart Mattress” or this special “Teeth Whitening” stuff.  Both of those, I was like, “Hey.  I’m actually kind of interested in that,” and I did not click on those things at all.

PETER VARHOL: So Matt, let’s contrast that with another problem domain—the self-driving car.  So, let’s say you simulate riding a self-driving car through an obstacle course 100 times.  It leaves the course and crashes five times.  Is that acceptable?  So, once again, I think that getting back to testing these things, it’s critical that the testers understand the details and the limitations of the problem domain and know what is an acceptable result before they go in and even start testing.
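
One way to frame that five-crashes-in-100-runs question as an explicit check is to compare the observed failure rate, with an upper confidence bound, against a tolerance agreed on before testing starts; the tolerance below is a placeholder, not a real safety requirement.

import math
# Is 5 crashes in 100 simulated runs within the failure tolerance the team
# agreed on before testing started? (The tolerance value here is invented.)
runs, crashes = 100, 5
agreed_tolerance = 0.01          # e.g., "no more than 1% of runs may leave the course"
observed_rate = crashes / runs
# 95% upper bound on the true crash rate (Wilson score interval).
z = 1.96
center = observed_rate + z**2 / (2 * runs)
margin = z * math.sqrt(observed_rate * (1 - observed_rate) / runs + z**2 / (4 * runs**2))
upper_bound = (center + margin) / (1 + z**2 / runs)
print(f"observed crash rate: {observed_rate:.1%}")
print(f"95% upper bound:     {upper_bound:.1%}")
print("within agreed tolerance:", upper_bound <= agreed_tolerance)

With these numbers the answer is clearly no, which is exactly the kind of predetermined pass-or-fail judgment Peter is describing.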

MATTHEW HEUSSER: Right.  So, “How good should it be?”  It’s an interesting epistemological problem.  You’re optimizing this and you’re going to say, “Our goal is to get a 3-percent increase in click-throughs, which will lead to a 1.5-percent increase in gross volume sales over a,” blah, “period of time.”  Where did those numbers come from?  Like many software schedules, someone just made them up.  But still, you have to have that goal to know whether you’re at the goal or not.  There are some heuristics, like you say, “We’re spending $1 million on this, so to make it worth our time we should get this much of a percentage; and, based on other peers in our industry and the research we’ve done, we think this percentage is reasonable.”  There are ways that you could make that percentage more realistic, and then you could click through and see whether you’re getting it; and if you’re getting it, “Test passed.”

PETER VARHOL: But, once again, you need to know going in that you’re not looking for right or wrong answers.  You’re looking for something that you would define, ahead of time, as acceptable in your problem domain.  I think that’s the true challenge for testers: “How do you determine that?”  We’re used to looking at known outputs.  We press a button, do an input, and get a known output.  We’re not going to get that known output here.  We need to know what’s acceptable ahead of time.
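
In test code, that shift from exact expected outputs to predetermined tolerances might look something like this pytest-style sketch; the metric, the data, and the thresholds are invented for illustration.

import pytest
# For ML systems we rarely assert "output == expected"; we assert that an agreed
# quality metric falls inside an agreed tolerance for the problem domain.
def mean_absolute_percentage_error(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
def test_demand_forecast_is_accurate_enough():
    actual = [120, 135, 150, 170, 160]        # stand-in holdout data
    predicted = [118, 140, 148, 165, 172]     # stand-in model output
    mape = mean_absolute_percentage_error(actual, predicted)
    # The acceptable error band comes from the problem domain, agreed before testing.
    assert mape <= 0.10, f"forecast error {mape:.1%} exceeds the agreed 10% tolerance"
def test_single_prediction_is_close_enough():
    predicted, expected = 151.7, 150.0
    # pytest.approx expresses "close enough" instead of exact equality.
    assert predicted == pytest.approx(expected, rel=0.05)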

MATTHEW HEUSSER: The more information we have about what is acceptable, the better we can do our job.

PETER VARHOL: Yes.

MATTHEW HEUSSER: Now, what we could do—I’ve frequently done this—is, “Here’s the data, management.  Is that good enough?  You tell me, because I think it’s okay.”

PETER VARHOL: That’s why I think these things need to be defined before you even start the testing process, to tell you the truth.

MATTHEW HEUSSER: Well, it is better if they’re defined before you start the testing process.  I can still go in there and I can give you an analysis.  I can tell you what I think.  We get in trouble when, “Okay.  Good.  Ship it.”  “We only got this result.”  “Yeah.  That’s the result I told you to get.”  “It’s not good enough,” because level 8 in management didn’t get what level 7 did.  But that’s just like “a testing problem” that we deal with every day.

PETER VARHOL: Right.

MATTHEW HEUSSER: So, before we get going, two things I wanted to get into:  One is this kind of cultural assumption that ML is magic and does awesome things.  The other one, which concerns me more, is that there is a popular-science-level belief that ML will be applied to testing.  The most common explanation I’ve heard is that, “We’re going to use ML algorithms to train software to perform quick attacks.  So, the cliché of the person who only does quick-attack-level exploratory testing, that role is going to go away,” and I’m just really sick of that.  I’ve been in this game for—well, not as long as some—a while.  “Blah role is going to go away,” I’ve been hearing that for a long time.  Now it’s, “AI is going to make exploratory testing go away.”  Right?  Before that it was, “Automation is going to make testing go away,” and then we found ET and said, “You can’t really automate that.”  Now, supposedly, with AI, you can.  I’m just skeptical of that, but I wanted to get your reactions.

PETER VARHOL: I have not heard a compelling case for using AI as part of software testing yet, especially to replace something.  I believe that there could well be one there, but I’ve not heard a compelling case yet.

MICHAEL LARSEN: Like so many other things, when we get down to it, there’s always the possibility that something can be automated out.  But the fact is that, (A), we’re not there yet; and the fact also is that, if this is going to be the appropriate approach, it will be applied each time we are able to automate or simplify something that is rote and repetitive.  Really, that’s what machines are best at.  They find the things that are rote and repetitive, and they allow us to focus on more interesting problems.  I believe firmly that there is still a tremendous amount of quality discernment needed, and we’ve still got quite a long way to go before we really exhaust all of the interesting problems that we as testers have the brain power for, before we’re going to be completely out of the loop on this.  You know, sensing whether or not we’re actually getting the right things.  That, all of us can do.  It probably does mean that we’re going to need to be a little bit more on the technical sphere of what we do.  Technical doesn’t mean that we’re all going to have to be software developers and live our lives in an IDE and that becomes our Test World.  It does mean that programmers are going to be getting more and more intimately involved in the testing aspects because of the development of these systems, and it also means that we as testers are probably going to get into more and more program-y aspects so that we can be effective.  It’s not that it’s going away.  It’s that it’s blurring.

MATTHEW HEUSSER: Yeah.  I think there is room for a tool.  Maybe it’s a browser plugin.  I’d go into like 100 customers, each with 1,000 bugs.  I’d look for all their front-end bugs, just front-end verification.  When you type this value into a text field, you get an error message.  Then I would put them into a spreadsheet, I would put them in categories, and I’d write software to count the number of things in each category and sort.  For the most common bugs, you could create some kind of software where it goes into a web browser, tries to overwhelm the software, tries to figure out whether or not it caused a crash or at least a nonsensical result, and then generates a bug report for you.  I think that’s probably 10 people for 1 year, so probably a $1 million project.  I don’t know how you’d make any money on it.  Sell it as a tool?
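
A crude sketch of the data-gathering half of that idea, with invented categories, keywords, and bug titles, might be as simple as the following.

from collections import Counter
# Bucket front-end bug reports into categories with simple keyword rules and
# count which buckets are most common. Everything below is invented for illustration.
CATEGORY_KEYWORDS = {
    "input validation": ["invalid input", "special character", "too long", "empty field"],
    "error handling":   ["stack trace", "500 error", "unhandled exception"],
    "layout":           ["overlaps", "truncated", "misaligned"],
}
def categorize(bug_title):
    title = bug_title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in title for keyword in keywords):
            return category
    return "other"
bug_reports = [
    "Form accepts empty field and shows stack trace",
    "Name field crashes on special character input",
    "Order button misaligned on mobile",
    "500 error when quantity is too long",
]
counts = Counter(categorize(title) for title in bug_reports)
for category, count in counts.most_common():
    print(f"{category:>17}: {count}")

The harder and more expensive half is the part that drives a browser and generates the attacks, which is where the ten-people-for-a-year estimate comes in.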

PETER VARHOL: Unh-unh.

MATTHEW HEUSSER: You’d have to sell it as a tool.  I mean, what would you sell it at?  Anything more than $100.00, you couldn’t sell it for.  It’s probably a browser plugin.  It’s probably free.  I think there’s space for that, but I don’t think it’s going to make ET go away.  It’s just going to make the tester’s job a little easier.

PETER VARHOL: Michael, I would argue that, rather than testers needing to understand development more, testers more critically need to understand the problem domain intimately.  By “understanding the problem domain,” I mean in a content sense but also in a mathematical sense.  They’re not going to be sitting next to the data scientists who are coming up with these non-linear, multi-variate algorithms and be expected to understand them, but I think that they need to understand, “What are the tolerances of the problem domain, and are the results they are getting homing in on those tolerances or are they diverging,” or whatever it might be?  So, I think that the tester of the future with Machine Learning Systems is going to have—I’ll say—an appreciation and an understanding of mathematics, in particular statistical analysis, but also an intimate understanding of the problem domain they’re working in.
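
One small way a tester could operationalize homing in on the tolerances versus diverging is to track the error metric across evaluation runs; the tolerance and the error values below are invented.

# Track the model's error over successive evaluation runs and flag whether it is
# trending toward the agreed tolerance or drifting away from it. (Invented values.)
tolerance = 0.05                                  # acceptable error for the problem domain
error_by_run = [0.21, 0.14, 0.11, 0.09, 0.08]     # e.g., validation error after each retraining
within_tolerance = error_by_run[-1] <= tolerance
trending_down = all(later <= earlier for earlier, later in zip(error_by_run, error_by_run[1:]))
if within_tolerance:
    print("Within tolerance: acceptable for this problem domain.")
elif trending_down:
    print("Not there yet, but converging: keep training or keep collecting data.")
else:
    print("Diverging: time to go back to the data scientist, as Peter says.")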

MATTHEW HEUSSER:  I would bet money that, if you went to Wall Street, where the quants are trying to predict the future, and you threw in a somewhat-ignorant, asking-dumb-questions tester who could do what Peter is talking about, like, “Let me run the algorithm.  Let me know what the tolerances are.  Let’s see if it can predict accurately.  I’m more black box-y than I am intimately in love with the algorithm,” you might have actually prevented some of the serious problems we’ve had in the past 10 years on Wall Street with algorithms that go wonky.  A classic example is, there were a ton of algorithms that said, “Housing prices always go up.  So, we can give out these crappy loans.  People default.  We’ll just repossess the house and resell it for more than we bought it for.  So, we can give out a lot of these crappy loans.  There’s no incentive for us not to.”  Eventually, there was a system effect where so many houses were on the market for sale that the supply was so high that prices went down, because that’s how supply and demand works.  If they’d only had a tester on that team (only had a tester) going, “Where’s your algorithm for supply and demand in here?  I don’t see it.  Where’s the part that looks at the number of open houses on the market over time?  There should be a variable for that.”

PETER VARHOL: Once again, they used—I’ll say—biased data, and they ended up with biased results.

PERZE ABABA: I would even argue further that you probably don’t even need a tester on the team to determine that particular case.  Somebody must’ve asked that question.  [LAUGHTER].  It’s like, “Come on.”

PETER VARHOL: I don’t know.

MICHAEL LARSEN: That’s our assumption.  We’re sitting here thinking to ourselves, “Somebody must’ve asked that question,” and ultimately it comes down to the fact that, “Maybe not.”  Maybe somebody didn’t ask that question.

MATTHEW HEUSSER: There’s a whole movie on that.  It’s called, The Big Short.  Yes, they were people who asked that question.

PETER VARHOL: [LAUGHTER].

MATTHEW HEUSSER: They were mostly—the employees—told to, “Sit down and shut up.  It’s above your pay grade.”  Not that any tester has ever been told that.

PETER VARHOL: [LAUGHTER].

MATTHEW HEUSSER: There were a few people who said, “I’m going to actively bet against you because you’re wrong.”  Those guys are mostly millionaires now.  Some of them are billionaires.  We have done and should probably do another show on the psychological side of saying things people don’t want to hear.  That’s part of the job, and it can even be blunt.  One team I worked with, segue for just a minute, “You’ve got to fix these bugs.  We’ve got to ship.”  “There are serious problems.”  “We’ve got to ship.”  “You don’t understand the customer impact.  You don’t understand the revenue.”  “Thank you for your input, but we made our decision.  Ship it,” and then there’s a negative revenue impact because people can’t click the order button under certain conditions and those conditions are high enough that it causes revenue impact.  The comeback was, “Why didn’t you advocate more strongly—

PETER VARHOL: [LAUGHTER].

MATTHEW HEUSSER: —to have the bugs fixed?”  “But, we did.  You told us to ‘sit down.’”  “You need to be empowered and realize that you have like—”  “What?”  So, yeah.  We could do a show on that.

PETER VARHOL: Yeah.  You know something?  Also, in every systems-oriented industry—in aircraft, in automotive, industrial—we have the concept of test pilots or test drivers.  Those are testers who are rock stars.  Why can’t we elevate software testing to that level?  We have people who, like John Glenn, like Alan Shepard, had the right stuff 50 years ago.  That’s a software tester today.

MICHAEL LARSEN: In a manner of speaking, yes.  However, I think the reason why the test pilots of yore were seen that way is that, for the most part, with a lot of the stuff that we are testing, there’s no real likelihood of one of us dying in the process of doing it, and that’s the big difference.  The reason why John Glenn was considered a hero is that those men put their lives on the line to do what they were doing.  The doctor that strapped himself to a rocket sled to test out the maximum G’s literally put his life on the line just to be able to get the data necessary so that we could see if human beings could survive past the sound barrier or could survive going into space.  Yeah.  That guy deserves to be considered a hero.

PETER VARHOL: You know, we as testers have to internalize the fact that software today is increasingly safety-critical, too.

MICHAEL LARSEN: Very true.

PETER VARHOL: Maybe it’s not our lives on the line, but ultimately it is some people’s lives on the line.

MATTHEW HEUSSER: So, I think I’ll wrap the ML Discussion there and we can move on to the Parting Thoughts and What We’re Working on Now.

PETER VARHOL: I think I’ve pretty much said what I’d like to say, especially with regards to Machine Learning and testing Machine Learning Systems in that I see the tester as intimately involved with the problem domain and also having almost an intuitive understanding of the analytical end as to what they’re trying to accomplish within that problem domain.

PERZE ABABA: If I could synthesize what I’ve learned, I really like how this conversation has kind of coalesced into the idea that, “You know what?  This is a technology that we should not be scared of, right?  This is not something that we’re going to be replaced by.”  Based on Nicholas Carr’s talk, this is augmentation.  This is something that can help us learn.  Peter aptly said that, “This gives you the ability to learn more about your domain.”  Isn’t that really why we’re performing testing, because we want to gather more knowledge and understand how our system behaves, whether from a quality perspective or a compliance perspective or whatever?  It’s us gathering data, and now we have these little assistants, whether you call it Supervised or Unsupervised Learning.  It definitely will give us insight.  I, for one, welcome our not-quite-sentient overlords, or rather, something that can help me be better at my job and bring back more value to the company that I’m working for.

MATTHEW HEUSSER: It’s leverage, right?  This is going to enable us.  If all of this comes out, and I think it’s going to be a year before this is valuable for production use, for most people this is going to help us do an hour and a half’s worth of work in one hour, using ML to help us on the test side.  On the testing-ML side, it’s just going to be a pile of work.  It’s going to be hard, and it’s going to be fun.  So, let’s talk about what everyone’s up to.  I think Jess and Perze have missed a couple of shows.  So, are you guys doing anything new and interesting that you want to talk about?

JESSICA INGRASSELLINO: I’m giving a talk, but it’s like next week.  So, it’ll be done by the time the Podcast comes out.

MATTHEW HEUSSER: What’s the talk on, and can you repeat that?  Your audio was cutting in and out.

JESSICA INGRASSELLINO: The talk is about testing end-to-end with Python.  So, talking about the unit test libraries, different ways to do integration testing, and then different ways to do UI testing—all within a Python structure.  It’s at the New York City Python Meetup.

MATTHEW HEUSSER: Cool.  I also understand you’ve been doing some writing lately?

JESSICA INGRASSELLINO: Like a beast, yeah.  [LAUGHTER].  Like, I can’t stop writing.  I have a lot of articles, kind of, you know, that are on their way out over the next several weeks.  So, it’s good stuff.

MATTHEW HEUSSER: Peter, it’s your first time on the show and you’re up to a bunch of stuff.  Please tell us about it.

PETER VARHOL: Sure.  So, I was just going to say that my work with my current employer has completed.  So, I’m back in the market looking for something substantial in the way of a day job or day contract, exploring the opportunities.  Thinking of moving from writing, technology, speaking, and things like that and more into the Data Science Realm and the Machine Learning Realm.  I’m giving several talks between now and the end of the year.  I’m at two DevOpsDays, Raleigh next month, and Edinburgh in October.  Also, speaking at QA&TEST in Spain in October.  This is a first for me after several years of trying, but I will be at EuroSTAR in Copenhagen in November.

MATTHEW HEUSSER: I’ve never been to EuroSTAR either.  It just hasn’t worked out.

PETER VARHOL: Same here.  This is the first year it worked out, and I’m really pleased to be there doing some writing, doing the conference stuff, and as I said, looking for something to pay for all of this.  I’m too young and energetic to retire.  So, trying to plug away here.

MATTHEW HEUSSER: Great.  That’s awesome, man.  I’m happy for you.  If you want to have something covered in the show, you could send an e-mail to [email protected].  QualiTest is our sponsor.  We’d love to hear your ideas for show material, questions, comments, concerns, or guests we should have—all of that.  Please send us a link.  But, that’s enough for today.  Thanks everybody for coming.  This has been a fantastic show.  I think we pushed harder and longer and better than average, and I’m really glad we did, on a tough subject.  So, thanks everybody.  We’ll see you soon.

PETER VARHOL: Thanks for inviting me.

MICHAEL LARSEN: Thanks a lot.

PERZE ABABA: Thank you.

JESSICA INGRASSELLINO: Thanks.

[END OF TRANSCRIPT]
