The Testing Show: Testing with AI and Machine Learning


Artificial Intelligence and Machine Learning are the hip buzzwords at the moment and if you listen to enough people, they are the hot new technologies that are poised to make software testers obsolete. Is that accurate? Well, yes and no. More of a “kind of but not really”.

Today, Matt Heusser and Michael Larsen chat with Daniel Geater and Jason Arbon to get to the bottom of this “AI and Machine Learning” thing. Is it real? It sure is. Is it going to transform testing? Quite possibly. Should testers be scared? Absolutely not. There are opportunities galore in this AI/ML world, and Daniel and Jason are happy to tell you all about them.

 


Panelists:

Matthew Heusser
Michael Larsen
Daniel Geater
Jason Arbon

References:

Test.ai (test.ai)
AI for Software Testing Association (aitesting.org)
Qualitest (qualitestgroup.com)

Transcript:

Michael Larsen: Hello and welcome to The Testing Show. This is episode number 63, for December 2018. I’m Michael Larsen, and today Matt Heusser and I are talking with Daniel Geater and Jason Arbon about artificial intelligence and machine learning as directly applied to software testing. With that, on with the show.

Matthew Heusser: We’re here to talk about AI in testing and testing AI systems. So let’s start with Dan. Artificial intelligence can mean a lot of things. There are movies about it that have fantastical concepts. Some of them turn into reality. Others remain science fiction. Qualitest has an AI practice. What does AI mean to you?

Daniel Geater: Interesting though those movies are, it does not mean Skynet to us. That's one of the common themes, the machines are gonna take over and all that, but that's not really where we're at with AI technology. Artificial intelligence kinda does what it says on the tin. Artificial is anything that is not naturally occurring, and intelligence you tend to think of as the ability to gain and retain skills. You could say a pocket calculator counts as an intelligent system, but we don't really think of that as true AI. You're talking more about things that make informed decisions: expert systems, learning systems, the kinds of systems that don't necessarily follow a specific set of prescribed rules put in by us. Those are the kinds of things we're talking about when we're talking about artificial intelligence.

Daniel Geater: Within Qualitest, we've got a global initiative to delve more into AI and software testing. AI is obviously a huge industry buzzword at the moment. It's gained massive prominence in the last 10 years, mainly because we needed the power of cloud computing to make AI available to everybody else. Before that, it was a plaything for the really big players that had enough data and processing power. But now everybody's got access to AWS or Azure, and everyone can train up AI models. The industry is gaining real momentum with it; since just after the millennium we've seen huge uptakes in active AI startups and AI investment. And it poses two very interesting avenues for software testing and QA. One of them is, how do you test AIs? They're not inherently rule-based. They're not deterministic. We don't necessarily know that when we put A and B in we're gonna get C out this time. The other side of it is that AI is really good at taking huge, disjoint sets of data and making very clear, informed decisions, and a lot of our testing processes can be thought of like that. So the other question is, how can we use AI to make us better testers?

Daniel Geater: Qualitest's practice is looking at both sides of that coin, and we're lucky enough to be seeing developments in both areas.

Michael Larsen: So let’s get to the meat of this. How do we go about testing AI systems?

Daniel Geater: First, a little bit about the problem with testing AI systems. The problem is effectively non-determinism. As testers, we've spent years running away from systems that can change their behavior at random; we've always avoided non-determinism. Now we've gotta break that muscle memory and allow for the fact that AIs do change their behavior over time, and it's not a behavior we can describe up front.

Daniel Geater: And what we're discovering is there are a few ways we look at this. First and foremost, you can start with behavioral experimentation: exercise the system and see what it makes of things at the end of the day. But by the time you've got people doing that, you're dealing with a delivered AI, and it's a bit like doing website testing solely through the front end. Yeah, you can do it, but you're not necessarily gonna see everything you want to see that way.

Daniel Geater: Beyond that, you can start to look at things like statistical methods, allowing for the fact that AIs draw from these huge sets of data. If we go all the way back to software testing principles, things like pairwise testing and single-data-point testing, you can look at statistically driving inputs to force the AI to behave in certain ways.

Daniel Geater: The problem you always get is that AIs have huge combinations of inputs, and the answers are no longer black and white. It's more about more right, less right; more wrong, less wrong. In a lot of AI testing, we're dealing with whether an answer comes within certain boundaries. So we drive the inputs and we say, within reason, my answer should look something like this. If I'm using a driving app, I do wanna know that it is actually going to take me home via a fairly direct route. I can't necessarily say which route, 'cause I haven't got real-time insight into whether this road or that road is better at this time on this day, but I certainly know driving me home on a detour via three different cities is probably not the quickest way to do it. It's that level of trying a big sample, trying different ways through the data. It is manual-heavy at the moment, but there are ways you can automate it when you start to see how to bound the way an AI learns, and how to exercise enough different inputs to check that the outputs come within a certain area.
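
To make that "bounded output" idea concrete, here is a minimal sketch of how such a check might look in code. The route_home() function is a hypothetical stand-in for the routing system under test, and the 1.5x tolerance is invented; the point is only that we assert an envelope around the answer rather than an exact route.

```python
# A minimal sketch of "bounded output" testing for a non-deterministic system.
# route_home() is a hypothetical stand-in for the routing service under test,
# and the 1.5x tolerance is invented; adjust both for a real system.
import math
import random


def straight_line_km(a, b):
    """Rough straight-line distance between two (lat, lon) points, in km."""
    return math.dist(a, b) * 111.0  # ~111 km per degree is close enough for a bound


def route_home(start, home):
    """Placeholder for the system under test: returns total route length in km."""
    raise NotImplementedError("call the real routing service here")


def test_routes_are_reasonably_direct(home=(51.5, -0.12), trials=200, tolerance=1.5):
    """Sample many start points and check every answer lands inside an envelope.

    We cannot assert the exact route, because the AI is free to vary it, but a
    route much longer than the straight-line distance is clearly wrong.
    """
    random.seed(42)  # reproducible sampling
    for _ in range(trials):
        start = (home[0] + random.uniform(-0.5, 0.5),
                 home[1] + random.uniform(-0.5, 0.5))
        lower_bound = straight_line_km(start, home)
        actual = route_home(start, home)
        assert actual <= tolerance * lower_bound + 1.0, (
            f"Route from {start} was {actual:.1f} km; "
            f"expected roughly {lower_bound:.1f} km, "
            f"allowed up to {tolerance * lower_bound + 1.0:.1f} km"
        )
```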

Michael Larsen: So, Jason, the same question.

Jason Arbon: Yeah, basically what Dan said.

Michael Larsen: Then let me flip this, then. So, Jason, I’ll ask the question a little bit of a different way. How can we use AI to do better testing?

Jason Arbon: How to use AI to do software testing. That's actually every tester's fear: that AI will just wake up tomorrow and start testing before they get into work, and they won't have a job. The reality is that people can work hand in hand with AI and machine learning. The first thing to note is that most companies' marketing, and most people's fears, are wrong. Most people are concerned that AI is generating all the test cases and test coverage, doing the thinking part of testing. The reality is that AI today can really only do the thinking that a human can do in about a second. So if a human can make a decision in about a second or less, a machine can be trained to do something very similar. But if you ask a machine to do something that a human takes a minute or so to cogitate on, that's something the machine won't be able to do for probably a very, very long time.

Jason Arbon: So what we do at Test.ai is take a different approach, where we assume that the humans are actually intelligent; the humans know what the behavior of the app is supposed to be. But we use machine learning and AI to do all the hard work. We basically let the humans express, “Hey, I want a test case that goes to the shopping cart in the application and makes sure it's empty.” And then we write a bunch of AI, combining machine learning, reinforcement learning, and a bunch of supervised learning techniques, to teach the robot how to recognize what a shopping cart is, how to know when it gets to a shopping cart page, and how to interact with the shopping cart icon, so that humans don't have to write all that code or do all that grunt work or click and tap everywhere. The machines do that.
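
As a rough illustration of that split, here is a hypothetical sketch, not Test.ai's actual API: the human writes the intent, and an ML-backed locator does the recognizing. The Screen and Element classes and the classifier_score() stub are invented stand-ins for a trained UI-element recognizer.

```python
# A hypothetical sketch of "human writes the intent, AI finds the elements."
# Screen, Element, and classifier_score() are invented stand-ins, not a real API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Element:
    screenshot: bytes

    def tap(self) -> None:
        ...  # would send a tap to the device at this element's location


@dataclass
class Screen:
    elements: List[Element] = field(default_factory=list)
    texts: List[str] = field(default_factory=list)


def classifier_score(image: bytes, label: str) -> float:
    """Stand-in for a model trained on thousands of labelled UI screenshots."""
    raise NotImplementedError("backed by a supervised UI-element classifier")


def find_element(screen: Screen, label: str) -> Element:
    """Pick whichever on-screen element the classifier rates most like `label`."""
    return max(screen.elements, key=lambda e: classifier_score(e.screenshot, label))


def test_cart_is_empty(app) -> None:
    """The human-authored intent: open the shopping cart and make sure it's empty."""
    home = app.launch()
    find_element(home, "shopping cart icon").tap()
    cart = app.current_screen()
    assert any("empty" in t.lower() for t in cart.texts), "expected an empty cart"
```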

Jason Arbon: The real path over the next several years of how AI is gonna assist humans is that it'll do all the basic work and humans will do the thinking.

Daniel Geater: To wrap up on that one point: one of the fears you get at the beginning is, we're gonna flip a switch on AI tomorrow and it's gonna do my job. I view that very much as analogous to when we first started doing automation. Everyone was convinced that automation was gonna steal the testers' jobs, and it never happened. And it didn't happen for the same reason. Automation lets us take a very basic set of repeatable actions and do them very effectively and very quickly, but we realized that testers needed retraining to learn how to maintain the automation packs. With AI, it's kinda similar. AIs do make decisions; that is their job. But they make the kind of decisions that we make fairly trivially. They sometimes do it on much bigger data sets; that's the more specific thing, where you're trying to take 100 different types of input and come up with one answer. But generally speaking, they're very, very good at solving one very, very specific problem. We don't have general AI, the AI that is a living, creatively thinking, bordering-on-Terminator style of robot. It just doesn't exist yet. It may exist one day, but it's certainly nowhere near reality now.

Michael Larsen: All right. Thanks very much. Matt, you’ve got a series of questions of your own.

Matthew Heusser: Yes. Speaking of which, I'm gonna push on Jason a little bit more. How can we use AI to do better testing? Okay, AI is good at the sort of one-second decisions and humans are better at the longer decisions. Okay. How do I actually use AI to do better testing? What do I do?

Jason Arbon: Basically, there are a lot of different techniques. If there's something in your testing task that's very repetitive, it's often a little bit of a fuzzy kind of problem. How do you recognize a shopping cart in an application, for example? You can give it thousands of pictures of shopping carts, and then you can train a neural network, and you can say, “Hey, go find shopping carts,” and it will find shopping carts in your application. You don't know what all the bugs in your application are gonna look like ahead of time. You don't know all the error dialogs that your app is gonna throw up in advance. But what you can do is take a bunch of pictures of your application where there's an error dialog showing, and a bunch of pictures of your application where everything looks fine and dandy, and you can train a neural network. Then, after you've trained it, given an arbitrary screen in the future, it will very likely be able to predict whether that's probably an error message or the app is probably behaving correctly.
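
Here is a minimal sketch of what that kind of screenshot classifier could look like, written with TensorFlow/Keras as one common way to do it rather than any vendor's implementation. The folder layout (screenshots/clean and screenshots/error) and file names are hypothetical.

```python
# A minimal sketch of a screenshot classifier that flags error dialogs.
# The folder layout and file names are hypothetical.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Folders are read alphabetically, so "clean" becomes class 0 and "error" class 1.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "screenshots",
    label_mode="binary",
    image_size=IMG_SIZE,
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the screen is an error
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# During a later test run, score a fresh screenshot:
img = tf.keras.utils.load_img("latest_screenshot.png", target_size=IMG_SIZE)
batch = tf.expand_dims(tf.keras.utils.img_to_array(img), 0)
print("Probability this screen is an error dialog:", float(model.predict(batch)[0][0]))
```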

Jason Arbon: There are also techniques that are just an API call: unsupervised learning, clustering, and algorithms like that. It sounds a little fancy-pants, but it's an API. I guess the key thing I would convey to people is that most of these things are actually just an API call. It sounds like there's a bunch of fancy math, but you just have to make an API call and have a big enough machine. You can give it a bunch of log files from a production server, for example, where you don't know what the good or bad log file patterns are. You might feed the clustering algorithm a count of how many times error messages appear, or timing information, or the number of times a particular string shows up in these logs over time. You can put that data into a vector, and put those vectors into a matrix. If you know anything about building up matrices of data points, you throw that matrix into an API call, something like K-Means clustering, and you say, “Give me two different groups,” or give it any number between one and n.

Jason Arbon: The machine will automatically look at all that data like a human would, and say, “Oh, all these machines seem to be behaving the same recently, except these three machines over here in the corner. They're behaving strangely.” The machine doesn't have to know in advance what to look for.

Jason Arbon: ‘Cause a human would say, “If the error rate was above a certain threshold, or if the CPU utilization was below a certain threshold, then flag those machines.”

Jason Arbon: With unsupervised learning you can take raw data, performance data, functional data, log data, pass it to an unsupervised algorithm and it will just say, “These are the ones that are behaving differently.”

Jason Arbon: So the human can analyze and say, “Oh, these are all having the same kind of problem. These are misbehaviors in our system.”

Jason Arbon: The key thing I would tell people to think about is: if you have data for a particular aspect of your testing problem, any kind of input-output relationship, supervised learning is probably an interesting approach. And if you don't know what to do with the data, but you have a lot of it, K-Means or other unsupervised learning is often an interesting avenue.
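
As an illustration of the clustering workflow Jason walks through, here is a minimal sketch using scikit-learn. The feature columns, counts, and server names are invented for the example; the point is simply that the per-server vectors go into a matrix and K-Means does the grouping.

```python
# A minimal sketch of clustering log-derived feature vectors with scikit-learn.
# The feature columns, counts, and server names are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per server: [error_count, mean_response_ms, "timeout" occurrences]
features = np.array([
    [12,  180,  0],
    [15,  175,  1],
    [11,  190,  0],
    [240, 950, 37],   # these two rows will stand out
    [9,   170,  0],
    [13,  185,  1],
    [255, 990, 41],
])
servers = ["web-01", "web-02", "web-03", "web-04", "web-05", "web-06", "web-07"]

scaled = StandardScaler().fit_transform(features)        # stop one feature dominating
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# Whichever cluster is smaller is the "behaving differently" group.
minority = min(set(labels), key=list(labels).count)
for name, label in zip(servers, labels):
    if label == minority:
        print(f"{name} is behaving differently from the rest")
```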

Jason Arbon: Lastly, there's a third avenue, which is when you know what you want the machine to do, but it's too hard to figure out how. Like Dan was saying, you don't know the best way to get home; you might know how to tell the car to drive to the grocery store on the way, but optimizing the whole route is a hard problem for humans. You do know the beginning and the destination, though. So you can have a machine train itself with tens of thousands of iterations, or epochs, to figure out how to get home. And after ten thousand iterations playing in a graph of the world's streets, it'll figure out some pretty good ways to get home.
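
To ground that reinforcement learning idea, here is a toy Q-learning sketch on a tiny invented road graph: let the agent run thousands of episodes and a sensible route emerges without anyone hand-coding it. The graph, rewards, and hyperparameters are all made up for illustration.

```python
# A toy Q-learning sketch of "figure out how to get home" on an invented road graph.
# Every value here is illustrative; the point is that the route is learned, not coded.
import random

roads = {                      # node -> reachable neighbours
    "office": ["mall", "highway"],
    "mall": ["office", "grocery", "highway"],
    "highway": ["office", "mall", "home"],
    "grocery": ["mall", "home"],
    "home": [],
}
GOAL = "home"

q = {(s, a): 0.0 for s, neighbours in roads.items() for a in neighbours}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(10_000):
    state = "office"
    while state != GOAL:
        actions = roads[state]
        if random.random() < epsilon:
            action = random.choice(actions)                     # explore
        else:
            action = max(actions, key=lambda a: q[(state, a)])  # exploit
        reward = 0.0 if action == GOAL else -1.0                # every extra hop costs
        future = max((q[(action, a)] for a in roads[action]), default=0.0)
        q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
        state = action

# Read the learned route back out greedily.
state, route = "office", ["office"]
while state != GOAL:
    state = max(roads[state], key=lambda a: q[(state, a)])
    route.append(state)
print(" -> ".join(route))   # e.g. office -> highway -> home
```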

Jason Arbon: So those are just some interesting examples of how to apply different aspects of machine learning to real-world testing problems. You've just gotta make sure you have some data to play with.

Matthew Heusser: Well, there are a couple of things there, and I'm gonna make an analogy; see if you agree. Apple is particularly good at popularizing things that have already been done by other people, but where maybe the pieces don't fit together yet. The MP3 player: yeah, MP3 players existed before the iPod, but Apple really made it easy to use for everybody. And I would argue that AI... we say, “Oh, you just have to gather the data and throw it at an API,” but it's really at that pre-iPod level of popularity. I think your company, in a very specific domain, is in the business of sort of making the iPod. I don't know if you'd agree with that. Could you tell us a little bit about that?

Jason Arbon: In fact we’ve added a click wheel last week. So it’s super easy to use.

Jason Arbon: If you think about what we've done at Test.ai... internally we describe it as the atom smasher: taking a search engine that builds indexes of the world's apps, web or mobile, and smashing that index, the front end and the back end, into a domain-specific machine learning training system that is generalized for mobile apps but can be specialized to a particular web page or a particular mobile app. That's kinda what we do. And the way to think about it is, we set out with the thesis that AI is the first transformative technology we've seen in testing in about 15 or 20 years, 'cause testing hasn't changed in really that long. To most folks it's basically the same thing: Selenium's pretty much the same, Appium's the same stuff, it just works on mobile apps. So we thought, how can we reimagine or rebuild test frameworks with an AI-first approach? That's fundamentally all we've done. On the surface, from an API perspective, it looks very similar to existing frameworks, but under the hood it's radically different in terms of implementation.

Matthew Heusser: Nice. I appreciate that.

Daniel Geater: And I think that's where you'll find AIs will generally come to the forefront of supporting testing. There's a limited selection of test tools that are AI-powered and truly stand up to the name. There are a couple of vendors that say, “You know, we've got AI,” but it's not as advanced as some others. The ones that truly do, you usually find they're using AI to enable a bigger process. It's not like you're saying, “Just go and do all my testing for me.”

Daniel Geater: Going back to Jason's original point, under the hood all of these AI algorithms are effectively very powerful number crunchers. Whether it's a classifier, a predictor, an analyzer, or a clusterer, they're all doing the same thing: they're taking numbers in, doing a statistical operation, and spitting numbers back out. If you can think of something you do in your day-to-day work, whether it's a decision that's repetitive or something that's a bit inefficient, and you can break it down into a discrete data structure, it's actually quite easy to start seeing ways you can apply an AI to it.

Daniel Geater: Now, we've already done a lot of the donkey work on this, in that we've been keeping test assets in structured systems for years. We've been rigorously maintaining defect trackers and test case trackers. Well, some of us have, some of us haven't. These are exactly the kinds of assets we can expose to AI. So I think what will happen is there'll be an explosion in the next few years as people start to realize how to commoditize AI learning, and they'll build on this.

Daniel Geater: For a more practical example, something Qualitest is driving an initiative on at the moment: we're using AI to plan testing better. We learn from how your testing has gone in the past to build a better-prioritized test execution schedule, to try to optimize the order of testing and find the defects first, or to identify areas that aren't well tested, or flaky tests that need a bit of optimization. So that's one of those common testing problems, usually falling under test management and test pack hygiene, that can be done this way.

Daniel Geater: But you've also got the lower-level applications, like Jason's company's product, which is: given this goal, can you find me the ways through the app to achieve it?

Daniel Geater: And you've got the ones that are somewhere in the middle. You're starting to see organizations doing test case generation based on models of the system under test; it's model-based testing taken to a new level.

Daniel Geater: But there's everything in between: all of these processes where we can break our testing down into a discrete set of data. It's very easy then to start seeing ways we can either optimize it using those supervised learning algorithms Jason was talking about, or find new patterns we didn't know were in our test data, like identifying the three nodes in the network that always fall over when load goes beyond a certain point. And then we go and look at those more specifically.

Jason Arbon: I think what people don't realize is that my work has taken AI to the extreme end: what if we rebuilt everything, the machinery and the AI, from the ground up? The most interesting thing is that what AI does is abstract the logic of testing from the implementation of the application under test. The beautiful thing is that it's very expensive to build the first test case, but because it's abstracted, meaning it can recognize tens of thousands of different shopping carts in apps, you only have to write one function that says, “Click the shopping cart,” for example, and it's reusable across many, many applications and different platforms. The power of AI, I think, in the near term is going to be the reuse of test artifacts. Today, if you have two test teams that both want to run an automated test that clicks the shopping cart, they both open up the app in parallel, they start with an empty Python or Ruby file or something, they find the XPath or the magic ID that their developers chose for their part, they hard-code that value into their automation, and then they can run it. That's a very expensive way to write test cases.

Jason Arbon: What AI's gonna enable in the near term is that one person can write that shopping cart test case on their app, and then they can take that same logic and AI and ML will do the work of mapping it automatically to another application. So we'll start to see the beginnings of test case and test artifact reuse across applications, and really across platforms. And that's really gonna be the next big power-up, I think, that AI's gonna bring us. Today, automation is a very expensive technique to apply to a single application: you write a couple of lines of code and then you have to maintain it and fix it forever. The power, I think, is in reuse. And it will help turn testing into more of an engineering discipline, where we have toolboxes and reusable infrastructure and reusable patterns.

Daniel Geater: I think there's gonna be a really cool side effect as well, to my mind, particularly from the test perspective, as we try to apply AI more and more. Because we will, right? It's a toolkit. It's there. And if we don't use it, our competition's gonna use it. Devs are already rolling it into products, so on the test side we need to know what to do with it now that they've got it. I think what's gonna happen is much the same as when we really got going with automation: we saw the rise of the developer in test, the idea that you've got someone with a full suite of programming skills who is specifically focused on quality and quality engineering on a product.

Daniel Geater: My personal belief is the same thing is gonna happen with AI and ML when it comes to the test process. I think we are gonna see, in the next handful of years, the concept of the data scientist in test come in. Because when you really wanna get hands-on with how you're gonna apply AI and ML to your testing, or with the best techniques for truly exercising a model to check it behaves right, you're in the realm of data science skills. And I think we're gonna see a new parallel role come up for the people who are ready to be the first to write those assets, the ones capable of defining the strategy, and then all the other projects follow through.

Michael Larsen: Nice.

Jason Arbon: Dan's very right that there's gonna be this kind of merge from the top down and the bottom up, where testers become elevated. The key thing to think about with AI is that you don't have to write the code. If you just have the input and output description of your problem, the machines will build the code for you. That's the amazing thing.

Jason Arbon: What software engineers don't realize is that they're the ones whose job is gonna be most disrupted. Because most software engineering is: hey, you've got a specification, here's the input and here's the output, go build it. Today they go out and hand-code it. In the future, much of that functionality will just be training AI and ML systems to perform those tasks. And a tester is the perfect person to do that job, to be the software developer and engineer of the future. Because they're really good at, guess what, gathering a bunch of input examples and a bunch of output examples, and making sure that the system behaves as expected.

Jason Arbon: I realized when I dropped in at Google and looked at the research team over there, guess what, there weren't any testers. And guess what I found out: all those engineers on the search engine that made quite a bit of money for Google, their titles weren't Software Engineer. I moved from Microsoft, where I was a software engineer, and au contraire, when you land at Google, they're all called Software Quality Engineers. They're basically testers in disguise. And those testers in disguise were the most highly paid, most beloved, and most worshiped engineers at Google. They just couldn't call themselves testers, 'cause they would lose another 100, 200, 300K in salary. But they're really just doing a testing job, training these machines and these very complex systems. It's really just a glorified testing job.

Jason Arbon: So testers have an opportunity not just to move up and be an AI-based tester, but to take their testing skills and move them directly to the top of the pack by learning how to train AI and machine learning systems, because it's all about inputs and outputs and making sure that quality is maintained.

Matthew Heusser: So I'm gonna step in there with what I think is a really important little bit of etymology. I spent three years at a Silicon Valley startup funded by Draper Fisher, and I worked with Ken Pier. Michael used to work with Ken Pier, too, at Socialtext. And Ken was at Xerox PARC for 30 years, and he worked on the second personal computer ever created, the Dorado, I think it was. And what he told me was exactly what Jason just said. If you were a tester, that meant you physically soldered boards together, or at least tested those soldered boards, and the pay was blue collar. It was low. It was hourly. So that software quality role was invented to say, no, this is a professional person who can earn a salary, and that's because most people in Silicon Valley don't solder boards together any more. So I think it's absolutely true. And it's kind of amusing that in the 21st century we have this big movement of, “I'm not a quality guy, I'm a tester, and you can't guarantee quality; I just do testing.” Yeah. So that switch was just wordplay that happened to get people a higher salary. Some programmers are called architects; there are all kinds of reasons we do things like that. So it's neat that it came up here.

Matthew Heusser: Back to Dan. Let's talk specifically about some of the AI projects you've had to test. I know you can't name customer names, but I'm hoping you can talk about what the systems did that you were testing.

Daniel Geater: So we've worked on some shopper recommendation systems. “I want this; can you find me equivalent things? I wanna purchase a particular outfit; can you find the things in stock that look this particular way?” In other words, “I've got an image of the outfit that I wanna look like. Can you find me the clothing that you have in stock that would look this way?” That was pretty cool.
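
As a rough sketch of how that kind of image-based matching is often built (a generic approach, not the client's system), you can embed images with a pretrained network and rank the catalogue by similarity to the shopper's photo. The file names below are hypothetical.

```python
# A generic sketch of image-based "find items that look like this" matching.
# File names are hypothetical; this is not any client's actual system.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier head, keep the embedding
backbone.eval()
preprocess = weights.transforms()


def embed(path: str) -> torch.Tensor:
    """Return a unit-length embedding vector for the image at `path`."""
    with torch.no_grad():
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(backbone(img), dim=1).squeeze(0)


query = embed("outfit_photo.jpg")                          # the look the shopper wants
catalogue = ["sku_001.jpg", "sku_002.jpg", "sku_003.jpg"]  # photos of items in stock
ranked = sorted(catalogue, key=lambda p: float(query @ embed(p)), reverse=True)
print("Closest-looking items in stock:", ranked)
```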

Daniel Geater: The other ones would kind of give the game away. It would be pretty obvious who they were for.

Matthew Heusser: Fair enough. I’m glad we came up with one.

Daniel Geater: I still have to double-check. [Editor's note: they were OK with it.] I may need to ask you to edit that out, but I think we should be okay with that.

Matthew Heusser: And can you tell me about how you’ve used AI to test?

Daniel Geater: So this comes back to what we were talking about before, the optimization engine. One of the problems that we find, and I'm sure everybody listening can relate to this, is: “I've got X thousand tests, and I'm not gonna get them all done.”

Daniel Geater: And obviously, historically we'd say, “Well, cool, we'll do risk-based testing. We'll do priority-based testing.” But what you discover is it gets harder and harder to do that prioritization correctly and still keep it up to date and still keep looking at it. Quality systems are very much a diaspora, you know? In every organization you go to beyond a certain size, you will inevitably find that there is more than one quality system. They know they shouldn't. They know they're not supposed to. But they do it anyway. And it gets really hard to track when you've got content that's in ALM here, another tracker there, and Excel over there, and you discover you've got one or two people who are the go-to guys. They're the ones that know how to do this, because this is where the testing always fails. They're the ninjas, or the subject matter experts, or whichever flavor you wanna call it.

Daniel Geater: One of the things AI's really good at is taking a whole bunch of data, making a calculated decision, and getting rid of the reliance on that ninja or that expert. What we're working on internally is an AI that can read all of this data and help you prioritize your testing. 'Cause you know you're not gonna get 200,000 tests, or whatever it is, run in time for the release next week. We use our AI to turn around and say, “These are the ones that are actually likely to find you a problem. These are the areas where you're almost certainly gonna have a defect, based on where this kind of change to the code base, or this kind of release, has caused problems in the past. You need to focus your testing here.” You don't just need to go and ask Fred where he thinks it should be tested 'cause he was there the last time we did this. So that's one of the things that we're driving quite strongly internally: working out how you make better plans and take some of the headache off the test managers.
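
Here is a minimal sketch of what that kind of history-driven prioritization can look like, written as a generic example rather than Qualitest's internal implementation. The CSV files and column names are hypothetical: the idea is to learn from past runs which tests tend to fail for a given kind of change, then run the riskiest tests first.

```python
# A minimal sketch of history-driven test prioritization with scikit-learn.
# The CSV files and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# One row per (test, past release): features about the test and the change,
# plus whether the test failed on that release.
history = pd.read_csv("test_run_history.csv")
feature_cols = [
    "files_changed_in_component",   # how much the covered component changed
    "test_recent_failure_rate",     # how often this test failed recently
    "days_since_last_failure",
    "test_duration_seconds",
]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history[feature_cols], history["failed"])

# For the upcoming release, score every candidate test and run the riskiest first.
upcoming = pd.read_csv("upcoming_release_tests.csv")
upcoming["failure_probability"] = model.predict_proba(upcoming[feature_cols])[:, 1]
prioritized = upcoming.sort_values("failure_probability", ascending=False)
print(prioritized[["test_id", "failure_probability"]].head(20))
```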

Matthew Heusser: Thanks, Dan. I know you have clients that care about their internal IP and you’ve been as open as you can be and I appreciate it.

Matthew Heusser: Jason, you've got a tutorial on AI. I was curious if you could tell us a little bit about that. Is the content open? Can people go download it somewhere? And what will people know when they leave? Can they really go do it when they leave, or will they just have a good familiarity? What level is that content?

Jason Arbon: My buddy Tariq King and I... Tariq's over at Ultimate Software these days; he's a lot smarter than I am, he actually knows math and things like that. Basically, we have a half-day and a full-day tutorial. We're not really teacher-tutorial people, but there are so many testers that wanna know about AI and how to apply it, so we put a little program together. It's a half-day or a full-day thing, like I said, and we teach people the basics of machine learning and AI, using real-world examples. Toward the end of the course, we start showing how it applies to different testing problems: things like, how do you build classifiers and predictors? How do you use reinforcement learning? How do you use unsupervised learning? And how does it apply to very specific testing problems of today?

Jason Arbon: And to be very frank, I think the reality is that probably only some fraction of those people go back and actually try to apply it. Most people are just happy to have understood it; it got their fears out of the way, and it helped them understand the basic math and the basic ways that they can apply AI. Managers especially want to get a feel for it today.

Jason Arbon: But there's still not a lot of uptake in the testing community, to be frank. That's why the tutorial Tariq and I put together is about teaching people how to use the stuff, how to apply it, and demystifying it. It's a lot of work to keep up and do these presentations all the time, so they're super hands-on, super intense, and people do walk away with something. People call me later, they email me, they message me and say they're working on problems, and I work with them to try to figure out how to bootstrap some machine learning and AI stuff. It's hard to go from zero to a full AI application in your software testing world in six hours or so.

Jason Arbon: As for the content, I blame... he's my buddy, Tariq King, so I can give him trouble. We've got 90 slides, some exercises, and some other things to play with to make it feel real. But we haven't open-sourced it yet. The intention is to open it soon; Tariq just hasn't gotten around to it. Tariq's a perfectionist. I would just share the slide deck, but Tariq actually wants to take the content and make it into an interactive site so people can guide themselves through it, with little mini quizzes to make sure they've learned the material. So I hope to have that sometime soon. It depends on Tariq's quality bar.

Matthew Heusser: Okay. And people can learn more about Test.ai at test.ai.

Jason Arbon: People can get a flavor of how Test.ai works there. The one caveat I would add is that right now the company is focused on customers that have more than 50 or 100 applications, where there's extra value at scale, so we're really kinda focused on that area right now. If you have an individual app and you wanna use Test.ai, I have to disappoint half the planet today: we won't be able to support those kinds of folks until the middle of next year.

Matthew Heusser: Okay. Fair enough. Dan, where can people go to learn more about what you’re up to, your practice?

Daniel Geater: We are actually in the process of putting together some new content for Qualitest's main site, so watch this space. That should be live quite soon.

Matthew Heusser: Okay, if we get any updates, they’ll be in the show notes. Jason, where can people go to learn a little bit more about you and what you’re doing besides the commercial site?

Jason Arbon: Tariq and I created a thing called AITesting.org; it's the AI for Software Testing Association. It's a place where we distribute and disseminate white papers, people's insights, and new work on applying AI to software testing problems. You can just sign up, it's a mailing list, and get the latest information pumped straight to your inbox.

Matthew Heusser: Thanks, Jason. I’m really pleased with how this came out.

Jason Arbon: Cool. Hope I didn’t embarrass anybody.

Matthew Heusser: I understand your skill set a little bit differently now. I’d say you’re a popularizer, and that’s a really important job in this community. So it’s all good.

Jason Arbon: Oh, so I’m like Britney Spears.

All: [laughter]

Jason Arbon: Well actually, to be frank, if I can do it, other people can do it. And I feel like this is a revolution that’s coming. It’s not just moving to web or moving to mobile, this is gonna change the way we do testing fundamentally. And I just wanna make sure that people have had an opportunity to catch the train before it runs through town.

Matthew Heusser: Okay. Thanks, Jason. Thanks, everybody, for coming. And we’ll be seeing you on the internet.

Michael Larsen: Right on.

Jason Arbon: Thank you very much.

Matthew Heusser: Okay. See ya.

Michael Larsen: That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack Channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at TheTestingShow(at)QualitestGroup(dot)com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen and moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to BE a guest on the podcast, please email us at TheTestingShow(at)qualitestgroup(dot)com.

Thanks for listening and we will see you again in the new year in January 2019.
