Let’s Get Continuous

June 19, 3:52 PM

Panelists

Matthew Heusser
Michael Larsen
Nikolay Advolodkin
Transcript

Michael Larsen (INTRO):

Hello, and welcome to The Testing Show.

Episode 147.

Let’s Get Continuous.

This show was recorded on Wednesday, May 8th, 2024.

Do you practice Continuous (fill-in-the-blank)? We’ve heard and seen many companies and teams profess to be working with CI/CD, but what they ended up with has often been lacking or, at the least, missing out on the promise. To that end, Matthew Heusser and Michael Larsen welcome Nikolay Advolodkin from Sauce Labs to talk about improving testing practices along CI/CD processes and implementing real solutions.

And with that, on with the show.

Matthew Heusser (00:00):
Welcome back to The Testing Show. Once again, we have something a little bit different this time. We’ve got Nikolay Advolodkin. Welcome to the show, Nikolay. Nikolay is a staff developer advocate at Sauce Labs, and if you’ve been around software testing and you’ve done a Google search, you’ve probably run into Sauce Labs. They were founded by co-authors of the Selenium project and were offering cloud-based hosting for the Selenium server a couple of decades ago. Since then, they’ve only grown to provide infrastructure, and along that journey, Nikolay helps companies learn about that. But there’s so much more. I’m getting it all wrong, so I’ll just stop now and let you tell us a little bit about you and what you do.

Nikolay Advolodkin (00:47):
Yeah, sure. Thanks so much, Matt. I started my tech journey in 2008 with automated software testing and really enjoyed it… well, I started with manual testing, but transitioned into automated testing and just really enjoyed that journey and the growth and the change of the market. In 2015, I was super passionate about it and started my blog, Ultimate QA, where I began teaching people about a lot of the mistakes that I’ve made, in hopes that others don’t have to make the same mistakes and can learn and help their careers. I do training there and workshops and video tutorials and so on. In 2018 I started working at Sauce Labs as a solutions architect. That was a really cool role because it was customer facing, and my job was basically helping customers implement automated software testing, including Sauce Labs, but most of it was around helping customers implement better practices around automated software testing: figuring out how to do it better, how to implement CI/CD, how to remove bottlenecks, and so on. And today I’m a developer advocate, so my role is more focused on the community: taking all the knowledge that I’ve gathered over the years, sharing it with the community, providing value back to the world. And yeah, that’s what I’m doing today.

Matthew Heusser (02:13):
Thank you, Nikolay. And of course, as always, we have Michael Larsen, show producer and showrunner.

Michael Larsen (02:18):
Glad to be here as always. And I guess, Nikolay, not only am I familiar with Sauce Labs, I’ve actually used it on numerous occasions, but it’s been a few years. So what would you say to somebody who asks, “Hey, what is Sauce Labs up to currently?”

Nikolay Advolodkin (02:32):
Yeah, great question, Michael. You’re right, and as Matt talked about, historically, when Jason Huggins co-created the company, that’s what it was all about. I actually just had him on my podcast, Test Automation Experience, and he told me the acronym of SAUCE, why it’s called Sauce. It was Selenium Automation Under Cloud Execution… SAUCE. That’s what it stands for. That’s what the whole premise is: functional automated browser testing with Selenium. We’ve added Appium since then, and as you guys know, there are so many more types of testing than just functional browser testing. So we’ve added visual testing capabilities, because nowadays it’s very important how our browsers and devices render things; that’s probably where most of the issues actually occur, CSS styling and rendering issues. It’s not so common that you’ll encounter functional issues in production. Sometimes you do. We added accessibility testing. Front-end performance testing is a big one now too, with companies trying to rank on Google: your website coming up really quickly and not having lags and content layout shifts.

(03:42):
That’s also probably pretty important if you want to rank on Google. We started integrations with Playwright and Cypress, so you can bring those and run them in our cloud as well, against our infrastructure. And we’re working on an AI-based solution, but we don’t want it to be another fake AI solution, so we’re trying to make something that’s actually good. The idea there is that we’re going to try to create automated test scripts from natural language, help you add those to your test suite, and then run them in our cloud. That’s coming later in the year, end of the year, probably early next year; more to be shown in the future.
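For anyone who hasn’t wired a test up to a cloud grid before, the core idea Nikolay describes is that an ordinary WebDriver test points its commands at a remote endpoint instead of a local browser. Here is a minimal sketch in TypeScript using the selenium-webdriver package; the grid URL and capability values are placeholders rather than Sauce Labs specifics, so check your provider’s documentation for the real ones.

```typescript
// Minimal sketch: run a functional browser test on a remote cloud grid
// instead of a local browser. The endpoint URL and capabilities are placeholders.
import { Builder, until } from "selenium-webdriver";

async function main() {
  // Hypothetical grid endpoint; a real provider supplies its own URL and auth.
  const gridUrl = process.env.REMOTE_GRID_URL ?? "https://hub.example.com/wd/hub";

  const driver = await new Builder()
    .usingServer(gridUrl) // send WebDriver commands to the cloud, not a local driver
    .withCapabilities({
      browserName: "chrome",
      browserVersion: "latest",
      platformName: "Windows 11",
    })
    .build();

  try {
    await driver.get("https://www.example.com");
    await driver.wait(until.titleContains("Example"), 10_000);
    console.log("Page title:", await driver.getTitle());
  } finally {
    await driver.quit(); // always release the remote session
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The other test types mentioned here typically layer on top of a session like this rather than replacing it.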

Matthew Heusser (04:25):
Yeah. Thanks. And one thing you mentioned earlier, CI/CD. I’ve got a couple of theories for you; see if you can play doctor and fix ’em. I think CI/CD as it’s described, if you read the Jez Humble book from 2010, is just “really fantastic”. We’re going to have small teams. Every team is going to push to production several times a day. We’re going to continuously run all of our tests, and I will give Sauce Labs credit for enabling a piece of that; they built some of the first tooling around it. But as it’s practiced, what I see too often, and I’d love to hear from our audience, maybe it’s not as bad as I think, but when you get called in as a consultant to fix things, something’s usually broken. What I see is: we’ve got to check it into version control, and then it needs to be code reviewed. Then we can run it in this environment, and then it’s got to be merged. Then we can run it again in that environment, and then that environment’s going to break because now it’s two weeks old, so we’ve got to fix it, and the fix needs to go through the process. So it can be weeks between a commit and code being in production for a company that says, “We do CI/CD, we don’t need no stinking human exploration testers because they slow us down.” Am I right? How prevalent is that? What’s the fix?

Nikolay Advolodkin (05:43):
Great point, Matt. And I think it’s cool because you also come from a consulting background, so you probably have a very broad range of customers you’ve interacted with. For me, when I was a solutions architect, I also spent a bunch of time with probably hundreds of different customers, looking into their infrastructure and what they were doing there. And so from what I encountered, and the conversations we’ve had with other solutions architects, that seems to be pretty prevalent from our point of view. People or companies will say they’re doing CI/CD, but it’s never really complete; it’s always part of the way there. Of course, it depends on the maturity of a company. Some companies will be more mature, and maybe they’re pushing into a CI environment and having some tests execute, and maybe from there they have manual steps that push ’em through the different stages of the environment. But a lot of the customers we encounter will just use CI/CD maybe as a place to initially push the code, probably not even run any tests, maybe just build something, and it gets stuck there. It gets stuck there for days or weeks, or sometimes I’ve seen customers have pull requests just pile up for months, and we’re just waiting for them to get merged before they go on to the next stage.

Matthew Heusser (07:04):
Months! Wow! So I noticed a slight difference in our language there, which I think is good and important. I would say I think most companies have CI/CD implemented really poorly, or at least CI. I think you would say, “No, Matt, they have Jenkins, they have Hudson, they have TeamCity, they have this or that.” I wouldn’t call that CI. They’ve got a little tool that kicks off builds. That’s all that is. I think there’s a slight difference in the language there.

Nikolay Advolodkin (07:35):
That’s a great point.

Matthew Heusser (07:36):
Could you elaborate? Is making that distinction helpful, and how often do you have to do that?

Nikolay Advolodkin (07:40):
I think that’s a great distinction to make, because I find in our industry there are a lot of terms whose definition depends on the person that’s talking about them. For example, what is an integration test, or what is a component test? Those definitions are not fully clear either. And so I agree with you. Continuous integration, to me, probably aligns more with what you defined. It’s actually about being able to take a piece of code, push it into a CI pipeline, and then have the CI pipeline push it through the appropriate stages in an automated manner until it gets to a point where we decide whether we want to push it to production or not. And at that stage it’s like, do we have enough confidence in this? Or if we don’t, do we want to push it to production because we’re ready for this feature, or do we want to wait for a future date when we do want to introduce this feature? So that, to me, is CI: getting a little piece of code all the way to the stage where you’re ready to push a button and say, go to production.
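To make that definition a little more concrete, here is a toy sketch of “automated promotion through stages” written as a small TypeScript script. Every stage name and command below is hypothetical; in practice this logic lives in your CI system’s own configuration (Jenkins, GitHub Actions, Azure DevOps, and so on) rather than a hand-rolled script, but the shape of the flow is the same.

```typescript
// Toy sketch of CI as automated stage promotion.
// Stage names and npm commands are placeholders for your real pipeline steps.
import { execSync } from "node:child_process";

type Stage = { name: string; steps: string[] };

const stages: Stage[] = [
  { name: "build",    steps: ["npm ci", "npm run build"] },
  { name: "dev",      steps: ["npm run deploy:dev", "npm run test:smoke"] },
  { name: "qa",       steps: ["npm run deploy:qa", "npm run test:api"] },
  { name: "pre-prod", steps: ["npm run deploy:preprod", "npm run test:e2e"] },
];

for (const stage of stages) {
  console.log(`--- stage: ${stage.name} ---`);
  for (const step of stage.steps) {
    // If any step fails, execSync throws and promotion stops right here:
    // the commit never reaches the next environment.
    execSync(step, { stdio: "inherit" });
  }
}

// Everything passed. The final call, actually releasing to production,
// stays with a human or a feature flag, which is the point Nikolay makes.
console.log("Candidate is ready for the production go/no-go decision.");
```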

Matthew Heusser (08:48):
So if I’ve got that right, there’s a distinction there even in saying you have CI. Some of the companies we work with will have a commit to version control, a push, and we’ll do a build, but that doesn’t really work, because we need to merge to some other tip branch, and that needs to go on staging, and that merge is a human, manual process. You might say companies that are doing that, assuming the delay is not minutes, assuming the delay is more than a business day, they’re not even really doing CI; they’re just using Jenkins.

Nikolay Advolodkin (09:20):
Yeah, that’s an interesting distinction. I guess… yeah, it’s a question of how pedantic you want to get in your definition of what CI is. Do you have to make it all the way to pre-production automatically to call it CI? I don’t know.

Matthew Heusser (09:37):
That’s not continually integrating. At the very least, it’s a conversation worth having when you come in and they say, “Oh yeah, we’re a CI/CD shop.” What does that mean to you?

Nikolay Advolodkin (09:45):
Yeah, I think that’s the best point we’ve had so far: let’s ask the customer what they think CI means, what the next stage is for them, and how we can help ’em get there.

Michael Larsen (09:59):
So I’ll jump in on this. I think this is actually a good point, because I know that a lot of organizations talk about CI/CD, and I think this is a good question to ask: are you doing CI/CD, or are you driving Jenkins to run a bunch of scripts? Thinking back to the group that I worked with, I was the build manager for Socialtext for a number of years, so I was intimately familiar with all of the little bits and baubles that went together to build our platform. That was exactly the point: while we used it, we didn’t describe it as a CI/CD environment, because we had too many intermediate steps that we had to do, and we didn’t really have a choice there. Now, if we wanted to deploy to our development environments or to our pre-staging environment, that was fine.

(10:47):
We were able to make that work, but if we wanted to make releases available for production, there was a fair amount of handholding that we had to do, and that was mandatory. It was not something that we could just take away. There were actual contracts that we had with individual organizations that would basically force us to say, “No, you’re not going to push this code at this given point in time. You’ll make this release available, and within a 30-to-60-day window, they will be able to deploy it on their timeline or on our timeline.” So no, there was no such thing as a one-button push where it just goes everywhere. And I think that’s one of the things that people miss a lot of the time when we talk about the vernacular of CI/CD/CT/C-whatever you want to put into it.

(11:32):
I’m thinking about this from a perspective. I remember distinctly that we had so much fun trying to get a number of pieces to work together. We used Sauce Labs for a time as an external piece so that we could get a lot of integration done, but we had to do it fairly small. You couldn’t say, “Hey, we’re going to farm everything out to Sauce Labs and do all the tests necessary,” because frankly, it’s already fairly beefy to spin up a bunch of test environments and run them in parallel to run your tests to begin with. Now you’re going to spin up those environments, run them in parallel, and then do a secondary step to run in parallel on multiple Sauce Labs servers, and then get it all to come back for your final finishing point. Maybe this is a broader conversation about the tooling aspects. It seems to me that you could go very simple with Jenkins and have local scripts on a local machine in a single environment that’s been terraformed; that’s not terribly complex, but it takes time. But if you want to shave off the time, you have to then farm out to a whole lot of other organizations, Sauce Labs or Blue Ocean or whatever, that you would then integrate with, and that can become a right royal mess. Here’s my big question; I’m finally leading up to it. How would you counsel somebody, saying, “We want to be able to help you effectively do CI/CD, utilizing Sauce Labs, utilizing Jenkins, or Blue Ocean, or whatever”? How would you encourage them to go about doing that? I realize that’s a huge question.

Nikolay Advolodkin (13:07):
Yeah, that is a huge question. Of course, a lot of it depends on which stage we’re at, but if we’re starting out from a greenfield environment where a team doesn’t have anything set up and they’re like, okay, we want to do CI/CD and we want to get started, I think the first recommendation I would make is to get started as early as possible. That’s one of the mistakes that I’ve seen teams make: waiting too long. Getting started with CI earlier, when you have less code and fewer dependencies, is easier than it is when you are months into development, your system is drastically more complicated, and you didn’t think about how all of those systems would interact in CI. So start in CI from the beginning, even if it’s as simple as, “Hey, I can push my code into Jenkins or GitHub Actions or whatever, and it builds.” Congratulations, that’s a great step. I’ve seen teams get that wrong because they went way too long and didn’t think about building in CI.

(14:15):
And so once you get to the next stage, I think it’s just slowly ramping up the next steps of the journey, whether that’s adding some tests, or trying to build and push to a new environment and see whether your code compiles there. It kind of depends on the team, what they’re trying to achieve, and what their business goals are. Like a lot of things in software development, I think it’s an iterative process: figuring out the next logical step to take, taking that small step, getting feedback, and improving on that feedback. And you can’t stop. You have to keep improving your CI pipelines. If you treat the configuration as code, and now you have multiple systems integrating into your CI pipelines, pushing through different environments, managing databases, and testing software, you’ll get more and more work, and the system will become more and more complex. It’s important to keep reevaluating that complexity, removing things that don’t make sense, and updating things that need improvement. I would just keep moving in that kind of direction with a customer.

Matthew Heusser (15:25):
If I can ask, can you tell us a story of a customer or a client or whatever you want to call it, someone you worked with that had, I don’t want to say a mess on the floor, but they had one of those poor performing environments we talked about earlier? What did you do to fix it?

Nikolay Advolodkin (15:40):
I can tell probably a few stories about starting with a mess and improving it. One story comes from a large organization, which typically tends to be slower moving because there’s a lot more bureaucracy to get through. They had a CI system set up; I’m going to start calling it what you guys are calling it. They had Azure DevOps set up, and with that setup, they would create a pull request, the pull request would go into GitHub, and then it would stay there, and that’s it. There was an Azure pipeline set up that was supposed to deploy through the stages of the different environments, but it never did anything. These pull requests would just pile up over months and months. Nobody would review them, nobody would merge them in, and they would just sit there. We worked with the customer to understand why this was happening, and the first step we took was to actually enable the movement of the code through the different stages.

(16:47):
So we had dev code go to dev, great. Then we enabled that same code to go to QA, and then to pre-production, and then stopped there. So we enabled that process to start happening automatically. Then we started putting some quality gates in place with some automated testing. It was a system around managing APIs, so it was all about API testing. We added a few simple API tests that would ensure that as we’re deploying the system through the different stages, nothing is breaking. From there, the process would be: you check in the code, and as soon as you check it in, Azure DevOps starts to execute some tests. If those tests pass, that shows up in your pull request; the different types of tests are passing, and now you can perform a pull request review, assuming that nothing broke. Once you do a pull request review, you can merge that in, and it will start to automatically go through the different stages. At each stage, different types of tests are executed, until it gets to a certain point where we want to decide whether we want to release it or not.
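To picture the kind of quality gate Nikolay is describing, here is a minimal sketch of an API smoke suite using Playwright’s request fixture. The base URL would be supplied per environment by the pipeline as the code is promoted; the /health and /api/items endpoints are hypothetical stand-ins, not anything from the customer’s actual system.

```typescript
// Minimal API smoke tests used as a quality gate between stages.
// BASE_URL is injected per environment (dev, QA, pre-prod) by the pipeline.
// The endpoints below are hypothetical examples.
import { test, expect } from "@playwright/test";

const baseURL = process.env.BASE_URL ?? "https://dev.example.com";

test("service reports healthy", async ({ request }) => {
  const res = await request.get(`${baseURL}/health`);
  expect(res.ok()).toBeTruthy();
});

test("items endpoint returns a list", async ({ request }) => {
  const res = await request.get(`${baseURL}/api/items`);
  expect(res.status()).toBe(200);

  const body = await res.json();
  expect(Array.isArray(body)).toBe(true);
});
```

If any of these fail, the pipeline stops the promotion and the failure shows up on the pull request, which is exactly the flow described above.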

Matthew Heusser (17:54):
So that’s interesting. I hear a couple of different things in there, so I’m going to try to summarize it by saying that you created a web-based, maybe command-line, easy interface for doing all of the steps through the pipeline and notifying other people, so you didn’t have to send emails and get on text messenger and things like that anymore, and that just made it easier for the whole thing to flow. I would’ve thought you would’ve started with the backlog. If you don’t go after the backlog, it’s just going to keep building up. So there are two problems there.

Nikolay Advolodkin (18:25):
Yeah, that’s a great point, Matt. So Azure DevOps, different tools have different options, but nowadays most tools make it really easy to do staged deployments. Azure DevOps itself has a UI built in that allows you to deploy through multiple stages, so there was nothing special we had to do there, just configure it. Great point about the backlog. With this particular client, the QA team and the dev team are separate. They’re not integrated, which is not ideal. Ideally, I would love for them to be integrated and not work in silos, but with a lot of large customers and a lot of legacy culture, that tends to be the case. So that was the case here, and we didn’t 100% control the backlog, but we did control the automated testing capabilities and the DevOps and the CI part. So we started with the part that we could control, and I think our next logical step would be to start integrating with the dev team and helping them get their backlog into the CI process on a more regular basis.

Michael Larsen (19:31):
I think at this stage of the game, there are so many possible choices. A decade ago, when I first started playing with CI/CD, it was a lot more limited, if you will. If you needed a cloud environment, typically AWS was associated with that. You would spin up EC2 instances, you would start up Jenkins, and you would run through your various tests. You would split them out so that you could run them in a reasonable amount of time, and based on the gates that you set, if you had a fail, you would stop everything. Or if you had a conditional fail, you would say, “Okay, it’s a fail, but it’s not one of our critical ones, so we’ll keep going,” and then we’d build our software and push it out to where it needed to go. And again, we started off with our entire environment in-house.

(20:19):
We didn’t even use a cloud environment at first. We just did it on machines that we had. So we were able to build up the process and develop the muscles, if you will, early on, so that we could then expand it out and make it into a larger framework. And there have been case studies done; the Socialtext test model was a pretty darn good one. Matt and I both have experience with it, and we’ve both talked about it over the years as being a pretty good example of a good environment for its time. Of course, things change, and technology moves on. There are now so many other choices. And I guess the question that I would want to ask here is, when you no longer have the simplicity of “one ring to rule them all,” which we don’t anymore, you may have one group that’s using AWS, but you might have another group that’s using Azure DevOps.

(21:09):
You might have another group using some other cloud infrastructure, using some other build tool, using some other plugins. It can make for an interesting challenge to get everything to play nice. I kind of wonder if it is even possible, in that type of scenario, for everything to play nice. Is that a possibility, or is it still a matter of picking your battles, picking and choosing which tools are actually going to do what you need them to do, and accepting that you may honestly not be able to get everything unified?

Nikolay Advolodkin (21:43):
And when you say for everything to play nice, do you mean for example, since we’re talking about CI/CD here, different CI systems to play nicely together?

Michael Larsen (21:52):
Potentially, yes, or different cloud systems, or being able to get resources and access different things. Let’s say you’ve got a situation where you have multiple teams working within an organization. You’ve got your front-end developers, you’ve got your back-end developers, you have your microservices group. Now each of these might be… they could be acquisitions, for that matter. It may not be practical to say, “Hey, guess what everybody, we’re all going to run on a unified framework. We’re all going to use the same cloud system, we’re all going to use the same CI/CD, we’re all going to use the same… whatever, tooling, et cetera,” because that would be a big hit, and people are already well-versed in certain tools. Would you encourage them to say, “Can we use that heterogeneous knowledge and bring it together effectively?” or would you encourage them to say, “Try to flatten that so that you have fewer tools to deal with”? Is that a reasonable option, A or B, or is it just a never-ending trade-off?

Nikolay Advolodkin (23:00):
Michael, that’s such a great question. I’ll talk about my experience first, from what I’ve seen, and then I can talk about some potential challenges. I’ve seen two different approaches applied at the numerous companies I’ve worked with. On one side, there will be, as you talked about, many teams in one organization, however they made it into that organization. They have different tooling, they can pick and choose what they want to use, and those teams are responsible for that. On the other end, you’ll have teams where there is something like a center of excellence that guides the tooling and, as you said, flattens the amount of technology so that everybody’s aligned with the same tool set, and there is basically a team of people that leads training and enablement for the rest of the teams within the organization. I’ve seen the latter perform worse more often than the former.

(24:00):
What I’ve seen from a center-of-excellence type of situation, where a single set of tools is promoted to all the other teams, is that the center-of-excellence team may be really highly skilled engineers, but a lot of times they won’t have enough people, or enough training themselves, to keep up with all the changing technologies and tools that they are expected to maintain. For example, from a Sauce Labs perspective, there will be a center of excellence in an organization that keeps receiving tickets related to Sauce Labs, and they might receive dozens, a hundred tickets a month, and they just don’t have enough people to keep up with the demand. So then what happens is that they can’t keep up with the software updates or the standards for the tools that they recommended, and all the teams start to complain and whine about how the tooling is not up to date.

(25:02):
The software is outdated, the APIs suck, the user experience sucks, blah, blah, blah. And so then they start to basically protest against the tooling that was recommended to them. On the other side, where we have teams that are enabled with their own different technologies, one of the main benefits I’ve seen is that it creates a sort of best-of-breed environment, where teams can pick and choose however they want to evaluate the tooling they want, and then you start to see what works and what doesn’t work. Maybe the tooling of the teams that are performing better than other teams starts to get more prominence and more recognition, and that can start to become the standard. But then my question, and I don’t know the answer to this, is: at what point does that now become a center of excellence? Because they’ve outperformed every other team on whatever metrics exist, now they become the center of excellence, and then you get into the trouble that I talked about with the center of excellence. I don’t have the answers to that, and I wonder what your thoughts are.

Matthew Heusser (26:09):
Yeah, I’ve definitely seen the center of excellence say we have to use Fruity Pebbles 7.x, but my software automatically upgraded to 7.5 because of the .x, and that’s not compatible with Gimp 3.2 on Linux, on Red Hat. You say, “I have to use Red Hat, I have to use Gimp. They’re incompatible,” and whatever automatic-download magical tool thing breaks, and you send a ticket, and they get back to you in two weeks or something. Meanwhile, your stuff doesn’t run. So I’ve definitely seen that problem. We tend to think about centers of competence, communities of practice, to solve that with much more of a loose-tight coupling. Without the command-and-control aspect, you really have to have a fair amount of trust and competent people to be successful at that, but you might need trust and competent people anyway.

Michael Larsen (27:03):
We had a similar situation at the last org that I worked with, exactly as I was saying. We looked at this from a training perspective, interestingly enough, because they were trying to figure out who was going to work on what projects. So we had three cohorts, if you will, and we decided, I wouldn’t say to pit them against each other, but it was interesting to see what each of them came up with and why they went that route. One of those cohorts was based around JavaScript and TypeScript. They were doing a lot of their testing using Cypress. Interestingly enough, when you go down a path, you are of course committing to a channel of tools which will have their own unique requirements, their own unique limitations, and their own unique feature, version, and compatibility challenges. So you start there. Then we had a second group that went down the path of using Java natively. I think it was a fairly large group that was using Java for that purpose, and they had their own tooling.

(28:08):
They were using WebDriver for that purpose. Similar deal. Now, again, if you were going to say, “Hey, we’ve got these two groups,” and you had to tell each one of them, “You’re going to need to play by the same rules and use the same tools,” you’ve sort of defeated the purpose. Each group has been given these levels of expertise. The group that I ended up teaching was doing C# and Playwright, so that was a whole different group, and we had our own challenges and interesting things. If the goal was to see which one was going to be more effective and of better use going down the road, honestly, I think all three of them could have been as effective as you wanted them to be, but all three of them were used for different components. Sure, for the C#/.NET environments, it made sense to use that because it’s already contained and you’ve got an ecosystem for it.

(28:58):
If you’re looking at the open web, then yeah, TypeScript and JavaScript make more sense. If you have a more traditional legacy back-end type of environment, then Java might make more sense. So I think that instead of saying, “Hey, we’re going to force you all to go into one straight pipeline,” it made more sense to say, “Let’s skill up these three groups, let them run for a while, and see how they do.” From my understanding, those three groups are still actively running, they’re doing their own thing, and they’re letting their DevOps team figure out how to get those components to piece together for their final product. And I think that’s rational.

Nikolay Advolodkin (29:35):
And I think that makes sense. To me, it’s like, okay, if the business is happy with the outcomes of this team, my other question would be, are these teams happy with their tooling choices and their outcomes? And if the teams are also happy with the tooling choices and outcomes, I would question whether there’s even a need to change, and why we would want to merge technologies. If the teams are happy and the business is happy, let’s just keep things going.

Matthew Heusser (30:00):
Fantastic. Nikolay, I wish we had more time to dive into more things. Before you head out, could you just give us an idea of what you think of the current landscape between Cypress and Playwright, Puppeteer, Selenium, Tricentis, and all of the closed-source tools that are coming? How should people be making this decision?

Nikolay Advolodkin (30:21):
Yeah, great question. I think a tooling decision depends on the needs of your organization, the needs of your team, even your needs as an individual. I’ll give an example. We actually did a proof-of-concept analysis for a customer who originally wanted to use Selenium and C#, because they’re a C# shop, a .NET shop, and so they were like, “Oh, we’re comfortable with this technology, we know the .NET ecosystem very well.” But then we also recommended, “Hey, why don’t we check out Cypress and Playwright as well? Selenium has been around for a really long time, but maybe that’s not exactly what you need. We don’t know. Let’s take a look.” And so we analyzed a number of different applications and created a number of different scripts that would perform different operations, like functional testing, visual testing, front-end performance, accessibility.

(31:14):
I think that’s pretty much what we did for the POC, and we analyzed 50 different points across the different tools to see what made the most sense for the organization. It actually ended up that Playwright was the smart choice for the organization, because it’s what they needed. It achieved their goals the best, and it was able to automate the applications that they wanted to automate. Cypress was close, but it has some limitations. One of Cypress’s limitations was that it was unable to work really well with Salesforce applications, because those change domains a lot, and so that basically cut it out of the race. And so my point is, we wouldn’t have learned any of that if we didn’t do the analysis. I think that tools will come and go. One day a tool is the most popular; in a few years, that tool may be out of the ecosystem. We’ve seen that happening over and over, all the time. So do an analysis, figure out what makes the most sense. Don’t just dive right into a tool because it happens to be the one that pops up on your social media the most, or the one that’s mentioned the most by organizations. Figure out what’s important for you and then go from there. Give it a little time; don’t rush it.
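For context on that cross-domain point, here is a minimal Playwright sketch of the kind of flow that is awkward in Cypress without extra workarounds: a single test that navigates across completely different origins, the way a Salesforce-style login hand-off does. The URLs and the heading name are placeholders, not anything from the customer’s POC.

```typescript
// Minimal sketch: one Playwright test that crosses origins freely.
// The URLs below are hypothetical, standing in for an identity provider
// and an application under test that live on different domains.
import { test, expect } from "@playwright/test";

test("user can move between domains in a single flow", async ({ page }) => {
  // Start on one origin (say, a login or identity-provider domain)...
  await page.goto("https://login.example.com");
  await expect(page).toHaveURL(/login\.example\.com/);

  // ...then continue on a completely different origin in the same test.
  await page.goto("https://app.example.org/dashboard");
  await expect(page).toHaveURL(/app\.example\.org/);
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```

Cypress has since added cy.origin() to ease multi-domain flows, but they have historically been a common reason teams ruled it out.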

Michael Larsen (32:24):
Very cool. We want to respect your time; thank you so much for joining us today. If somebody wants to know more about Sauce Labs, there’s a lot of information online, of course. But if somebody wanted to reach out to you after hearing this, saying, “Hey, I like what Nikolay is talking about here, I’d love to know more from him,” how can people get in touch with you? If you want to give a little bit of a plug, now’s your chance.

Nikolay Advolodkin (32:48):
Sure. Thanks so much, Michael. As part of my developer advocacy role at Sauce Labs, I run a podcast, Test Automation Experience. It’s on YouTube and also comes in audio format; it would be awesome to have you guys on there to chat. Outside of that podcast, you can find me at ultimateqa.com. It’s a blog and brand that I’ve built since 2015, where I write about my experiences with different types of technologies, similar conversations to the ones I’ve had here with Matt and Michael, maybe around different topics. You can find me on LinkedIn and Twitter as well. Those are my main social areas where you can catch me.

Michael Larsen (33:28):
Well, Nikolay, thank you so much. I am really happy that you were able to join us today. And for everybody out there listening to The Testing Show, we’re glad that you joined us, and we look forward to getting together with you again real soon. Thanks for listening, thanks for joining, and take care, everybody.

Matthew Heusser (33:46):
Bye!

Nikolay Advolodkin (33:47):
Bye, everyone.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts, YouTube Music, and Spotify.

Those ratings and reviews help raise the show’s visibility and let more people find us. Matt and Michael have written a book that distills many years of this podcast along with our decades of experience, and you can get a copy for yourself. “Software Testing Strategies: A Testing Guide for the 2020s” is published by Packt and is available directly from Packt, from Amazon, Barnes and Noble, and many other online booksellers.

Also, we want to invite you to join us on The Testing Show Slack channel as a way to communicate about the show. Talk to us about what you like and what you’d like to hear. And also to help us shape future shows, please email us at [email protected] and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Thanks for listening.
