The Testing Show: The Intersection of Testing and Quality

Once upon a time, testing was about creating the right documentation, making sure requirements were correct, being the quality police, and telling everybody else to do their jobs well. Later, we focused on testing specifically: diving in to find defects and problems and acting like an insurance policy, so that if there is a problem, we find it, but we stop telling people how to do their jobs.

Have we gone too far from that original focus? Is it possible for us to give generalized observations about the quality of the software we are delivering? How would we do it? To answer those questions, Rachel Kibler and Chris Kenst join Matthew Heusser and Michael Larsen to discuss where testing and quality interact, and to make a case for which matters more at which point in time.

Any questions or suggested topics?
Let us know!

Transcript:

Michael Larsen (00:00):
[Intro] Hello and welcome to The Testing Show, Episode 95: The Intersection of Testing and Quality. This show was recorded February 11, 2021, with Rachel Kibler and Chris Kenst joining Matt Heusser and Michael Larsen to discuss where testing and quality come together, and how those two things are related but not necessarily the same thing. And with that, on with the show…

Matthew Heusser (00:00):
Okay. This week, we're talking about quality and testing. Specifically, where does testing start? Where does quality begin? What's the difference? Should testers be in the quality business? And if we were, what would that even mean? And it starts as simply as this sort of classic argument where the tester says, "Well, I can't assure quality", and the manager looks at him and says, "Well, why are you here?" To answer those questions we have Michael Larsen as a full member of the panel. Now, you might not know that Michael is a Senior Automation Engineer at PeopleFluent. When we started the show, I think it was Socialtext, a division of PeopleFluent, and he was a tester and then moved into more of the CI and build side, and now does automation and platforming. So I really think he could have some things to contribute in terms of that sort of journey from test to quality. Thanks for being on, Michael.

Michael Larsen (01:02):
Absolutely.

Matthew Heusser (01:03):
Then we’re going to have… it’s a very Association for Software Testing podcast. AST is the non-profit advocate for testing within software. We’ve got Rachel Kibler. Welcome back to the show, Rachel.

Rachel Kibler (01:18):
Thank you. It’s so good to be back.

Matthew Heusser (01:20):
Now, Rachel has been on the show a couple of times. She is a test engineer for 1-800 Contacts. Is that close enough?

Rachel Kibler (01:28):
Yes, it is. I still call myself a tester, but it’s all just semantics.

Matthew Heusser (01:33):
That’s fantastic. And you’re on the board of AST now, is that correct?

Rachel Kibler (01:38):
I am. I’m the treasurer.

Matthew Heusser (01:40):
Okay, great. And I met Rachel at an AST conference, Michael and I are life members of AST. We’ve also got Chris Kenst who’s the sitting president of AST if I’m right.

Chris Kenst (01:52):
That sounds right to me.

Matthew Heusser (01:54):
You've been on the board for a couple of years. AST runs… well, pre-pandemic, ran an annual in-person conference and a large number of other events. It supports local meetups, which would be like buying chicken wings or whatever, funding event space, paying for the Meetup subscriptions. It also runs an online certificate course called Black Box Software Testing, and has sort of a warehouse of self-education in software testing called the WHOSE wiki, from the WorksHop On Self-Education in software testing. Is there anything else we should be adding?

Chris Kenst (02:33):
Let me just kind of summarize it. You're right. The AST is a global nonprofit and we are dedicated to advancing the education of software testers. And so we do have the Black Box Software Testing courses, which are probably the most highly regarded classes in the industry. We do normally run an in-person conference, but we'll probably be moving that to virtual, and we do a lot of virtual events. And yeah, you're right, we give a lot of grants and we help anybody we can get started with community organizing.

Matthew Heusser (03:02):
That’s actual volunteer activity, right? I mean,

Chris Kenst (03:06):
Yeah, that’s the spare time.

Matthew Heusser (03:08):
Maybe they pick up the tab for your breakfast as you fly to the board meeting and maybe they cover your airfare, but that’s something you do on a weekend just to benefit the industry. Tell us about your day job, Chris.

Chris Kenst (03:22):
I am a… so kind of like Michael, I'm a lead automation engineer at a company called Promenade Group. We just announced five days ago that we raised $11 million in our Series B. So we're a VC-backed startup. And what we do, essentially, is help small businesses build e-commerce sites, get online, and handle online sales, which it turns out, you know, in a pandemic is really important, so that small mom-and-pop businesses actually have that channel and can survive. So that's kind of what I do on a day-to-day basis. Although, similar to Michael, even though my title has automation engineer in it, I do consider myself to be a tester, and I do a lot of hands-on testing. Lately I've also been doing a lot of hiring, so kind of more traditional things in the testing space.

Matthew Heusser (04:10):
Okay, fantastic. So let's go back to that classic problem. This sort of "history of testing" in the eighties was a lot of "create the right documentation. Make sure the requirements are correct. Be the quality police to make sure the process is done right. Tell everybody else to do their job well. So we can have high quality software coming in to test." And then, around 2001, we have the context-driven group, Kaner and Bach and Pettichord and Marick, and they say, "What if we don't? What if, no matter where the software is, we're going to be able to dive in and we're going to find defects and problems, and we're going to act like an insurance policy so that if there's a problem, we're going to find it, but we're going to stop telling people how to do their jobs." And I think that put testing in a role where it could be successful. Have we gone too far in that direction? Is it possible for testers to give insight about the generalized quality of the software? What would that mean? That's what… three questions? Have we gone too far? Is it possible for us to give generalized observations about the quality of the software we are delivering? How would we do it?

Michael Larsen (05:27):
I'll take an initial crack at that. Why not? First and foremost, we are going to be able to, and I'm going to put this in big air quotes, "ensure quality" to the level that we have as clear an understanding of what the product can do as possible. In previous incarnations of my reality, I worked on accessibility, and that's one area where your product could have immense quality in all sorts of other categories, but if your accessibility options are terrible, you don't have quality software in my book. That might be argued, and other people might say, "Hold on, that's taking this just a little too far", but the point I want to make is that quality is subjective. Quality is based on what matters to the particular person at that particular point in time. Can we, as testers, guarantee it? At a certain level, the answer is both "no, we can't" and "yes, we can, to a point". And that's, I think, where we have to draw a line. We have to say, I can give you information. I can share with you my experiences of how this product works. I can share with you some expertise and things that I have done that give me a thumbs up, a sideways thumb, or a thumbs down, and then you're going to have to run with it.

Rachel Kibler (06:47):
Building off of what Michael said, quality is value to someone. I've worked at numerous companies where different things matter. Accessibility and performance were the measure of quality for one company that my husband worked at. At a company that I worked at, it was about data privacy: how do we make sure privacy is maintained throughout the entire system? At my current company, at Contacts, we care a lot about the flow of getting a person from needing contact lenses to checkout as quickly and efficiently as possible, and that's how we measure quality. One thing I think that we do as testers is take those aspects of quality, what quality means to the company, what quality means to our customers, and live and breathe them. I think there is some science to testing, but there's also art to testing, when we can feel those quality aspects in our bones. We take those from the very beginning, from the first collaboration about what this is going to look like, all the way through the process to the physical testing of being in the product. We live and breathe those. And so we're in a better situation, or we're better placed, to be able to talk about how we evaluate quality and give that thumbs up or thumbs down or thumbs to the side, because we've kind of absorbed all those things.

Matthew Heusser (08:20):
I want to build on that, if I can. You mentioned quality attributes. There's a group of people who would argue that quality is the sum of a bunch of things like reliability, security, learnability, usability… all these 'ilities. And if you can sort of add them all up, you can get a score, and that is your quality. And it's got to be kind of what we call a geometric average, so that if your security is zero, it doesn't matter what everything else is. You're done. The quality is zero. And if your performance is zero, if it doesn't scale, it doesn't matter what everything else is. You're at a zero. There could be some kind of weighted average that you could use to say how good your software is. Are you saying that we as testers do this subconsciously as we get the feel of the pulse of the software?
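
(A rough sketch of the weighted geometric average Matt is describing. The attribute names, scores, and weights below are invented for illustration; the point is only that a zero on any one attribute zeroes the whole score.)

```python
def quality_score(attributes, weights):
    """Weighted geometric mean of 0-10 attribute scores.
    If any single attribute is zero, the whole score is zero."""
    total_weight = sum(weights.values())
    score = 1.0
    for name, value in attributes.items():
        if value == 0:
            return 0.0  # one fatal weakness zeroes everything
        score *= value ** (weights[name] / total_weight)
    return score

# Hypothetical scores and weights:
attrs = {"reliability": 8, "security": 9, "usability": 6, "performance": 7}
weights = {"reliability": 3, "security": 4, "usability": 1, "performance": 2}
print(round(quality_score(attrs, weights), 2))           # 7.93
print(quality_score({**attrs, "security": 0}, weights))  # 0.0
```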

Rachel Kibler (09:20):
I am saying that it becomes instinctive when we’ve spent enough time with the software and with the company to get a feel for what’s important. If one thing is not important at all, then it doesn’t matter what the rest of it is doing, but getting a sense for how those things are weighted, for what’s more important than other things is something that we as testers do well and can bring throughout the process.

Matthew Heusser (09:49):
Thank you, Rachel. Chris, what do you think?

Chris Kenst (09:52):
What Michael said and what Rachel said, there's a lot of agreement there. Definitely quality is subjective. I've always tended to prefer working for startups because I'm closer to the customer. In a lot of ways, with startups there's that concept of the lean startup and the minimum viable product, and a part of that is a focus on quality. That means value to some person, but specifically the person that you're trying to target. Businesses are often trying to target one or just a few kinds of customers. At Promenade Group, our biggest business is called BloomNation. So our customers are florists. They have end customers, the consumers who are buying flowers, but really our focus is on the individual florist. Are we delivering the things that they need to be successful: to make sales, to increase sales, to handle problems, that kind of thing? All those 'ilities are important, but there's definitely a weight to them, and I'm not sure what that weight is. Usability is important because if your application isn't usable enough, people are going to have a hard time making purchases, and that's going to affect the customer's view of the value you're providing. But to what degree? There are trade-offs with that. The same thing with security: you'd think that if your site's not secure, no one's going to use it, but they probably will. There will be some level of people who, even if you're spilling out credit card information, if you're the only one providing a service, will use it. Everyone said the right things, but I always put this in the context of "Yeah, but how much does that matter?" You don't want to be somebody leaking credit card data. You don't want to be somebody getting hacked, yet another company that has leaked all our data. But I guess it's not as black and white as "you have all these 'ilities and you have to do them." Yeah, usability is important, but if you don't do it, maybe you lose 3% of your market. That's kind of how I look at it, because I know a lot of business people look at it this way, which is a bit unfortunate, because we can all agree that we don't do usability or accessibility enough, and we definitely don't pay enough attention to security. But there are reasons that we don't: the market doesn't necessarily reward those things. And as a tester, I should be paying attention to all these things. You need to be able to describe what the risk is in order to get something prioritized and fixed, but then you also have to realize how important it is to the business. It's not an easy juggling act that we have to do.

Matthew Heusser (12:26):
You've brought up sort of two competing tensions, I think. So at the beginning you spoke about The Lean Startup and MVP. Could you tell us a little bit about what that means?

Chris Kenst (12:36):
Lean Startup is a book and this concept where you have validated learning. So you start with what you think is a good enough product, "good enough" being the key, and you deliver that to the customers. Based on their responses, whether they use it or not, the feedback that you've gathered from them, you then make iterations on that. Constant iterations, until you get to a point where it's satisfying enough of their problems that you can then work on something else. That's kind of the Lean Startup validated learning perspective. The minimum viable product is just that: you've got to get to a point where you've built a product that is satisfying enough problems and is usable enough that people will actually use it.

Matthew Heusser (13:21):
Sure. Well, you want to get better and better. So usability, right? If we say we're going to put this advertisement up, and we get 2% of the people who see it to click through, and we can get that number from 2% to 3%, that's a 50% increase in value delivered to the next stage of the pipeline. I think you're right. We could tweak usability and we could make maybe 3% more. The perception is that it'd be a lot of tweaking, and we might not get 3% more, and our customers know our user interface, and this is good enough. Am I right there so far?
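
(The arithmetic behind that claim, as a quick sketch with an invented traffic volume: a one-point move in the raw click-through rate is a 50% relative gain.)

```python
visitors = 10_000            # invented traffic volume
before, after = 0.02, 0.03   # the 2% and 3% click-through rates above

uplift = (after - before) / before
print(f"relative uplift: {uplift:.0%}")                       # 50%
print(f"extra clicks: {round(visitors * (after - before))}")  # 100 per 10,000 views
```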

Chris Kenst (13:55):
This was Microsoft for years; their usability was horrible on a lot of their products, yet people still used their software. That's a little bit of a different situation, but yeah, it's very similar.

Matthew Heusser (14:07):
Well, I think that a lot of… not to be too rough on Microsoft, but a lot of it was that they had a lot of inertia and they did a lot of incremental change: "We're going to add this feature. It's going to be one more menu item. It's going to be one more user interface item. It's going to be one more…" and incrementally, it wasn't that big a deal. But when they were all done, it kind of looked like a person's bad comb-over. At every step, it's not that big a deal, but when you go from beginning to end: "That's a really bad haircut, man. How did you let that happen to you?" And I think that's what happened with Microsoft products, and they had some monopoly effects, so you were going to keep buying Microsoft Word.

Chris Kenst (14:43):
Yeah, you didn’t have a choice.

Matthew Heusser (14:45):
Yeah. So there are places where you might decide that usability is less important, which brings me to Rachel. Before we hit the record button, she shared some insights about how one might look at quality, as to what's happening at the team level, that I thought were pretty sharp. Would you like to talk about that and, if you can, tie it back to usability?

Rachel Kibler (15:08):
Sure. At Contacts, we value revenue. Obviously, we sell contact lenses. So we measure quality by the number of revenue-impacting bugs that make it to production, and we see them as learning opportunities. We try to fix them as quickly as possible through rollbacks or hotfixes, but that's how Contacts views quality in production. We have a lot of analytics and a lot of Splunk alerts. And so if revenue drops, if the order number drops below a certain level, it fires a Splunk alert, and the person on call takes a look and gets everyone on who needs to be on the call. We have a lot of monitoring in production. If the average order size is decreasing a lot, it might take us a little longer to notice that, but we keep an eye on conversion and order size. At my husband's company, however, the tools that they build are almost exclusively used internally. So it depends on whether the boss is happy. They could pretty much have a whiteboard that says, "Days since Andrew got mad at us". The bigger the number, the better the quality. Usability is not as important to them, because they can train everyone to work around the quirks. It just needs to work. It doesn't need to be easy to use. But at Contacts, stuff needs to be easy to use, and we don't want people getting discouraged.
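
(A minimal sketch of the kind of threshold monitor Rachel describes. This is not Splunk's API, just plain Python, and every name, number, and threshold here is invented for illustration: a fast alarm on order volume plus a slower one on average order size.)

```python
from statistics import mean

ORDER_FLOOR = 50   # invented: minimum orders per monitoring window
SIZE_DRIFT = 0.15  # invented: tolerated drop in average order size

def check_window(order_counts, order_values, baseline_avg_value):
    """Return alert messages for the on-call person, if any."""
    alerts = []
    # Fast signal: order volume falling below a hard floor.
    if order_counts[-1] < ORDER_FLOOR:
        alerts.append(f"orders at {order_counts[-1]}, below floor of {ORDER_FLOOR}")
    # Slower signal: average order size drifting down over the window.
    current = mean(order_values)
    if current < baseline_avg_value * (1 - SIZE_DRIFT):
        alerts.append(f"avg order ${current:.2f} vs baseline ${baseline_avg_value:.2f}")
    return alerts

# Both alerts fire on this made-up window:
print(check_window([80, 72, 41], [52.0, 48.5, 47.0], baseline_avg_value=60.0))
```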

Michael Larsen (16:45):
I have a similar take on this. This is probably not going to be something that people listening are going to relate to unless you do what I do from an audio engineering perspective. There are a number of tools out there. Some of them are really slick and wonderful and have this fantastic ecosystem associated with them… and they can be a little bit of a pain in the neck to use, but once you get good at doing the stuff that you need to do with them, you can be really effective. And then there are other tools that are nicely laid out, but you struggle with the workflow because it just isn't something that is natural to you. So there's a lot of leeway as to what is usable, really, when you get right down to it. I use a rather old tool to do the bulk of the work that I do on this podcast. I use Audacity, and Audacity is a well-known audio editing tool with its adherents and its detractors. Many people would look at the Audacity interface and go, "Oh wow, that looks kind of painful". Yeah, it is. It's a little bit impenetrable. A lot of stuff is basic graphs and bolted-on extras. But once you've gotten used to using it, it's remarkably fast, and there are a lot of things that you can do to customize your flow and make it work for you in a way that is very effective. At least for me. Somebody else might sit down in front of Audacity and go, "I don't even want to touch this", and I can respect that. The point is, can I get the show done, and does it work, when all is said and done, for me right now? The answer is yes. Could I do it in a cleaner way with a lot more bells and whistles and other neat stuff using Logic Pro, or Pro Tools for that matter? I probably could, but I haven't really felt a need to do that. I stick with the tool that is effective.

Matthew Heusser (18:38):
I had a thought about something Rachel said. If you did measure how long it has been since someone burst in the door with a serious problem and we had to drop everything to fix it, or whatever it is, you could measure not only how long it has been since the last one, but also the average amount of time between them. And then you could look at the slope: is the amount of time between them increasing or decreasing over time? You could say that's terrible, that's awful, there are all kinds of reasons it could be bad, but it's roughly as accurate as story points are for development. Am I wrong?
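
(A sketch of the measurement Matt proposes: take incident timestamps, compute the gaps between them, and fit a slope to see whether the gaps are growing or shrinking. The incident dates are made up.)

```python
from statistics import mean

# Invented: days on which someone burst in the door with a serious problem.
incident_days = [3, 10, 18, 29, 43, 61]

gaps = [b - a for a, b in zip(incident_days, incident_days[1:])]
print("mean time between incidents:", mean(gaps), "days")  # 11.6 days

# Least-squares slope of gap length over gap index:
# positive means incidents are spacing out, i.e. the trend is improving.
xs = range(len(gaps))
x_bar, y_bar = mean(xs), mean(gaps)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, gaps)) \
        / sum((x - x_bar) ** 2 for x in xs)
print(f"trend: {slope:+.1f} days per incident")            # +2.8, gaps growing
```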

Rachel Kibler (19:16):
Fun fact: we actually built a dashboard for internal use based on the metrics in Dr. Nicole Forsgren's "Accelerate" book. So we have all these Accelerate metrics, and we look at trends. We look at whether issues in production are increasing or decreasing for each team that has responsibility for certain areas of the code. We have a lot of insight into this. We also use Pluralsight Flow to see how much we're changing our code, how much churn there is, all of these things that are able to tell us a better story about quality, about the code that we're producing, about the value that we're adding to customers. So we've actually built a lot of these metrics for trends, not for evaluating. No one is going to get fired at Contacts because our bugs in production are increasing, but it allows us to see where we're failing or falling short and improve on those things.

Chris Kenst (20:21):
The funny metric for me was when Rachel mentioned her husband's company: the number of times in a week that the boss is upset, or something. But that's not quality of the application; that's a quality-of-life thing. And there's some level of quality of life that plays into how well we build applications as well, right? If you have your bosses coming in and yelling at you, that definitely has an impact. So you have all these metrics that you pay attention to, and then you have some that we probably don't pay attention to. How stressed are we? How well are we doing on keeping everyone calm and focused? That kind of thing. That's just another metric that plays hugely into how well we do our jobs.

Matthew Heusser (21:04):
When Pete Walen was at CSI, they actually gave away free bags of M&M's. The company would subsidize it, and they would track the number of bags of M&M's consumed per week by the greater development team: the devs, the testers, the BAs, how many bags of M&M's were being eaten per week. So if the whole team is eating a lot of M&M's, then we could say, "There seem to be some stressors on the team". And I think you're right. Rachel's idea presupposes that it's not really just that the development director's mother has cancer and his son died in a car accident, but that the source of the stress is a real problem on the team with the execution of the software. And that is subjective. So he could be angry about something that wasn't that big a deal, but assuming people are normally oriented, I think it's not a terrible first-order approximation for how well things are going.

Rachel Kibler (22:04):
I agree. Metrics should not be the be-all and end-all, but they are useful in telling a story. But as soon as we see one metric as the defining thing, then we start gaming it. So we need multiple ways of looking at quality. And yes, those surrogate measures are useful to tell the story. As soon as someone is judged on something, it becomes a useless metric. Using it to tell a story about the quality and about the process is a lot more useful.

Chris Kenst (22:41):
And to add to that, these are data points, right? You don't make a judgment based on just one data point. You need lots of them, you need them over time, and you have to make sense of them. Is this the thing that we intended to measure, or, as Rachel just said, are people gaming it because they know they're going to be held accountable to it?

Matthew Heusser (23:00):
I wouldn't go as far as to say it becomes useless. I would agree that there is a corrosive effect that happens with measurement systems when those are tied to rewards. I don't know that that's necessarily bad. If you're a freelance writer and you're measured by your income, you're probably going to work pretty hard to get more income, and you have to balance that long term with the quality of your work and that sort of thing. So some measures are better than others. But if you're counting something like the number of change controls that go through, or the number of story points, and those are entirely under the control of the team, and they can just break things down into more and more change controls, or the number of bugs without any context, I think it's reasonable to be wary of that. Absolutely. Now that we've sort of danced around quality a little bit, I still think we owe an answer to those questions, which is, one, has the pendulum swung a little too far, so that we just talk about testing and not enough about quality? And two, what exactly are we communicating? How are we going to measure and present things, and to whom, about quality, if we say that should be part of our role?

Chris Kenst (24:07):
I don't know that I have a good answer for this question, "Have we gone too far?" Personally, I do tend to talk a lot more about testing than I do about quality, and I think a part of that is just because I kind of have an idea of where quality is, and I can work backwards from that to talk about how my testing will impact how we deliver quality software. So I can use metrics, I can use data, I can use the things that I'm working on on a daily basis to hopefully help deliver that. And of course, we have monitoring and that kind of stuff, and we have our actual customers who complain. So there are ways to indirectly get an understanding of quality in our product. I do tend to focus just on what I do and what our team does to deliver quality. I guess part of that, too, is you don't have to just test the software. You can test the processes that help you deliver that software. You can test when you're in a story review meeting and you're estimating a story or a body of work that is planned to help deliver quality to your customers. That, I find, is a very good time to bring up quality questions, assumptions around what we think is going to work and how we're going to measure it, those kinds of things. I might talk about it too much. I might've gone too far, just personally. But I think it's just this kind of back-and-forth dance that we do, where we're constantly struggling to understand what quality is and what the impact we're going to have is. And it's a day-to-day, incremental thing, for sure.

Matthew Heusser (25:48):
Thank you, Chris. Michael?

Michael Larsen (25:51):
I think, in a lot of ways, we have overshot the bar. I don't want to say that the rhetoric is what causes the problem, but I will say that the rhetoric tends to get in the way. Can you measure quality and break it down into a perfect graph? I'm going to say not in a realistic way, but there are things that you can do, and you can do them effectively enough that you can say, "I have a good feeling that this is going to do what we need it to do." When you're able to talk at that level and say, "Look, here's what I can show you, here's what I can demonstrate, here's the world as I see it, and I'm not pulling anything back, I'm not hiding anything from you, take it or leave it", that's helpful. If we get all caught up in the rhetoric and we just bloviate about it? Nope. That's not helpful to anybody. It's not helpful to us. It's not helpful to our customers. It's not helpful to the people who have to make decisions. Shoot straight, or as straight as you can. That's how I put it.

Matthew Heusser (26:48):
Thank you, Michael. Rachel, has testing sort of gotten away from quality? And if we were to try to go back, what would that look like?

Rachel Kibler (26:59):
I like the way that Janet Gregory and Lisa Crispin look at quality in their course, which, shameless plug, I'm now an instructor for: Quality for the Whole Team. I think the role of testing is to get involved at the very beginning and encourage the team to get quality going from the very beginning. The role of a tester is not just testing at the end, but thinking about quality throughout the whole process, and the way that we get the highest quality is to get involved at those very earliest meetings. I don't think that it's gone too far. I just think that the role of a tester is larger than just testing. It's like what Chris said, where we test the processes. We test the documentation. We test a lot more than just the product.

Matthew Heusser (27:57):
Oh, totally. My hope with the podcast today was to give you, the audience, a couple of ideas, maybe one you could actually use, that could really help you to swing things the other way: to have a meaningful conversation about quality that actually impacts outcomes. So now we'll do the final minute, where you get to share final thoughts and anything you're working on that is new and exciting that you want to tell the world about. And I see Michael's got his hand raised. So, Michael, ideas? Final thought?

Michael Larsen (28:30):
My whole point in this: I think that, yes, testers can have an effect on overall quality. We can be part of that conversation. We need to be part of that conversation. What is the broader implication of what we are doing? Testing is a component of quality. It's not all of it. As to what I am working on, I'm kind of excited about this, in the sense that, though it's not a formal position, I'm actively getting involved with helping the Pacific Northwest Software Quality Conference put on their 2021 event. What that event is going to look like, we're still discussing, but this time around I have my hands in it. So I'm excited to see what happens.

Matthew Heusser (29:13):
Thanks, Michael. Rachel? Final thoughts?

Rachel Kibler (29:16):
I think testing and quality are still very tightly coupled. I think testing helps us tell a compelling story about quality, which is what I see my role as. I've really enjoyed this conversation. It's been great to hear other people's thoughts about quality and testing and metrics. This has been fantastic. As for what I'm up to: honestly, I'm loving my job at 1-800-CONTACTS. I'm loving working with Chris and the rest of the AST board. I'm doing some writing. I'm hoping to do some speaking later this year, but I can't really talk about that yet. I've recently become a trainer for the Agile Testing Fellowship; if you're interested in a class from me, you can go to their website and sign up for notifications. I'm just having a great time doing what I do. I also took the Black Box Software Testing Foundations course. That was amazing. Even as a decently experienced tester, it filled in some gaps and was wonderful. So a plug for AST's BBST classes, too.

Matthew Heusser (30:25):
Thank you, Rachel. Chris, you still get to go for final thoughts.

Chris Kenst (30:31):
Yeah. So, final thoughts. There's this huge discussion. It's challenging talking about quality because it's just this abstract thing. We've described a bunch of different aspects that we all look at and take into account, which again just goes to the complexity around it. We do know that testing and quality are related. They impact each other; one pushes on the other, kind of thing. So there's a lot to dive into and take away from that. What am I working on? My company is growing a lot. About a week ago, we announced we raised our Series B. So we're hiring, probably doubling our engineering staff. If you follow me on LinkedIn, you will see that I'm posting jobs. So that's always a good thing, especially during a pandemic, and I'm basically trying to build out a test team this year in 2021, which is going to be a lot of fun, but it's a lot of work. Like Rachel said, we're having a good time working with the Association for Software Testing to continue to plan a conference for 2021, if we can get there, with a virtual conference; definitely continuing to work on updating our BBST courses, trying to better understand what it is that the global testing community needs, and helping AST position itself to deliver and be there for the testing community. Outside of that, a little bit of writing on my blog. I think that's basically the spare time; the rest of the time I spend with my family, just trying to stay indoors and safe.

Matthew Heusser (32:06):
Cool. Thanks. And with that, I think we'll call it a wrap. Thanks to everybody for being on the show today. I think it was really helpful.

Michael Larsen (32:15):
All right. Thanks for having us.

Chris Kenst (32:16):
Thanks Matt. Thanks everyone.

Rachel Kibler (32:19):
Thank you. It’s been a pleasure.