December 2, 2020
In this day and age, everything is becoming code. It used to be just the applications, but with the proliferation of software to create software (think of Jenkins pipelines) or the practice of Terraforming servers in the cloud (Terraform being just one way to do it), there is ever more code being written to set up and deploy software. Who tests all that, and how does one get to play in these extensive sandboxes?
Perze Ababa and Gwen Iarussi join Matthew Heusser and Michael Larsen to talk about these areas of recent and new development, the skills needed to be successful, and ways to make sure that you get to be in "The Room Where it Happens", to borrow from Hamilton.
Michael Larsen (00:00):
Hello everybody. And welcome to The Testing Show. Glad you could join us. We have a returning guest and I believe we have a first timer here, correct me if I’m wrong. But Gwen Iarussi, I believe this is your first time on the show. If so, welcome. And if it’s not welcome back!
Gwen Iarussi (00:17):
Thank you. It is my second time. It’s been a while.
Michael Larsen (00:20):
Second time. Okay. I’m losing track. We’re nearing the hundredth show. I think this is actually episode 90. We’re only 10 episodes away from a hundred. I’m kind of excited about that. And who was here the last time we were on… Now, this is feeling like old times. He was a regular for the show. Maybe he’s coming back for more. Mr. Perze Ababa, how are you doing?
Perze Ababa (00:39):
Hey everyone! I would like to say that it is also my second time this month. Thank you for having me.
Michael Larsen (00:49):
And, of course, well, you know me. I’m Michael Larsen. I’m the one that fiddles the bits, moves the sliders and stuff and produces the show. On that end, let’s get back to our regular moderator who usually runs this. That’s Matt Heusser. Matt, please take it.
Matthew Heusser (01:02):
Thanks, Michael. So Perze and Gwen both have interesting roles. Perze is the engineering manager for a Fortune 500 product company and Gwen is the director of enterprise quality for U-S-A-N-A Health sciences. Did I get that right?
Gwen Iarussi (01:22):
That is correct. USANA is how it’s pronounced.
Matthew Heusser (01:25):
Oh, okay. It’s not an abbreviation. So USANA Health Sciences, and that’s enterprise quality, meaning software and the IT side of the enterprise, right?
Matthew Heusser (01:35):
And there’s also a product company, so you’re not checking the consistency of the Vita-Shakes to make sure there’s the right amount of vitamin B in them.
Gwen Iarussi (01:43):
Right! So, yeah, that’s one of the first things that I had to change when I joined the company because they had two QA groups. So I changed us to Quality Engineering instead of QA, because they had the manufacturing side. Now we’re clear [laughter].
Matthew Heusser (01:58):
Fantastic! And what we’ve been talking about in the community is, we’ve got this whole dev ops thing with creating software to create software. Infrastructure as Code is one term for it that’s a little bit more dated, but a lot of people are using user interfaces to create this stuff with infrastructure as code. If it is code, then shouldn’t it be tested, and how do we test it? And that’s the topic for today’s show.
Gwen Iarussi (02:26):
Yeah. I mean, I think it includes software because everything, including network, is becoming code. So it includes all of that. And I think there are some hardware aspects to it as well.
Matthew Heusser (02:37):
Sure. So let’s start out with that. I see two different sorts of things when people talk about this kind of dev ops work. The first one is a simple Jenkins pipeline, which, as I’m sure Michael could tell us, is not that simple, but for the most part, these are linear. Get the code from Git; if there’s an error, log it, file it, report it. Red bar. Do the build. If the build fails to compile, log it, email it. Red bar. Run the unit tests. If the unit tests fail, maybe you continue the process, because unit tests are just unit tests. Then build a virtual environment. All these things have the same nature: they are command line, check the output. If it’s bad, do the thing. Otherwise go to the next step in the pipeline. I think a lot of CI/CD pipelines run that way. Do I have the right impression so far, Michael?
Michael Larsen (03:29):
That’s pretty close to accurate. Much of what you’re going to do… it depends entirely upon how you’ve got your Jenkins environment set up. There are interesting arrangements that you can make. If you’re a small organization and you have a local server and everything’s self-contained, pipelines aren’t that hard. There’s not that much to it. If you have a whole lot more tests or a very complex code base or something that you have to do a lot of stuff for, then yeah, you get into some interesting arrangements. Let’s just put it that way. Beyond the pipeline that I still maintain, I’m looking at some newer stuff right now using newer tools like Blue Ocean. Perze and I were joking about this the last time we were together, when I mentioned dealing with Blue Ocean and he said, “Oh yeah, that’s just Jenkins with everything moved so you don’t know where anything is anymore”. [laughter] And that’s not far from the truth. The point being that Blue Ocean is much better at constructing pipelines. It’s one of the things it allows you to do. But yeah, it can get very complex, to where you’re literally spinning out multiple environments. You’ve got a controller, and then you’ve got servers that you can spin up to be followers. And those followers can then be spun up with as many Docker containers as you want, to create the environments that you need and then load the software and run your tests in parallel. Something that might take you a whole day on a single machine running a regular pipeline, you could do in 20 minutes. Again, the more complexity you put into it, the weirder it can get, but generally speaking, yes, you are correct. You put things in order, you schedule them, and as long as you don’t get a red flag, you keep going until you reach your end goal.
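Stripped of the Jenkins specifics, the linear flow described above (run a step, check for a red bar, halt or continue) might be sketched like this. The stage names and commands are purely illustrative, not from any real pipeline, and the runner is injectable so the flow itself can be exercised:

```python
# Minimal sketch of a linear CI pipeline: run each stage in order,
# record the result, and stop ("red bar") on the first failure.
import subprocess

STAGES = [
    ("checkout", ["git", "pull"]),
    ("build",    ["make", "all"]),
    ("unit",     ["make", "test"]),
]

def run_pipeline(stages, runner=subprocess.run):
    """Run stages in order; return (ok, log of (name, returncode))."""
    log = []
    for name, cmd in stages:
        result = runner(cmd)
        log.append((name, result.returncode))
        if result.returncode != 0:
            return False, log  # red bar: report and halt
    return True, log
```

A real pipeline would add notification and artifact steps, but the shape stays the same: a list of commands and a single loop that bails on a non-zero exit code.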
Matthew Heusser (05:17):
And I think a lot of the complexity of this is very similar to what the operations crew used to do 15 years ago. They’d make a batch script or something, they would schedule it, and it would just run. But then we have, I think, the next step, where we say we want to enable self-service so that our developers or our testers own test environments. So now the pipeline might kick out a Docker image. And I want to be able to grab that Docker image, build a test environment, goof around with it in our private cloud, then tear it down. And to do that, we’re going to build some software, and there are going to be some user interfaces, and you’re going to enter some values. And all of a sudden it’s going to be used by 10, 15, 20 times as many people, who may accidentally step on each other. And with that second issue, we want to talk about performance and security, internationalization, maybe accessibility, all of the things that need to be tested. And I don’t think that second conversation is happening often enough in enough places.
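The self-service loop Matt sketches (pull the image the pipeline produced, stand up a throwaway environment, tear it down) could look roughly like this. The image name, port, and container name are illustrative assumptions, and the commands go through an injectable runner so the flow can be exercised without Docker installed:

```python
# A sketch of a disposable test environment built from a pipeline's
# Docker image: pull it, run a throwaway container, tear it down.
import subprocess

def with_test_environment(image, port=8080, run=subprocess.run):
    """Start and stop a disposable container; return the commands issued."""
    issued = []
    def do(cmd):
        issued.append(cmd)
        run(cmd)
    do(["docker", "pull", image])
    do(["docker", "run", "--rm", "-d", "--name", "testenv",
        "-p", f"{port}:80", image])
    # ... exploratory testing against localhost:{port} happens here ...
    do(["docker", "stop", "testenv"])  # --rm removes the container on stop
    return issued
```

The interesting design question in real self-service tooling is exactly the one raised above: what happens when 20 people run this at once and all pick the same container name and port.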
Gwen Iarussi (06:20):
I would agree with that. And that’s something that we as an enterprise at USANA are just starting to tackle. Getting some of that localized upstream testing. I think that’s one of the things modern engineering has taught me is that those roles between developer and tester or dev ops engineer, they’re all getting very gray. You get a lot more collaboration happening and a lot more testing by developers and software engineers happening as well as the tester. So it becomes kind of a team effort. And that’s kind of how we’re approaching it to get us up to speed on this because we’re behind the times in terms of our ability to actually have a localized environment and get a Docker image that can be just the services that are needed for functional testing of a particular area of the code. I think it’s a group effort.
Perze Ababa (07:07):
Yeah. I think it does belong to a very specific domain, because when you’re looking at, let’s say, product functionality, for example, there are some things that you tend to ignore or just put in the background, especially with what you were saying, Matt, regarding performance. When you’re dealing with infrastructure as code, what is the time to spin up until the server is actually ready to receive requests? That is something that’s worth looking into. So that idea of self-service becomes feasible at that point. Prior to having services on the cloud, when we used to have physical infrastructure, provisioning a physical server would take weeks. It’s not like we could just easily go into the server, install whatever we want, and hand it off. But in this day and age, a pipeline manifest or a YAML configuration file can give you some sense of consistency, so that everyone else can use the same settings.
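Perze’s “time to spin up until the server is ready” question can be measured with a simple readiness poll. The probe here is an assumption standing in for whatever health check the real service exposes (an HTTP health endpoint, a port check, and so on), and the clock and sleep are injectable so the logic can be tested:

```python
# A readiness poll: call probe() until it reports ready, and return
# how long that took. Raises if the service never comes up.
import time

def wait_until_ready(probe, timeout=60.0, interval=0.5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll probe() until True; return seconds elapsed before ready."""
    start = clock()
    while clock() - start < timeout:
        if probe():
            return clock() - start
        sleep(interval)
    raise TimeoutError(f"service not ready within {timeout:.1f}s")
```

Tracking that returned number across builds is one way to turn “how long until the environment is usable” into a testable, trendable metric instead of a guess.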
Matthew Heusser (08:06):
So we had a reasonably large project for a large auto manufacturer, not too long ago, that I was involved in. We actually created a tool which we called the constellation builder. We had to figure out where all the web services were and how they called each other, and create a massive map. If you were testing particular functionality, you could create all of the infrastructure that called the thing that called the thing that called the thing, the source system of record that actually found the answer for you, which would come back through all of the web services and produce your answer. And that might actually be five different systems, but we’d virtualize them and create a Docker image, just to really test that web service. We didn’t have to recreate the rest of the data center. That was not a simple project, and there were no existing tools that we could find to help us do that. Is that the kind of thing you’re talking about, Gwen?
Gwen Iarussi (08:58):
It is. Obviously some of these newer tools make all of this a lot easier. The more we talk about this, the more I realize what a big topic of conversation this is. It’s a big scope, because when you’re dealing with an IT enterprise, like the one I’m dealing with at USANA, you’re dealing with software delivery; you’re dealing with production environments that our customers use to run their businesses; you’re dealing with retail interfaces to the world to purchase product, e-commerce interfaces. It’s a lot of different layers and a lot of different support, and all of it’s in scope. And I think that’s where the challenge comes. You have to decide what’s worth testing and how to test it, and it’s different layers of testing. You’ve got the software environments to help the software teams actually deliver software, and then you have the outcome of that software delivery. It becomes a very big problem to solve. And I would say, yes, obviously 15 years ago, 20 years ago, it wasn’t even viable.
Matthew Heusser (09:56):
Yeah, I totally agree. Which brings me back to: who is doing the testing, and what are they doing?
Gwen Iarussi (10:03):
Yeah, absolutely. And again, it does come back to… when I say it’s collaborative, I think the topic of testing becomes a much broader scope of effort by a lot more teams. In a lot of cases, testing ends up being done by end users, especially when you’re dealing with internal customers.
Perze Ababa (10:22):
A lot of this is really domain specific. I do think, to a certain degree, there are not many people who can test this infrastructure effectively. You have to have a deeper understanding of what we are actually trying to do, how we’re doing it, as well as how we can evaluate that we’ve done pretty well, because this definitely goes beyond just functional testing. If the question is, “Do I have an environment that’s ready for testing now?”, that’s the simplest question you can ask, but underneath it there are multiple layers. For example, can this actually handle 20% of the concurrency rates that we expect in production? Or what happens when we shift databases on the fly? Is that something we can actually do with this approach? There are a lot of these flows that you have to look at from an infrastructure perspective, which leads me to a more specialized skill set to test this.
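The “20% of expected production concurrency” check Perze mentions could be roughed out like this. The request function and the status-code convention are stand-ins, not a real load-testing tool; in practice you would reach for something purpose-built, but the shape of the check is simple:

```python
# A rough concurrency smoke test: fire a fraction of the expected
# production request rate at the service and collect any failures.
from concurrent.futures import ThreadPoolExecutor

def concurrency_check(request, expected_peak, fraction=0.2, workers=8):
    """Send fraction * expected_peak concurrent requests; return failures."""
    n = max(1, int(expected_peak * fraction))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda _: request(), range(n)))
    return [r for r in results if r != 200]  # treat non-200 as failure
```

An empty return value means the environment survived the dialed-down load; anything else is the list of failing responses to investigate.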
Gwen Iarussi (11:20):
I agree. And I think automation plays a big role in this as well. It ends up being the most efficient means of validating a lot of this stuff: service availability, performance capability, fail-over, disaster recovery. Any of that stuff is going to become part of that story.
Matthew Heusser (11:37):
So let’s peel it backwards. You start with a web service and you’re testing it with SoapUI or something like that. But then you have to do the setup of the data, and it turns out that that is in a database somewhere. So then you have to go actually manipulate the database, with maybe some SQL, to blow out the data, insert the data that you want, and then you run your tests. I have to secure-shell onto that box, and then I run the SQL command, and then it works; I run my web services tests and they all pass. And that’s a simple case where it’s not calling anything else. It’s one web service. I might want to test that web service on a different branch, which might give it a different URL. And then I might want the seeding of the test data, the running of the tests, the getting of the results, and maybe, if everything passes, the deleting of the virtual environment, all to happen within the CI pipeline from the command line. And we’re talking about someone to build that kind of testing infrastructure?
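Here is a minimal sketch of the seed-test-teardown cycle Matt walks through, with an in-memory SQLite database standing in for the real backend and the SSH step. The table, rows, and lookup function are illustrative assumptions; the point is the lifecycle, not the schema:

```python
# Seed, test, tear down: the smallest version of the cycle described
# above, using SQLite as a stand-in for the real database.
import sqlite3

def seed(conn):
    # "blow out the data, insert the data that you want"
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])

def lookup_account(conn, account_id):
    # stand-in for the web service call under test
    row = conn.execute("SELECT name FROM accounts WHERE id = ?",
                       (account_id,)).fetchone()
    return row[0] if row else None

def run_test_cycle():
    conn = sqlite3.connect(":memory:")    # "build the environment"
    try:
        seed(conn)
        assert lookup_account(conn, 1) == "alice"
        assert lookup_account(conn, 99) is None
        return True
    finally:
        conn.close()                      # "delete the virtual environment"
```

Because everything is a function call, this is exactly the sort of thing a CI pipeline stage can run from the command line and turn into a red or green bar.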
Gwen Iarussi (12:42):
Absolutely. That’s the thing when you’re dealing with an enterprise environment: you’re dealing with that times N. Different users with different needs. And I think one area that’s really critical is defining the problem that you want to solve, and making sure there’s clarity around what the end goal is for whatever infrastructure project you’re taking on, because it can grow exponentially very quickly. Then you just get nowhere. And that, I think, is one of the major challenges that I certainly face in the environment that I’m in. You get these projects that start with one idea, one problem, and then they become this massive multi-year effort that never gains any traction because the problem just becomes too big. You just keep pushing it off, so it never gets done.
Matthew Heusser (13:32):
Yes. And it’s very difficult. So is this a person who runs around building test infrastructure that less technical people can then reuse, like a test architect who creates all the things? So that for someone else, it just feels like they’re going into a tool like SoapUI and writing tests, and they click around and it all works?
Gwen Iarussi (13:52):
Absolutely. I think in some cases, yes. I have a pretty diverse team. Some of them are very technical and understand infrastructure. They’ve been in the business for a very long time and they understand what that evolution looks like. They learned how to do networking, they learned how to be a sysadmin, and they understand Linux, and command lines, and bash scripts, and how all of these underlying pieces work together. And then I have individuals who understand just the software piece of it, just the code piece. The ones who are more experienced and more infrastructure focused, and kind of have that full-stack mentality, tend to be the most valuable when it comes to this type of testing; their experience drives a lot of where they focus their efforts.
Matthew Heusser (14:35):
Well, that’s really interesting. It takes a village. My experience is that if you just have the test architect who is brilliant and comes up with the brilliant thing, they get bored and move on and you can’t maintain it. And you lose some of the human interaction with the subject matter expert; you lose some of the funky edges that that person doesn’t even really know exist in your testing.
Gwen Iarussi (14:56):
That is definitely a downside. Again, it goes back to, it’s a different level of testing. It’s a different problem to solve. So I think that becomes less of an issue because it becomes about availability. Usability, yes, certainly comes into play, but it’s… you have to start at those fundamental levels. The more I think about it, it really goes back to the layer discussion. [laughter]. This is a multi-layer strategy.
Michael Larsen (15:20):
I wanted to definitely step in on that whole bit, because one of the things about infrastructure as code, interestingly enough, is that having this conversation reminds me of some of the things we’re dealing with right now with the group that I’m in. Recently, I guess it’s not that recent, it’s been within the last eight months, I moved off of the Socialtext project, which I had been working on for several years and had gotten intimately familiar with everything related to it. I accepted an opportunity to work with another group that didn’t have a dedicated test resource, so they could look at creating new and exciting things. As opposed to having a product, I’m used to working with an application; it’s got an interface and I interact with it. You can say these features work. It’s something you can wrap your head around and get involved with pretty easily. It’s not a difficult thing to do testing for. What I’m working on now is a team that is a specialist organization, in the sense that we are responsible for data transfers. That’s solely what we focus on. We make it possible so that if you want to pull in data from some other app, we do those data transformations. That’s a very different kind of testing, and it requires a very different level of infrastructure. I have to actually interact with interesting tools, and those tools have to work consistently. Our thing is that if somebody submitted data 10 years ago, to be able to transform that data, we have to follow those rules again. That’s what’s really critical. That repeatability, that reliability, that’s mission critical. And it’s wild. I haven’t worked on a mission-critical app with that level of scrutiny in a very long time. And it’s sort of fun. It’s a different model. It’s weird, but it’s cool. I hope that makes sense.
Gwen Iarussi (17:26):
It totally does. And that’s kind of what I was trying to pinpoint with that whole collaboration, because it isn’t just one person and it ends up being this thinking outside of the box and coming up with these really cool solutions to enable that testing that you need to do. And you have to start thinking about testing in a different way. It definitely presents an interesting challenge. It definitely changes the conversation around the limitations of testing and kind of where testers play a part in it. It speaks to the evolution that I have seen happening for quite a while now, and certainly seeing with the teams that I lead now. You’re talking about getting data from one end to another and making sure that that data is in a certain state and making sure that it’s usable and making sure that it’s accurate and all these different things, you take that to an entirely different level when you’re dealing with what you do with that data. That’s an entirely different area that you could delve into. And it becomes these little rabbit holes. And I think the challenge is, as a testing organization, you have to decide where to best invest your testing capabilities and where it makes the most sense to apply that discipline and to really focus efforts, to get the best outcome. And I think that’s what we struggle with and certainly a challenge we face as testing organizations every single day.
Matthew Heusser (18:50):
So, how can testers, or people with the testing mindset, get more involved in these sorts of projects? Perze, what can we say to be in “the room where it happens”?
Perze Ababa (19:02):
That’s a really good question. One of the programs that we have in the organization I’m working with right now is that we’re trying to employ a more in-sourcing approach for these horizontal services across the board. That includes infrastructure as code and dev ops. We have all of these developers who are involved in individual projects; they’re building a React app, for example. They take whatever learnings they have, use the common infrastructure tools that we have in our organization, and then build it for everybody to inspect, use, improve, and all that. And we have a common backlog that anybody can come in and add tickets to, and then we groom them. It really opens the way for everybody to participate. The democratization: being able to contribute your ideas, or, if someone else has an idea, to come in and improve on top of that. I think one of the things that really enables this is that we’ve opened it up to everyone. And on the sideline, we have this training program that introduces it: this is our infrastructure-as-code tool set; we’re going to show you what it does and what you can do for your own project. By doing that, we open it up to a lot of folks within our multiple development centers all over the world, and they come in and join. So it’s more like a self-healing mechanism that gives you the ability to put things out there. People will try to use it, and they can also improve it by contributing back into the system.
Gwen Iarussi (20:36):
I love that analogy, because you’re right. It really is, like so many testing challenges that you run across, a cultural discussion. And there are cases where it is unwelcome; I think there is a little bit of fear of including testers. With a lot of the teams that I’m working with now, we’ve had to coach getting engaged, really getting back to evangelizing and showing the value that this testing has to the outcomes, and to these teams who aren’t historically known for interacting with the testing team, really seeing the value that comes from it. These small, really high-value wins. And then that grows. That’s the one thing I’ve learned about engineers: the more value you can show and the more wins you have… you come in and you say, “Hey, we have this fantastic solution and here’s what it’s going to do for you.” And then you make it happen and they see the effects. Then it spreads like wildfire.
Matthew Heusser (21:35):
Yeah. I think, on the tool front, we could get in front of it and do what Michael has done and say, “This thing everyone’s scared of? How about I build it? I do the CI pipeline anyway. How about I monkey with it? I don’t know.” Or, “I’m just some incompetent tester, but I can play with it and become the Build Meister eventually.”
Michael Larsen (21:56):
Don’t sell yourself short. That actually works a lot of the time [laughter].
Matthew Heusser (22:00):
And that’s what I’m saying, especially for any “ity”, for performance testing or security, any “ity”. Like, we need to do security. Everybody’s scared of it. “I don’t know… oh, I’ll just do some research on this ZAP tool, the Zed Attack Proxy.” And the next thing you know, you’re a security tester. I’ve seen it happen.
Gwen Iarussi (22:17):
I think from a leadership perspective. I know certainly one of my goals working with these groups is really to build a culture that supports that, supports kind of the playing around and developing these new career paths almost because you are starting to see these specializations form where people see a problem, and then they go, “Hey, you know what? I think I can fix that.” And then suddenly it’s a formal position in the company, right? [laughter] So, I mean, that’s kind of how dev ops came about.
Matthew Heusser (22:47):
Yeah, it was supposed to be developers and testers talking together, but somehow that got confused. We all have to respond to the rewards and the incentives in the system that exists for us. So I think a lot of companies might see this as a dev ops role, and then there’s building the tools to enable the testing, and the doing of the testing, which I think might be an interesting way to slice it. If you’ve got the skills to build the tools, that’s great; we’ve got to figure out who’s going to manage that person and that group. Mostly what I see is that the role falls into the dev ops group, and then there’s a different tester group that are the customers of that. That’s mostly what I’ve seen. If we’re actually customers and we actually can do things like express requirements and negotiate over what’s built, that’s great. The problem I see is when dev ops as a group goes off and does some cool stuff that may be disconnected from what we actually need to ship software.
Gwen Iarussi (23:46):
Oh, absolutely. And that does happen. I think that’s a reality that we’ve all probably experienced [laughter]
Michael Larsen (23:52):
Yeah. That’s going to be something that… I don’t know the best way around this. To borrow that whole phrase and do a definite Hamilton drop again: how do you get to be invited “in the room where it happens”? Well, part of it really does come down to this: you have to basically show that you want to be there and you want to be part of that conversation. It’s very common, and I remember this because I went through the exact same thing. I got hired at Socialtext because the head of quality assurance there had met me, had interacted with me, liked what I was about, and said, “I could definitely use you to build this team.” At first the whole goal was, hey, I’m going to be responsible for automation. That was going to be my initial thing, and that’s what I went in to do. The irony of the situation was that there were a whole lot of other things they needed to work on, and automation really wasn’t the number one priority. I mean, they already had a good system there; I learned how to work with it to an extent. But the main point was that when I got there and started working, there was a definite need that nobody else was handling, that nobody else knew how to handle. I didn’t really know how to handle it either, but we had a really big accessibility need, because a very large customer wanted to buy a lot of software, and they ran an audit, and the audit was rather lengthy on things that needed to be done. And they’re like, “Well, how are we going to make sure that we don’t find ourselves in this situation again? We need to have somebody who is an expert in accessibility.” And Ken Pier, the former QA director, just looked at me and, in his very gruff and straight-up manner, said, “It’s okay. We’ve got an expert. And if he’s not an expert now, he’s going to be an expert very quickly. Right?” Yep. Yep. That’s right. [laughter]
Gwen Iarussi (25:49):
I think you point to something that’s important with this stuff as a tester. One, it’s about showing the initiative: not only come to the table, but come to the table with the level of competency and the knowledge. And that takes effort. It takes work. If you’re going to show up and take it on, then you need to own it. You’ve got to bring yourself up, right? You’ve got to raise yourself up to the level of competency where you can add that value. And that’s not a nine-to-five thing. That’s something those of us who have been testing enthusiasts for a very long time know: you’ve got to be passionate about this stuff. Sometimes it takes reading at night and learning about what solutions are available and how to implement them, and then bringing that to the table. You talk about testers who come to the table and want to be part of the conversation. Well, that’s one piece of it. But being a part of that conversation means you’re bringing the value. And I think that’s where it gets really important: if you’re not an expert in it, then become the expert.
Perze Ababa (26:49):
I’ll piggyback on that a lot. I mean, showing up is a lot of what it takes to be in “the room where it happens”, to continue with the lines that we’ve been reading off Hamilton. And from what Gwen was saying, the moment you’re in the room, you have to make yourself visible. You have to make yourself relevant in that room, because everybody has a perspective that they can bring. That’s the beauty of the career that we have; we value diversity more than anything else. There’s just not one way to solve a problem. There’s no one solution that solves any problem for any given context. To continue to be in that room, be mindful of what the context is, and know how to test outside of that. That’s the key to your relevance and how you can continue to reinvent yourself.
Matthew Heusser (27:40):
I think there are only two pieces on top of that. Thank you, Perze. I’d add: try to be one step ahead of the current rumor mill. People in my organization said, “Oh no, we’re going to have to do CMM.” I didn’t even know what that was, so I Googled it and got to study it in grad school. Anyway, I studied CMM, so when CMMI came out, I started researching it, and I knew more than the people who were blustering. Same thing with Six Sigma. And you can do the same thing with web services, virtualization, Docker; all you have to be is one step ahead, and they will realize that you’ve given this more thought than they have. And then there’s the second piece, which I think is out of scope for today: how do you present that in a way that doesn’t make people think you are a show-off trying to make them look incompetent? That’s a nice way to get kicked out of the room, which actually did happen to me once.
Gwen Iarussi (28:37):
I think that would be a fantastic show all on its own: building that competency, but also building the competency around how to present it in a way that’s collaborative and doesn’t shut people down. Because you’re right.
Michael Larsen (28:52):
I think with that, we’ve got an idea for another show. But that’s exactly it; that’s another show. This is a good point for us to bring this topic to a close, because again, we could riff on this stuff all day long, but I realize people have day jobs and they probably need to get back to them. So I want to respect everybody’s time. And this is where we tend to get to our “Where can we find out more about you?” So Gwen, let’s say people want to know more about you. How can they learn about you? What’s up?
Gwen Iarussi (29:23):
You know, I’m mostly on LinkedIn; that’s where I spend a lot of my time interacting. Yeah, I’m out there on LinkedIn. Look me up.
Michael Larsen (29:30):
And Perze, you know the drill, right?
Perze Ababa (29:33):
So I’m not going anywhere else, but you can say hi via Twitter. While I’m still trying to explore how to effectively use LinkedIn, I’ll probably say hi to Gwen on there one of these days. But I’m on Twitter, and you know where to find me.
Michael Larsen (29:49):
Well, the same on this end. Again, we are reaching toward the end of this calendar year, and usually the months of November and December are kind of quiet, as is January of the following year. So not much on my end, but yeah, if you want to stop by and say hi to me on Twitter, you can do the same. And again, these are the first things you see in the show notes on our site, so please stop by and definitely say hi; we’re glad to do it. Matt, how about you?
Matthew Heusser (30:13):
So, I find Twitter is getting so political these days that it’s not a productive way of sharing information about testing for me right now. If other people love it, that’s great, and I’ll be back soon, I’m sure, but I’m on LinkedIn a lot. And I’ve been looking into an idea that I’m calling adaptive testing, which is a modern test tool for someone who maybe doesn’t write code, but could; someone with an analytical mindset, where you don’t have to write code, but it’s not some of the brittle record-and-playback stuff. We’ve been working on this for 25 years now, I think, since record and playback for Windows, and some of the tools that are coming out are starting to get better. So that’s a big area of research for me right now, along with coverage. If you want to talk about those things, I’m around.
Michael Larsen (31:02):
Fantastic. All right. Well with that, I think it’s time to bring this episode to a close. Thank you everybody for listening. We look forward to coming together and being with you again in the very near future. Take care everybody.