December 19, 2019
Agile Testing has now been around in some form or another for two decades, yet it seems that what people are calling Agile Testing and what Agile Testing actually is are still two different things. Why is there such a gap in both understanding and practice? Matt Heusser and Michael Larsen welcome Lisa Crispin, Elle Gee and Jamie Phillips to discuss exactly that. In the process, we get into how Agile is practiced in both small teams and in larger organizations, where it is practiced well, and some of the common pitfalls even the best of Agile organizations still face.
- Patterns for Splitting User Stories
- How to write clean code? Lessons learned from “The Clean Code”
- Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren PhD (Author), Jez Humble (Author), Gene Kim (Author)
- Mixup Testing
- Agile Testing Condensed (Leanpub)
- Agile Testing Condensed (Amazon)
- A Whole Team Approach To Testing In Continuous Delivery
- European Testing Conference: Janet Gregory & Lisa Crispin – Exploring Features and Stories – Help Your Team Build Shared Understanding
- Testing In DevOps
Michael Larsen: Hello and welcome to The Testing Show for December 2019. Hey, we’ve made it to the end of the year. That’s either fantastic or overwhelming, but that’s normal for this time of year, so let’s just go with it. I’m Michael Larsen. I’m the one who cuts and produces the show. With that, let’s go ahead and introduce all the folks that you are used to hearing from: Mr. Matt Heusser.
Matthew Heusser: Hey, welcome back. Thanks for listening.
Michael Larsen: Also a regular contributor, now, we’d like to welcome Ms. Elle Gee.
Elle Gee: Hey everyone, Merry Christmas or happy holidays, whatever you celebrate.
Michael Larsen: And we have a returning guest; it’s been a while since she’s been on our show. Lisa Crispin, welcome back.
Lisa Crispin: Thanks. It’s so nice to be here. I’m just so honored that you invited me yet again. It’s awesome.
Michael Larsen: Always! And we have a new voice that we are introducing to the podcast for the first time and that is Jamie Phillips. Jamie, welcome to the show.
Jamie Phillips: Hey, thanks for having me. Glad to be here.
Michael Larsen: And as everybody knows, Matt is designated master of ceremonies here. So I’m going to go and turn the time over to him and let’s get this show on the road.
Matthew Heusser: Hey, thanks, man. So if you’re a long-time listener, or even, you know, heavily involved in the community, you probably know Lisa; Michael and I, and Elle, we’ve been here a lot of times. Jamie’s new, so I’m really interested in what brings you to the show. We’re going to talk about Agile testing, and you’re more on the Agile side than the testing side, so I’d like to hear where you’re coming from before we dive in. Tell us a little bit about yourself, Jamie.
Jamie Phillips: Sounds good. Well, I’ve been delivering software for about 20 years. I went to college to be a programmer, but figured a good place to learn the ins and outs of programming was to start in QA. I really didn’t even know what QA was. Long story short, I fell in love with quality assurance: seeing the overall big picture of how systems would work, breaking things, and then also making sure things work correctly. So I just never left QA, and spent a lot of my time doing automated testing way back when the first test tools came out. I came to use my love for programming to fall into automation. My big thing was I still kept that testing mindset. It wasn’t just about writing code to write code; it was writing code to make sure the tests work. So I did that for a number of years, and I really enjoy testing, but I was kind of getting burnt out being a Test Lead, managing different projects, and I got involved with Agile probably about seven years ago. Everybody talks about an Agile mindset. Well, it took me about a year and a half to really embrace that, to really say: hey, we don’t have to document everything. We don’t have to write extensive test cases. We don’t have to write extensive requirements to actually get something done. I really love the smaller chunks of work, the collaboration of teams, the communication. Other things that I’ve seen through my career that were shortcomings of waterfall, Agile seemed to have an answer for. So I spent probably the next three years of my career going to Agile meetups, going to conferences, reading every book I could get my hands on, got my initial certifications, and moved into a Scrum Master role. That’s my current role; I’m a Scrum Master on a mobile app rollout.
A QA background really helps me when I’m working with Agile teams, and I’ve also been a developer, so I see all the different roles on the team and I can have empathy for all those roles. Even though I’m a Scrum Master, I’m a big proponent that quality is supposed to be built into Agile; you’ve got to come at it from more of a QA mindset. That leads to, within my company, helping a lot of the testers as they adjust to being testers on an Agile team.
Matthew Heusser: I was just talking to Lisa’s coauthor on a lot of things, Janet Gregory, this morning about the difference in what Agile even means. Twenty years ago, fifteen years ago, the people doing Agile, in air quotes, were doing Extreme Programming. We were a bunch of weirdos off in a corner doing a thing that was strange, that nobody understood, that was risky: only the earliest innovators, and then the early adopters. Then it hit the early mainstream, and now the late mainstream is huge. It’s like 70% of software development. I think Agile has hit that, and in that time I think what it means to be Agile has changed. So for the companies that are doing it today and using this language of product owners and sprints and stories and that sort of thing, Scrum kind of won. Jamie, can you tell me a little bit about how you think that is different than a company doing plan-driven or waterfall? What makes it Agile on the ground, on a practical level? I know there’s a billion companies doing it all differently, but to the extent that it’s possible to generalize, what do you think that means?
Jamie Phillips: So it’s been my experience that a lot of companies are doing the ceremonies, but they really still don’t understand what it means to be Agile. They say, we’re going to be Agile, and to them that means they’re going to implement Scrum and have daily stand-ups. Everybody knows what the ceremonies are. So if we put all those ceremonies on the calendar and we do sprints, a lot of companies claim that they’re Agile. And to me, I think that’s the thing: sure, 70% of the companies are doing those ceremonies and kind of following the playbook, but I think they kind of missed the boat on what the intent of the Agile Manifesto was. On my team specifically, the biggest hurdle we were facing was communication; product and IT were not on the same page. And when things go sideways over a year or a year-and-a-half period of time, people lose trust with each other. The product team didn’t trust the devs, the devs didn’t trust the product team. It all comes back to communication. It sounds simple, but when you try to sell that to upper management, saying, hey, you guys aren’t communicating, you’re not on the same page, well, that’s not something that shows up on a spreadsheet, and it’s really hard to get them to understand it. To me, that still is the biggest hurdle: to have a company actually adopt and embrace the things that make up Agile. The communication, the collaboration, the trust, the transparency: they still don’t want to do those things. It takes some good leadership from the Agile side to try to help them through those hurdles, if that makes sense.
Michael Larsen: Absolutely, makes a lot of sense in my world. So let me add a little bit here, just my perspective from the world that I work in. I work for, and I’m going to put it in air quotes, an “Agile organization”. Yeah, we’ve got the CI/CD pipeline, but at the same time, even though we say that, and even though we do that, it does seem that no matter how hard we try, development still needs to be done, and work gets pushed, and then we test. So in many ways I feel like we’re living in a Scrummerfall world more so than an actual Agile world. I mean, we’ve got all the ceremonies, and please understand, I’m not dissing my company; I think that we do a pretty good job. But it just seems like this is a natural thing that happens, in the sense that it takes a big amount of commitment to be able to say: hey, we’re going to test at the very beginning of stories, and we’re going to make sure that testing is involved at the earliest level, so that we’re not just in the process of, okay, here we are, we’re reaching the end of a sprint and now it’s time to get all the testing done in the two days before the sprint ends. Am I strange for feeling like that’s my reality, or is that normal? Is that what most organizations face?
Matthew Heusser: Can I take a whack at that question, Michael? I think what you’re saying is we have a cadence to release every two weeks. All of our stuff is intermingled, so we have to do some kind of traditional regression testing. And even though we have tooling and automation and magic buttons we can push, that process still needs to be managed. It takes a couple of days. Is that the state of the practice? Is that the state of art? Why are companies struggling to get past that? Is that okay? Are we falling behind? Those sorts of questions? Right?
Michael Larsen: Yeah, absolutely.
Matthew Heusser: I’d throw that one to Lisa to start because I think she has a pretty broad perspective on the industry at this point.
Lisa Crispin: I do see that; I think it’s pretty common. I think the root problem is teams don’t know how to slice the features they’re developing down to small enough slices. They’re not delivering small changes frequently; they’re delivering big changes, and even every two weeks is not really very frequent. I think the key for teams to get over the mini-waterfall process is to plan to do less, make your workload manageable, and spend the time to learn how to slice things down into little tiny pieces. A user story, in my experience, should not take more than a day or two to finish, including all the testing and all the test automation. They have to be that small. The other thing is teams have to get good at things like release feature toggles, or some means of working in these small little chunks and being ready to expose only the little part that they want, or only to the small number of customers, to get the feedback that they need, or to use it themselves to try it out in production. It’s one of those things that’s hard to learn, but once you’ve got the knack, you use it. There are frameworks to do this, like story mapping, impact mapping, example mapping, all these different ways; we have a lot of tools for it. Richard Lawrence has a really great story-splitting process. Until people learn to do that, they’re just going to keep struggling, because they’re doing too much at once.
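The release feature toggles Lisa mentions can be as simple as a percentage rollout keyed to the user. The sketch below is a hypothetical, minimal illustration of that idea; the class name and rollout scheme are invented here, and real teams usually reach for a config file or a dedicated flag service instead.

```python
import hashlib

class FeatureToggles:
    """Minimal sketch of release feature toggles with percentage rollout."""

    def __init__(self):
        self._rollouts = {}  # feature name -> rollout percentage, 0-100

    def set_rollout(self, feature, percent):
        self._rollouts[feature] = percent

    def is_enabled(self, feature, user_id):
        # Bucket each user deterministically (sha256, unlike Python's
        # salted hash()), so the same user always gets the same answer
        # for a given feature across processes and restarts.
        percent = self._rollouts.get(feature, 0)  # unknown features stay off
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
        bucket = digest[0] * 100 // 256  # maps byte 0..255 into bucket 0..99
        return bucket < percent
```

With something like this in place, a half-finished slice can ship dark (0%), go to a small audience for feedback, and only then be opened to everyone, which is exactly what lets tiny stories reach production safely.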
Matthew Heusser: I’d offer a different perspective and then throw it to Jamie. I think it’s a design problem, especially for legacy systems where a change to one place could break something else and you have no idea.
Lisa Crispin: Oh yeah.
Matthew Heusser: But I do agree, if you are practicing something like clean code and you can localize your changes and you know that you only impacted this subsystem, you can then test and deploy just that subsystem; otherwise you have to deploy the whole thing. So then there’s a whole problem where deploying is so hard and expensive, it doesn’t make economic sense for us to call operations and do the Hokey Pokey for this one change. So the problems are mixed together. That forces legacy systems to have a slower deploy cadence. For new systems, I think that what you’re recommending is the way to go.
Lisa Crispin: You can still be small. You can still do just a small change.
Matthew Heusser: Right, so there’s a legacy system and you make a small change, but it’s a big global class and you’re afraid that you broke something somewhere else, so now you have to retest everything. Retesting everything takes two days.
Lisa Crispin: I guess I would venture to say maybe while in the process of doing that you start rescuing your legacy code and start refactoring and putting in a little automation. I get what you’re saying, so maybe they need to be even tinier changes.
Matthew Heusser: Or, to be fair, if you had your automation in your CI pipeline and you could have a high degree of confidence you didn’t break anything big with that change, maybe you can deploy those small changes more frequently.
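The gate Matt is describing here can be sketched in a few lines: run the automated checks the CI pipeline provides, and deploy only when everything is green. The check names and the deploy callback below are hypothetical stand-ins for a real test runner and deploy step.

```python
def run_checks(checks):
    """Run each named check function; return the names of the ones that failed."""
    return [name for name, check in checks.items() if not check()]

def deploy_if_green(checks, deploy):
    """Deploy only when no check fails; return whether we deployed."""
    failures = run_checks(checks)
    if failures:
        print("Blocked by failing checks:", ", ".join(failures))
        return False
    deploy()
    return True
```

The point is the shape, not the code: the deploy decision becomes automatic and cheap, so small changes can flow out individually instead of being batched into a risky two-day regression window.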
Jamie Phillips: So one thing I think you just mentioned there that is really important on Agile teams is identifying those different areas. There are times when you can’t test everything. Identifying those risk areas... you know, risk-based testing has been around forever. On our successful Agile teams, it was a collaboration with the devs. When you have that trust factor, you can say: okay, we’re changing this area here; what are the key components that we need to test? Do we need to test everything, or can we narrow that down? So there’s a good spot there for collaboration, and you know exactly what you need to test. My client now kind of runs the gamut on how we test. There are some teams that have a sprint afterwards to test, which just makes me cringe; there’s one sprint for the development, and QA is actually a sprint behind. So one of the things that I’m working with them on, as Lisa mentioned, is there are ways to break things down smaller to get away from that. Other teams do what you mentioned earlier, where they spend eight days doing dev work and then the last two days of the sprint are for testing. Again, that’s not ideal, but it is an evolution. I’ve been on teams that started out with the worst-case scenario, and then as you build CI/CD and you lay down that automation framework... I think we all can agree that automation is the key. Once you have that framework in there, every time you check in your code, you can automatically run those scripts for regression. But that takes a lot of time, a lot of money, and you have to get that commitment from the client to do it as well. The other thing is, even with our manual testers, they try to stay almost one sprint ahead. They started looking at it this way: if we keep our backlog groomed pretty well and we plan out sprints ahead, our testers can go ahead and start looking at things from the backlog for the next sprint, or the sprint after, to start writing out as much of a test case as they can.
Even if the story does not get fully groomed, they get themselves familiar with what they’re going to test, to try to keep up and do testing within a sprint. That is the biggest challenge. Everybody thinks, you know, Agile creates software faster and we can put things out there, but most companies often forget about that testing piece and how to integrate it. We mentioned earlier things like service-based testing, or testing at the end of the process. That is a big challenge, and there’s no magic wand, but I think that’s something good for consultants and people who’ve had that experience, to try to help companies solve that problem.
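The narrowing-down Jamie describes, deciding with the devs which areas a change touches and testing only those, can be modeled as a simple coverage map. This is a hedged sketch; the area and suite names below are invented for illustration, and real change-based test selection is usually derived from code coverage data.

```python
# Map each changed area of the system to the suites that cover it.
COVERAGE_MAP = {
    "checkout": {"test_checkout", "test_payments", "test_cart"},
    "search":   {"test_search", "test_indexing"},
    "profile":  {"test_profile"},
}

# The full regression run: the union of every suite we know about.
ALL_SUITES = set().union(*COVERAGE_MAP.values())

def suites_to_run(changed_areas):
    """Return the suites covering the changed areas.

    An area we have no coverage data for falls back to the full
    regression run, the safe default.
    """
    selected = set()
    for area in changed_areas:
        if area not in COVERAGE_MAP:
            return ALL_SUITES
        selected |= COVERAGE_MAP[area]
    return selected
```

The design choice worth noting is the fallback: when the team does not know what a change touches, the safe answer is still "test everything", which is exactly why the dev/tester conversation about impact is what makes the narrowing possible.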
Matthew Heusser: There may not be a magic wand, but I think most organizations can do significantly better if they can overcome their resistance to change and start looking outside for solutions.
Elle Gee: I also think, coming from an outside view, one of the biggest challenges in that move from Scrummerfall or mini-waterfall to Agile is managing the expectations of the client, or a stakeholder, or the wider audience. People are so used to being promised a lot that when you start telling your moneymakers and your decision makers that they’re only going to get this much in a timeframe, there’s a natural resistance. It’s a big change from when you used to promise that in three months you’d get yay amount, and then not deliver it to them. Encourage your stakeholders to understand: let’s deliver this small chunk successfully from day one, and build so that you can deploy at different points out to the public, or to your audience, with less risk. I think if we can nail that managing of expectations, it will be so much easier for Scrum teams to be successful and to really move into proper Scrum, or proper Agile.
Matthew Heusser: I think that’s huge. Struggled with that for decades now and the only solution I’ve found is to change companies, frankly, because you start saying the expectation is that we’re going to have this in three months and we’re going to have exactly what we specify in three months and we can change our minds at any point and we’re still gonna meet that deadline. That’s like magical. Why would I ever go to two weeks and the answer is because I’ve been here three years and we’ve never ever hit those three month deadlines once and we’re always lying to you. Let’s keep doing that because it makes us feel good for 89 days out of 90 you’re talking about Agile testing a little bit comparing it to waterfall, but what I found is it really gets complex when you start talking about multiple teams on one program. How do you see that changing the mix?
Elle Gee: It’s a little bit crazy. We work with a lot of organizations with, you might have within a larger scale organization one IT team who is actually working to a pure Agile process, but there might be eight other in training projects around being run by different program managers, different design and development teams, and they’re all operating on a variety of different processes. Everything from “let’s pretend we’re Agile and just have a scrum meeting in the morning because that’s what makes us Agile” to “Let’s not even bother pretending. We just still go on to develop our own way, but if they want to think we’re Agile, let’s do that.” When we think about Agile, the earlier conversation about how it’s implemented so many different ways, we see that not only across organizations but within organizations and across teams and it certainly does add a complexity to the development process when there’s so much inconsistency across the teams and how they’re working.
Matthew Heusser: So what are some patterns you’ve seen to overcome that? Anybody?
Lisa Crispin: I’m a believer ,because I’ve seen it in practice and also supported by the state of DevOps survey results is supporting the teams to become autonomous self-directing teams. Giving them the business priorities and then letting them decide how to implement those. But that takes a lot of work. It’s a big investment and I find that a lot of executives don’t understand why they need the time to learn how to do that. It’s like, why is it so hard to figure out how to self organize and they don’t necessarily have the right support in terms of people coaching them or training and things like that. But I think it’s a cultural thing. Yeah. The “Accelerate” book by Nicole Forsgren and Jez Humble and Gene Kim has, not only does it present all this data of what makes a team high-performing, but it emphasizes that it’s mostly about the culture and the leadership and there’s a great section in that book on the need for transformational leadership and how that gets done and how to start growing that quality culture. Every team should be free. It’s to some degree to work the way those suits their context the best. But obviously if they’re all working on the same product, there has to be some coordination. It takes us a big step and I think Matt, you said a little while ago, if you can’t get this to happen, you’re just going to have to change your organization by leaving your organization. Unfortunately, a lot of times that’s true, but I think it’s worth trying to do those kinds of changes. And also I find that as testers, a lot of times you’re asked to kind of work part time on multiple teams, which is horrible in some ways, but it is good in terms of you can start to be the little hummingbird or bee that’s cross-pollinating those teams and saying, “well you know, team a is working on this and you’re kind of working in that area too. Maybe you should talk to them” or, “‘Team B solved this problem this way. 
Maybe you should talk to them about how they solved that problem” and kind of just getting people talking to each other. Communication is the root of all our bugs, I think, so I think those are a couple of things that you can do.
Matthew Heusser: Yeah, I was working on a program spanning three continents, 20 teams, multiple deploy points, and lots of dependencies for most of last year. The biggest thing I could see is: identify who owns what features, and then, for the features that you are responsible for, find a way to get them tested in a day. Then you identify the bottlenecks, and you can work backwards once you know who’s slow. But that’s, I think, a different podcast.
Michael Larsen: As we’ve been talking about this, and talking about how multiple teams deal with it: when you hear me talk about the company that I work for, which is Socialtext, in my mind, all the time, I think of Socialtext as the small 10-to-12-person company that it’s been since I’ve worked there, because that’s who I mostly interact with, and so most of the work that I do involves that. However, that’s of course changed, because we’ve been acquired twice. So even if I’m saying, oh, I work for this small Agile team, I really don’t; I work for this larger organization that is all trying to utilize our engine. We have multiple QA organizations, in a sense, and smaller groups. My direct QA manager, interestingly enough, isn’t just with Socialtext; all of the Socialtext testers report to this manager, who happens to be a tester for a whole other product that we interact with, that we work with, but we’re not doing the same things. I guess the question comes down to: how does one utilize Agile when it’s not just you playing in the pool, but a lot of organizations all interacting with a similar platform, and you’re not exactly all the same team delivering the same product at the same time, and you have to play nice with each other?
Lisa Crispin: It’s interesting, that story you told about working with those testers; there again is an example of the communication and the cross-pollination that I think is so helpful.
Matthew Heusser: There’s an Oredev talk on this, I think it’s called Mixup Testing; we can probably find it. You actually test a different team’s product at the end of the sprint, and I think that’s a really neat idea.
Michael Larsen: We do that, as a matter of fact; thanks for mentioning that as an example. The product that our manager works on, that he is the tester for, interestingly enough, very often comes out with: “Hey, you know what, we’re doing this rollout, and here are some of the changes that we have. Can we ask everybody in these teams to interact with that component, to make sure that we haven’t missed something?” And it’s cool. I really appreciate that, because, oh great, hey, I get a chance to work with this beyond just my immediate “oh yeah, we’re using this component for this thing”. Like, “oh, what else is going on here?” I learn something new, and I can incorporate it into my testing. Also, every once in a while I get introduced to the fact that, oh yeah, there’s this team over in England that’s doing something else that utilizes our product, and they come back to us and say, hey, did you all know that this video component appears strange when it’s loaded in this manner? And my answer is: wait, what video component? So I get introduced to a group that is actively using our product as part of the presentation of their product, and this is sometimes my first indication. I mean, that’s the nature of these interlocking pieces; I can’t necessarily know everything that might interact with it. It’s also interesting to see that I’m used to looking at my product as a thing; it’s my whole world. Thinking about an engine that goes into a Porsche: it’s easy to think, “hey, that engine is an entire ecosystem”, if you’re only focused on the engine. There’s a lot that goes on there, but the engine actually powers wheels and a chassis and a trunk and lights. They all have to work together, and if they don’t, you’ve got problems.
Matthew Heusser: Yeah, right. When it comes to regression testing, there should be some point where you say, “Look man, you just changed the windshield wipers. I don’t need to retest the engine.” What are we doing, right? Why are systems so dependent upon each other? There are some legacy systems like that, and I think that’s because if you make a mistake on those legacy systems, the cost to roll back is so expensive. So if we can change the system so the cost to roll back and fix is less, then we can take more risks on the change side, and we can do more targeted testing, which means we can release more often. And if we combine that with making it easier to deploy in the first place, so we don’t need a half dozen people to be on a conference call to deploy, I think we can make some real progress. Until we see those things happen, I think Agile testing will continue to be a limited subset of its potential. And I don’t know if anybody has a good answer for this: is there anything new and exciting in Agile testing?
Lisa Crispin: I think the new and exciting thing in Agile testing is the wider adoption of DevOps practices, with people trying to move to continuous delivery and deployment. For a long time I thought, well, DevOps sounds just like Agile to me. And, as you were talking about, our XP teams back in the early aughts weren’t leaving out operations; they were on our team. But that didn’t happen in most places, I guess, and somebody pointed out to me not long ago, I think it was in one of the blog posts on Testing in DevOps, the community site that I curate, that teams that focus on Agile tend to focus on the ceremonies and the sort of framework of Agile and the processes, whereas teams that are focused on DevOps and continuous delivery are focused on the actual technical practices that they need, the infrastructure they need to get that going. They say: okay, we’ve got to have automated regression tests now, because we need something in our continuous integration. They focus more on doing stuff than on following the ceremonies and practices, and as a result of that, they become Agile. I think that’s what’s new for testing in Agile: let’s think about testing in continuous delivery. Even if we’re not doing continuous delivery, we all need to be delivering small, valuable changes to customers frequently, at a sustainable pace, as Elisabeth Hendrickson defined Agile years ago. That’s what we should all be doing: focusing on a more holistic approach to it. Not only the processes and roles and all those things, but the collaboration to get the technical side of it going, and the infrastructure side of it going, and the platform engineering and all those things, as well as the development and delivery side of it. I think the more holistic approach is where it’s at.
Matthew Heusser: Yeah. And I think those DevOps things that you’re speaking of address what I was talking about earlier, in terms of make it easier to roll back, make it easier to deploy. The DevOps people speak a different language; they’re all “Docker with Kubernetes at scale”. But that’s the goal they’re trying to accomplish; that’s just the technology they’re using to do it with.
Jamie Phillips: I don’t have anything new and exciting that I can think of, but I do agree with Lisa that we’ve been trying to do CI/CD for a long time. All the companies that I go to are still just in the infancy stages of that. So I think the new and exciting thing, to me, would be to continue to push that as a way to help testing. Everybody knows about it, but I’ve gone to clients and it’s like, yeah, yeah, yeah, we’ll get around to that; we’re still trying to figure out how to run a Scrum, or we’re still trying to figure out how to implement small chunks of work, like we talked about earlier. The DevOps community, meanwhile, is still growing, and even in my market here in Atlanta, when there’s a good DevOps person, it’s kind of hard to hold onto them. At my current client, we’re just now starting to implement a DevOps group. One quick thing I wanted to mention: it’s not new and exciting, but one problem that I still see is that when we’re creating stories and we go into grooming, it becomes kind of automatic and we don’t get into enough detail. What my testers, the people from QA, are telling me now is: hey, we go to these grooming sessions, and instead of really having a conversation about the story, somebody writes two or three acceptance criteria and then moves on to the next one. Then, when the testers get a hold of it, they’re still kind of lost, and you’re continually having to go back to the devs, go back to the product owner, to try to find out more information. So, I think we get flying down that path of trying to do Scrum or Agile, but we sometimes forget about slowing down and refining our processes to make sure there’s enough information for our testers to do a good job. That’s nothing new and exciting; I think we just need to sometimes concentrate on refining the things that we’ve already pointed out that we know we should be doing.
Lisa Crispin: Yeah. It’s funny how many teams haven’t adopted the practices that were tried and true 20 or 30 years ago. JB Rainsberger gave a keynote at Agile Testing Days a few years ago where he said: it’s been 15 years since we started Extreme Programming; why isn’t everybody doing these things that we know work? And we know even more now; because of the State of DevOps survey, we have science to prove that these practices work, and a lot of other studies as well. So why don’t people adopt them? I guess the reality is that change is really, really slow. The practices that we know work aren’t new, but they’re new to you if your team hasn’t adopted them yet, and they may be very hard to learn and require a big investment in learning and time. So you’ve got to make that investment, but as long as management is driving teams to deliver features by a certain date and doesn’t understand the need for the investment in quality, change isn’t going to happen.
Jamie Phillips: Yup, that makes sense. I would agree with that, Matt.
Matthew Heusser: And I think we’ve got some follow-up conversations to have. I’ve heard this phrase used a couple of times: investment. We think it’s going to immediately go faster, but we have to go slow to go fast. I’d love to have a follow-up conversation about this expectation problem, as I think Elle put it.
Lisa Crispin: Yeah
Matthew Heusser: I think I know what I want to talk about next, but for today, I think we’re out of time. So before we go: where can people go to learn more about what you’re working on and what you’re interested in? Jamie, have you got any public talks coming up, or blog posts, or things you want to refer people to?
Jamie Phillips: Not right at the moment. I speak a lot in our local meetup groups, but I’ve actually been on the road a lot, probably the last three or four months, so I haven’t been able to get out in the community as much. You can find me on LinkedIn; it’s under James Phillips. And I’m on Twitter at @EagleAgile. Pretty easy to remember: Eagle Agile.
Lisa Crispin: Eagle, like the bird? Am I hearing that right?
Jamie Phillips: Yup, that’s it.
Lisa Crispin: Okay, cool.
Michael Larsen: Yeah. Lisa, what are you up to?
Lisa Crispin: Oh, so many things. As you mentioned at the start, Janet and I have a new book; we actually managed to write a book under a hundred pages long, “Agile Testing Condensed: A Brief Introduction”. The hard copy is available on Amazon, and the ebook is there as well, but it’s cheaper to get the ebook on Leanpub, so if you want the ebook, go to Leanpub. We’re super excited about that. We hope it’s short enough even for managers to read; no offense to any managers listening, we just know you’re busy. And I’m speaking next month at DeliveryConf in Seattle, which I’m super excited about, because I’ve been trying to get accepted to developer and DevOps conferences, but they don’t know me in that world, and so I kept getting rejected. It’s like, “Yay, I finally got into one!” I’ll be talking about the whole-team approach to testing in continuous delivery. My goal is that we have to break down more silos; we have to cross borders between conferences now too, and I really want to get to know that community and bring everybody together thinking about testing and quality. I’ll also be at European Testing Conference in Amsterdam in February. And I want to plug the community site I mentioned, testingindevops.org. We have awesome blog posts from great practitioners who are leaders in testing and DevOps, and lots and lots of resources to help you learn. Like I say, I think this is the future; we all need to learn about it.
Michael Larsen: Awesome, thanks for that. And Elle, of course, on the Qualitest side: since Qualitest is the one who funds this podcast and all that goes with it, have you got anything interesting coming up in the next few months?
Elle Gee: I’m still project-oriented and staying within the actual technical delivery, but if anyone’s interested in finding out more about Qualitest, they can head on over to the Qualitest website; there’s a link in the notes.
Michael Larsen: All right, well, I think at this point it’s a good time for us to wrap things up and call it a day. I want to say to everybody: hey, thanks for being with us, thanks for enjoying the show, and we hope that you’ve had an amazing 2019. 2020 is just around the corner, or, as a wonderful podcast that I just saw said, “The Roaring Twenties, Part Two: Electric Boogaloo”. With that, I want to say: hey, thanks, everybody, for joining us. Thanks for making this fun, and thanks for listening. We look forward to seeing you on another episode of The Testing Show soon. Take care, everybody.
Matthew Heusser: Thank you.
Lisa Crispin: Thank you.
Jamie Phillips: Bye.
Elle Gee: Thank you.