The Testing Show: Automation and Getting to “Done” (Part 2)

We continue our conversation with Angie Jones about ways that automation can be put first in stories (yes, really) and ways that she has been able to get team buy in and cooperation to make that process effective. Also, we have a mailbag question that we answer in depth, or as much as we can… is it possible to be paid as much as a developer or an SDET if you are just a manual tester? The answer is “it depends”, but we go into a lot more about why that is the case.

Panelists:

Michael Larsen

Perze Ababa

Matthew Heusser

Jessica Ingrassellino



Any questions or suggested topics?
Let us know!


Transcript:

[Hi, everyone, this is Michael. What you are about to hear is Part Two of a two-part program on “Automation and Getting to Done”, our conversation with Angie Jones of LexisNexis. For continuity’s sake, if you have not heard the first part of our conversation, we encourage you to go back and listen to Part One. If, however, you like it random, then by all means, enjoy Part Two of our conversation with Angie Jones, already in progress. And with that, on with the show.]

 

MICHAEL LARSEN: This is something that is relatively new. We used to just be able to do builds at random, and we wanted to make sure that we didn’t have rogue commits that weren’t being tested, so one of the policies that we put in place is one of our Jenkins jobs. If you want to test to see if your merge is going to work, if you want a repo that you can do a test with, you have to submit it first, at which point it will run through the entire process of testing: our unit tests, our integration tests, as well as what we refer to as wikiQtests, which are our end-to-end QA-style Selenium tests. All of those have to pass before you can get a build that will create a repository that you can then put on a machine for testing purposes. At any given time when a story is “done”, you can’t just say, “Oh, I’m done, I’ve made a build, hey, let’s go test this.” We put this in place because you need to go through these steps. You need to make sure that everything’s working. So if you made a tweak to an API someplace and it breaks a whole bunch of tests, you’re going to know pretty quickly.
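[Editor’s note: the gating Michael describes can be sketched in a few lines. This is a hypothetical illustration in Python rather than the team’s actual Jenkins job; the stage names and runners are invented. The point is the ordering and the fail-fast gate: no testable artifact unless every suite in the chain passes.]

```python
# Illustrative sketch of a gated build: run each test stage in order,
# and only allow a deployable build/repo when every stage passes.

def run_gated_build(stages):
    """Run each (name, runner) stage; stop at the first failure.

    Returns (passed, results), where results maps stage name -> bool.
    Later stages never run once one fails, so a broken API tweak that
    breaks the unit suite surfaces before the expensive end-to-end run.
    """
    results = {}
    for name, runner in stages:
        results[name] = bool(runner())
        if not results[name]:
            return False, results  # fail fast: no artifact is produced
    return True, results

# Hypothetical stage runners standing in for the real test suites.
stages = [
    ("unit", lambda: True),
    ("integration", lambda: True),
    ("wikiQ-e2e", lambda: False),  # e.g. a Selenium end-to-end failure
]

passed, results = run_gated_build(stages)
# passed is False here, so no test repository would be produced for this merge
```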

 

ANGIE JONES: That’s another reason that I advocate for having automation included in the definition of done, and actually done before signing off: if you put those tests that come from new features on the backlog, where you have your automation engineers working in isolation, automating these as regression-type things, then those tests aren’t present in that pipeline that you just described. That just becomes technical debt; things that are very pertinent to the new code that was just released, and yet not present.

 

JESSICA INGRASSELLINO: I guess I would love to hear an example of how you’ve done this with larger teams, because when I was at bit.ly, I built the automation system there. I was essentially the automation engineer, but I was also the manual tester… I was the only person who had testing formally as a job title, but the front-end engineer teamed up with me. He’s great. He had basically said to the whole front-end team, “Listen, if the test suite breaks, you’re responsible for fixing it if you broke it. You have to run it before you commit any of your code.” So they did. We worked together, and I taught them the kinds of things that they weren’t sure about in terms of how to use… we were using Python Behave, and they wanted to know about that because they were writing ReactJS, so they weren’t Python people necessarily, but they were great and really fast. That took a huge amount of pressure off of me because, as the only tester, having to write the automation and plan for testing of new features and all that stuff, there was a lot to do. There was a lot of pressure. I was able to effect that change because it was a smaller team. So I’m wondering, in larger teams, how do you get them to embrace that level of change, and maybe learn something that they’re not sure about?

 

ANGIE JONES: It’s not just restricted to the larger teams. I find it difficult in smaller teams as well. As far as changing the culture and educating everyone, when I come in, typically, everyone has an opinion of how it should be done. I listen, but most times I don’t agree, especially if they haven’t done it or experienced it before. I have a blog post that was kind of controversial, titled “Why Developers Shouldn’t Lead Your Automation Effort.” Some of the reasons I pointed out were their lack of understanding about automation engineering and the techniques that are being used. Not that they aren’t capable, but they don’t study this. This is not their niche. Again, automation is more than just programming; there’s a lot more that goes into it. I take on the task of educating them on these types of things. It’s definitely a process. Any time you are trying to bring change into something that’s out of the norm, of course there are all of these pain points you have to go through, and growing pains, before you actually see a difference being made. Eventually, they catch on. I have to keep harping on the same things over and over again, and sometimes even let them fail and fall a little bit, and then do the “I told you so” kind of thing, and I tell you, that builds trust. After a couple of “I told you so’s”, it’s like, “Hmmm, maybe she does have a clue and knows what she’s talking about, and we should probably listen to her.” Definitely difficult, and that’s probably the most challenging part. The technical part is not that hard, but it’s dealing with people and cultures, and trying to help them change.

 

MATTHEW HEUSSER: Yeah, well I’m still not… maybe it’s just the language. Thank you, by the way, this is great.  Still not getting a picture of… and maybe it’s the automator decides for themselves, how do you come to a shared agreement with the team of what’s going to be automated, and how do you track it? Is it part of the story? Is there a task on the story? Separate story?

 

ANGIE JONES: No, we put it on the story. On a specific story there will be development tasks, tasks to test it, and then tasks to automate it. Working with the testers and the business folks, you ask, “What are some scenarios you have in mind?” I talk about this in that same conference talk. “What are some scenarios you have in mind? Let me hear everything that you’re thinking about”, and you have acceptance criteria and all this stuff as well. What are you going to explore? Do you have some ideas of the types of things you are going to do with your exploratory testing? So let’s get a list of those and then talk about which ones would be most beneficial to automate. Which ones are our riskier ones? Which ones give us the most value? Of course we want to test everything, but we might not necessarily want to automate all of it. One question that I use is, “OK, if I automate this, and let’s say that it fails during continuous integration, and we’re blocking a build, how do we feel about that, given this is the failure?”, or, “If I open a bug for that, are we going to fix it right away, or is that something that’s going to sit on the backlog for several months to a year?” Those types of things help you identify what your low-risk areas are, and you don’t bother with automating those, unless someone brings up the point, “I know it’s low value, but still high risk”, or something like that, so we still want to automate it. So it’s just a conversation among the team on what should be automated. I’ve also seen automation engineers write down the one-line scenario ideas that they would like to automate based on the story, and then just get feedback from the team members. “Do you think these are the right things to automate?” Some of those might be eliminated by team members with, “Naah, I don’t think that this one should be automated. We can test it out, but I don’t think we should invest the time in automating it.” Or they might add additional ideas as well.
So very much a team type of discussion.

 

MATTHEW HEUSSER: And when does that happen in time? Does that happen during story kickoff? Does that happen early on, when the developer is writing the code? Does it happen at planning?

 

ANGIE JONES: Before the developers write the code. If we want to achieve in-sprint automation, we have to start as early as possible. In my talk, I get to geek out on how you can write your automation code before the developer even starts writing code. You can have it already automated. Definitely during grooming, or story kickoff, is where you kind of want to have those details finalized. By the time we kick off, I already know what I’m going to automate, and I can start working on it right away.

 

MATTHEW HEUSSER: When you say you can have it automated before the developer writes the code, assuming there are new UI elements…

 

ANGIE JONES: Yep

 

MATTHEW HEUSSER: Do you either get an agreement on what those UI elements will be or is it kind of “pseudo-codey”?

 

ANGIE JONES: No, no, no. Get an agreement with them. I propose setting up a meeting with the developers and actually sketching out, on two different papers, given this story, this textual story, what do you envision this will look like? Have the developer do the same thing. Then, you two compare notes. What does this look like? Is my picture different from your picture, and why? So this is a discussion point where, as a tester, an automation engineer/tester, I’m bringing in my tester mindset and giving you information on why I think it should be developed this way, and that could be based on a lot of things, like consistency across the application. Maybe a developer is only focused on a certain module, where you as a tester know the entire application: “Well, we have similar areas of the application that were developed this way and look this way, so I believe that this feature should also be developed in that same vein.” And then you have the developer who will say things like, “Well, I’m using a JavaScript widget that does a lot of this for me, and so this is how it will look if I use that.” So there is some discussion there, and then they come to an agreement on what that should look like. Once they agree on what that should look like, in that same meeting, I tell them to go ahead and make a contract on what these identifiers will be. As the automation engineer, at this point you already know what tests you’re going to automate. Given those tests, you can say, “Okay, I’m going to need access to this element, that element, this button, this drop-down”, and you are letting them know what you need for this application, this feature, to be automatable. You can go ahead and say, “The ID is going to be this”, and you start writing these things down. Make a copy of that, give it to the developer, and take one back; you already know what it’s going to look like, and you have the handles to everything, so you can go ahead and automate that.
The developer will have that information as well, so she knows to put that inside of her code.
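[Editor’s note: the identifier contract Angie describes can be sketched in code. This is a hypothetical example; the element names and IDs are invented, and a real implementation would feed the selectors to Selenium. The tester and developer agree on the IDs up front, and the page object is written against that contract before the UI exists.]

```python
# Hypothetical element-ID contract, agreed between tester and developer
# before any UI code exists. All names here are invented for illustration.
LOGIN_CONTRACT = {
    "username": "login-username",
    "password": "login-password",
    "submit": "login-submit",
}

class LoginPage:
    """Page object whose locators come straight from the agreed contract."""

    def __init__(self, driver, contract=None):
        self.driver = driver  # a real Selenium WebDriver in practice
        self.contract = contract or LOGIN_CONTRACT

    def css_selector(self, element):
        # With real Selenium this would become (By.ID, self.contract[element])
        # or a CSS locator; here we just build the selector string.
        return "#" + self.contract[element]

# The automation code can be written (and reviewed) before the feature ships;
# the developer keeps a copy of the same contract and uses the same IDs.
page = LoginPage(driver=None)
page.css_selector("submit")  # "#login-submit"
```

If the developer’s copy of the contract drifts (a typo in an ID, a missing element), the test fails on first run, which is exactly the triage situation described a few exchanges below.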

 

MATTHEW HEUSSER: How many times do you get the build, the story is done, you’ve written your code, and it passes the first time? What percentage?

 

ANGIE JONES: I would say about 70%.

 

MATTHEW HEUSSER: Wow!

 

ANGIE JONES: And then the other 30%… It’s funny because, initially, when the other 30% would fail, I would always assume that it was me and that I had messed something up, because even though I had this information, I didn’t have the actual UI, so you kind of just doubt yourself a little bit. But maybe eight times out of ten, when you go to triage it and see why it failed, it really was something that was missing on the developer’s end. Maybe something we said was going to be in the picture wasn’t there, or there was a typo in the ID, or something goofy like that. After a while, you just start gaining more confidence in the automation scripts themselves, and not blaming yourself right away.

 

MATTHEW HEUSSER: I think that’s amazing. I’m impressed. Seems like it would take a significant amount of maturity and maybe time to do that, right? So at Socialtext it was, “We’re going to get you your first build in, like, two hours, so let’s not spend too much time on the kick off, because you could just test it, man”. Then we would look at the code that was generated, usually the developers were good at giving us IDs, but it was not that parallel. That’s nice. I think it’s time for us to transition into our mailbag. We have a mailbag question. Do you want to read it, Michael?

 

MICHAEL LARSEN: Sure. So this comes from… I hope I’m pronouncing this right, my apologies… Sataljeet Malagu. In one of our previous shows, we were talking about SDETs and automation engineers versus manual, exploratory testers, and one of the things that we talked about was the pay rate that many of those testers worked for, and the question was, “I would like you to expand on it. In particular, can you stay on as an exploratory tester and earn equal to software development engineers and SDETs? I’ve been doing SDET work for 10 years and hiring, and firmly believe that if you want to progress in your career, you have to acquire technical skills. Have you found any contrary evidence to this?” Frankly, a lot of it depends on where you are, and I will say that it also depends on the company, the culture, how they value testing, and what you bring to the team. Do you necessarily need technical skills to warrant higher pay? No and yes, in the sense that if you’re just a general manual tester, one that anybody could fill in for, it’s unlikely that you’re going to rise up, or that you’re going to be able to demand that much, unless you are really exceptionally good at what you do; in that sense, you are developing technical skills that go beyond just “test this.” There are things such as people skills, there are leadership skills, there is the ability to deliver because you have an incredible knack for finding things, because you’re tenacious, because you have a really good inductive and deductive thinking model and you are faster than other people, or you just have a unique way of looking at the world. Those are hard to quantify. Those aren’t areas where you can say, “Well, if I study this programming language, I will make X times more.” I’ve had both experiences.
I’ve been in testing for 25 years, and I’ve had periods where I’ve been well compensated and periods where I’ve been under-compensated, and it’s depended upon the conditions of the market and on what’s happening. Right now, we are in a bit of a tech renaissance, especially where I am in the San Francisco Bay Area. I would consider myself midline; I don’t consider myself the most technical of testers. I’m improving every day. I’m doing the best I can to get more involved with automation. I think it comes down to finding what you are good at doing and what you like doing. You don’t necessarily have to be a full-time programmer to, as a tester, be paid commensurate with a software developer. Now, if you’re talking about a full-blown software architect, a senior developer, versus a senior tester, yeah, I think there’s going to be a disparity there. It may not be dramatic, but you are probably going to be looking at the programmer, especially if it’s a really senior programmer, making a bit more than a tenured, seasoned tester will. But as far as my team right now and the people that I work with, it’s not a dramatic difference. The developers and testers are relatively in line. We actually make fairly similar salaries, because at the end of the day, we’re doing a lot of the same work together. We are working as an integrated team. We share ideas and suggestions. I will always say that the more skills you can bring to the table, and the more technical skills you can bring to the table, of course the more valuable you will be to your team.
To borrow from Seth Godin, “you will be well compensated if your company does not have any other option.” If you are meeting a special niche that is really hard for them to fill, even if what you are doing is not necessarily considered technical, “programmey”, engineering-type work, but you’ve developed a real knack for getting into the guts of your system, and you are sort of an über systems administrator at the same time as being a tester, and you’ve got a knack for doing systems configuration work that no one else really likes to get into because it’s kind of painful, that’s a niche you can leverage, and leveraging it can certainly mean more money.

 

MATTHEW HEUSSER: I want us to take a step back and look at the broader social map. What’s happening right now is that Uber is desperately trying to… lots of companies; Google, too… they’re trying to make self-driving cars. Let the computer do it. Think about all the complexity involved in a self-driving car. Knowing whether a street is one-way, and stoplights, and pedestrians, and… that’s really hard, broad, general AI skills. And then you have, in that world, what they think of: they think of the programmer for that as an Automator. They think of the taxicab driver as, “Ehhh, anyone can do it. You’ve got to know some streets, but you can use GPS, what’s the big deal?” That mentality is how a lot of companies think about testing. The Automator is assumed to be more valuable. They absolutely have skills that they had to pick up on their own time or on the job, so the Automator sat down and learned Java or Ruby or Python or something. They learned an IDE. They learned how CI works. They learned how HTML works. They learned how XPath works. They have hard skills, which they learned, which are easy enough to measure; can you sit down in an IDE and code this up? No. OK. You can? OK. The market’s rewarding that. To say, “I just want to be a broad critical thinker and ask good questions and do quick attacks and get paid as much as an SDET”… you are going to have to be a trainer of critical thinkers. You are probably going to spend a lot of time in airplanes. You are going to have to be consulting, or maybe there is one of those roles at a medium-sized organization, overseeing the test process with some other responsibilities. Generally speaking, I think the SDETs are going to have a leg up on the promotion ladder and on the salary ladder, and the production programmers a leg up on top of that, and I am okay with that. Without the production programmers, you wouldn’t have software. At all. Right? [laughter] That’s… it might be buggy…

 

MICHAEL LARSEN: That’s a good point, and with that, let me add a caveat on my side, too, and say that for the last 16 years, I have worked for small companies; when you get right down to it, companies that realistically have fewer than 20 people in the entire development and testing team. With smaller teams, you can be more of a generalist and have a broad range of skills, and it’s rewarded more equally. When you get into a more stratified company, and you have more roles and more people, I would have to say I agree; you are going to reward those who can be technically and measurably evaluated. It’s hard to evaluate “how good is your critical thinking?” It’s a little easier to evaluate “can you deliver production code in this amount of time, and how many commits have you made?” So what I do is I make commits. I have a project that I actively update, and it’s part of the source code, so if it comes up at review time with “What are you doing on your programming skills?”, well, quite a bit [laughter]. I don’t want to make it sound like I am not doing that (of course I am), but when you are in a smaller company, it’s more fluid. You’re not necessarily going to have one group that’s going to get rewarded above the other, but when you get into more stratified jobs where roles are more defined, yes, I do think the SDETs have a leg up.

 

MATTHEW HEUSSER: There’s also a power-law curve on the value you have. If you’re working at Napster in 1997… maybe Napster is a bad example… Yahoo, as a tester, creating something wildly new and innovative in a small team, and you find a security bug in Gmail or whatever, you could be adding a ton of value. Today, on the Gmail team, or at Netflix, or something like that, there are so many systems and processes and automation pieces in place that the way to add more value is probably to add the next little piece of tooling. If you can’t do that, your skills are going to be less valued by that team, in that place, at that time. It’s just the reality of the market. I’m not excited about it.

 

ANGIE JONES: Yeah, Robert Half just recently put out a salary guide for 2017, and they actually break out the QA role into manual versus automation, and it looks like there is about a $10,000-18,000 difference on average across the United States.

 

JESSICA INGRASSELLINO: All I can say is that I am really lucky [laughter], because I work at a company that does value it… I was hired as an exploratory tester, and I have technical experience; I’ve built full automation systems from the ground up, and coded them and the whole thing, for three different companies where I’ve worked, and now I’m technically full-time exploratory and I am compensated as if I were not; I am compensated similarly to an SDET. I’m wondering if that skill is part of it, what I brought with me into the context; so yes, I’m doing exploratory testing, but I also understand how to work in the command line and do stuff with Git and spin up environments and add to the test code if I need to. I definitely agree that any skill anybody can gain is of course valuable, because you never know where you can deploy your skill. At the same time, I’m lucky to be working at a place where they hired me, they call it exploratory testing, and they want that piece of value added in addition to having a balanced automation strategy. It’s a small company within a big company, so maybe that’s part of it, too. I definitely agree; you know, I’ve only been in testing for like 4½ years, and building skills and making moves to different places where I could build my skills has been a huge, huge benefit.

 

MATTHEW HEUSSER: Wait, did you say 4½ years? No, you’ve been in testing longer than that.

 

JESSICA INGRASSELLINO: No, I started testing at TargetSpot in June 2012.

 

MATTHEW HEUSSER: Okay, when did we meet up?

 

JESSICA INGRASSELLINO: We met in June… no, May 2013.

 

MATTHEW HEUSSER: Okay, yeah. So I do think there are enlightened places that will value this, but generally you are going to have to be the kind of person who says, “I can do that if I wanted to, but I think this is more valuable.” So I can pair on production code. That’s fine, and in most languages: Java, C++, C#, something like that, Ruby. It’s just not where I’m most valuable, so put me where I am most valuable. That’s usually the test strategy, the process-y and the consultant-y stuff… but I’m a consultant. I’m not a test-er, and I think if you want to be a test-er, if you want to say, “I don’t have those skills, I don’t want to get them, I just want to be a really good tester”, you can have a nice life and save for retirement and probably do fine. Other people are going to get paid more. If the question were “How can I, as a tester, get paid as much as the CEO?”, the answer is, “You probably can’t.” [laughter]

 

JESSICA INGRASSELLINO: Go be the CEO!

 

MICHAEL LARSEN: Found a company!

 

MATTHEW HEUSSER: At a company that specializes in test-ers, like House of Test or Qualitest or Excelon, you might have a little bit better opportunity. Just as if you go work for a software company, you are going to have more opportunity as a programmer than if you go work for an automotive manufacturer. It’s just sort of the nature of the beast. When the founder is a tester, when the founder is a software developer, you’re probably going to have a better career. On that note, I think we have wrapped up the salary discussion to the extent that we can. Angie, and maybe Jess, where can people find out more about you? What’s coming up? What are you working on?

 

ANGIE JONES: I blog over at angiejones.tech. I have a lot of information about automation strategies and techniques. There I also have an Upcoming Events page; I have quite a few conferences that I am scheduled to speak at or teach workshops at in 2017… I think I am booked through the summer so far… so check that out, and you can also find me on Twitter at @techgirl1908.

 

MATTHEW HEUSSER: Cool, and it says here you’re teaching testing in college courses?

 

ANGIE JONES: I actually teach a Java development course, so I sneak testing into that course, just preparing these developers for a world where they are expected to build quality into the things that they code. I do little things with their assignments and the questions that I ask them to make sure that they have this quality mindset, and they are not just cranking out crappy code.

 

MATTHEW HEUSSER: Cool, great. Jess, what are you up to lately?

 

JESSICA INGRASSELLINO: I was recently accepted, along with Anders Dinsen, to present at the CounterPlay conference in Denmark, March 30 through April 1 or 2, I think, so that’s going to be really fun, because we are talking about using play and performativity as ways to engage in deeper thinking with the software. And as always, I’m working on the PyCon Education Summit, which is going to be happening in May in Portland, Oregon, so a lot of things on the table. My blog died, so I am trying to resurrect it [laughter], and I’m working on a new book. I have a new book outline, so that’s a new project for 2017.

 

MATTHEW HEUSSER: That’s awesome. I should finish my old book. Markus Gärtner and I are working on “Save Our Scrum”, which is available now from LeanPub and is about half done. It gives advice for teams adopting Scrum. I think it is harder than most people realize, and the tendency now is to say, “We’ve checked the checkbox, of course we’re doing Scrum, everyone is doing Scrum, it’s easy”, and when you dig under the hood, it’s a mess; so, “Save Our Scrum”. I’ll be at Agile Testing Days US in June. The program is out now, you can register now, and everything’s up on the web site. Then CAST 2017 is going to be in Nashville; I’ll be there, too. And SQuAD, the Software Quality User’s Group Denver conference, in September. That’s most of my new stuff coming out. Perze, Michael, do you have anything?

 

 

MICHAEL LARSEN: I’m going to take a moment and give a plug to our sponsor, Qualitest. They have a series of webinars that they will be presenting throughout 2017, ranging across a broad variety of topics such as DevOps, Healthcare IT, the Internet of Things, Mainframe Automation, Security, Selenium, and Web services, with more topics to be announced. Their plan is to present a webinar each month, so if you’d like more information, or want to sign up for the next webinar, go to the Qualitest Webinar Center (link in the show notes).

 

PERZE ABABA: For me, we are always encouraging everybody to check out the NYC Testers website, nyctesters.org, as well as our Meetup page. Shout out to my fellow co-organizers Anna Royzman, Kate Falanga, and Tony Gutierrez; I believe this is going into our third year of the NYC Testers Meetup.

MATTHEW HEUSSER: Yeah, it’s been really neat to watch that whole community just sort of explode, and if you are not in New York and you want to have an awesome community, talk to us and make one. Grand Rapids is a metro market that is 1/20 the size of New York… I don’t know, it’s small, but we still have a bunch of vibrant user groups, and testing is one of them. The last thing I want to mention is the Mailbag. You can email “the testing show at Qualitest Group dot com”, or just go to Facebook dot com / QualiTestGroup. Leave your comments about the show, leave your questions about the show, and leave your thoughts. It’s almost ten o’clock in the morning here; we’ve been going at this for an hour and a half. I think it’s time to say goodbye. Thanks, everyone, for being on the show today.

 

JESSICA INGRASSELLINO: Thank you.

 

MICHAEL LARSEN: Thanks for having us.

 

ANGIE JONES: Thank you.

 

PERZE ABABA: Thank you, everyone.