March 19, 2018 // By Paul Grizzaffi
TL;DR: After repeatedly failing to write a cogent post regarding the comments on this tweet, I realized I was writing the wrong thing. As it turns out, I agree with Danny Faught and Grégory Fontaine; my tweet was too narrow.
Now that we’re done with that messy business (including the crow consumption), we’ll move on to the long version. I recently started posting some automation thoughts on Twitter under the hashtag #MyAutomationBeliefs. My hope was that some people would find those thoughts valuable, perhaps post some of their own beliefs, and that we’d have some helpful discussion around them. One of those tweets stated the following:
"The purpose of automation is to provide information to testers; it’s not to 'make all the scripts pass'"
This is something I’ve said for years, ever since I first heard someone report an issue as “some test scripts were failing”. Those particular scripts were failing due to product issues and, thus, they were not only working as expected, they were providing value. The focus should have been on “what are the scripts telling us?”, not “hey, we have to fix the failures in the automation report”.
This particular tweet started some of that discussion I wanted. It caught the attention of a colleague of mine named Danny Faught who responded with this tweet:
"The testers, if you have any, aren’t the most important stakeholders."
At first, I disagreed with this, because, you know, I wrote the original tweet and, as I said, I’d been saying this for years.
Then, I agreed with him because, in general, testers shouldn’t be the only ones interested in test results. Testers are not the sole deciders of whether a software build should be released. Ultimately, that decision is the job of “the business”, but it should be made in collaboration with the rest of the team. But, then again, I wasn’t talking about test results; I was talking about automation results.
Then, I remembered what I really think the relationship between testing and automation is.
As I stated in an earlier tweet, automation is a force multiplier; it’s a tool to help humans do their jobs and, in my world, those humans are typically testers. Unless team members from other disciplines are actively participating in automation creation, team members outside the testing team should not really care about automation. To the general audience, as long as the business goals are met with the appropriate value, it shouldn’t matter if the testing work was accomplished by humans, machines, or a bunch of elves hidden under my desk (I can neither confirm nor deny the existence of said elves).
Notice I said, “Unless team members from other disciplines are actively participating…”; this is where my tweet falls short. In highly cohesive, cross-functional teams, automation shouldn’t be a tester job; it’s the job of the team and thus falls to anyone with appropriate skills.
To one of Danny’s points, testers may not be the most important stakeholders in some cases, but in many cases they are, because of my “valuable elves” argument above. To Danny’s other point, some organizations no longer have a tester role, having replaced that role with SDETs, automation engineers, or some other title. I’ve not yet worked with those kinds of teams; the teams with which I work (and most of those I hear about) all have a tester role.
In retrospect, instead of writing the phrase “…to testers”, perhaps I should have used “…to stakeholders” in order to make a more accurate statement. Regardless, some of these stakeholders are generally testers and, for most organizations, will be for a while.