June 13, 2018 // By Paul Grizzaffi
In the Star Trek universe, a universal translator is a device that instantly translates any language. It’s usually used to communicate with alien species. Sadly, this technology does not yet exist in our world.
When we begin an automation endeavor, the attractive first step is to “translate” our existing test cases into automated test scripts. Though something other than test case automation may be a more appropriate starting point, we often find that some automation based on test cases is our best initial value proposition.
Even in those cases, using our existing test cases as a step-by-step recipe for automation can be troublesome. We need to understand that we are not simply making a computer do what testers currently do; we are adding a new tool to our existing capabilities. As with most tools, this one provides us with additional capabilities over what we have today, but we must use it appropriately. As an analogy, consider walking versus driving; a vehicle is a tool to help us perform an activity: travel. If we are walking to our destination we will prepare differently than if we are driving. There are some things that could change due to having access to a “travel tool” such as an automobile: preparation, route, supplies, arrival time. It’s interesting to note that, in some cases, using a car to travel to a destination may take longer than walking, especially in traffic-congested cities; similarly, using a tool for testing is not always the most appropriate option.
Knowing that we must approach testing differently when we introduce a new tool, let’s look at some aspects to consider when using existing test cases as a starting point for automation.
In general, test cases or steps are written for human usage. As such, we take jargon shortcuts to make these artifacts more meaningful and understandable to humans. These jargon shortcuts rely on our innate understanding of both the world and the application domain in which we are testing. Our automation implementations must be considerably more specific than our jargon-based test cases; computer programs require instructions to be very specific and in a syntax that is acceptable to the tool in use.
We frequently write test cases similar to the following:
- Go to the homepage
- Find an in-stock product and add it to the cart
- Purchase with a Visa card
- Verify that you get an order number
Computers, however, can’t work at this level of abstraction or with this lack of granularity; far more detail needs to be supplied, including:
- What is the address of the homepage?
- How do we find a product?
- How do we determine if a product is in-stock?
- How do we add a product to the cart?
- How do we traverse our app to where we specify a credit card number?
- How do we determine if we received an order number?
- What do we do if any of the steps produce unexpected behavior?
To make things more challenging, the answers to some of the questions above may produce additional questions that need to be answered in the pursuit of the appropriate level of granularity.
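To see how much detail the automation actually needs, consider a minimal sketch in Python. Everything here is hypothetical: a toy in-memory catalog stands in for a real UI driver such as Selenium, and names like `HOMEPAGE_URL` and `find_in_stock_product` are invented to mirror the questions above, not taken from any real framework.

```python
# A toy, in-memory "storefront" stands in for a real UI driver; every
# name below is a hypothetical illustration of a question the script
# must answer explicitly.

HOMEPAGE_URL = "https://shop.example.com"  # What is the homepage address?

CATALOG = [
    {"name": "widget", "in_stock": False},
    {"name": "gadget", "in_stock": True},
]

def find_in_stock_product(catalog):
    # How do we find a product, and how do we decide it is in stock?
    for product in catalog:
        if product["in_stock"]:
            return product
    # What do we do when a step produces unexpected behavior?
    raise RuntimeError("no in-stock product found")

def add_to_cart(cart, product):
    # How do we add a product to the cart?
    cart.append(product)

def purchase(cart, card_type):
    # How do we traverse the app to where we supply a card number?
    if not cart:
        raise RuntimeError("cannot purchase with an empty cart")
    return "ORD-0001"  # a real script would scrape this from the UI

cart = []
product = find_in_stock_product(CATALOG)
add_to_cart(cart, product)
order_number = purchase(cart, "Visa")

# How do we determine whether we received an order number?
assert order_number.startswith("ORD-")
```

Each comment corresponds to one of the questions above; the four-line human test case became several explicit decisions, and each helper would itself expand into more detailed instructions against the real application.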
For example, when deciding how to wait for UI elements to appear, we have several questions to answer:
- How long should we wait?
- What should we do if something we are waiting for never arrives? If it’s an essential part of the user’s experience, such as receiving an order confirmation number, perhaps we should indicate a failure if that order number never arrives.
- What should we do if something that may not be essential never arrives, such as a loading indicator like an hourglass or a spinner? Here the decision may be less obvious. It’s possible that the automation is slow enough that it missed both the appearance and the disappearance of the loading indicator. Just as common, however, is that the automation checked too quickly, before the indicator ever appeared. Each of these scenarios must be considered so that specific instructions can be written into the automation script and it behaves as expected.
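One way to encode those decisions is a small polling helper; this is a sketch, not tied to any particular UI library, and the function names (`wait_for`, `spinner_is_gone`) are my own illustrations.

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

def check_order_number(order_number_present):
    # Essential element: if the order number never arrives, fail loudly.
    if not wait_for(order_number_present, timeout=2.0):
        raise AssertionError("order number never appeared")

def spinner_is_gone(spinner_visible, timeout=2.0):
    # Optional element: whether the spinner disappeared or was simply
    # never seen (we checked too late, or too early), all we require
    # is that it is absent now.
    return wait_for(lambda: not spinner_visible(), timeout=timeout)
```

The essential and the optional element get different failure policies, which is exactly the kind of decision a human tester makes implicitly but an automated script must have spelled out.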
The final aspect covered in this posting is the cardinality of test cases and test scripts. It’s tempting to create a one-to-one relationship between test cases and test scripts. Depending on how our test cases or steps are written, however, it may be appropriate to create a test script that covers multiple test cases’ responsibilities, which is a one-to-many relationship. This is often accomplished by data-driving the test script, i.e., using a data source to alter the logic in a test script to increase paths of coverage. For example, I once worked with a company that had individual test cases for purchasing with a Visa, with an American Express, etc. When we created automation for checking the purchase flow, we created one automated script to handle all the credit card purchasing options. To test all the appropriate payment types, we specified those types in a CSV file.
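A data-driven script of that kind might look like the following sketch. The CSV text is inlined here to keep the example self-contained (in practice it would live in a file), and `purchase_with` is a hypothetical stand-in for the real purchase flow.

```python
import csv
import io

# Inline stand-in for an external CSV file of payment types; one script,
# many test cases' worth of coverage.
PAYMENT_TYPES_CSV = """card_type
Visa
American Express
MasterCard
"""

def purchase_with(card_type):
    # Hypothetical stand-in for driving the real purchase flow;
    # returns the order id the flow produced.
    return f"ORD-{card_type.replace(' ', '')}"

def run_purchase_checks(csv_text):
    """Run the single purchase script once per row of payment data."""
    results = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        results[row["card_type"]] = purchase_with(row["card_type"])
    return results

results = run_purchase_checks(PAYMENT_TYPES_CSV)
```

Adding a new payment type then means adding a CSV row, not writing a new script, which is the value of the one-to-many relationship.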
Certainly, using our existing test cases as a starting point makes sense; we’ve likely spent quite a bit of brain power deciding what and how to test so it may not make sense to ignore all that acquired knowledge. We do, though, need to realize that those test case artifacts are but a single implementation of that knowledge’s usage; introducing a new tool to our ecosystem may require reevaluating our current knowledge usage so that we make a responsible decision on how to use our new tool.