Stimulus, response, check: The core of test automation


In a 1970s commercial, a boy asked a wise owl, "How many licks does it take to get to the center of a Tootsie Pop?" The owl, who was obviously a tester, decided to see how many licks it would take; in a humorous twist, he concluded that the answer was three. But on the third "lick," he crunched down on the pop and ate the candy.

Like the Tootsie Pop, test automation has a core as well, but it has three parts: stimulus, response, and some number of checks. Here's what you need to understand about each of these parts, so that you will be able to create stacks and adjacent stacks that add greater flexibility to your test automation.

At its simplest, a stimulus is an action that causes a reaction. For example, striking a drum causes a sound to be made; the strike is the stimulus.

In terms of testing and automation, stimuli come from many different sources, such as clicking a button in a GUI, calling a web service API, or sending a message to a REST or telecom endpoint.

Each action causes something to happen.
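For instance, a GUI click and an HTTP request are both stimuli. Here is a minimal sketch of each, assuming Selenium WebDriver and Java's built-in HTTP client; the URLs and element locators are illustrative, not from the original article:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class StimulusExamples {
        public static void main(String[] args) throws Exception {
            // Stimulus 1: a GUI action -- clicking a button in a browser.
            WebDriver driver = new ChromeDriver();
            driver.get("https://shop.example.com");           // illustrative URL
            driver.findElement(By.id("add-to-cart")).click(); // illustrative locator
            driver.quit();

            // Stimulus 2: a message-level action -- sending an HTTP request.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://shop.example.com/api/cart/items")) // illustrative endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"itemId\": 42}"))
                    .build();
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
    }

In both cases, the action is the stimulus; whatever the system does next is the response.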

A response is a reaction or result from applying a stimulus; it's the thing that a stimulus causes to happen. Following from the drum example above, the sound made by the drum is a reaction to the stimulus of striking that drum.

Based on our stimulus examples above, corresponding responses could include a page updating in the GUI, a response message returned by the API, or an acknowledgment from the endpoint.

Note that a single stimulus may cause multiple responses; also, a specific response may be caused by more than one stimulus. These are important to note because it may be valuable or necessary to check multiple stimulus/response combinations.

Checks are what you use to determine whether you received an appropriate response to your stimulus. Again, from the drum example, checks for the result of striking a drum could include "Did you hear a sound?" and "Did you feel contact with the drum?" Note that this is also an example of multiple responses to the same stimulus.

For the previous automation and testing examples, checks could include similar questions about each expected response.

When programmed into automation, these checks are usually implemented as assertions, as opposed to the question style used above.
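A sketch of that question-to-assertion translation, assuming JUnit; the Drum class is a made-up stand-in for the system under test:

    import org.junit.Assert;
    import org.junit.Test;

    public class DrumTest {
        // A stand-in for the system under test.
        static class Drum {
            private boolean struck = false;
            void strike() { struck = true; }           // the stimulus
            boolean soundDetected() { return struck; } // the response
        }

        @Test
        public void strikeProducesSound() {
            Drum drum = new Drum();
            drum.strike(); // apply the stimulus
            // The question "Did you hear a sound?" becomes an assertion:
            Assert.assertTrue("Expected a sound after striking the drum",
                              drum.soundDetected());
        }
    }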

At the core of the implementation, there are certain commonalities in how you generate the stimulus, receive the response, and evaluate the checks. The specifics typically differ per technology, but the abstractions are similar; they differ only in the implementation details.

If you think at a high enough level, you can abstract your automation steps into behaviors, such as "Add an item to cart" or "Perform checkout." Using "Add an item to cart" as an example and assuming your GUI is backed with a web service API, there are at least two different ways that you can accomplish adding an item to a cart.

Conceptually, you could write a test script that looks like the following:

    cart.addAnItemToCart(item)
    Assert.assertTrue(cart.contains(item))

The interesting part of the above pseudocode is the call to addAnItemToCart. This method can be implemented by interacting with the GUI, or it can be implemented by calling the appropriate API action(s).

Understanding this helps you realize that behaviors can be implemented through different actions, and each of those actions can have a different implementation. (For a more detailed explanation of behaviors and actions, see this article about the automation stack.)
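One way to realize this is to define the behavior as an interface and supply one implementation per technology. A sketch, with hypothetical names, illustrative locators and endpoints, and Selenium plus Java's built-in HTTP client assumed:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // The behavior, independent of any technology.
    interface CartActions {
        void addAnItemToCart(String itemId);
        boolean contains(String itemId);
    }

    // One implementation drives the GUI via Selenium.
    class GuiCartActions implements CartActions {
        private final WebDriver driver;
        GuiCartActions(WebDriver driver) { this.driver = driver; }
        @Override public void addAnItemToCart(String itemId) {
            driver.findElement(By.id("item-" + itemId)).click();  // illustrative locators
            driver.findElement(By.id("add-to-cart")).click();
        }
        @Override public boolean contains(String itemId) {
            return !driver.findElements(By.id("cart-item-" + itemId)).isEmpty();
        }
    }

    // A second implementation calls the web service API instead.
    class ApiCartActions implements CartActions {
        private final HttpClient client = HttpClient.newHttpClient();
        private final String baseUrl;
        ApiCartActions(String baseUrl) { this.baseUrl = baseUrl; }
        @Override public void addAnItemToCart(String itemId) {
            try {
                HttpRequest post = HttpRequest.newBuilder()
                        .uri(URI.create(baseUrl + "/cart/items"))  // illustrative endpoint
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{\"itemId\":\"" + itemId + "\"}"))
                        .build();
                client.send(post, HttpResponse.BodyHandlers.discarding());
            } catch (Exception e) { throw new RuntimeException(e); }
        }
        @Override public boolean contains(String itemId) {
            try {
                HttpRequest get = HttpRequest.newBuilder()
                        .uri(URI.create(baseUrl + "/cart")).GET().build();
                return client.send(get, HttpResponse.BodyHandlers.ofString())
                             .body().contains(itemId);             // naive check, for the sketch
            } catch (Exception e) { throw new RuntimeException(e); }
        }
    }

The earlier test script can then be handed either implementation; the behavior stays the same while the stimulus travels through a different technology.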

Following the automation stack concept, you can have one stack based on an API raw tool and a second based on a browser-based raw tool. In doing so, you can have different automation approaches for the same behavior.

The need for and value of this kind of semi-repeated implementation of a behavior are absolutely context-dependent; some organizations might find great value in it, while others may find it redundant. The concept does, however, lead to the notion of adjacent stacks.

Again, based on the automation stack concept, the idea of adjacent stacks is exactly that: automation stacks that differ in their specific implementations but have "mostly the same" actions and behaviors, and that can be exercised in a single test script. Here, "mostly the same" is context-dependent as well, but generally means that if one stack has a behavior or action, the adjacent stack also has that behavior or action.

Why are adjacent stacks valuable? Some organizations may want to exercise the system at different levels for the same function or feature. Perhaps, since API tests execute faster than GUI tests, the API-level test suite is testing deeply for the message, data, and business logic aspects. This allows for fewer of the slower GUI tests, but GUI tests can still provide value even if duplicating a behavior that's previously been tested by an API test.

Duplication is not inherently bad; in actuality, it's only bad if you are duplicating without a specific value proposition for that duplication. Also, it could be argued that if the duplication gives you additional information, then it's not really duplication at all.

The real value from adjacent stacks, however, is in cross-technology test scripts: scripts that can use more than one automation technology in the same test script. For example, perhaps you want to test that an update to a profile is correctly saved in a database. This could be automated by driving the GUI to log in and make the update, followed by an API or SQL call to check that the data was stored as expected.
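A sketch of such a script, assuming JUnit, Selenium for the GUI, and JDBC for the database check; the URL, locators, credentials, and schema are all illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.junit.Assert;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ProfileUpdateTest {
        @Test
        public void profileUpdateIsPersisted() throws Exception {
            // Stimulus via one technology: drive the GUI to update the profile.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://app.example.com/profile");  // illustrative URL
                driver.findElement(By.id("display-name")).clear();
                driver.findElement(By.id("display-name")).sendKeys("New Name");
                driver.findElement(By.id("save")).click();
            } finally {
                driver.quit();
            }

            // Check via another technology: query the database directly.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:postgresql://db.example.com/app", "tester", "secret"); // illustrative
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT display_name FROM profiles WHERE user_id = ?")) {
                stmt.setLong(1, 42L); // illustrative user
                try (ResultSet rs = stmt.executeQuery()) {
                    Assert.assertTrue("Profile row should exist", rs.next());
                    Assert.assertEquals("New Name", rs.getString("display_name"));
                }
            }
        }
    }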

Even if cross-technology scripts are not currently useful to you, having all of your scripts use the same logging and execution frameworks can reduce the effort needed to debug and store automation results.

Applied to the automation core concept, these automation stacks can encapsulate the implementation specifics while providing the same or similar interfaces across the stacks for similar actions. This level of consistency can be used to create general approaches for designing test scripts in which the details of the automation implementation no longer leak into the scripts, which reduces maintenance and, in many cases, increases readability and supportability.

As with most implementations, your mileage may vary depending on your specific needs, implementations, and goals.

An automation's core contains some number of checks, as stated above, but how many is "some number"?

Some teams follow an automation philosophy of one (explicit) check/assert per script. In concept, this is a great idea. Keeping automated testing scripts small and focused can help an individual script run quickly and reduce the likelihood of "lots" of failures due to the same issue.

This failure reduction is largely due to being able to check Step B of an application without having to pass through Step A first. This means that issues in Step A will cause failures in test scripts for Step A, but are less likely to cause failures in test scripts for Step B because you skip Step A.

This approach works, however, only if you can start testing Step B directly. If the application requires that you perform Step A before starting Step B, you can introduce appreciable repetition across your test scripts.

For example:
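Suppose every script must log in and navigate before reaching the step it actually checks. A hypothetical sketch, assuming JUnit, with made-up helper names standing in for real automation actions:

    import org.junit.Assert;
    import org.junit.Test;

    public class RepetitionExample {
        // Hypothetical helpers standing in for real automation actions.
        private void logIn(String user, String pass) { /* drive the login page */ }
        private void navigateToCatalog() { /* drive the navigation */ }
        private void addItemToCart(String itemId) { /* drive the add-to-cart flow */ }
        private void performCheckout() { /* drive the checkout flow */ }
        private boolean cartContains(String itemId) { return true; /* stand-in */ }
        private boolean orderConfirmed() { return true; /* stand-in */ }

        @Test
        public void addItemScript() {
            logIn("tester", "secret");  // prerequisite Step A, repeated in every script
            navigateToCatalog();        // prerequisite Step B, repeated in every script
            addItemToCart("widget-42");
            Assert.assertTrue(cartContains("widget-42")); // the single check this script exists for
        }

        @Test
        public void checkoutScript() {
            logIn("tester", "secret");  // Step A again
            navigateToCatalog();        // Step B again
            addItemToCart("widget-42"); // now just a prerequisite, not the subject under test
            performCheckout();
            Assert.assertTrue(orderConfirmed());          // the single check this script exists for
        }
    }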

When following this one-check approach for a messaging interface such as a REST endpoint or a telecom interface, the amount of time that each step takes may be sufficiently small that the pass-through time for these prerequisite steps is insignificant.

Sadly, this is not always the case; most of us don't work exclusively at the protocol messaging level for telecom or REST, which, by its very nature, allows endpoints to be poked at will.

Usually, these prerequisite-step scenarios occur when automating via a GUI. Interacting via a GUI is slow. In these cases, having multiple checks or asserts in a single test script may be the most appropriate implementation.

Typically, the tradeoff here is a shorter duration for an automation run versus the risk that a problem in a particular test step prevents testing of later steps in a specific script. Certainly, some of the automation run duration can be shortened by parallelizing automation runs.

Apart from GUI tests, there are often instances in API and messaging tests for which multiple assertions in a single test script are appropriate. Take, for instance, the case where you want to check many fields in an API response message. You could write a test script that checks Field 1, then write a script that checks Field 2, etc. Yes, since you are testing at the message level, the tests are typically fast to execute, most often sub-second.

If, however, you want to check 60 fields, you could be adding approximately 30 seconds to testing that response message. You likely will also need to check different configurations of that response; you likely will have to check other response messages as well. It could add multiple minutes to each automation execution's duration. In cases such as this, it can make sense to have multiple asserts or checks per test script.
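A sketch of that consolidation, assuming JUnit and Java's built-in HTTP client, with an illustrative endpoint and field names: the script issues the request once and checks several fields of the same response.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.Assert;
    import org.junit.Test;

    public class ResponseFieldsTest {
        @Test
        public void orderResponseFields() throws Exception {
            // Send the stimulus once...
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/orders/42")) // illustrative endpoint
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            String body = response.body();

            // ...then run several checks against the same response, instead of
            // paying the request cost once per field in separate scripts.
            Assert.assertEquals(200, response.statusCode());
            Assert.assertTrue(body.contains("\"orderId\": 42"));         // illustrative fields;
            Assert.assertTrue(body.contains("\"status\": \"SHIPPED\"")); // a real suite would
            Assert.assertTrue(body.contains("\"currency\": \"USD\""));   // parse the JSON
        }
    }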

Conventional wisdom says that your checks must be deterministic, i.e., you can always programmatically determine whether an assertion's condition is true or false. After all, if you can't determine whether or not an assertion fires, you don't know if a test script should report a pass or a fail.

If you can't reliably determine pass or fail, you lose trust in your automation and the data it provides to you. Therefore, only deterministic checks are useful, right? Not so fast.

Most of the time, deterministic checks are required to produce trustworthy and valuable results; this is true for traditional automated test scripts in particular. When you go beyond traditional automation into nontraditional automation or automation assist, non-deterministic assertions can still provide value.

With this approach, automation is not about passing and failing; it's about computers helping testers do their jobs by doing things at which computers excel, namely repetitive operations and data comparisons.

When intentionally allowing non-deterministic checks, you understand that your automation is not living in a pass/fail world, but in a world where some unexpected things happen that might indicate an issue, and a human needs to evaluate those results to make that determination.
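A sketch of an automation-assist check along those lines; the scenario, names, and threshold are illustrative. Rather than asserting pass or fail, the script flags anything suspicious for a human to review:

    import java.util.ArrayList;
    import java.util.List;

    public class ResponseTimeReview {
        // Rather than pass/fail, collect anything suspicious for human review.
        static List<String> flagSuspicious(List<Long> responseTimesMs) {
            List<String> findings = new ArrayList<>();
            long thresholdMs = 2000; // illustrative threshold
            for (int i = 0; i < responseTimesMs.size(); i++) {
                long t = responseTimesMs.get(i);
                if (t > thresholdMs) {
                    findings.add("Request " + i + " took " + t
                            + " ms; might indicate an issue -- needs human evaluation");
                }
            }
            return findings;
        }

        public static void main(String[] args) {
            // The 4800 ms outlier is flagged for review; nothing "fails."
            flagSuspicious(List.of(120L, 95L, 4800L, 110L)).forEach(System.out::println);
        }
    }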

Want to know more? Come to my talk, "Stacking the Automation Deck," on April 28 at the STAREAST Virtual+ conference. The online event runs April 26-30. For my full schedule of appearances, visit my upcoming events page.
