Test automation started in the mid-1980s with the emergence of automated capture/replay tools. A capture/replay tool enables testers to record interaction scenarios: it records every keystroke, mouse movement, and response sent to the screen during the scenario. Later, the tester can replay the recorded scenarios, and the tool automatically notes any discrepancies between the actual and the expected results. Such tools improved testing efficiency and productivity by reducing manual testing effort.

The cost justification for test automation is simple and can be expressed in a single figure. As this figure suggests, over time the number of functional features for a particular application increases owing to changes and improvements to the business operations that use the software. Unfortunately, the number of people and the amount of time invested in testing each new release either remain flat or may even decline. As a result, the test functional coverage steadily decreases, which increases the risk of failure, translating to potential business losses.

For example, if the development organization adds application enhancements equal to 10 percent of the existing code, the test effort is now 110 percent of what it was before. Because no organization budgets more time and resources for testing than it does for development, it is literally impossible for testers to keep up.

This is why applications that have been in production for years often experience failures. When test resources and time cannot keep pace, decisions must be made to omit the testing of some functional features. Typically, the newest features are targeted because the oldest ones are assumed to still work. However, because changes in one area often have an unintended impact on other areas, this assumption may not be true. Ironically, the greatest risk is in the existing features, not the new ones, for the simple reason that they are already being used.

Test automation is the only way to resolve this dilemma. By continually adding new tests for new features to a library of automated tests for existing features, the test library can keep pace with the application's functionality.

The cost of failure is also on the rise. Whereas in past decades software was primarily found in back-office applications, today it is a competitive weapon that differentiates many companies from their competitors and forms the backbone of critical operations. Examples abound of undetected software errors that have caused losses in the tens or hundreds of millions, even billions, of dollars. Exacerbating this increasing risk are decreasing cycle times: product cycles have compressed from years into months, weeks, or even days. In such tight time frames, it is virtually impossible to achieve acceptable functional test coverage with manual testing.

Capture/replay automated tools have undergone a series of staged improvements. These evolutionary improvements are described in the following sections.

Static Capture/Replay Tools (without Scripting Language)

With these early tools, tests were performed manually while the inputs and outputs were captured in the background. During subsequent automated playback, the script repeated the same sequence of actions, applying the recorded inputs and comparing the actual responses to the captured results; differences were reported as errors. GUI elements such as menus, radio buttons, list boxes, and text were stored in the script, which left little flexibility to accommodate GUI changes. The scripts produced by this method contained hard-coded values that had to change if anything at all changed in the application, and the cost of maintaining them was astronomical and unacceptable. The scripts were not reliable even when the application had not changed: pop-up windows, messages, and other "surprises" that did not occur when the test was recorded often caused failures on replay. If the tester made an error entering data, the test had to be rerecorded; if the application changed, the test had to be rerecorded.
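To make the maintenance problem concrete, here is a minimal sketch of the kind of hard-coded script these tools produced. The `gui` driver and its calls are hypothetical, invented purely for illustration; any change to field positions, input values, or screen text breaks the script.

```python
# Hypothetical recorded script: every input, coordinate, and expected
# response is hard-coded, exactly as captured during the manual run.

def replay_login_scenario(gui):
    gui.click(x=112, y=310)            # coordinates captured during recording
    gui.type_text("jsmith")            # hard-coded user name
    gui.click(x=112, y=345)
    gui.type_text("secret99")          # hard-coded password
    gui.click(x=200, y=400)            # position of the "OK" button
    # The expected screen text is also hard-coded; any UI or data change
    # invalidates the comparison and forces the test to be rerecorded.
    assert gui.read_text(x=50, y=60) == "Welcome, John Smith"
```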

Static Capture/Replay Tools (with Scripting Language)

The next generation of automated testing tools introduced scripting languages. Now the test script was a program. Scripting languages were needed to handle conditions, exceptions, and the increased complexity of software. Automated script development, to be effective, had to be subject to the same rules and standards that were applied to software development. Making effective use of any automated test tool required at least one trained, technical person—in other words, a programmer.

Variable Capture/Replay Tools

The next generation of automated testing tools added variable test data to be used in conjunction with the capture/replay features. The difference between static and variable capture/replay is that in the former the inputs and outputs are fixed, whereas in the latter they are variable. This is accomplished by performing the test manually and then replacing the captured inputs and expected outputs with variables whose corresponding values are stored in data files external to the script. Variable capture/replay is available in most testing tools whose scripting language supports variable data. Variable capture/replay and the extended methodologies built on it reduce the risk of omitting regression testing of existing features and improve the productivity of the testing process.
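A minimal sketch of the same recorded scenario, reworked in the variable style, is shown below; the `gui` driver is again hypothetical, and the CSV file name and column names are assumptions for illustration. The captured inputs and expected outputs have been replaced with variables read from an external data file, so one script can be replayed against many data rows.

```python
import csv

def replay_login_scenario(gui, row):
    gui.click(x=112, y=310)
    gui.type_text(row["username"])      # variable input from the data file
    gui.click(x=112, y=345)
    gui.type_text(row["password"])      # variable input from the data file
    gui.click(x=200, y=400)
    # Expected output is also externalized rather than hard-coded.
    assert gui.read_text(x=50, y=60) == row["expected_greeting"]

def run_from_data_file(gui, path="login_cases.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # one replay per data row
            replay_login_scenario(gui, row)
```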

The problem with variable capture/replay tools is that they still require a scripting language that must be programmed. However, just as development programming techniques improved, new scripting techniques emerged.

The following are four popular techniques:

* Data-driven: The data-driven approach uses input and output values that are read from data files (such as CSV files, Excel files, text files, etc.) to drive the tests.

This approach to testing with variable data re-emphasizes the criticality of addressing both process and data, as discussed in the "Historical Software Testing and Development Parallels" section. It is necessary to focus on both the test scripts and the test automation data, i.e., development data modeling. Unfortunately, the creation of automated test data is often a challenge: deriving test data from the requirements (if they exist) is a manual and "intuitive" process. Emerging tools such as Smartwave Technologies' "Smart Test," a test data generator, address this problem by scientifically generating intelligent test data that can be imported into automated testing tools as variable data (see Chapter 34, "Software Testing Trends," for more details).
* Modular: The modular approach requires the creation of small, independent automation scripts and functions that represent modules, sections, and functions of the application under test.
* Keyword: The keyword-driven approach is one in which the different screens, functions, and business components are specified as keywords in a data table. The test data and the actions to be performed are scripted with the test automation tool (a minimal sketch follows this list).
* Hybrid: The hybrid approach is a combination of all of the foregoing techniques, drawing on their strengths and trying to mitigate their weaknesses. It is defined by a core data engine, generic component functions, and function libraries. Whereas the function libraries provide generic routines useful even outside the context of a keyword-driven framework, the core engine and component functions depend on the presence of all three elements.
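The following is a minimal keyword-driven sketch, assuming a hypothetical `gui` driver, an invented table file name, and illustrative keywords. Each row of the data table names a keyword plus its arguments; a small engine maps keywords to functions, so test designers can compose new tests by editing the table rather than the script.

```python
import csv

# Keyword implementations (the gui driver calls are illustrative only).
def open_screen(gui, name):
    gui.navigate_to(name)

def enter_text(gui, field, value):
    gui.set_field(field, value)

def verify_text(gui, field, expected):
    assert gui.get_field(field) == expected

KEYWORDS = {
    "open_screen": open_screen,
    "enter_text": enter_text,
    "verify_text": verify_text,
}

def run_keyword_table(gui, path="test_table.csv"):
    # Table columns: keyword, arg1, arg2, ... (blank cells are ignored).
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            keyword, *args = [cell for cell in row if cell != ""]
            KEYWORDS[keyword](gui, *args)
```

A data-driven variant would read the argument values from a separate data file, and a hybrid framework would combine the keyword engine, the modular component functions, and shared function libraries along the lines described above.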
