Modern testing tools are becoming increasingly advanced and user-friendly. The following describes how the activity of software testing has evolved, and continues to evolve, over time, and sets the perspective on where automated testing tools are heading.
Software testing is the activity of running a series of dynamic executions of a software program after its source code has been developed. It is performed to uncover and correct as many potential errors as possible before delivery to the customer. As pointed out earlier, software testing is still an "art." It can be considered a risk management technique: as a quality assurance activity, it represents the last line of defense for correcting deviations that stem from errors in the specification, design, or code.
Throughout the history of software development, there have been many definitions of and advances in software testing. Figure 1.1 graphically illustrates this evolution. In the 1950s, software testing was defined as "what programmers did to find bugs in their programs." In the early 1960s the definition of testing underwent a revision: consideration was given to exhaustive testing of the software in terms of the possible paths through the code, or total enumeration of the possible input data variations. It was noted that it is impossible to completely test an application because (1) the domain of program inputs is too large, (2) there are too many possible input paths, and (3) design and specification issues are difficult to test. For these reasons, exhaustive testing was discounted as both impractical and, in general, theoretically impossible.
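To make the first point concrete, here is a back-of-the-envelope sketch (not from the original text; the hypothetical function, the 32-bit argument types, and the assumed test rate are illustrative assumptions) showing how quickly the input domain of even a trivial routine outgrows any realistic testing budget:

```python
# Rough illustration of why exhaustive input testing is infeasible,
# using a hypothetical function that takes two 32-bit integers.

INPUTS_PER_ARG = 2 ** 32           # possible values of one 32-bit integer
TOTAL_CASES = INPUTS_PER_ARG ** 2  # every combination of the two arguments

TESTS_PER_SECOND = 1_000_000_000   # assume a very fast harness: 10^9 tests per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = TOTAL_CASES / TESTS_PER_SECOND / SECONDS_PER_YEAR
print(f"{TOTAL_CASES:.3e} cases -> about {years:.0f} years at 1e9 tests/s")
# Roughly 1.8e19 cases, or about 585 years -- before even considering path coverage.
```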
As software development matured through the 1960s and 1970s, the activity of software development came to be referred to as "computer science." In the early 1970s, software testing was defined as "what is done to demonstrate correctness of a program" or as "the process of establishing confidence that a program or system does what it is supposed to do." A short-lived computer science technique proposed at the time was software verification through "correctness proofs" applied during the specification, design, and implementation of a software system. Although the concept was theoretically promising, in practice it proved too time consuming and incomplete. For simple programs it was easy to show that the software "works" and to prove that it will, in theory, work. However, because most software was not tested using this approach, a large number of defects remained to be discovered in actual use. It was soon concluded that "proof of correctness" was an inefficient method of software testing. However, even today there is still a need for correctness demonstrations, such as acceptance testing, as described in various sections of this book.
In the late 1970s it was stated that testing is a process of executing a program with the intent of finding an error, not of proving that it works. The new definition emphasized that a good test case is one that has a high probability of finding an as-yet-undiscovered error, and a successful test is one that uncovers such an error. This approach was the exact opposite of the one that had been followed up to that point.
The foregoing two definitions of testing (prove that it works versus prove that it does not work) present a "testing paradox" with two underlying and contradictory objectives:
1. To give confidence that the product is working well
2. To uncover errors in the software product before its delivery to the customer (or the next stage of development)
If the first objective is to prove that a program works, it was determined that "we shall subconsciously be steered toward this goal; that is, we shall tend to select test data that have a low probability of causing the program to fail."
If the second objective is to uncover errors in the software product, how can there be confidence that the product is working well, inasmuch as it has just been proved that it is, in fact, not working? Today it is widely accepted by good testers that the second objective is more productive than the first, for if one accepts the first objective, the tester will subconsciously ignore defects while trying to prove that the program works.
The following good testing principles were proposed:
* A necessary part of a test case is a definition of the expected output or result.
* Programmers should avoid attempting to test their own programs.
* A programming organization should not test its own programs.
* Thoroughly inspect the results of each test.
* Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions (illustrated in the sketch after this list).
* Examining a program to see if it does not do what it is supposed to do is only half the battle. The other half is seeing whether the program does what it is not supposed to do.
* Avoid throwaway test cases unless the program is truly a throwaway program.
* Do not plan a testing effort under the tacit assumption that no errors will be found.
* The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
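As a concrete illustration of two of these principles, defining the expected output in every test case and exercising invalid as well as valid inputs, the following minimal sketch uses Python's standard unittest module. The function under test, parse_age, is hypothetical and exists only to serve the example:

```python
# Illustrative sketch: each test states its expected result explicitly,
# and invalid/unexpected inputs are exercised alongside valid ones.
import unittest


def parse_age(text):
    """Parse a person's age from a string; reject non-numeric or absurd values."""
    value = int(text)  # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value


class ParseAgeTests(unittest.TestCase):
    def test_valid_and_expected_input(self):
        # The expected output is defined up front, not inspected after the fact.
        self.assertEqual(parse_age("42"), 42)

    def test_invalid_and_unexpected_input(self):
        # The other half of the battle: check that the program does NOT
        # do what it is not supposed to do.
        with self.assertRaises(ValueError):
            parse_age("forty-two")
        with self.assertRaises(ValueError):
            parse_age("-3")


if __name__ == "__main__":
    unittest.main()
```

Because the tests are written against a stated specification rather than against the code's observed behavior, they are not throwaway artifacts: they can be rerun after every change.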
The 1980s saw the definition of testing extended to include defect prevention; designing tests is one of the most effective bug prevention techniques known. It was suggested that a testing methodology was required: specifically, that testing must include reviews throughout the entire software development life cycle and should be a managed process. The importance of testing not just the program but also the requirements, design, code, and the tests themselves was promoted.
"Testing" traditionally (up until the early 1980s) referred to what was done to a system once working code was delivered (now often referred to as system testing); however, testing today is "greater testing," in which a tester should be involved in almost every aspect of the software development life cycle. Once code is delivered to testing, it can be tested and checked, but if anything is wrong, the previous development phases have to be investigated. If the error was caused by a design ambiguity, or a programmer oversight, it is simpler to try to find the problems as soon as they occur, not wait until an actual working product is produced. Studies have shown that about 50 percent of bugs are created at the requirements (what do we want the software to do?) or design stages, and these can have a compounding effect and create more bugs during coding. The earlier a bug or issue is found in the life cycle, the cheaper it is to fix (by exponential amounts). Rather than test a program and look for bugs in it, requirements or designs can be rigorously reviewed. Unfortunately, even today, many software development organizations believe that software testing is a back-end activity.
In the mid-1980s, automated testing tools emerged to automate the manual testing effort to improve the efficiency and quality of the target application. It was anticipated that the computer could perform more tests of a program than a human could perform manually, and more reliably. These tools were initially fairly primitive and did not have advanced scripting language facilities (see the section, "Evolution of Automated Testing Tools," later in this chapter for more details).
In the early 1990s the power of early test design was recognized. Testing was redefined to be "planning, design, building, maintaining, and executing tests and test environments." This was a quality assurance perspective of testing that assumed that good testing is a managed process, a total life-cycle concern with testability.
Also in the early 1990s, more advanced capture/replay testing tools offered rich scripting languages and reporting facilities, and test management tools helped manage all the artifacts, from requirements and test designs to test scripts and test defects. Commercially available performance tools also arrived to test system performance; these tools stress- and load-tested the target system to determine its breaking points, an activity facilitated by capacity planning.
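The tools of the era were commercial capture/replay and performance products, but the underlying idea of load testing can be suggested with a minimal, hedged sketch in Python (the target URL, load levels, and timeout below are placeholders, not from the text): ramp up concurrent simulated users against an endpoint and watch latency and failures to locate a breaking point.

```python
# Minimal load-test sketch: increase concurrency and observe response times.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://localhost:8080/health"   # placeholder endpoint

def one_request(_):
    start = time.perf_counter()
    try:
        with urlopen(TARGET, timeout=5) as resp:
            resp.read()
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

for users in (1, 10, 50, 100):             # increasing simulated load
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users * 10)))
    failures = sum(1 for ok, _ in results if not ok)
    avg_ms = 1000 * sum(t for _, t in results) / len(results)
    print(f"{users:>3} users: avg {avg_ms:6.1f} ms, {failures} failures")
```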
Although the concept of testing as a process spanning the entire software development life cycle has persisted, in the mid-1990s, with the popularity of the Internet, software was often developed without a specific testing standard or model, making it much more difficult to test. Just as documents could be reviewed without specifically defining each expected result of each review step, so could tests be performed without explicitly defining everything that had to be tested in advance. Testing approaches to this problem became known as "agile testing"; the techniques include exploratory testing, rapid testing, and risk-based testing.
In the early 2000s Mercury Interactive (now owned by Hewlett-Packard [HP]) introduced an even broader definition of testing with the concept of business technology optimization (BTO). BTO aligns IT strategy and execution with business goals. It helps govern the priorities, people, and processes of IT. The basic approach is to measure and maximize value across the IT service delivery life cycle to ensure that applications meet quality, performance, and availability goals. An interactive digital cockpit revealed vital business availability information in real time to help IT and business executives prioritize IT operations and maximize business results. It provided end-to-end visibility into business availability by presenting key business process indicators in real time, as well as their mapping to the underlying IT infrastructure.