XML is important for Rational Functional Tester users for two reasons. First, data is now frequently formatted as XML, whether it is persisted in a file or a database or sent (usually via HTTP) to another application. Verifying the data content of XMLs is often an important software quality task, so parsing XMLs to capture data and compare it to baseline data is a common software testing need. Second, Rational Functional Tester employs the XML format to persist its own data. This chapter doesn’t discuss the details of Rational Functional Tester’s use of XML; rather, it covers all of the core XML-handling tasks that are needed to test XML data and to manipulate Rational Functional Tester XMLs.

Handling XML in Rational Functional Tester

This chapter uses a simple sample XML, but one that demonstrates all the basic moves you’ll need to make. It is similar to the following:

<products>
   <product>
      <name>Rational Functional Tester</name>
      <version>8.1.0</version>
   </product>
   <product>
      <name>Rational Performance Tester</name>
      <version>8.1.0</version>
   </product>
   <product>
      <name>Rational Quality Manager</name>
      <version>2.0</version>
   </product>
</products>

Our discussion of XML handling in Rational Functional Tester starts with a brief overview of the two main XML-handling approaches, DOM (Document Object Model) and SAX (Simple API for XML). DOM is a W3C standard; SAX is a de facto standard that grew out of the XML-DEV community. In Java, DOM and SAX are implemented in the org.w3c.dom and org.xml.sax packages. In VB.NET, the System.Xml libraries provide a DOM implementation and a SAX-like streaming parser.

DOM and SAX

DOM and SAX are fundamentally different: the DOM loads and holds an entire XML document in memory (as a tree structure), whereas SAX is event-driven, meaning that a SAX parser fires a sequence of events reflecting the XML's structure and content as it scans through the document. A SAX parser never holds a whole document in memory. For equal tasks on equal documents, you see output from a SAX parser sooner than from the DOM, because the SAX parser fires its events as it encounters them during the scan; with the DOM approach, the entire document must be parsed and loaded into memory before any processing can occur.

Because of this key difference, the DOM demands more memory than a SAX parser does for an equivalent document. On the other hand, because all of the document's nodes are in memory, the DOM provides random access to any of them at will. One of the major factors in choosing between the two is the size and complexity of the largest document you will have to parse relative to the available memory. The DOM is most useful when an XML document should be kept in memory for repeated access. SAX is strongest for quickly pulling specific data content out of large XMLs when you do not need to keep the full document in memory.
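To illustrate the event-driven model, the following minimal SAX handler is a sketch; it assumes product, name, and version element names like those in the sample above and a hypothetical file name (products.xml). It prints each version number as the parser streams past it, without ever holding the whole document in memory.

Java

import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class VersionHandler extends DefaultHandler {
    private final StringBuilder text = new StringBuilder();

    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        text.setLength(0);                 // reset the buffer at the start of each element
    }

    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);    // collect character data as the parser streams it
    }

    public void endElement(String uri, String localName, String qName) {
        if ("version".equals(qName)) {     // react only to the elements we care about
            System.out.println("version = " + text.toString().trim());
        }
    }

    public static void main(String[] args) throws Exception {
        // Events are handled as they fire; nothing is retained after the scan completes.
        SAXParserFactory.newInstance().newSAXParser()
                .parse("products.xml", new VersionHandler());
    }
}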

A major intersection of test automation and XML is data content. Most of the XML code you need to write validates the data content of XML documents, and the most direct route to this is through the DOM. The issue is not that data content can’t be validated with SAX; rather, the DOM is the path of least resistance because the code to extract the data is simpler. So, if your XMLs do not consume too much memory and are not large enough to make parsing slow, DOM is the easiest route to go. If you are parsing very large documents, SAX becomes the more attractive choice.
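For the data-content checks described above, a minimal DOM sketch along the following lines (again assuming the element names of the sample XML and a hypothetical products.xml file) loads the document once and then walks it at will, extracting each product name and version so they can be compared against baseline values.

Java

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ProductVersions {
    public static void main(String[] args) throws Exception {
        // Parse the whole document into an in-memory tree.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("products.xml");

        // Random access: visit every product element and read its child elements.
        NodeList products = doc.getElementsByTagName("product");
        for (int i = 0; i < products.getLength(); i++) {
            Element product = (Element) products.item(i);
            String name = product.getElementsByTagName("name").item(0).getTextContent();
            String version = product.getElementsByTagName("version").item(0).getTextContent();
            // In a test, compare name and version against expected (baseline) values here.
            System.out.println(name + " : " + version);
        }
    }
}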

Properties can not only be read using getProperty(); they can also be changed using setProperty():

public void setProperty( String propertyName, Object propertyValue )

Although you will most likely not use setProperty() nearly as often as getProperty(), it is a method worth knowing about. setProperty() takes two arguments: the name of the property to change and the value to change it to.

The reference example involves setting data field values in test objects (for example, text fields). In general, you should use inputKeys() or inputChars() to enter data into the SUT. In some cases, however, this becomes challenging. One such context is internationalization testing: inputKeys() and inputChars() can enter characters only from the current keyboard’s character set. If the current keyboard is set to English, for example, Rational Functional Tester throws a StringNotInCodePageException if your script attempts to enter any non-English characters.

One potentially viable solution is to use setProperty() instead of inputKeys() to set the field value. The first step is to determine which property you need to set. Manually set a value, and then examine the test object using either the Inspector or the Verification Point and Action Wizard, searching for a property whose value is the data value you entered. If you enter a search term of Pasta Norma in a Google search field and examine the field with the Inspector, you see two properties whose values are Pasta Norma: value and .value. This is not uncommon; the data value may be represented by more than one property. It’s a good idea to note all of these property names because some might be read-only, and if you try to set a read-only property, Rational Functional Tester throws an exception.
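As a quick sanity check before wiring this into a datapool loop, you might set the property and then read it back. The following sketch assumes the same text_q() search-field test object used in the listing that follows:

text_q().setProperty(".value", "Pasta Norma");             // set the field through the writable property
String echoed = (String) text_q().getProperty(".value");   // read it back to confirm the value took
logInfo("Search field now contains: " + echoed);           // a read-only property would have thrown on setProperty()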

If you had a datapool with different search strings in different character sets, you could modify the script to perform multiple searches, as shown below.

Using SetProperty() to set data in a test object

Java


while (!dpDone()) {
    text_q().setProperty(".value", dpString("SearchItem"));
    // Do what we need to do
    dpNext();
}

VB.NET


Do Until dpDone()
    text_q().SetProperty(".value", dpString("SearchItem"))
    ' Do what we need to do
    dpNext()
Loop

Why Is InputKeys() Preferred?

To illustrate why inputKeys() is the preferred method to enter data into objects, test what happens if you set the quantity of CDs to buy in the Classics sample application:

quantityText().setProperty("Text", "4");

You see something odd happen (or, in this case, not happen): the total amount is not updated to reflect the new value. The reason is that the total amount is updated in response to the keyboard events that inputKeys() generates; setProperty() does not cause those events to fire and is therefore not a viable technique for setting the quantity field.

In addition to the flexibility of being able to use datapool references in verification points created with the Rational Functional Tester Verification Point Wizard, you can create your own dynamic verification points in code. RationalTestScript (the root of all script classes) has a method, vpManual(), which you can use to create verification points.

vpManual() is used when you want your script to do all the work of verifying data. That work consists of capturing expected data, capturing actual data, comparing actual data with expected data, and logging the results of the comparison. Think of manual as referring to manual coding.

This discussion begins with the first signature (which you will likely use most often). In vpManual’s three-argument version, you supply a name for your verification point along with the baseline and actual data associated with the verification point. vpManual() then creates and returns a reference to an object; specifically, one that implements Rational Functional Tester’s IFtVerificationPoint interface. The verification point metadata (name, expected, and actual data) are stored in the returned IFtVerificationPoint.

IFtVerificationPoint myVP = vpManual( "FirstName", "Sasha", "Pasha");
Dim myVP as IFtVerificationPoint = vpManual( "FirstName", "Sasha", "Pasha" )

There are a couple of items to note about using vpManual():

* vpName— The verification point name must be a valid Java (or .NET) method name, must be unique within the script, and must be less than 30 (.NET) or 75 (Java) characters.
* Baseline and Actual data— The compiler accepts a reference to anything that inherits from Object (in .NET, this means any argument you pass is acceptable; in Java, anything other than primitives such as int, boolean, and so on); however, you need to satisfy more than the compiler. To automate the comparison of baseline with actual data, you need to pass data types that Rational Functional Tester knows how to compare (you don’t want to have to build your own compare method). This limits you to passing value classes. Some examples of legal value classes are: strings, primitives and their wrapper classes (Integer, Boolean, and so on), common classes that consist of value class fields (for example, Background, Color, Bounds), ITestData (meaning anything returned by getTestData()), and one- and two-dimensional arrays, vectors, and hashtables that contain value class elements.

What do you do with the IFtVerificationPoint that vpManual() returns? In the simplest case, you call performTest() and get on with things. performTest() compares the baseline with the actual and logs the results (boolean) of the comparison.

A simple comparison

Java

IFtVerificationPoint myVP = vpManual( "FirstName", "Sasha", "Pasha");
boolean passed = myVP.performTest();

VB.NET

Dim myVP As IFtVerificationPoint = VpManual("FirstName", "Sasha", _
    "Pasha")
Dim passed As Boolean = myVP.PerformTest()

In two lines of code, you have done quite a bit: you created a verification point, compared the baseline to the actual data, and logged the result. It’s common to combine these two statements into one:

vpManual( "FirstName", "Minsk", "Pinsk").performTest();

You use this style when the only method you need to invoke on the IFtVerificationPoint returned by vpManual() is performTest().

It’s important to note that the three-argument version of vpManual() does not persist baseline data to the file system for future runs. It’s also worth stressing that the verification point name must be unique within the script.

To illustrate how Rational Functional Tester behaves when a verification point name is not unique, consider the simple example where vpManual() is called in a loop, as demonstrated in the code below. The loop in each code sample simply compares two numbers. To introduce some variety, the actual value is forced to equal the baseline value only when the baseline value is even.

Consequences of a nonunique verification point name

Java

for (int baseline = 1; baseline <= 10; baseline++) {
    int actual = baseline % 2 == 0 ? baseline : baseline + 1;
    vpManual("CompareNumbers", baseline, actual).performTest();
}

VB.NET

For baseline As Integer = 1 To 10
    Dim actual As Integer
    If (baseline Mod 2 = 0) Then
        actual = baseline
    Else
        actual = baseline + 1
    End If
    VpManual("CompareNumbers", baseline, actual).PerformTest()
Next

If you execute this code, you see two interesting results in the log:
  • The pass/fail status for each verification point is what’s expected (half pass, half fail).

  • The comparator shows the correct actual value for each verification point, but a baseline value of 1 for every verification point. The reason is that once a verification point with a given name has been created, its baseline cannot be updated.

The common technique to deal with this issue (in a looping context) is to append a counter to the verification point name, guaranteeing a unique name per iteration. This is shown in the code below.

Guaranteeing a unique verification point name

Java

for (int baseline = 1; baseline <= 10; baseline++) {
    int actual = baseline % 2 == 0 ? baseline : baseline + 1;
    vpManual("CompareNumbers_" + baseline, baseline, actual).performTest();
}

VB.NET

For baseline As Integer = 1 To 10
    Dim actual As Integer
    If (baseline Mod 2 = 0) Then
        actual = baseline
    Else
        actual = baseline + 1
    End If
    VpManual("CompareNumbers_" & baseline, _
        baseline, actual).PerformTest()
Next

Persisting Baseline Data

In addition to the three-argument version of vpManual(), there is a two-argument version:

IFtVerificationPoint vpManual( String vpName, Object data )

The two-argument version is used when you want to persist the baseline data to the Rational Functional Tester project. Here’s how it works: the first time performTest() is called on an IFtVerificationPoint with a given name (the name passed to vpManual()), no comparison is done. The baseline data is written to the project, a verification point appears in the Script Explorer, and an informational message is written to the log. On each subsequent execution of performTest() on an IFtVerificationPoint with the same name, the data argument passed to vpManual() is treated as actual data, and performTest() executes the comparison and logs the result.
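The following one-line sketch shows the two-argument form in use; the table() test object and the "contents" test data type are assumptions for illustration:

// Run 1: the captured table contents are written to the project as the baseline,
// a verification point appears in the Script Explorer, and an informational message is logged.
// Run 2 and later: the captured contents are treated as actual data and compared with the stored baseline.
vpManual("OrderTableContents", table().getTestData("contents")).performTest();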

To retrieve the value of a single property, use the getProperty() method in the TestObject class. You can use getProperty() with both value and nonvalue properties. In this section, the discussion is limited to value properties. The signature of getProperty() for Java is:

Object getProperty( String propertyName )

The signature of getProperty() for VB.NET is:

Function GetProperty( propertyName as String) as Object

The argument is the name of the property whose value you want to retrieve. Because getProperty() can be used with any value or nonvalue property, the return type is the generic Object. You typically want to cast the result to a specific type (for example, String).

For value properties, you can even use Rational Functional Tester to generate the code for you.

1. Place your cursor at the line in your script where you want the code to be generated.
2. Insert recording.
3. Launch the Verification Point and Action Wizard.
4. Click the desired test object.
5. In the Select an Action window, click Get a Specific Property Value.
6. The wizard then displays all the test object's value property names and values. Click the property you want, and then click Next.
7. In the Variable Name window, enter a variable name to hold the returned value (Rational Functional Tester generates a variable name for you, but you will typically want to change it), and click Finish.

If you selected the label property, Rational Functional Tester generates the following code.

In VB.NET:

Dim buttonLabel As String = PlaceOrder().GetProperty( "label" )

In Java:

String buttonLabel = (String)placeOrder().getProperty( "label" );

In Java, you need to explicitly cast to the correct type. For example, if you retrieve a non-String property value, such as the Background property, you see:

java.awt.Color buttonBackground =
(java.awt.Color)placeOrder().getProperty( "background" );

If you pass a property name that does not exist in the object, Rational Functional Tester throws a PropertyNotFoundException.
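If the property name is built dynamically, or might simply be misspelled, you can guard against this case. The following sketch assumes the placeOrder() test object from the generated example above and the usual imports of a generated script:

try {
    String label = (String) placeOrder().getProperty("label");
    logInfo("Button label: " + label);
} catch (PropertyNotFoundException e) {
    // Thrown by Rational Functional Tester when the named property does not exist on the object.
    logWarning("Property 'label' not found on the Place Order button");
}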

As you become more comfortable with Rational Functional Tester, you can rely less on the wizard to generate code for you.

Architecture of Rational Functional Tester

This section introduces the general architecture of Rational Functional Tester, which is described in detail throughout the rest of the book. You can think of Rational Functional Tester as having three different modes of operation: normal edit mode, recording mode, and playback mode. Most of the time, you will work in edit mode. Recording and playback modes are significant because they are not passive: in these modes, the tool actively captures or drives your computer's keyboard and mouse.

How Test Assets Are Stored

Rational Functional Tester is a desktop file-based tool and does not have a server component. All Rational Functional Tester information is stored in files, primarily Java™ and XML, which users typically access from shared folders. You can use a configuration management tool such as Rational Team Concert or IBM® Rational ClearCase®, or a simpler tool such as CVS, to version control individual test assets. There is no database component, although it is possible to get test data directly from a database table. This is less common, however, and you will typically keep test data in files along with the other test assets.

If you use Rational Functional Tester Java scripting, your Functional Test project is a special kind of Java project in Eclipse™. If you use Rational Functional Tester Visual Basic®.NET scripting, your Functional Test project is a special kind of Visual Basic project in Microsoft® Visual Studio®. Both kinds of projects are created through Rational Functional Tester.

You can become highly proficient with Rational Functional Tester, even an expert user, without needing to concern yourself with most of the underlying files created by the tool. All of the test assets that you work with are created, edited, and maintained through the Rational Functional Tester interfaces. Most of the test assets, such as a test script, consist of several underlying files on the file system. The exact file structure for the Java scripting and Visual Basic .NET scripting versions is almost the same, with only minor differences.

How Test Results are Stored

Test results are stored in test logs, which can be stored in several different formats of your choice. You can choose how to save test logs based on the nature of the particular testing effort and what you use for test management. For example, a small informal testing effort might simply save all test results into HTML files. Another larger testing effort might send the test results to a test management tool, such as IBM Rational Quality Manager. Following are the options for storing test logs in Rational Functional Tester:
  • HTML
  • Text
  • Test and Performance Tools Platform (TPTP)
  • XML
  • Rational Quality Manager
IBM Rational Quality Manager is not required for test logging or for any other functions or uses described in this book. This is presented only as an optional tool for test management, which is often employed in testing efforts.

How Tests Are Recorded

You are likely to use the recorder in Rational Functional Tester to create new test scripts, because the recorder is usually the fastest and easiest way to generate lines of test script, even if that script is extensively modified later. Whether you capture long linear test procedures, develop a keyword-driven test framework, or do something in between, the recording mode is the same. When Rational Functional Tester goes into recording mode, it captures all keyboard and mouse input that goes to all enabled applications and environments. Every time you press a key or do anything with the mouse other than simply moving the pointer, the action is captured into the test recording. The exceptions are that Rational Functional Tester does not record itself and does not record applications that are not enabled. Be careful when you are recording that you do not click or type anything that you do not want to be part of your test.

Rational Functional Tester creates the test script as it is recording; there are no intermediate files or steps to generate the test script, and you can view the steps as you record. Some information about the test is stored in files that are different from the test script; this includes test objects, verification points, and test data. These files are hidden, and you see abstractions of them only as test assets in the test script.

The test scripts are either Java or Visual Basic .NET files, which must be executed through Rational Functional Tester. These are not just any Java or Visual Basic .NET files, however: each script extends a class in the com.rational.test.ft.script package and imports several other functional testing packages, which is what makes it an automated test. Using the recorder or creating a new blank test from within Rational Functional Tester automatically sets up the required packages, so you do not have to do this manually.
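The overall shape of a generated Java script looks roughly like the following sketch; the script and helper class names are illustrative, and the helper class generated by Rational Functional Tester ultimately extends RationalTestScript:

import resources.PlaceAnOrderHelper;   // generated helper class; brings in the functional testing packages

public class PlaceAnOrder extends PlaceAnOrderHelper {

    // testMain() is the entry point that Rational Functional Tester calls at playback.
    public void testMain(Object[] args) {
        // Recorded statements (object actions, verification points, datapool access) go here.
    }
}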

Although there are many techniques for recording tests, you always capture steps that interact with an application or system interface. Unlike many unit testing or developer testing tools, there is nothing that automatically generates tests by “pointing” to a class, interface, or package.

How Tests Are Executed

When you run a test in Rational Functional Tester, the machine goes into playback mode. In playback mode, Rational Functional Tester sends all of the mouse and keyboard actions that you recorded to the application under test. While Rational Functional Tester is “driving” the computer, it does not lock the user out from also using the mouse and keyboard; in general, though, you should not touch the keyboard or mouse while Rational Functional Tester is in playback mode. At times, however, you can run tests in interactive mode to work around potential playback issues.

A test script is composed largely of statements that interact with, and perform tests against, various objects in the application under test. When you execute a test script, Rational Functional Tester first has to find each object by matching the recognition properties in the script’s saved test object map against the actual objects present at runtime. If Rational Functional Tester cannot find a close enough match, it logs an error and either attempts to continue or aborts. If it does find a matching object, it performs the action on the object. These actions might be user interactions, such as clicks or selections, or other operations, such as getting or setting values. An action might also be a test (a verification point), in which case Rational Functional Tester compares a saved expected value or range with the actual runtime result. Although every statement (line of code) in the script produces a result, you normally see only the verification points (the tests) and other key results in the test log created for each run.

You can play a test back either on the same machine or on any other machine running a Rational Agent, which is installed by default with Rational Functional Tester. You can also run multiple tests on multiple remote machines for distributed functional testing, which makes it possible to complete much more testing in a shorter period of time. A given machine can run only one test at a time (or many sequentially); you cannot run multiple tests in parallel on the same machine. Although it is not required, you can also execute Rational Functional Tester tests on remote machines using test management tools, such as Rational Quality Manager.

Integrations with Other Applications

Rational Functional Tester is a stand-alone product that does not require other tools or applications. Integration with other tools or applications is optional and based on your particular needs. Following are some of the common types of applications that you can integrate with Rational Functional Tester:
  • Test management or quality management, such as IBM Rational Quality Manager
  • Defect tracking or change request management, such as IBM Rational ClearQuest
  • Configuration management or version control, such as IBM Rational Team Concert and IBM Rational ClearCase
  • Unit or developer testing tools, such as JUnit
  • Automated testing tools, such as IBM Rational Performance Tester
  • Development tools, such as IBM Rational Application Developer
Most of the applications listed previously, especially those developed by IBM, require little or no work to set up the integration. Many applications and tools, such as JUnit, Rational Service Tester, Rational Software Architect, or WebSphere® Integration Developer (to name a few) run in the Eclipse shell and can share the same interface as Rational Functional Tester. With these tools, you can switch between Rational Functional Tester and the other tools simply by switching perspectives (tabs).

In addition to these applications, you can also integrate Rational Functional Tester with many other kinds of applications. These integrations require varying amounts of work to implement, although you can find existing examples on IBM developerWorks® (www.ibm.com/developerworks/rational/). These include:
  • Custom-built test harnesses (extending the test execution)
  • Spreadsheets (for logging or simple test management)
  • Email notifications

The following describes a methodology for reducing development time through reuse of the prototype and of the knowledge gained in developing and using it. It does not cover how to test the prototype within spiral development; that is covered in the next part.

Step 1: Develop the Prototype

In the construction phase of spiral development, the external design and screen design are translated into real-world windows using a 4GL tool such as Visual Basic or PowerBuilder. The detailed business functionality is not built into the screen prototypes, but a "look and feel" of the user interface is produced so the user can see how the application will look.

Using a 4GL, the team constructs a prototype system consisting of data entry screens, printed reports, external file routines, specialized procedures, and procedure selection menus. These are based on the logical database structure developed in the JAD data modeling sessions. The sequence of events for performing the task of developing the prototype in a 4GL is iterative and is described as follows.

Define the basic database structures derived from logical data modeling. The data structures will be populated periodically with test data as required for specific tests.

Define printed report formats. These may initially consist of query commands saved in an executable procedure file on disk. The benefit of a query language is that most of the report formatting can be done automatically by the 4GL. The prototyping team needs only to define what data elements to print and what selection and ordering criteria to use for individual reports.

Define interactive data entry screens. Whether each screen is well designed is immaterial at this point. Obtaining the right information in the form of prompts, labels, help messages, and validation of input is more important. Initially, defaults should be used as often as possible.

Define external file routines to process data that is to be submitted in batches to the prototype or created by the prototype for processing by other systems. This can be done in parallel with other tasks.

Define algorithms and procedures to be implemented by the prototype and the finished system. These may include support routines solely for the use of the prototype.

Define procedure selection menus. The developers should concentrate on the functions as the user would see them. This may entail combining seemingly disparate procedures into single functions that can be executed with a single command from the user.

Define test cases to ascertain that:

* Data entry validation is correct.
* Procedures and algorithms produce expected results.
* System execution is clearly defined throughout a complete cycle of operation.

Repeat this process, adding report and screen formatting options, corrections of errors discovered in testing, and instructions for the intended users. This process should end after the second or third iteration or when changes become predominantly cosmetic rather than functional.

At this point, the prototyping team should have a good understanding of the overall operation of the proposed system. If time permits, the team should now describe the operation and underlying structure of the prototype. This is most easily accomplished by developing a draft user manual. A printed copy of each screen, report, query, database structure, selection menu, and catalogued procedure or algorithm should be included. Instructions for executing each procedure should include an illustration of the actual dialogue.

Step 2: Demonstrate Prototypes to Management

The purpose of this demonstration is to give management the option of making strategic decisions about the application on the basis of the prototype's appearance and objectives. The demonstration consists primarily of a short description of each prototype component and its effects, and a walkthrough of the typical use of each component. Every person in attendance at the demonstration should receive a copy of the draft user manual, if one is available.

The team should emphasize the results of the prototype and its impact on development tasks still to be performed. At this stage, the prototype is not necessarily a functioning system, and management must be made aware of its limitations.

Step 3: Demonstrate Prototype to Users

There are arguments for and against letting the prospective users actually use the prototype system. There is a risk that users' expectations will be raised to an unrealistic level with regard to delivery of the production system and that the prototype will be placed in production before it is ready. Some users have actually refused to give up the prototype when the production system was ready for delivery. This may not be a problem if the prototype meets the users' expectations and the environment can absorb the load of processing without affecting others. On the other hand, when users exercise the prototype, they can discover the problems in procedures and unacceptable system behavior very quickly.

The prototype should be demonstrated before a representative group of users. This demonstration should consist of a detailed description of the system operation, structure, data entry, report generation, and procedure execution. Above all, users must be made to understand that the prototype is not the final product, that it is flexible, and that it is being demonstrated to find errors from the users' perspective.

The results of the demonstration include requests for changes, correction of errors, and overall suggestions for enhancing the system. Once the demonstration has been held, the prototyping team cycles through the steps in the prototype process to make the changes, corrections, and enhancements deemed necessary through consensus of the prototyping team, the end users, and management.

For each iteration through prototype development, demonstrations should be held to show how the system has changed as a result of feedback from users and management. The demonstrations increase the users' sense of ownership, especially when they can see the results of their suggestions. The changes should therefore be developed and demonstrated quickly.

Requirements uncovered in the demonstration and use of the prototype may cause profound changes in the system scope and purpose, the conceptual model of the system, or the logical data model. Because these modifications occur in the requirements specification phase rather than in the design, code, or operational phases, they are much less expensive to implement.

Step 4: Revise and Finalize Specifications

At this point, the prototype consists of data entry formats, report formats, file formats, a logical database structure, algorithms and procedures, selection menus, system operational flow, and possibly a draft user manual.

The deliverables from this phase consist of formal descriptions of the system requirements, listings of the 4GL command files for each object programmed (i.e., screens, reports, and database structures), sample reports, sample data entry screens, the logical database structure, data dictionary listings, and a risk analysis. The risk analysis should include the problems and changes that could not be incorporated into the prototype and the probable impact that they would have on development of the full system and subsequent operation.

The prototyping team reviews each component for inconsistencies, ambiguities, and omissions. Corrections are made, and the specifications are formally documented.

Step 5: Develop the Production System

At this point, development can proceed in one of three directions:

1. The project is suspended or canceled because the prototype has uncovered insurmountable problems or the environment is not ready to mesh with the proposed system.
2. The prototype is discarded because it is no longer needed or because it is too inefficient for production or maintenance.
3. Iterations of prototype development are continued, with each iteration adding more system functions and optimizing performance until the prototype evolves into the production system.

The decision on how to proceed is generally based on such factors as:

* The actual cost of the prototype
* Problems uncovered during prototype development
* The availability of maintenance resources
* The availability of software technology in the organization
* Political and organizational pressures
* The amount of satisfaction with the prototype
* The difficulty in changing the prototype into a production system
* Hardware requirements

Prototyping is an iterative approach often used to build systems that users initially are unable to describe precisely. The concept is made possible largely through the power of fourth-generation languages (4GLs) and application generators.

Prototyping is, however, as prone to defects as any other development effort, maybe more so if not performed in a systematic manner. Prototypes need to be tested as thoroughly as any other system. Testing can be difficult unless a systematic process has been established for developing prototypes.

There are various types of software prototypes, ranging from simple printed descriptions of input, processes, and output to completely automated versions. An exact definition of a software prototype is impossible to find; the concept is made up of various components. Among the many characteristics identified by MIS professionals are the following:

* Comparatively inexpensive to build (i.e., less than 10 percent of the full system's development cost).
* Relatively quick development so that it can be evaluated early in the life cycle.
* Provides users with a physical representation of key parts of the system before implementation.
* Prototypes:
    - Do not eliminate or reduce the need for comprehensive analysis and specification of user requirements.
    - Do not necessarily represent the complete system.
    - Perform only a subset of the functions of the final product.
    - Lack the speed, geographical placement, or other physical characteristics of the final system.

Basically, prototyping is the building of trial versions of a system. These early versions can be used as the basis for assessing ideas and making decisions about the complete and final system. Prototyping is based on the premise that, in certain problem domains (particularly in online interactive systems), users of the proposed application do not have a clear and comprehensive idea of what the application should do or how it should operate.

Often, errors or shortcomings overlooked during development appear after a system becomes operational. Application prototyping seeks to overcome these problems by providing users and developers with an effective means of communicating ideas and requirements before a significant amount of development effort has been expended. The prototyping process results in a functional set of specifications that can be fully analyzed, understood, and used by users, developers, and management to decide whether an application is feasible and how it should be developed.

Fourth-generation languages have enabled many organizations to undertake projects based on prototyping techniques. They provide many of the capabilities necessary for prototype development, including user functions for defining and managing the user-system interface, data management functions for organizing and controlling access, and system functions for defining execution control and interfaces between the application and its physical environment.

In recent years, the benefits of prototyping have become increasingly recognized. Some include the following:

* Prototyping emphasizes active physical models. The prototype looks, feels, and acts like a real system.
* Prototyping is highly visible and accountable.
* The burden of attaining performance, optimum access strategies, and complete functioning is eliminated in prototyping.
* Issues of data, functions, and user-system interfaces can be readily addressed.
* Users are usually satisfied, because they get what they see.
* Many design considerations are highlighted, and a high degree of design flexibility becomes apparent.
* Information requirements are easily validated.
* Changes and error corrections can be anticipated and, in many cases, made on the spur of the moment.
* Ambiguities and inconsistencies in requirements become visible and correctable.
* Useless functions and requirements can be quickly eliminated.

The psychology of life-cycle testing encourages testing by individuals outside the development organization. The motivation for this is that with the life-cycle approach, there typically exist clearly defined requirements, and it is more efficient for a third party to verify these. Testing is often viewed as a destructive process designed to break development's work.

The psychology of spiral testing, on the other hand, encourages cooperation between quality assurance and the development organization. The basis of this argument is that, in a rapid application development environment, requirements may be incomplete or available only to varying degrees. Without this cooperation, the testing function would have a difficult task defining the test criteria. The only practical alternative is for testing and development to work together.

Testers can be powerful allies to development and, with a little effort, they can be transformed from adversaries into partners. This is possible because most testers want to be helpful; they just need a little consideration and support. To achieve this, however, an environment needs to be created to bring out the best of a tester's abilities. The tester and development manager must set the stage for cooperation early in the development cycle and communicate throughout the cycle.

Tester/Developer Perceptions

To understand some of the inhibitors to a good relationship between the testing function and development, it is helpful to understand how each views his or her role and responsibilities.

Testing is a difficult effort. It is a task that is both infinite and indefinite. No matter what testers do, they cannot be sure they will find all the problems, or even all the important ones.

Many testers are not really interested in testing and do not have the proper training in basic testing principles and techniques. Testing books or conferences typically treat the testing subject too rigorously and employ deep mathematical analysis. The insistence on formal requirement specifications as a prerequisite to effective testing is not realistic in the real world of a software development project.

It is hard to find individuals who are good at testing. It takes someone who is a critical thinker motivated to produce a quality software product, likes to evaluate software deliverables, and is not caught up in the assumption held by many developers that testing has a lesser job status than development. A good tester is a quick learner and eager to learn, is a good team player, and can effectively communicate both verbally and in writing.

The output from development is something that is real and tangible. A programmer can write code and display it to admiring customers, who assume it is correct. From a developer's point of view, testing results in nothing more tangible than an accurate, useful, and all-too-fleeting perspective on quality. Given these perspectives, many developers and testers often work together in an uncooperative, if not hostile, manner.

In many ways the tester and developer roles are in conflict. A developer is committed to building something successful. A tester tries to minimize the risk of failure and tries to improve the software by detecting defects. Developers focus on technology, which takes a lot of time and energy when producing software. A good tester, on the other hand, is motivated to provide the user with the best software to solve a problem.

Testers are typically ignored until the end of the development cycle when the application is "completed." Testers are always interested in the progress of development and realize that quality is only achievable when they take a broad point of view and consider software quality from multiple dimensions.

Project Goal: Integrate QA and Development

The key to integrating the testing and developing activities is for testers to avoid giving the impression that they are out to "break the code" or destroy development's work. Ideally, testers are human meters of product quality and should examine a software product, evaluate it, and discover if the product satisfies the customer's requirements. They should not be out to embarrass or complain, but inform development how to make their product even better. The impression they should foster is that they are the "developer's eyes to improved quality."

Development needs to be truly dedicated to quality and view the test team as an integral player on the development team. They need to realize that no matter how much work and effort has been expended by development, if the software does not have the correct level of quality, it is destined to fail. The testing manager needs to remind the project manager of this throughout the development cycle. The project manager needs to instill this perception in the development team.

Testers must coordinate with the project schedule and work in parallel with development. They need to be informed about what is going on in development, and so should be included in all planning and status meetings. This lessens the risk of introducing new bugs, known as "side effects," near the end of the development cycle and also reduces the need for time-consuming regression testing.

Testers must be encouraged to communicate effectively with everyone on the development team. They should establish a good relationship with the software users, who can help them better understand acceptable standards of quality. In this way, testers can provide valuable feedback directly to development.

Testers should intensively review online help and printed manuals whenever they are available. Having writers and testers share notes relieves some of the communication burden, rather than saddling development with supplying the same information twice.

Testers need to know the objectives of the software product, how it is intended to work, how it actually works, the development schedule, any proposed changes, and the status of reported problems.

Developers need to know what problems were discovered, what part of the software is or is not working, how users perceive the software, what will be tested, the testing schedule, the testing resources available, what the testers need to know to test the system, and the current status of the testing effort.

When quality assurance starts working with a development team, the testing manager needs to interview the project manager and show an interest in working in a cooperative manner to produce the best software product possible. The next section describes how to accomplish this.

Iterative/Spiral Development Methodology

Spiral methodologies are a reaction to the traditional waterfall methodology of systems development, a sequential solution development approach. A common problem with the waterfall model is that the elapsed time for delivering the product can be excessive.

By contrast, spiral development expedites product delivery. A small but functioning initial system is built and quickly delivered, and then enhanced in a series of iterations. One advantage is that the clients receive at least some functionality quickly. Another is that the product can be shaped by iterative feedback; for example, users do not have to define every feature correctly and in full detail at the beginning of the development cycle, but can react to each iteration.

With the spiral approach, the product evolves continually over time; it is not static and may never be completed in the traditional sense. The term spiral refers to the fact that the traditional sequence of analysis-design-code-test phases is performed on a microscale within each spiral or cycle, in a short period of time, and then the phases are repeated within each subsequent cycle. The spiral approach is often associated with prototyping and rapid application development.

Traditional requirements-based testing expects that the product definition will be finalized and even frozen prior to detailed test planning. With spiral development, the product definition and specifications continue to evolve indefinitely; that is, there is no such thing as a frozen specification. A comprehensive requirements definition and system design probably never will be documented.

The only practical way to test in the spiral environment, therefore, is to "get inside the spiral." Quality assurance must have a good working relationship with development. The testers must be very close to the development effort, and test each new version as it becomes available. Each iteration of testing must be brief, in order not to disrupt the frequent delivery of the product iterations. The focus of each iterative test must be first to test only the enhanced and changed features. If time within the spiral allows, an automated regression test also should be performed; this requires sufficient time and resources to update the automated regression tests within each spiral.

Clients typically demand very fast turnarounds on change requests; there may be neither formal release nor a willingness to wait for the next release to obtain a new system feature. Ideally, there should be an efficient, automated regression test facility for the product, which can be used for at least a brief test prior to the release of the new product version.

Spiral testing is a process of working from a base and building a system incrementally. Upon reaching the end of each phase, developers reexamine the entire structure and revise it. The spiral approach can be represented by drawing the four major phases of system development (planning/analysis, design, coding, and test/deliver) into quadrants, as shown in Figure 12.1. The respective testing phases are test planning, test case design, test development, and test execution/evaluation.
The spiral process begins with planning and requirements analysis to determine the functionality. Then a design is made for the base components of the system and the functionality determined in the first step. Next, the functionality is constructed and tested. This represents a complete iteration of the spiral.

Having completed this first spiral, users are given the opportunity to examine the system and enhance its functionality. This begins the second iteration of the spiral. The process continues, looping around and around the spiral until the users and developers agree the system is complete; the process then proceeds to implementation.

The spiral approach, if followed systematically, can be effective in ensuring that the users' requirements are being adequately addressed and that the users are closely involved with the project. It can allow for the system to adapt to any changes in business requirements that occurred after the system development began. However, there is one major flaw with this methodology: there may never be any firm commitment to implement a working system. One can go around and around the quadrants, never actually bringing a system into production. This is often referred to as "spiral death."

Although waterfall development has often proved too inflexible, the spiral approach can produce the opposite problem. Unfortunately, the flexibility of the spiral methodology can result in the development team ignoring what the user really wants, and thus the product fails user verification. This is where quality assurance is a key component of a spiral approach: it ensures that user requirements are being satisfied.

A variation to the spiral methodology is the iterative methodology, in which the development team is forced to reach a point where the system will be implemented. The iterative methodology recognizes that the system is never truly complete, but is evolutionary. However, it also realizes that there is a point at which the system is close enough to completion to be of value to the end user.

The point of implementation is decided upon prior to the start of the system, and a certain number of iterations will be specified, with goals identified for each iteration. Upon completion of the final iteration, the system will be implemented in whatever state it may be.

The client/server architecture for application development divides functionality between a client and server so that each performs its task independently. The client cooperates with the server to produce the required results.

The client is an intelligent workstation used by a single user, and because it has its own operating system, it can run other applications such as spreadsheets, word processors, and file processors. The client and the server process client/server application functions cooperatively. The server can be a PC, minicomputer, local area network, or even a mainframe. The server receives requests from the clients and processes them. The hardware configuration is determined by the application's functional requirements.

Some advantages of client/server applications include reduced costs, improved accessibility of data, and flexibility. However, justifying a client/server approach and ensuring quality are difficult and present additional difficulties not necessarily found in mainframe applications. Some of these problems include the following:

* The typical graphical user interface has more possible logic paths, which compounds the already large number of test cases found in the mainframe environment.
* Client/server technology is complicated and, often, new to the organization. Furthermore, this technology often comes from multiple vendors and is used in multiple configurations and in multiple versions.
* The fact that client/server applications are highly distributed results in a large number of failure sources and hardware/software configuration control problems.
* A short- and long-term cost-benefit analysis must be performed to justify client/server technology in terms of the overall organizational costs and benefits.
* Successful migration to client/server technology depends on matching migration plans to the organization's readiness for that technology.
* The effect of client/server technology on the user's business may be substantial.
* Choosing which applications will be the best candidates for a client/server implementation is not straightforward.
* An analysis needs to be performed of which development technologies and tools enable a client/server approach.
* Availability of client/server skills and resources, which are expensive, needs to be considered.
* Although client/server technology is more expensive than mainframe computing, cost is not the only issue. The function, business benefit, and the pressure from end users have to be balanced.

Integration testing in a client/server environment can be challenging. Client and server applications are built separately. When they are brought together, conflicts can arise no matter how clearly defined the interfaces are. When integrating applications, defect resolutions may have single or multiple solutions, and there must be open communication between quality assurance and development.

In some circles there exists a belief that the mainframe is dead and the client/server prevails. The truth of the matter is that applications using mainframe architecture are not dead, and client/server technology is not necessarily the panacea for all applications. The two will continue to coexist and complement each other in the future. Mainframes will certainly be part of any client/server strategy.

The life-cycle development methodology consists of distinct phases from requirements to coding. Life-cycle testing means that testing occurs in parallel with the development life cycle and is a continuous process. Although life-cycle (waterfall) development is very effective for many large applications requiring a lot of computer horsepower (for example, DOD, financial, and security-based systems), it has a number of shortcomings:

* The end users of the system are only involved at the very beginning and the very end of the process. As a result, the system that they were given at the end of the development cycle is often not what they originally visualized or thought they requested.
* The long development cycle and the shortening of business cycles lead to a gap between what is really needed and what is delivered.
* End users are expected to describe in detail what they want in a system, before the coding phase. This may seem logical to developers; however, there are end users who have not used a computer system before and are not certain of its capabilities.
* When the end of a development phase is reached, it is often not quite complete, but the methodology and project plans require that development press on regardless. In fact, a phase is rarely complete, and there is always more work than can be done. This results in the "rippling effect"; sooner or later, one must return to a phase to complete the work.
* Often, the waterfall development methodology is not strictly followed. In the haste to produce something quickly, critical parts of the methodology are not followed. The worst case is ad hoc development, in which the analysis and design phases are bypassed and the coding phase is the first major activity. This is an example of an unstructured development environment.
* Software testing is often treated as a separate phase starting in the coding phase as a validation technique and is not integrated into the whole development life cycle.
* The waterfall development approach can be woefully inadequate for many development projects, even if it is followed. An implemented software system is not worth very much if it is not the system the user wanted. If the requirements are incompletely documented, the system will not survive user validation procedures; that is, it is the wrong system. Another variation is when the requirements are correct, but the design is inconsistent with the requirements. Once again, the completed product will probably fail the system validation procedures.
* Because of the foregoing issues, experts began to publish methodologies based on other approaches, such as prototyping.

Each defect discovered during the foregoing tests is documented to ensure that it is properly recorded. A problem report is generated when a test procedure gives rise to an event that cannot be explained by the tester. The problem report documents the details of the event and includes at least these items (see Appendix E12, "Defect Report," for more details):

* Problem identification
* Author
* Release/build number
* Open date
* Close date
* Problem area
* Defect or enhancement
* Test environment
* Defect type
* Who detected
* How detected
* Assigned to
* Priority
* Severity
* Status

Other test reports to communicate the testing progress and results include a test case log, test log summary report, and system summary report.

A test case log documents the test cases for a test type to be executed. It also records the results of the tests, which provides the detailed evidence for the test log summary report and enables reconstructing testing, if necessary.

A test log summary report documents the test cases from the tester's logs in progress or completed for the status reporting and metric collection.

A system summary report should be prepared for every major testing event. Sometimes it summarizes all the tests. It typically includes the following major sections: general information (describing the test objectives, test environment, references, etc.), test results and findings (describing each test), software functions and findings, and analysis and test summary.

After systems testing, acceptance testing certifies that the software system satisfies the original requirements. This test should not be performed until the software has successfully completed systems testing. Acceptance testing is a user-run test that uses black-box techniques to test the system against its specifications. The end users are responsible for ensuring that all relevant functionality has been tested.

The acceptance test plan defines the procedures for executing the acceptance tests and should be followed as closely as possible. Acceptance testing continues even when errors are found, unless an error itself prevents continuation. Some projects do not require formal acceptance testing. This is true when the customer or user is satisfied with the other system tests, when timing requirements demand it, or when end users have been involved continuously throughout the development cycle and have been implicitly applying acceptance testing as the system is developed.

Acceptance tests are often a subset of one or more system tests. Two other ways to measure acceptance testing are as follows:

1. Parallel Testing—A business-transaction-level comparison with the existing system to ensure that adequate results are produced by the new system.
2. Benchmarks—A static set of results produced either manually or from an existing system is used as expected results for the new system.
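
As a rough illustration of the benchmark approach, the following sketch compares the new system's results against a static set of expected results. The result keys and values are hard-coded assumptions; in practice they would be loaded from files produced manually or by the existing system.

import java.util.*;

// Minimal sketch: compare actual results from the new system against a benchmark
// (a static set of expected results). Hard-coded here to keep the example self-contained.
public class BenchmarkCompare {
    public static void main(String[] args) {
        Map<String, String> benchmark = Map.of(
                "invoiceTotal/ORD-1001", "150.00",
                "invoiceTotal/ORD-1002", "89.95");
        Map<String, String> actual = Map.of(
                "invoiceTotal/ORD-1001", "150.00",
                "invoiceTotal/ORD-1002", "90.00");

        for (Map.Entry<String, String> expected : benchmark.entrySet()) {
            String got = actual.get(expected.getKey());
            if (!expected.getValue().equals(got)) {
                System.out.printf("MISMATCH %s: expected %s, got %s%n",
                        expected.getKey(), expected.getValue(), got);
            }
        }
    }
}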

After integration testing, the system is tested as a whole for functionality and fitness for use based on the System/Acceptance Test Plan. Systems are fully tested in the computer operating environment before acceptance testing occurs. The sources of the system tests are the quality attributes that were specified in the Software Quality Assurance Plan. System testing is a set of tests to verify these quality attributes and ensure that the acceptance test occurs in a relatively trouble-free manner. System testing verifies that the functions are carried out correctly. It also verifies that certain nonfunctional characteristics are present. Some examples include usability testing, performance testing, stress testing, compatibility testing, conversion testing, and document testing.

Black-box testing is a technique that focuses on testing a program's functionality against its specifications. White-box testing is a technique in which paths of logic are tested to determine how well they produce predictable results. Gray-box testing combines the two approaches; it is a well-balanced compromise that is widely applied during system testing.

After unit testing is completed, all modules must be integration-tested. During integration testing, the system is slowly built up by adding one or more modules at a time to the core of already-integrated modules. Groups of units are fully tested before system testing occurs. Because modules have been unit-tested prior to integration testing, they can be treated as black boxes, allowing integration testing to concentrate on module interfaces. The goals of integration testing are to verify that each module performs correctly within the control structure and that the module interfaces are correct.

Incremental testing is performed by combining modules in steps. At each step one module is added to the program structure, and testing concentrates on exercising this newly added module. When it has been demonstrated that a module performs properly with the program structure, another module is added, and testing continues. This process is repeated until all modules have been integrated and tested.
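
To illustrate a single incremental step, the following hedged Java sketch adds one new module to the already-integrated core and exercises only its interface, substituting a hand-written stub for a module that has not yet been integrated. All class and interface names here are hypothetical.

// Hypothetical sketch of one incremental integration step: the new OrderPricer module
// is added to the integrated core, while the not-yet-integrated tax module is replaced
// by a stub so that testing concentrates on the module interface.
public class IntegrationStep {

    interface TaxService {                        // interface expected from the future tax module
        double taxFor(double subtotal);
    }

    static class StubTaxService implements TaxService {
        public double taxFor(double subtotal) {   // canned response standing in for the real module
            return subtotal * 0.10;
        }
    }

    static class OrderPricer {                    // the newly added module under integration test
        private final TaxService taxService;
        OrderPricer(TaxService taxService) { this.taxService = taxService; }
        double total(double subtotal) { return subtotal + taxService.taxFor(subtotal); }
    }

    public static void main(String[] args) {
        OrderPricer pricer = new OrderPricer(new StubTaxService());
        double total = pricer.total(100.0);
        System.out.println(Math.abs(total - 110.0) < 1e-9
                ? "interface OK" : "interface FAILED: " + total);
    }
}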

Unit testing is the basic level of testing. Unit testing focuses separately on the smaller building blocks of a program or system. It is the process of executing each module to confirm that each performs its assigned function. The advantage of unit testing is that it permits the testing and debugging of small units, thereby providing a better way to manage the integration of the units into larger units. In addition, a smaller unit of code has few enough logic paths that it can be tested thoroughly with a manageable number of test cases. Unit testing also facilitates automated testing because the behavior of smaller units can be captured in scripts and replayed with a high degree of reuse. A unit can be one of several types of application software. Examples include the module itself as a unit, GUI components such as windows, menus, and functions, batch programs, online programs, and stored procedures.
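
As a minimal example, the following JUnit-style sketch tests one small function in isolation. The unit under test and its expected behavior are invented for illustration.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit under test: a small, self-contained function.
class Discount {
    static double apply(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price - (price * percent / 100.0);
    }
}

// Unit tests exercise the module by itself, confirming it performs its assigned function.
class DiscountTest {
    @Test
    void appliesPercentageDiscount() {
        assertEquals(90.0, Discount.apply(100.0, 10), 0.0001);
    }

    @Test
    void rejectsInvalidPercentage() {
        assertThrows(IllegalArgumentException.class, () -> Discount.apply(100.0, 150));
    }
}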

By the end of this phase, all the items in each section of the test plan should have been completed. The actual testing of the software is accomplished through the test data in the test plan developed during the requirements, logical design, physical design, and program unit design phases. Because expected results have been specified in the test cases and test procedures, the correctness of the executions can be confirmed; from a static point of view, the tests themselves have already been reviewed manually.

Dynamic testing, or time-dependent techniques, involves executing a specific sequence of instructions with the computer. These techniques are used to study the functional and computational correctness of the code.

Dynamic testing proceeds in the opposite order from the development life cycle. It starts with unit testing to verify each program unit independently and then proceeds to integration, system, and acceptance testing. After acceptance testing has been completed, the system is ready for operation and maintenance. The figure below briefly describes each testing type.

Unit testing is the process of executing a functional subset of the software system to determine whether it performs its assigned function. It is oriented toward the checking of a function or a module. White-box test cases are created and documented to validate the unit logic, and black-box test cases are created to test the unit against the specifications. Unit testing, along with the version control necessary during correction and retesting, is typically performed by the developer. During unit test case development, it is important to know which portions of the code have been subjected to test cases and which have not. By knowing this coverage, the developer can discover lines of code that are never executed or program functions that do not perform according to the specifications. When coverage is inadequate, implementing the system is risky because defects may be present in the untested portions of the code. Unit test case specifications are started and documented in the Test Specifications section of the test plan; all other items in that section should already have been completed.

All items in the Introduction, Test Approach and Strategy, Test Execution Setup, Test Tools, and Personnel Resources sections should have been completed prior to this phase. Items in the Test Procedures section, however, continue to be refined. The functional decomposition, integration, system, and acceptance test cases should be completed during this phase. Refinement continues for all items in the Test Procedures and Test Schedule sections.

The following describes a methodology for creating integration test cases.

Step 1: Identify Unit Interfaces

The developer of each program unit identifies and documents the unit's interfaces for the following unit operations:

* External inquiry (responding to queries from terminals for information)
* External input (managing transaction data entered for processing)
* External filing (obtaining, updating, or creating transactions on computer files)
* Internal filing (passing or receiving information from other logical processing units)
* External display (sending messages to terminals)
* External output (providing the results of processing to some output device or unit)

Step 2: Reconcile Interfaces for Completeness

The information needed for the integration test template is collected for all program units in the software being tested. Whenever one unit interfaces with another, those interfaces are reconciled. For example, if program unit A transmits data to program unit B, program unit B should indicate that it has received that input from program unit A. Interfaces not reconciled are examined before integration tests are executed.
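
A minimal sketch of the reconciliation idea follows, assuming a simple representation in which each unit declares the units it transmits data to and the units it acknowledges receiving data from. Any transmitted interface that the receiver does not acknowledge is flagged for examination before the integration tests are executed. The unit names and data layout are assumptions for the example.

import java.util.*;

// Sketch of Step 2: reconcile declared interfaces for completeness.
// The maps are assumed to be gathered from each unit's documented interfaces.
public class InterfaceReconciliation {
    public static void main(String[] args) {
        // unit -> units it transmits data to
        Map<String, Set<String>> sendsTo = Map.of(
                "A", Set.of("B"),
                "B", Set.of("C"));
        // unit -> units it acknowledges receiving data from
        Map<String, Set<String>> receivesFrom = Map.of(
                "B", Set.of("A"),
                "C", Set.of());        // C does not acknowledge input from B

        sendsTo.forEach((sender, receivers) -> {
            for (String receiver : receivers) {
                Set<String> acknowledged = receivesFrom.getOrDefault(receiver, Set.of());
                if (!acknowledged.contains(sender)) {
                    System.out.printf("Unreconciled interface: %s -> %s%n", sender, receiver);
                }
            }
        });
    }
}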

Step 3: Create Integration Test Conditions

One or more test conditions are prepared for integrating each program unit. After the condition is created, the number of the test condition is documented in the test template.

Step 4: Evaluate the Completeness of Integration Test Conditions

The following questions help evaluate whether the integration test conditions recorded on the integration testing template are complete and adequate for the integration process.

* Is an integration test developed for each of the following external inquiries?
  - Record test
  - File test
  - Search test
  - Match/merge test
  - Attributes test
  - Stress test
  - Control test
* Are all interfaces between modules validated so that the output of one is recorded as input to another?
* If file test transactions are developed, do the modules interface with all of the indicated files?
* Is the processing of each unit validated before integration testing?
* Do all unit developers agree that integration test conditions are adequate to test each unit's interfaces?
* Are all software units included in integration testing?
* Are all files used by the software being tested included in integration testing?
* Are all business transactions associated with the software being tested included in integration testing?
* Are all terminal functions incorporated in the software being tested included in integration testing?

The documentation of integration tests is started in the Test Specifications section. Also in this section, the functional decomposition continues to be refined, but the system-level test cases should be completed during this phase.

Test items in the Introduction section are completed during this phase. Items in the Test Approach and Strategy, Test Execution Setup, Test Procedures, Test Tool, Personnel Requirements, and Test Schedule continue to be refined.

Integration testing is designed to test the structure and the architecture of the software and determine whether all software components interface properly. It does not verify that the system is functionally correct, only that it performs as designed.

Integration testing is the process of identifying errors introduced by combining individual program unit-tested modules. It should not begin until all units are known to perform according to the unit specifications. Integration testing can start with testing several logical units or can incorporate all units in a single integration test.

Because the primary concern in integration testing is that the units interface properly, the objective of this test is to ensure that the units integrate, that parameters are passed correctly, and that file processing is correct. Integration testing techniques include top-down, bottom-up, sandwich, and thread testing.

A requirements traceability matrix is a document that traces user requirements from analysis through implementation. It can be used as a completeness check to verify that all requirements are present and that no unnecessary or extra features have been added, and as a maintenance guide for new personnel. At each step in the development cycle, the requirements, code, and associated test cases are recorded to ensure that each user requirement is addressed in the final system. Both the user and the developer can easily cross-reference the requirements to the design specifications, programming, and test cases.
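
The following hedged Java sketch shows one way a traceability matrix might be represented and checked for completeness. The requirement IDs, artifact names, and record layout are assumptions made for illustration.

import java.util.*;

// Sketch of a requirements traceability matrix: each requirement is traced to
// design, code, and test case artifacts; requirements with gaps are reported.
public class TraceabilityMatrix {

    record Trace(String design, String codeModule, List<String> testCases) {}

    public static void main(String[] args) {
        Map<String, Trace> matrix = new LinkedHashMap<>();
        matrix.put("REQ-001", new Trace("DS-4.1", "orders.Pricing", List.of("TC-101", "TC-102")));
        matrix.put("REQ-002", new Trace("DS-4.2", "orders.Tax", List.of()));   // no test cases yet

        matrix.forEach((req, trace) -> {
            if (trace.testCases().isEmpty()) {
                System.out.println(req + " has no test cases traced to it");
            }
        });
    }
}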

A software technical review is a form of peer review in which a team of qualified personnel examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards. Technical reviews may also provide recommendations and an examination of various alternatives. Technical reviews differ from software walkthroughs in their specific focus on the technical quality of the product reviewed. They differ from software inspections in their ability to suggest direct alterations to the product reviewed and their lack of a direct focus on training and process improvement.

An Ambiguity Review, developed by Richard Bender of Bender RBT, Inc., is a powerful testing technique that eliminates defects in the requirements phase of the software life cycle, thereby preventing those defects from propagating to the remaining phases of the software development life cycle. A QA engineer trained in the technique performs the Ambiguity Review. The engineer is not a domain expert (SME) and does not read the requirements for content, but only to identify ambiguities in the logic and structure of the wording. The Ambiguity Review takes place after the requirements, or a section of the requirements, reach first draft, and before they are reviewed for content (i.e., correctness and completeness) by domain experts. The engineer identifies all ambiguous words and phrases on a copy of the requirements, and a summary of the findings is presented to the Business Analyst.

The Ambiguity Review Checklist identifies 15 common problems that occur in writing requirements.

The testing process should begin early in the application development life cycle, not just at the traditional testing phase at the end of coding. Testing should be integrated with the application development phases.

During the requirements phase of the software development life cycle, the business requirements are defined on a high level and are the basis of the subsequent phases and the final implementation. Testing in its broadest sense commences during the requirements phase, which increases the probability of developing a quality system based on the user's expectations. The result is that the requirements are verified to be correct and complete. Unfortunately, more often than not, poor requirements are produced at the expense of the application. Poor requirements ripple down the waterfall and result in a product that does not meet the user's expectations. Some characteristics of poor requirements include the following:

* Partial set of functions defined
* Performance not considered
* Ambiguous requirements
* Security not defined
* Interfaces not documented
* Erroneous and redundant requirements
* Requirements too restrictive
* Contradictory requirements

Requirements Phase and Acceptance Testing

Functionality is the most important part of the specification and should include a hierarchic decomposition of the functions. The reason is that a description organized in levels enables each reviewer to read in as much detail as needed. Specifically, this makes the task of translating the specification into test requirements much easier.

Another important element of the requirements specification is the data description. It should contain details such as whether the database is relational or hierarchical. If it is hierarchical, a good representation is a data model or entity relationship diagram in terms of entities, attributes, and relationships.

Another section in the requirements should be a description of the interfaces between the system and external entities that interact with the system, such as users, external software, or external hardware. A description of how users will interact with the system should be included. This would include the form of the interface and the technical capabilities of the users.

During the requirements phase, the testing organization needs to perform two functions simultaneously. It needs to build the system/acceptance test plan and also verify the requirements. The requirements verification entails ensuring the correctness and completeness of the documentation prepared by the development team.

Step 1: Plan for the Review Process

Planning can be described at both the organizational level and the specific review level. Considerations at the organizational level include the number and types of reviews that are to be performed for the project. Project resources must be allocated for accomplishing these reviews.

At the specific review level, planning considerations include selecting participants and defining their respective roles, scheduling the review, and developing a review agenda. There are many issues involved in selecting the review participants. It is a complex task normally performed by management, with technical input. When selecting review participants, care must be exercised to ensure that each aspect of the software under review can be addressed by at least some subset of the review team.

To minimize the stress and possible conflicts in the review processes, it is important to discuss the role that a reviewer plays in the organization and the objectives of the review. Focusing on the review objectives will lessen personality conflicts.

Step 2: Schedule the Review

A review should ideally take place soon after a producer has completed the software but before additional effort is expended on work dependent on the software. The review leader must state the agenda based on a well-thought-out schedule. If all the inspection items have not been completed, another inspection should be scheduled.

The problem of allocating sufficient time to a review stems from the difficulty in estimating the time needed to perform the review. The approach that must be taken is the same as that for estimating the time to be allocated for any meeting; that is, an agenda must be formulated and time estimated for each agenda item. An effective technique is to estimate the time for each inspection item on a time line.

Another scheduling problem arises when the review runs too long. This requires that review processes be focused in terms of their objectives. Review participants must understand these review objectives and their implications in terms of actual review time, as well as preparation time, before committing to the review. The deliverable to be reviewed should meet a certain set of entry requirements before the review is scheduled. Exit requirements must also be defined.

Step 3: Develop the Review Agenda

A review agenda must be developed by the review leader and the producer prior to the review. Although review agendas are specific to any particular product and the objective of its review, generic agendas should be produced for related types of products. These agendas may take the form of checklists (see Appendix F, "Checklists," for more details).

Step 4: Create a Review Report

The output of a review is a report. The format of the report is not important. The contents should address the management perspective, user perspective, developer perspective, and quality assurance perspective.

From a management perspective, the review report serves as a summary of the review that highlights what was reviewed, who did the reviewing, and their assessment. Management needs an estimate of when all action items will be resolved to successfully track the project.

The user may be interested in analyzing review reports for some of the same reasons as the manager. The user may also want to examine the quality of intermediate work products in an effort to monitor the development organization's progress.

From a developer's perspective, the critical information is contained in the action items. These may correspond to actual errors, possible problems, inconsistencies, or other considerations that the developer must address.

The quality assurance perspective of the review report is twofold: quality assurance must ensure that all action items in the review report are addressed, and it should also be concerned with analyzing the data on the review forms and classifying defects to improve the software development and review process. For example, a large number of specification errors might suggest a lack of rigor or time in the requirements specifications phase of the project. Another example is a large number of defects reported, suggesting that the software has not been adequately unit tested.
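
As a small illustration of that analysis, the sketch below tallies defects recorded on review forms by type so that concentrations, such as a spike in specification errors, become visible. The defect categories and counts are invented for the example.

import java.util.*;
import java.util.stream.*;

// Sketch: classify defects recorded on review forms and count them by type.
// A concentration in one category can point at a process weakness
// (e.g., many specification errors suggest a rushed requirements phase).
public class ReviewDefectAnalysis {
    public static void main(String[] args) {
        List<String> defectTypes = List.of(
                "specification", "specification", "design", "coding",
                "specification", "standards", "coding");

        Map<String, Long> byType = defectTypes.stream()
                .collect(Collectors.groupingBy(t -> t, Collectors.counting()));

        byType.forEach((type, count) -> System.out.println(type + ": " + count));
    }
}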

Roles will depend on the specific review methodology being followed, that is, structured walkthroughs or inspections. These roles are functional, which implies that it is possible in some reviews for a participant to execute more than one role. The role of the review participants after the review is especially important because many errors identified during a review may not be fixed correctly by the developer. This raises the issue of who should follow up on a review and whether another review is necessary.

The review leader is responsible for the review. This role requires scheduling the review, conducting an orderly review meeting, and preparing the review report. The review leader may also be responsible for ensuring that action items are properly handled after the review process. Review leaders must possess both technical and interpersonal management characteristics. The interpersonal management qualities include leadership ability, mediator skills, and organizational talents. The review leader must keep the review group focused at all times and prevent the meeting from becoming a problem-solving session. Material presented for review should not require the review leader to spend more than two hours for preparation.

The recorder role in the review process guarantees that all information necessary for an accurate review report is preserved. The recorder must understand complicated discussions and capture their essence in action items. The role of the recorder is clearly a technical function and one that cannot be performed by a non-technical individual.

The reviewer role is to objectively analyze the software and be accountable for the review. An important guideline is that the reviewer must keep in mind that it is the software that is being reviewed and not the producer of the software. This cannot be overemphasized. Also, the number of reviewers should be limited to six. If too many reviewers are involved, productivity will decrease.

In a technical review, the producer may actually lead the meeting in an organized discussion of the software. A degree of preparation and planning is needed in a technical review to present material at the proper level and pace. The attitude of the producer is also important, and it is essential that he or she does not become defensive. This can be facilitated by the group leader's emphasizing that the purpose of the inspection is to uncover defects and produce the best product possible.

There are formal and informal reviews. Informal reviews occur spontaneously among peers; the reviewers do not necessarily have any responsibility and do not have to produce a review report. Formal reviews are carefully planned meetings in which reviewers are held responsible for their participation, and a review report is generated that contains action items.

The spectrum of review ranges from very informal peer reviews to extremely formal and structured inspections. The complexity of a review is usually correlated to the complexity of the project. As the complexity of a project increases, the need for more formal reviews increases.

Structured Walkthroughs

A structured walkthrough is a presentation review in which a review participant, usually the developer of the software being reviewed, narrates a description of the software, and the remainder of the group provides feedback throughout the presentation. Testing deliverables such as test plans, test cases, and test scripts can also be reviewed using the walkthrough technique. These are referred to as presentation reviews because the bulk of the feedback usually occurs only for the material actually presented.

Advance preparation of the reviewers is not necessarily required. One potential disadvantage of a structured walkthrough is that, because of its informal structure, disorganized and uncontrolled reviews may result. Walkthroughs may also be stressful if the developer is conducting the walkthrough.

Inspections

The inspection technique is a formally defined process for verification of the software product throughout its development. All software deliverables are examined at defined phases, from requirements through coding, to assess current status and quality effectiveness. One of the major decisions within an inspection is whether a software deliverable can proceed to the next development phase.

Software quality is built into a product during the early stages, when the cost to remedy defects is 10 to 100 times lower than it is during testing or maintenance. It is, therefore, advantageous to find and correct defects as near to their point of origin as possible. Exit criteria are the standard against which inspections measure completion of the product at the end of a phase.

The advantages of inspections are that they are very systematic, controlled, and less stressful. The inspection process promotes the concept of egoless programming. If managed properly, it is a forum in which developers need not become emotionally protective of the work produced. An inspection requires an agenda to guide the review preparation and the meeting itself. Inspections have rigorous entry and exit requirements for the project work deliverables.

A major difference between structured walkthroughs and inspections is that inspections collect information to improve the development and review processes themselves. In this sense, an inspection is more of a quality assurance technique than a walkthrough is.

Phased inspections apply the PDCA (Plan, Do, Check, and Act) quality model. Each development phase has entry criteria, that is, how a deliverable qualifies to enter an inspection, and exit criteria, that is, how to know when to exit the inspection. In between the entry and exit criteria are the project deliverables that are inspected.

The Plan step of the continuous improvement process consists of inspection planning and preparing an education overview. The strategy of an inspection is to design and implement a review process that is timely, efficient, and effective. Specific products are designated, as are acceptable criteria, and meaningful metrics are defined to measure and maximize the efficiency of the process. Inspection materials must meet inspection entry criteria. The right participants are identified and scheduled. In addition, a suitable meeting place and time are decided. The group of participants is educated on what is to be inspected and their roles.

The Do step includes individual preparation for the inspections and the inspection itself. Participants learn the material and prepare for their assigned roles, and the inspection proceeds. Each reviewer is assigned one or more specific aspects of the product to be reviewed in terms of technical accuracy, standards and conventions, quality assurance, and readability.

The Check step includes the identification and documentation of the defects uncovered. Defects are discovered during the inspection, but solution hunting and the discussion of design alternatives are discouraged. Inspections are a review process, not a solution session.

The Act step includes the rework and follow-up required to correct any defects. The author reworks all discovered defects. The team ensures that all the potential corrective actions are effective and no secondary defects are inadvertently introduced.

By going around the PDCA cycle for each development phase using inspections, we verify and improve each phase deliverable at its origin and stop defects dead in their tracks as they are discovered (see Figure 6.4). The next phase cannot start until the discovered defects are corrected, because it is advantageous to find and correct defects as near to their point of origin as possible. Repeated application of the PDCA cycle results in an ascending spiral, facilitating quality improvement at each phase. The end product is dramatically improved, and the task of software testing becomes far less daunting; for example, many of the defects will have been identified and corrected by the time the testing team receives the code.

The motivation for a review is that it is impossible to test all software. Clearly, exhaustive testing of code is impractical. Technology also does not exist for testing a specification or high-level design. The idea of testing a software test plan is also bewildering. Testing also does not address quality issues or adherence to standards, which are possible with review processes.

There are a variety of software technical reviews available for a project, depending on the type of software product and the standards that affect the review processes. The types of reviews depend on the deliverables to be produced. For example, a Department of Defense contract imposes certain stringent standards for reviews that must be followed; these requirements may not apply to in-house application development.

A review increases the quality of the software product, reduces rework and ambiguous effort, reduces testing, defines test parameters, and is a repeatable and predictable process. It is an effective method for finding defects and discrepancies; it increases the reliability of the delivered product, has a positive impact on the schedule, and reduces development costs.

Early detection of errors reduces rework at later development stages, clarifies requirements and design, and identifies interfaces. It reduces the number of failures during testing, reduces the number of retests, identifies requirements testability, and helps identify missing or ambiguous requirements.

Quality control is a key preventive component of quality assurance. Defect removal via technical reviews during the development life cycle is an example of a quality control technique. The purpose of technical reviews is to increase the efficiency of the development life cycle and provide a method to measure the quality of the products. Technical reviews reduce the amount of rework, testing, and "quality escapes," that is, undetected defects. They are the missing links to removing defects and can also be viewed as a testing technique, even though we have categorized testing as a separate quality assurance component.

Originally developed by Michael Fagan of IBM in the 1970s, inspections have several aliases. They are often referred to interchangeably as "peer reviews," "inspections," or "structured walkthroughs." Inspections are performed at each phase of the development life cycle from user requirements through coding. In the latter, code walkthroughs are performed in which the developer walks through the code for the reviewer.

Research demonstrates that technical reviews can be far more productive than automated testing techniques in which the application is executed and tested. A technical review is a form of manual testing that does not involve program execution on the computer. Structured walkthroughs and inspections are a more efficient means of removing defects than software testing alone. They also remove defects earlier in the life cycle, thereby reducing defect-removal costs significantly. They represent a highly efficient, low-cost technique of defect removal and can reduce defect-removal costs by more than two-thirds compared to dynamic software testing. A side benefit of inspections is the ability to periodically analyze the recorded defects and remove their root causes early in the software development life cycle.

The purpose of the following section is to provide a framework for implementing software reviews. Discussed are the rationale for reviews, the roles of the participants, planning steps for effective reviews, scheduling, time allocation, agenda definition, and review reports.

A test plan is the basis for accomplishing testing and should be considered a living document; that is, as the application changes, the test plan should change.

A good test plan encourages the attitude of "quality before design and coding." It demonstrates full functional coverage, with test cases that trace back to the functions being tested. It also contains workable mechanisms for monitoring and tracking discovered defects and for reporting status. Appendix E2 is a System/Acceptance Test Plan template that combines unit, integration, and system test plans into one. It is also used in this section to describe how a test plan is built during the waterfall life-cycle development methodology.

The following are the major steps that need to be completed to build a good test plan.

Step 1: Define the Test Objectives

The first step in planning any test is to establish what is to be accomplished as a result of the testing. This step ensures that all responsible individuals contribute to the definition of the test criteria that will be used. The developer of a test plan determines what is going to be accomplished with the test, the specific tests to be performed, the test expectations, the critical success factors of the test, constraints, scope of the tests to be performed, the expected end products of the test, a final system summary report (see Appendix E11, "System Summary Report"), and the final signatures and approvals. The test objectives are reviewed and approval for the objectives is obtained.

Step 2: Develop the Test Approach

The test plan developer outlines the overall approach or how each test will be performed. This includes the testing techniques that will be used, test entry criteria, test exit criteria, procedures to coordinate testing activities with development, the test management approach, such as defect reporting and tracking, test progress tracking, status reporting, test resources and skills, risks, and a definition of the test basis (functional requirement specifications, etc.).

Step 3: Define the Test Environment

The test plan developer examines the physical test facilities, defines the hardware, software, and networks, determines which automated test tools and support tools are required, defines the help desk support required, builds special software required for the test effort, and develops a plan to support the foregoing.

Step 4: Develop the Test Specifications

The developer of the test plan forms the test team to write the test specifications, develops test specification format standards, divides up the work tasks and work breakdown, assigns team members to tasks, and identifies features to be tested. The test team documents the test specifications for each feature and cross-references them to the functional specifications. It also identifies the interdependencies and work flow of the test specifications and reviews the test specifications.

Step 5: Schedule the Test

The test plan developer develops a test schedule based on the resource availability and development schedule, compares the schedule with deadlines, balances resources and workload demands, defines major checkpoints, and develops contingency plans.

Step 6: Review and Approve the Test Plan

The test plan developer or manager schedules a review meeting with the major players, reviews the plan in detail to ensure it is complete and workable, and obtains approval to proceed.