The psychology of life-cycle testing encourages testing by individuals outside the development organization. The motivation for this is that with the life-cycle approach, there typically exist clearly defined requirements, and it is more efficient for a third party to verify these. Testing is often viewed as a destructive process designed to break development's work.
The psychology of spiral testing, on the other hand, encourages cooperation between quality assurance and the development organization. The basis of this argument is that, in a rapid application development environment, requirements may be absent, incomplete, or still evolving. Without cooperation, the testing function would have a difficult time defining the test criteria; the only practical approach is for testing and development to work together.
Testers can be powerful allies to development and, with a little effort, they can be transformed from adversaries into partners. This is possible because most testers want to be helpful; they just need a little consideration and support. To achieve this, however, an environment must be created that brings out the best in a tester's abilities. The testing and development managers must set the stage for cooperation early in the development cycle and communicate throughout the cycle.
Tester/Developer Perceptions
To understand some of the inhibitors to a good relationship between the testing function and development, it is helpful to understand how each views his or her role and responsibilities.
Testing is a difficult effort. It is a task that is both infinite and indefinite. No matter what testers do, they cannot be sure they will find all the problems, or even all the important ones.
Many testers are not really interested in testing and do not have proper training in basic testing principles and techniques. Testing books and conferences typically treat the subject too rigorously and employ deep mathematical analysis. The insistence on formal requirement specifications as a prerequisite to effective testing is rarely realistic in actual software development projects.
It is hard to find individuals who are good at testing. It takes someone who is a critical thinker motivated to produce a quality software product, likes to evaluate software deliverables, and is not caught up in the assumption held by many developers that testing has a lesser job status than development. A good tester is a quick learner and eager to learn, is a good team player, and can effectively communicate both verbally and in writing.
The output from development is something that is real and tangible. A programmer can write code and display it to admiring customers, who assume it is correct. From a developer's point of view, testing results in nothing more tangible than an accurate, useful, and all-too-fleeting perspective on quality. Given these perspectives, many developers and testers often work together in an uncooperative, if not hostile, manner.
In many ways the tester and developer roles are in conflict. A developer is committed to building something successful. A tester tries to minimize the risk of failure and to improve the software by detecting defects. Developers focus on technology, which consumes much of their time and energy when producing software. A good tester, on the other hand, is motivated to provide the user with the best software to solve a problem.
Testers are typically ignored until the end of the development cycle when the application is "completed." Testers are always interested in the progress of development and realize that quality is only achievable when they take a broad point of view and consider software quality from multiple dimensions.
Project Goal: Integrate QA and Development
The key to integrating the testing and developing activities is for testers to avoid giving the impression that they are out to "break the code" or destroy development's work. Ideally, testers are human meters of product quality and should examine a software product, evaluate it, and discover if the product satisfies the customer's requirements. They should not be out to embarrass or complain, but inform development how to make their product even better. The impression they should foster is that they are the "developer's eyes to improved quality."
Development needs to be truly dedicated to quality and view the test team as an integral player on the development team. They need to realize that no matter how much work and effort has been expended by development, if the software does not have the correct level of quality, it is destined to fail. The testing manager needs to remind the project manager of this throughout the development cycle. The project manager needs to instill this perception in the development team.
Testers must coordinate with the project schedule and work in parallel with development. They need to be informed about what is going on in development, and so should be included in all planning and status meetings. This lessens the risk of introducing new bugs, known as "side effects," near the end of the development cycle and also reduces the need for time-consuming regression testing.
Testers must be encouraged to communicate effectively with everyone on the development team. They should establish a good relationship with the software users, who can help them better understand acceptable standards of quality. In this way, testers can provide valuable feedback directly to development.
Testers should intensively review online help and printed manuals whenever they are available. Having writers and testers share notes relieves some of the communication burden, rather than saddling development with fielding the same information requests twice.
Testers need to know the objectives of the software product, how it is intended to work, how it actually works, the development schedule, any proposed changes, and the status of reported problems.
Developers need to know what problems were discovered, what part of the software is or is not working, how users perceive the software, what will be tested, the testing schedule, the testing resources available, what the testers need to know to test the system, and the current status of the testing effort.
When quality assurance starts working with a development team, the testing manager needs to interview the project manager and show an interest in working in a cooperative manner to produce the best software product possible. The next section describes how to accomplish this.
Iterative/Spiral Development Methodology
Spiral methodologies are a reaction to the traditional waterfall methodology of systems development, a sequential solution development approach. A common problem with the waterfall model is that the elapsed time for delivering the product can be excessive.
By contrast, spiral development expedites product delivery. A small but functioning initial system is built and quickly delivered, and then enhanced in a series of iterations. One advantage is that the clients receive at least some functionality quickly. Another is that the product can be shaped by iterative feedback; for example, users do not have to define every feature correctly and in full detail at the beginning of the development cycle, but can react to each iteration.
With the spiral approach, the product evolves continually over time; it is not static and may never be completed in the traditional sense. The term spiral refers to the fact that the traditional sequence of analysis—design—code—test phases is performed on a microscale within each spiral or cycle, in a short period of time, and then the phases are repeated within each subsequent cycle. The spiral approach is often associated with prototyping and rapid application development.
Traditional requirements-based testing expects that the product definition will be finalized and even frozen prior to detailed test planning. With spiral development, the product definition and specifications continue to evolve indefinitely; that is, there is no such thing as a frozen specification. A comprehensive requirements definition and system design probably never will be documented.
The only practical way to test in the spiral environment, therefore, is to "get inside the spiral." Quality assurance must have a good working relationship with development. The testers must be very close to the development effort, and test each new version as it becomes available. Each iteration of testing must be brief, in order not to disrupt the frequent delivery of the product iterations. The focus of each iterative test must be first to test only the enhanced and changed features. If time within the spiral allows, an automated regression test also should be performed; this requires sufficient time and resources to update the automated regression tests within each spiral.
Clients typically demand very fast turnarounds on change requests; there may be neither formal release nor a willingness to wait for the next release to obtain a new system feature. Ideally, there should be an efficient, automated regression test facility for the product, which can be used for at least a brief test prior to the release of the new product version.
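To make this concrete, the sketch below shows one minimal form such an automated regression facility might take, using Python's unittest module. The apply_discount function and its expected values are illustrative assumptions invented for this example, not part of any product described here; the point is that previously verified behavior is captured once and rerun briefly before each release to catch side effects.

```python
import unittest

# Hypothetical system under test: a pricing function whose behavior
# was verified in an earlier iteration. Names and values are illustrative.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100.0), 2)

class RegressionSuite(unittest.TestCase):
    """Brief checks rerun before each product release to catch side effects."""

    def test_existing_behavior_unchanged(self):
        # Expected values captured when the feature first passed testing.
        self.assertEqual(apply_discount(100.0, 10), 90.0)
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_boundary_still_holds(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

if __name__ == "__main__":
    # exit=False lets the suite run inside a larger release script.
    unittest.main(exit=False, verbosity=2)
```

Because each spiral is short, a suite like this must stay fast; new checks are added for each enhanced or changed feature as it passes its first test.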
Spiral testing is a process of working from a base and building a system incrementally. Upon reaching the end of each phase, developers reexamine the entire structure and revise it. The spiral approach is represented by drawing the four major phases of system development (planning/analysis, design, coding, and test/deliver) into quadrants, as shown in Figure 12.1. The respective testing phases are test planning, test case design, test development, and test execution/evaluation.
The spiral process begins with planning and requirements analysis to determine the functionality. Then a design is made for the base components of the system and the functionality determined in the first step. Next, the functionality is constructed and tested. This represents a complete iteration of the spiral.
Having completed this first spiral, users are given the opportunity to examine the system and enhance its functionality. This begins the second iteration of the spiral. The process continues, looping around and around the spiral until the users and developers agree the system is complete; the process then proceeds to implementation.
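The looping described above can be sketched in code. The following Python model is schematic only; the toy phase steps, the Feedback structure, and the acceptance logic are assumptions made for the sketch, not part of any real methodology implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Feedback:
    """Simulated user review at the end of one spiral."""
    accepted: bool
    new_requests: list = field(default_factory=list)

def run_cycle(system, requirements):
    """One spiral: planning/analysis -> design -> code -> test/deliver,
    performed on a microscale within a short period of time."""
    plan = sorted(requirements)                       # planning/analysis
    designed = [f"designed:{r}" for r in plan]        # design
    system.extend(d.replace("designed:", "built:") for d in designed)  # coding
    assert all(f.startswith("built:") for f in system)  # test/deliver
    return system

def spiral(initial_requirements, reviews):
    """Loop around the quadrants until users accept the system."""
    requirements, system = list(initial_requirements), []
    for cycle, feedback in enumerate(reviews, start=1):
        run_cycle(system, requirements)
        if feedback.accepted:
            return system, cycle          # proceed to implementation
        requirements = feedback.new_requests
    raise RuntimeError("no commitment to implement: the spiral never ends")
```

For example, spiral(["login"], [Feedback(False, ["reports"]), Feedback(True)]) builds a small initial system, enhances it once based on user review, and exits after the second cycle; if no review ever accepts, the loop instead fails, mirroring the risk of endlessly circling the quadrants.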
The spiral approach, if followed systematically, can be effective in ensuring that the users' requirements are adequately addressed and that the users are closely involved with the project. It allows the system to adapt to changes in business requirements that occur after development begins. However, the methodology has one major flaw: there may never be a firm commitment to implement a working system. One can go around and around the quadrants without ever bringing a system into production, a situation often referred to as "spiral death."
Although waterfall development has often proved too inflexible, the spiral approach can produce the opposite problem. Its flexibility often results in the development team losing sight of what the user really wants, so the product fails user verification. This is where quality assurance is a key component of the spiral approach: it ensures that user requirements are being satisfied.
A variation to the spiral methodology is the iterative methodology, in which the development team is forced to reach a point where the system will be implemented. The iterative methodology recognizes that the system is never truly complete, but is evolutionary. However, it also realizes that there is a point at which the system is close enough to completion to be of value to the end user.
The point of implementation is decided upon prior to the start of the system, and a certain number of iterations will be specified, with goals identified for each iteration. Upon completion of the final iteration, the system will be implemented in whatever state it may be.