Testing is a very important phase in the software development life cycle, so the work that testers and test leads do in this phase should be properly documented, and testers should be able to provide the needed information in proper reports. Here are some important documents, how often they are produced, and the reason for each:
| Test Documentation | Frequency | Reason |
| --- | --- | --- |
| High Level Test Plan | Each project | A test plan is produced for each project. It defines which functions will be tested and which will not. This document will detail any risks and contingencies, as well as stating assumptions and defining the required resources. |
| Test Specification | Each stage | This document details the test conditions and test cases for each stage of testing. It will be used by the testers when running their tests. |
| Test Logs | Each stage | Test logs are to be produced by each tester; this will enable progress to be monitored and controlled. It will also provide a suitable test audit trail at the end of the project. |
| Test Progress Report | Weekly | This report will provide managers with weekly progress on the testing being carried out. ‘S’ curves showing test progress will be used, as well as counts of faults found/fixed. |
| Test Summary Report | When required | This summary report will be produced when requested by Senior Management at any stage of the development life cycle. It will be a condensed version of the Test Progress Report aimed specifically at Senior Management. |
| Post Project Report | Each project | At the end of each project a ‘Post Project Review’ will be carried out. This activity will contain an analysis of what went well and what did not go well in terms of the project. A report will be produced detailing the changes needed for process improvement. |
Some or all of the following charts should be used to monitor progress. It is recommended that the graphs be displayed clearly on a wall so that everyone is aware of the current situation. Charts are recommended rather than tables because they are easier to read and more likely to attract attention and be acted upon:
Faults Found:
This chart simply logs the number of faults found during each day of the test schedule. The severity of faults could be indicated by using different columns on the chart.
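As an illustration, here is a minimal sketch of such a chart using Python and matplotlib; the daily counts below are invented example data, and a second bar series per day could be added to split the faults by severity.

```python
import matplotlib.pyplot as plt

# Hypothetical faults found on each day of the test schedule
# (illustrative numbers only).
days = ["Day 1", "Day 2", "Day 3", "Day 4", "Day 5"]
faults = [5, 11, 9, 7, 3]

# One column per day of the test schedule.
plt.bar(days, faults)
plt.ylabel("Faults found")
plt.title("Faults Found per Day")
plt.show()
```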
Faults Found vs. Faults Fixed:
Monitoring the number of faults found together with the number of faults fixed is a useful way to spot potential scheduling problems early. In the example above, the increasing gap between the number of faults found and the number fixed suggests that more development effort is needed to fix more of the outstanding faults. Leaving these until the last moment invariably ends in disaster (a delayed release, a poor quality system, or both).
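A possible sketch of this chart in Python/matplotlib, assuming the found and fixed counts are plotted cumulatively (the usual way to make the gap visible); the numbers are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical cumulative totals per day (illustrative values only).
days = list(range(1, 11))
faults_found = [4, 9, 15, 22, 30, 39, 47, 56, 64, 73]
faults_fixed = [1, 3, 6, 10, 14, 17, 20, 22, 24, 25]

# Plot both cumulative series; a widening gap between the two lines is the
# early-warning sign that more fixing effort is needed.
plt.plot(days, faults_found, marker="o", label="Faults found (cumulative)")
plt.plot(days, faults_fixed, marker="s", label="Faults fixed (cumulative)")
plt.xlabel("Day of test schedule")
plt.ylabel("Number of faults")
plt.title("Faults Found vs. Faults Fixed")
plt.legend()
plt.show()
```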
Tests Run and Tests Passed vs. Tests Planned:
Since we will know at the start of a test effort how many tests we intend to run (defined by the scope), it is possible to plan the number of tests that should be completed on each day of the test effort. This plan will normally look like an S-curve, as shown in the example graph above (a third-order polynomial). Plotting both the number of tests actually run and the number of tests that have passed will quickly highlight any of a number of problems. For example, if the number of tests run falls below the number planned, then either more testers or more time will be needed. If the number of tests passed falls much below the number run (i.e. if a large number of tests fail), then more faults than expected are being found, and this will impact both the test team and the developers.
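The planned S-curve can be generated from a simple third-order polynomial, as mentioned above. The sketch below in Python/matplotlib assumes a total of 200 tests over 20 days and uses invented figures for the tests actually run and passed.

```python
import numpy as np
import matplotlib.pyplot as plt

total_tests = 200   # tests in scope (assumed figure)
duration = 20       # working days in the test effort (assumed figure)

# A simple third-order polynomial S-curve: 3t^2 - 2t^3 rises smoothly
# from 0 to 1 as t goes from 0 to 1 (slow start, fast middle, slow finish).
t = np.linspace(0, 1, duration + 1)
planned = total_tests * (3 * t**2 - 2 * t**3)
days = np.arange(duration + 1)

# Hypothetical actuals for the first 12 days (illustrative only).
run = [0, 2, 6, 14, 25, 38, 52, 66, 80, 92, 103, 112, 120]
passed = [0, 2, 5, 12, 21, 32, 44, 55, 66, 75, 83, 90, 96]

plt.plot(days, planned, "--", label="Tests planned (S-curve)")
plt.plot(days[:len(run)], run, marker="o", label="Tests run")
plt.plot(days[:len(passed)], passed, marker="s", label="Tests passed")
plt.xlabel("Day of test effort")
plt.ylabel("Number of tests")
plt.title("Tests Run and Tests Passed vs. Tests Planned")
plt.legend()
plt.show()
```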
Faults Per Owner:
It is important to know how many faults are being worked on at any one time and by whom. This chart will show whether there is a balance of faults being assigned to the team. Each “column” on the bar chart can show priority and/or severity for further analysis.
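A possible sketch of this chart, assuming each fault record carries an owner and a severity; the owner names and counts below are hypothetical.

```python
import matplotlib.pyplot as plt

# Hypothetical open-fault counts per owner, split by severity.
owners = ["Dev A", "Dev B", "Dev C", "Dev D"]
critical = [1, 3, 0, 2]
major = [4, 2, 5, 1]
minor = [2, 6, 3, 4]

# Stack the severities within each owner's column so the chart shows both
# the workload per owner and how serious that workload is.
plt.bar(owners, critical, label="Critical")
plt.bar(owners, major, bottom=critical, label="Major")
bottoms = [c + m for c, m in zip(critical, major)]
plt.bar(owners, minor, bottom=bottoms, label="Minor")
plt.ylabel("Open faults")
plt.title("Faults per Owner")
plt.legend()
plt.show()
```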
Number of Defects for each Cycle:
This graph shows the number of bugs in each status (New, Assigned, Open, Fixed, Reviewed-not-OK, Closed, Deferred) in each cycle for every release. Bug statuses differ from company to company, so testers should change the statuses shown in the graph to match their own workflow.
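A sketch of this kind of grouped bar chart in Python/matplotlib; the cycles, statuses and counts below are invented, and the status labels should be renamed to match your own defect workflow.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical defect counts per test cycle, broken down by status.
cycles = ["Cycle 1", "Cycle 2", "Cycle 3"]
status_counts = {
    "New": [12, 8, 5],
    "Open": [9, 6, 3],
    "Fixed": [7, 10, 9],
    "Closed": [4, 9, 12],
}

x = np.arange(len(cycles))
width = 0.8 / len(status_counts)

# Draw one group of bars per cycle, with one bar per defect status.
for i, (status, counts) in enumerate(status_counts.items()):
    plt.bar(x + i * width, counts, width, label=status)

plt.xticks(x + width * (len(status_counts) - 1) / 2, cycles)
plt.ylabel("Number of defects")
plt.title("Number of Defects for Each Cycle")
plt.legend()
plt.show()
```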
Number of different severity bugs in each Cycle:
This graph shows the number of bugs of each severity (Critical, Major, Minor, Cosmetic) in each cycle for every release. Severity classifications also vary between organizations, so the severity categories in the graph should be adjusted to match your own standards.
Number of different severity bugs in each Test Level:
This graph shows the number of bugs of each severity (Critical, Major, Minor, Cosmetic) at each test level (unit testing, integration testing, system testing and user acceptance testing) for every release. As above, adjust the severity categories to match your own organization's standards.
Number of different severity bugs in each Version:
This graph shows the number of bugs of each severity (Critical, Major, Minor, Cosmetic) in each version. Again, adjust the severity categories to match your own organization's standards.
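The three severity charts above all follow the same pattern; only the grouping on the x-axis changes (cycle, test level or version). Below is a sketch using pandas with invented counts; swap the index values to produce the per-cycle or per-level variants.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical severity counts; replace the index with cycles or test
# levels to produce the other two variants of this chart.
data = pd.DataFrame(
    {
        "Critical": [2, 1, 0],
        "Major": [6, 4, 2],
        "Minor": [9, 7, 5],
        "Cosmetic": [4, 3, 3],
    },
    index=["Version 1.0", "Version 1.1", "Version 1.2"],
)

# One group of bars per version, one bar per severity category.
data.plot(kind="bar", rot=0)
plt.ylabel("Number of bugs")
plt.title("Bugs by Severity in Each Version")
plt.tight_layout()
plt.show()
```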