To see all articles of ISTQB-ISEB Foundation guide, see here:

Software Testing-ISTQB ISEB Foundation Guide

Once the test plan has been developed, the activities and timescales determined within it need to be reviewed constantly against what is actually happening. This is test progress monitoring. The purpose of test progress monitoring is to provide feedback and visibility of the progress of test activities.

The data required to monitor progress can be collected manually, e.g. by counting the test cases developed at the end of each day. With the advent of sophisticated test management tools, it is also possible to collect the data as an automatic output from a tool, either already formatted into a report or as a data file that can be manipulated to present a picture of progress.

The progress data is also used to assess progress against exit criteria, such as test coverage, e.g. 50 per cent requirements coverage achieved.
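As a quick illustration, checking progress data against exit criteria amounts to comparing measured values with agreed thresholds. The criteria names and figures below are invented for the example:

```python
# Illustrative exit-criteria check; the criteria and thresholds are
# assumptions for this sketch, not from any standard.
exit_criteria = {"requirements_coverage": 0.90, "open_severe_defects": 0}
measured      = {"requirements_coverage": 0.50, "open_severe_defects": 3}

coverage_ok = measured["requirements_coverage"] >= exit_criteria["requirements_coverage"]
defects_ok  = measured["open_severe_defects"] <= exit_criteria["open_severe_defects"]

# With only 50 per cent coverage achieved, the criteria are not yet met.
print("Exit criteria met:", coverage_ok and defects_ok)
```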

Common test metrics include:

  • Percentage of work done in test case preparation (or percentage of planned test cases prepared).
  • Percentage of work done in test environment preparation.
  • Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
  • Defect information (e.g. defect density, defects found and fixed, failure rate and retest results).
  • Test coverage of requirements, risks or code.
  • Subjective confidence of testers in the product.
  • Dates of test milestones.
  • Testing costs, including the cost compared with the benefit of finding the next defect or to run the next test.
Ultimately test metrics are used to track progress towards the completion of testing, which is determined by the exit criteria. So test metrics should relate directly to the exit criteria.
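As a sketch of how the execution metrics above might be derived from raw test run records (the record structure and figures here are hypothetical, not taken from any particular tool):

```python
# Hypothetical execution records: (test case ID, executed?, passed?).
cases = [
    ("TC-01", True, True),
    ("TC-02", True, False),
    ("TC-03", False, False),   # not yet run
    ("TC-04", True, True),
]

total = len(cases)
run = sum(1 for _name, executed, _ok in cases if executed)
passed = sum(1 for _name, executed, ok in cases if executed and ok)

# Test case execution and pass rate, as reported on a progress dashboard.
print(f"Test cases run: {run}/{total} ({100 * run / total:.0f}%)")
print(f"Passed (of those run): {passed}/{run} ({100 * passed / run:.0f}%)")
```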

There is a trend towards ‘dashboards’, which present all of the relevant metrics on a single screen or page for maximum impact. For a dashboard, and generally when delivering metrics, it is best to use a relatively small but high-impact subset of the available metrics. This is because readers do not want to wade through lots of data to find the key piece of information they are after, which is invariably ‘Are we on target to complete on time?’

These metrics are often displayed in graphical form, examples of which are shown in the figure below. It reflects progress on the running of test cases and reports on defects found. There is also a box at the top left for written commentary on progress (this could simply be the issues and/or successes of the previous reporting period).



The graph in the figure below is the one shown at the bottom left of the dashboard in the figure above. It reports the number of incidents raised, showing both the planned and actual numbers of incidents.
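The planned-versus-actual curves on such a graph are simply running totals of the incidents raised in each reporting period. A minimal sketch, with purely illustrative weekly counts:

```python
# Hypothetical incidents raised per week; the numbers are illustrative only.
planned = [5, 10, 8, 4]
actual  = [7, 12, 6, 3]

def cumulative(counts):
    """Running total per period, as plotted on a progress graph."""
    total, out = 0, []
    for c in counts:
        total += c
        out.append(total)
    return out

print("Planned (cumulative):", cumulative(planned))
print("Actual (cumulative): ", cumulative(actual))
```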



Test Reporting


Test reporting is the process whereby test metrics are reported in summarized format to update the reader on the testing tasks undertaken. The information reported can include:
  • What has happened during a given period of time, e.g. a week, a test level or the whole test endeavor, or when exit criteria have been met.
  • Analyzed information and metrics required to support recommendations and decisions about future actions, such as:
1. an assessment of defects remaining;
2. the economic benefit of continued testing, e.g. the point at which running additional tests costs more than the benefit they are likely to deliver;
3. outstanding risks;
4. the level of confidence in the tested software, e.g. defects planned vs actual defects found.

The IEEE 829 standard includes an outline of a test summary report that could be used for test reporting.

The information gathered can also be used to help with any process improvement opportunities. This information could be used to assess whether:
  • the goals for testing were correctly set (were they achievable; if not why not?);
  • the test approach or strategy was adequate (e.g. did it ensure there was enough coverage?);
  • the testing was effective in ensuring that the objectives of testing were met.
Test Control

We have referred above to the collection and reporting of progress data. Test control uses this information to decide on a course of action to ensure control of the test activities is maintained and exit criteria are met. This is particularly required when the planned test activities are behind schedule. The actions taken could impact any of the test activities and may also affect other software life-cycle activities.

Examples of test-control activities are as follows:
  • Making decisions based on information from test monitoring.
  • Prioritizing tests when an identified project risk occurs (e.g. software delivered late).
  • Changing the test schedule due to the availability of a test environment.
  • Setting an entry criterion requiring fixes to be retested (confirmation tested) by a developer before accepting them into a build (this is particularly useful when defect fixes repeatedly fail again on retest).
  • Reviewing product risks, and perhaps changing the risk ratings to meet the target.
  • Adjusting the scope of the testing (perhaps the number of tests to be run) to manage the testing of late change requests.
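As an illustration of the prioritization point above, reordering tests by risk rating and cutting from the bottom when time runs short can be sketched as follows (the test names and ratings are invented for the example):

```python
# Illustrative (test name, risk rating) pairs; not from any real project.
tests = [
    ("login", "high"), ("report export", "low"),
    ("payment", "high"), ("help page", "low"), ("search", "medium"),
]
rank = {"high": 0, "medium": 1, "low": 2}

time_for = 3  # only enough time left to run three tests

# Sort by risk (stable, so original order is kept within each rating),
# then drop the lowest-priority tests that no longer fit.
to_run = sorted(tests, key=lambda t: rank[t[1]])[:time_for]
print([name for name, _ in to_run])
```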
The following test-control activities are likely to be outside the test leader's responsibility. However, this should not stop the test leader from making a recommendation to the project manager.
  • Descoping of functionality, i.e. removing some less important planned deliverables from the initial delivered solution to reduce the time and effort required to achieve that solution.
  • Delaying release into the production environment until exit criteria have been met.
  • Continuing testing after delivery into the production environment, so that defects can be found and fixed before they cause failures in production.
To check your understanding, I would again like to ask you some questions:

Name four common test metrics.
Name the eight headings in the IEEE 829 summary report.
Identify three ways a test leader can control testing if there are more tests than there is time to complete.

You may follow the complete series of Test Management articles here:

Test Management Risk And Testing
Software Test Organization
Software Test Approaches Test Strategies
Software Test Planning And Estimation
Software Test Progress Monitoring & Control
Software Testing Incident Management
Software Test Configuration Management
