To see all articles of ISTQB-ISEB Foundation guide, see here:

Software Testing-ISTQB ISEB Foundation Guide

In our previous article, we discussed what a test tool is.

There are several stages in the process of introducing a test tool into an organization, and each should be considered before implementation.

Analyze the Problem/Opportunity

An assessment should be made of the maturity of the test process used within the organization. If the organization's test processes are immature and ineffective then the most that the tool can do is to make the repetition of these processes quicker and more accurate—quick and accurate ineffective processes are still ineffective!

It is therefore important to identify the strengths, weaknesses and opportunities that exist within the test organization before introducing test tools. Tools should only be implemented that will either support an established test process or support required improvements to an immature test process. It may be beneficial to carry out a TPI (Test Process Improvement) or CMMI (Capability Maturity Model Integration) assessment to establish the maturity of the organization before considering the implementation of any test tool.

Generate Alternative Solutions

It may be more appropriate and cost-effective to do something different. In some organizations, performance testing, which may only need to be done from time to time, could be outsourced to a specialist testing consultancy. Training or recruiting better staff could provide more benefits than implementing a test tool and improve the effectiveness of a test process more significantly. In addition, it is more effective to maintain a manual regression pack so that it accurately reflects the high-risk areas than to automate an outdated regression pack (that is no longer relevant) using a test execution tool.

An early investigation of what tools are available is likely to form part of this activity.

Constraints and Requirements

A thorough analysis of the constraints and requirements of the tool should be performed. Interested parties should attend workshops and/or be interviewed so that a formal description of the requirements can be produced and approved by the budget holder and other key stakeholders.

As with the requirements for a piece of software, a failure to specify accurate requirements for the tool can lead to delays, additional costs and the wrong things being delivered. For example, a review tool might be implemented that does not allow access across the internet, even though staff from many countries need to participate in reviews. Any financial or technical constraints (e.g. compatibility with particular operating systems or databases) should also be considered.

It is useful to attach some sort of priority or ranking to each requirement or group of requirements.
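For illustration, such a prioritized list could be captured in a simple structure so that must-have requirements are checked first. The requirement names, priorities and weights below are hypothetical, not taken from the guide:

```python
# A minimal sketch of recording tool requirements with MoSCoW-style
# priorities and numeric weights (all entries are illustrative).
requirements = [
    {"id": "R1", "text": "Accessible across the internet", "priority": "must",   "weight": 5},
    {"id": "R2", "text": "Integrates with defect tracker", "priority": "should", "weight": 3},
    {"id": "R3", "text": "Exportable HTML reports",        "priority": "could",  "weight": 1},
]

# Group by priority so must-have requirements can be verified first
# when talking to vendors.
must_haves = [r for r in requirements if r["priority"] == "must"]
print([r["id"] for r in must_haves])  # → ['R1']
```

A tool that fails any must-have requirement can then be excluded before any detailed evaluation effort is spent on it.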

Training, coaching and mentoring requirements should also be identified. For example, experienced consultants could be used for a few weeks or months to work on overcoming implementation problems with the tool and to help transfer knowledge to permanent staff. Such consultants could be provided by the vendor or could be from the contract market.

Requirements for the tool vendor should also be considered. These could include the quality of training and support offered by the vendor during and after implementation and the ability to enhance and upgrade the tool in the future. In addition, their financial stability should be considered as the vendor could go bankrupt or sell to another vendor. Therefore, using a small niche vendor may be a higher risk than using an established tool supplier.

If non-commercial tools (such as open source and freeware) are being considered then there are likely to be risks around the lack of training and support available. In addition, the ability or desire of the service support supplier (or open-source provider) to continue to develop and support the tool should be taken into account.

Evaluation and Shortlist

The tools available in the marketplace should be evaluated to identify a shortlist of the tools that provide the best fit to the requirements and constraints. This may involve:

  • searching the internet;
  • attending exhibitions of test tools;
  • discussions with tool vendors;
  • engaging specialist consultants to identify relevant tools.
It may also be useful for the test organization to send a copy of its list of requirements and constraints to tool vendors so that:
  • the vendor is clear about what the test organization wants;
  • the vendor can respond with clarity about what its own tools can do and what workarounds there are to meet the requirements that the tool cannot provide;
  • the test organization does not waste time dealing with vendors that cannot satisfy its key requirements.
The outcome of this initial evaluation should result in a shortlist of perhaps one, two or three tools that appear to meet the requirements.

Detailed Evaluation/Proof of Concept

A more detailed evaluation (proof of concept) should then be performed against this shortlist. This should be held at the test organization's premises in the test environment in which the tool will be used. This test environment should use the system under test and other software, operating systems and hardware with which the tool will be used. There are several reasons why there is little benefit from evaluating the tool on something different. For example:
  • Test execution tools do not necessarily recognize all object types in the system under test, or they may need to be reconfigured to do so.
  • Performance measurement tools may need to be reconfigured to provide meaningful performance information.
  • Test management tools may need to have workflow redesigned to support established test processes and may need to be integrated with existing tools used within the test process.
  • Static analysis tools may not work on the version of programming languages used.
In some cases, it may be worth considering whether changes can be made to the organization's test environments and infrastructure, but the costs and risks need to be understood and quantified.

(Note that if there is only one tool in the shortlist then it may be appropriate to combine the proof of concept and the pilot project.)

After each proof of concept the performance of the tool should be assessed in relation to each predefined requirement. Any additional features demonstrated should be considered and noted as potential future requirements.

Once all proofs of concept have been carried out, it may be necessary to amend the requirements in light of what was found during the tool selection process. Any amendments should be agreed with stakeholders. Each tool should then be assessed against the finalized set of requirements.
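One simple way to make this assessment objective is a weighted scoring matrix: each tool receives a fit score per requirement, multiplied by the requirement's weight. The tool names, weights and scores below are hypothetical, used only to show the arithmetic:

```python
# Hypothetical weighted scoring of shortlisted tools against the
# finalized requirements. Weights and fit scores (0-5) are illustrative.
weights = {"internet_access": 5, "os_support": 4, "reporting": 2}

# Each tool's fit score per requirement, as judged in the proof of concept.
tools = {
    "Tool A": {"internet_access": 5, "os_support": 3, "reporting": 4},
    "Tool B": {"internet_access": 2, "os_support": 5, "reporting": 5},
}

def weighted_score(scores):
    """Sum of (requirement weight x fit score) over all requirements."""
    return sum(weights[req] * s for req, s in scores.items())

ranking = sorted(tools, key=lambda t: weighted_score(tools[t]), reverse=True)
for name in ranking:
    print(name, weighted_score(tools[name]))  # Tool A 45, Tool B 40
```

The raw totals should never replace judgment (a tool that fails a must-have requirement loses regardless of its total), but they make the comparison transparent to stakeholders.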

There are three likely outcomes at this stage:
  • None of the tools meet the requirements sufficiently well to make it worthwhile purchasing and implementing them.
  • One tool meets the requirements much better than the others and is likely to be worthwhile. In this case, select this tool.
  • The situation is unclear and more information is needed. In this case a competitive trial or another cycle/iteration of the process may be needed. Perhaps the requirements need to be revised or further questions need to be put to vendors. It may also be time to start negotiations with vendors about costs.
Negotiations with Vendor of Selected Tool

Once a tool has been selected discussions will be held with the vendor to establish and negotiate the amount of money to be paid and the timing of payments. This will include some or all of the following:
  • purchase price;
  • annual license fee;
  • consultancy costs;
  • training costs;
  • implementation costs.
Discussions should establish, first, the amount to be paid for a pilot project and, second (assuming the pilot project is successful), the price to be paid for a larger-scale implementation.

The Pilot Project

The aims of a pilot project include the following:
  • To establish what changes need to be made to the high-level processes and practices currently used within the test organization. This involves assessing whether the tool's standard workflow, processes and configuration need to be amended to fit the test process, or whether the existing processes need to be changed to obtain the optimum benefits that the tool can provide.
  • To determine lower level detail such as templates, naming standards and other guidelines for using the tool. This can take the form of a user guidelines document.
  • To establish whether the tool provides value for money. This is done by trying to estimate and quantify the financial and other benefits of using the tool and then comparing this with the fees paid to the vendor and the projected internal costs to the organization (e.g. lost time that could be used for other things, the cost of hiring contractors, etc.).
  • A more intangible aim is to learn more about what the tool can and cannot do, and how these functions (or workarounds) can be applied within the test organization to obtain maximum benefit.
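The value-for-money comparison described above is simple arithmetic once the estimates are made. All of the figures below are invented purely to illustrate the calculation; a real pilot would substitute its own estimates:

```python
# Back-of-envelope value-for-money check for a pilot.
# Every figure here is a hypothetical estimate, not a real quotation.
license_fee   = 10_000   # annual license fee paid to the vendor
consultancy   = 6_000    # vendor consultants during the pilot
internal_cost = 4_000    # staff time diverted from other work

# Estimated benefit: hours of manual regression testing saved per year,
# valued at an assumed internal hourly rate.
hours_saved = 500
hourly_rate = 50
annual_benefit = hours_saved * hourly_rate   # 25,000

total_cost = license_fee + consultancy + internal_cost   # 20,000
net_benefit = annual_benefit - total_cost
print(net_benefit)  # → 5000: positive, so the tool appears to pay for itself
```

A negative result does not automatically kill the tool (some benefits, such as improved repeatability, are hard to price), but it forces the stakeholders to justify the intangibles explicitly.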
The pilot project should report back to the group of stakeholders that determined the requirements of the tool.

If a decision is made to implement the tool on a larger scale then a formal project should be created and managed according to established project management principles.

Key Factors in Successful Implementations of Test Tools

There are certain factors or characteristics that many successful tool implementation projects have in common:
  • Implementing findings from the pilot project such as high-level process changes and using functions or workarounds that can add additional benefits.
  • Identifying and subsequently writing user guidelines, based on the findings of the pilot project.
  • Rolling out the tool incrementally into the areas where it is likely to be most useful. This can allow ‘quick wins’ to be made and good publicity obtained, resulting in a generally positive attitude towards the tool.
  • Improving the process to fit with the new tool, or amending the use of the tool to fit with existing processes.
  • Ensuring that the appropriate level of training, coaching and mentoring is available. Similarly, there may be a need to recruit permanent or contract resources to ensure that sufficient skills exist at the outset of the tool's use within the organization.
  • Maintaining a database (in whatever format) of problems encountered and the lessons learnt in overcoming them, since new users are likely to make similar mistakes.
  • Capturing metrics to monitor how much the tool is used, and recording the benefits obtained. This information can then be used to support arguments for implementing the tool in other areas within the test organization.
  • Agreeing or obtaining a budget to allow the tool to be implemented appropriately.
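The usage metrics mentioned above need not be elaborate. A sketch, with an invented usage log (team names and minutes saved are hypothetical), might look like this:

```python
from collections import Counter

# Hypothetical usage log: which team ran the tool, and the minutes
# saved versus the old manual process (all figures illustrative).
usage_log = [
    ("team-web", 30), ("team-web", 45), ("team-api", 20), ("team-web", 25),
]

# How often each team uses the tool, and the total benefit so far.
runs_per_team = Counter(team for team, _ in usage_log)
minutes_saved = sum(minutes for _, minutes in usage_log)

print(dict(runs_per_team), minutes_saved)  # → {'team-web': 3, 'team-api': 1} 120
```

Even counts this crude show which areas have adopted the tool and which have not, which is exactly the evidence needed when arguing for a wider rollout.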
Summary of Test Tool Implementation Process

The figure below outlines the process for selecting and implementing a test tool in an organization. It shows that there are several points at which a decision could be made not to introduce a tool, and that the activities during the evaluation and negotiation stages can follow an iterative process until a decision is made.



Test tool implementation process


To check your understanding, I would again like to ask you some questions:

Why is an understanding of the test organization's maturity essential before introducing a test tool?
What is the purpose of defining requirements for the tool?
Why is it important to evaluate the tool vendor as well as the tool itself?
What is meant by a proof of concept?
What is the purpose of a pilot project?
When is it appropriate to combine a proof of concept and pilot project?
Name three factors in the successful implementation of tools.

You may follow the complete series of Tool Support for Testing articles here:

What is a Test Tool?
Introducing Test Tool In Organization

