CHOOSING TEST TECHNIQUES

One or more of the factors listed under Key Selection Factors below may be important on any given occasion. Some leave no room for selection: regulatory or contractual requirements leave the tester with no choice. Test objectives, where they relate to exit criteria such as test coverage, may also lead to mandatory techniques. Where documentation is not available, or where time and budget are limited, the use of experience-based techniques may be favored.

All the other factors provide pointers within a broad framework: the level and type of risk will push the tester towards a particular approach, high risk being a good reason for using systematic techniques; the knowledge of the testers, especially where this is limited, may narrow down the available choices; and the type of system and the development life cycle will encourage testers to lean in one direction or another depending on their own particular experience. There are few clear-cut cases, and the exercise of sound judgement in selecting appropriate techniques is a mark of a good test manager or team leader.

CHECK OF UNDERSTANDING

1. What is meant by experience-based testing?
2. Briefly compare error guessing and exploratory testing.
3. When is the best time to use experience-based testing?

EXPERIENCE-BASED TESTING

Experience-based techniques are those that you fall back on when there is no adequate specification from which to derive specification-based test cases or no time to run the full structured set of tests. They use the users' and the testers' experience to determine the most important areas of a system and to exercise these areas in ways that are both consistent with expected use (and abuse) and likely to be the sites of errors—this is where the experience comes in. Even when specifications are available it is worth supplementing the structured tests with some that you know by experience have found defects in other similar systems.

Techniques range from the simplistic approach of ad hoc testing or error guessing through to the more sophisticated techniques such as exploratory testing, but all tap the knowledge and experience of the tester rather than systematically exploring a system against a written specification.

Error Guessing

Error guessing is a very simple technique that takes advantage of a tester's skill, intuition and experience with similar applications to identify special tests that may not be easy to capture by the more formal techniques. When applied after systematic techniques, error guessing can add value by identifying and exercising test cases that target known or suspected weaknesses or that simply address aspects of the application that have been found to be problematic in the past.

The main drawback of error guessing is its varying effectiveness, depending as it does on the experience of the tester deploying it. However, if several testers and/or users contribute to constructing a list of possible errors and tests are designed to attack each error listed, this weakness can be effectively overcome. Another way to make error guessing more structured is by the creation of defect and failure lists. These lists can use available defect and failure data (where this exists) as a starting point, but the list can be expanded by using the testers' and users' experience of why the application under test in particular is likely to fail. The defect and failure list can be used as the basis of a set of tests that are applied after the systematic techniques have been used. This systematic approach is known as fault attack.
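
As an illustration, here is a minimal Python sketch of a fault attack, in which a defect/failure list drives a set of targeted checks; the parse_quantity function and the individual attacks are hypothetical examples, not drawn from any real application.

# A sketch of a simple 'fault attack': a defect/failure list drives targeted
# tests. parse_quantity and the attack list are hypothetical examples.

def parse_quantity(text):
    # Toy implementation under test: accepts whole numbers from 1 to 99.
    value = int(text)                # raises ValueError for non-numeric input
    if not 1 <= value <= 99:
        raise ValueError("out of range")
    return value

# Defect/failure list drawn from experience of similar applications.
attacks = [
    ("empty input", ""),
    ("whitespace only", "   "),
    ("negative value", "-1"),
    ("zero", "0"),
    ("overflow-sized number", "9" * 40),
]

for name, payload in attacks:
    try:
        parse_quantity(payload)
        print(f"{name}: accepted")   # a surprise acceptance is worth a closer look
    except ValueError:
        print(f"{name}: rejected")   # rejected cleanly, as hoped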

Exploratory Testing

Exploratory testing is a technique that combines the experience of testers with a structured approach to testing where specifications are either missing or inadequate and where there is severe time pressure. It exploits concurrent test design, test execution, test logging and learning within time-boxes and is structured around a test charter containing test objectives. In this way exploratory testing maximizes the amount of testing that can be achieved within a limited time frame, using test objectives to maintain focus on the most important areas.

SYSTEMATIC AND EXPERIENCE-BASED TECHNIQUES

How do we decide which is the best technique? There are some simple rules of thumb:

1. Always make functional testing the first priority. It may be necessary to test early code products using structural techniques, but we only really learn about the quality of software when we can see what it does.

2. When basic functional testing is complete, that is a good time to think about test coverage. Have you exercised all the functions, all the requirements, all the code? Coverage measures defined at the beginning as exit criteria can now come into play. Where coverage is inadequate, extra tests will be needed.

3. Use structural methods to supplement functional methods where possible. Even if functional coverage is adequate, it will usually be worth checking statement and decision coverage to ensure that enough of the code has been exercised during testing.

4. Once systematic testing is complete there is an opportunity to use experience-based techniques to ensure that all the most important and most error-prone areas of the software have been exercised. In some circumstances, such as poor specifications or time pressure, experience-based testing may be the only viable option.

The decision of which test technique to use is not a simple one. There are many factors to bear in mind, some of which are listed below.

KEY SELECTION FACTORS

  • Type of system
  • Regulatory standards
  • Customer or contractual requirements
  • Level of risk
  • Type of risk
  • Test objectives
  • Documentation available
  • Knowledge of the testers
  • Time and budget
  • Development life cycle
  • Use case models
  • Experience of the types of defects found

STRUCTURE-BASED (WHITE-BOX) TECHNIQUES

Structure-based test techniques are used to explore system or component structures at several levels. At the component level, the structures of interest will be program structures such as decisions; at the integration level we may be interested in exploring the way components interact with other components (in what is usually termed a calling structure); at the system level we may be interested in how users will interact with a menu structure. All these are examples of structures and all may be tested using white-box test case design techniques.

Instead of exercising a component or system to see if it functions correctly, white-box tests focus on ensuring that particular elements of the structure itself are correctly exercised. For example, we can use structural testing techniques to ensure that each statement in the code of a component is executed at least once.

At the component level, where structure-based testing is most commonly used, the test case design techniques involve generating test cases from code, so we need to be able to read and analyze code. You may wish to run simple test cases on code to ensure that it is basically sound before you begin detailed functional testing, or you may want to interpret test results from programmers to ensure that their testing adequately exercises the code.

Our starting point, then, is the code itself.
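
Before we do, a minimal sketch may help to show what 'exercising the structure' means in practice; the discount function below is a hypothetical component invented for the illustration, and the two checks show the gap between statement coverage and decision coverage.

# A minimal sketch contrasting statement and decision coverage; the discount
# function is a hypothetical component invented for this illustration.

def discount(total, is_member):
    price = total
    if is_member:                 # a decision with True and False outcomes
        price = total * 0.9       # only executed on the True outcome
    return price

# This single test executes every statement (100 per cent statement coverage)...
assert discount(100, True) == 90.0

# ...but full decision coverage also requires the False outcome of the 'if'.
assert discount(100, False) == 100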

SPECIFICATION-BASED (BLACK-BOX) TECHNIQUES

The main thing about specification-based techniques is that they derive test cases directly from the specification or from some other kind of model of what the system should do. The source of information on which to base testing is known as the ‘test basis’. If a test basis is well defined and adequately structured we can easily identify test conditions from which test cases can be derived.

The most important point about specification-based techniques is that specifications or models do not (and should not) define how a system should achieve the specified behavior when it is built; a specification defines the required (or at least desired) behavior, not the means of achieving it. One of the hard lessons that software engineers have learned from experience is that it is important to separate the definition of what a system should do (a specification) from the definition of how it should work (a design). This separation allows the two specialist groups (testers for specifications and designers for design) to work independently, so that we can later check that they have arrived at the same place, i.e. they have together built a system and tested that it works according to its specification.

If we set up test cases so that we check that desired behavior actually occurs then we are acting independently of the developers. If they have misunderstood the specification or chosen to change it in some way without telling anyone then their implementation will deliver behavior that is different from what the model or specification said the system behavior should be. Our test, based solely on the specification, will therefore fail and we will have uncovered a problem.

Bear in mind that not all systems are defined by a detailed formal specification. In some cases the model we use may be quite informal. If there is no specification at all, the tester may have to build a model of the proposed system, perhaps by interviewing key stakeholders to understand what their expectations are. However formal or informal the model is, and however it is built, it provides a test basis from which we can generate tests systematically.

Remember, also, that the specification can contain non-functional elements as well as functions; topics such as reliability, usability and performance are examples. These need to be systematically tested as well.

What we need, then, are techniques that can explore the specified behavior systematically and thoroughly in a way that is as efficient as we can make it. In addition, we use what we know about software to ‘home in’ on problems; each of the test case design techniques is based on some simple principles that arise from what we know in general about software behavior.

You need to know five specification-based techniques for the Foundation Certificate:

  • Equivalence partitioning
  • Boundary value analysis
  • Decision table testing
  • State transition testing
  • Use case testing
You should be capable of generating test cases for the first four of these techniques.

Equivalence Partitioning

Input Partitions

Equivalence partitioning is based on a very simple idea: in many cases the inputs to a program can be 'chunked' into groups of similar inputs. For example, a program that accepts integer values can accept as valid any input that is an integer (i.e. a whole number) and should reject anything else (such as a real number or a character). The range of integers is infinite, though the computer will limit this to some finite value in both the negative and positive directions (simply because it can only handle numbers of a certain size; it is a finite machine). Let us suppose, for the sake of an example, that the program accepts any value between −10,000 and +10,000 (computers actually represent numbers in binary form, which makes the numbers look much less like the ones we are familiar with, but we will stick to a familiar representation).

If we imagine a program that separates numbers into two groups according to whether they are positive or negative, the total range of integers could be split into three 'partitions': the values that are less than zero; zero; and the values that are greater than zero. Each of these is known as an 'equivalence partition' because every value inside the partition is exactly equivalent to any other value as far as our program is concerned. So if the computer accepts −2,905 as a valid negative integer we would expect it also to accept −3. Similarly, if it accepts 100 it should also accept 2,345 as a positive integer.

Note that we are treating zero as a special case. We could, if we chose to, include zero with the positive integers, but our rudimentary specification did not specify that clearly, so it is really left as an undefined value (and it is not untypical to find such ambiguities or undefined areas in specifications). It often suits us to treat zero as a special case for testing where ranges of numbers are involved; we treat it as an equivalence partition with only one member. So we have three valid equivalence partitions in this case.

The equivalence partitioning technique takes advantage of the properties of equivalence partitions to reduce the number of test cases we need to write. Since all the values in an equivalence partition are handled in exactly the same way by a given program, we need only test one of them as a representative of the partition. In the example given, then, we need any positive integer, any negative integer and zero. We generally select values somewhere near the middle of each partition, so we might choose, say, −5,000, 0 and 5,000 as our representatives. These three test inputs would exercise all three partitions and the theory tells us that if the program treats these three values correctly it is very likely to treat all of the other values, all 19,998 of them in this case, correctly.

The partitions we have identified so far are called valid equivalence partitions because they partition the collection of valid inputs, but there are other possible inputs to this program that would not be valid—real numbers, for example. We also have two input partitions of integers that are not valid: integers less than −10,000 and integers greater than 10,000. We should test that the program does not accept these, which is just as important as the program accepting valid inputs.

Non-valid partitions are also important to test. If you think about the example we have been using you will soon recognize that there are far more possible non-valid inputs than valid ones, since all the real numbers (e.g. numbers containing decimals) and all characters are non-valid in this case. It is generally the case that there are far more ways to provide incorrect input than there are to provide correct input; as a result, we need to ensure that we have tested the program against the possible non-valid inputs. Here again equivalence partitioning comes to our aid: all real numbers are equally non-valid, as are all alphabetic characters. These represent two non-valid partitions that we should test, using values such as 9.45 and ‘r’ respectively. There will be many other possible non-valid input partitions, so we may have to limit the test cases to the ones that are most likely to crop up in a real situation.
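
A minimal Python sketch pulls these ideas together; classify_sign is a hypothetical implementation of the sign-separating program described above, tested with one representative per valid and non-valid partition.

# A sketch of equivalence partitioning in practice; classify_sign is a
# hypothetical implementation of the sign-separating program described above.

def classify_sign(n):
    if not isinstance(n, int) or isinstance(n, bool):
        raise ValueError("not an integer")
    if not -10_000 <= n <= 10_000:
        raise ValueError("out of range")
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One representative value per valid partition is enough in theory.
assert classify_sign(-5_000) == "negative"
assert classify_sign(0) == "zero"
assert classify_sign(5_000) == "positive"

# One representative per non-valid partition: out-of-range integers,
# real numbers and characters should all be rejected.
for bad in (-20_000, 20_000, 9.45, "r"):
    try:
        classify_sign(bad)
        raise AssertionError(f"{bad!r} was wrongly accepted")
    except ValueError:
        pass                        # rejected, as the specification requires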

Exercise 4.1

Suppose you have a bank account that offers variable interest rates: 0.5 per cent for the first £1,000 credit; 1 per cent for the next £1,000; 1.5 per cent for the rest. If you wanted to check that the bank was handling your account correctly what valid input partitions might you use?

Answers

The partitions would be: £0.00–£1,000.00, £1,000.01–£2,000.00, and >= £2,000.01.

Output Partitions

Just as the input to a program can be partitioned, so can the output. The program in the exercise above could produce outputs of 0.5 per cent, 1 per cent and 1.5 per cent, so we could use test cases that generate each of these outputs as an alternative to generating input partitions. An input value in the range £0.00–£1,000.00 would generate the 0.5 per cent output; a value in the range £1,000.01–£2,000.00 would generate the 1 per cent output; a value greater than £2,000.00 would generate the 1.5 per cent output.
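
A minimal sketch of the same idea in Python; interest_rate is a hypothetical implementation of the banded rates from Exercise 4.1, and each test input is chosen to generate one of the three possible outputs.

# A sketch of output partitioning for the account in Exercise 4.1;
# interest_rate is a hypothetical implementation of the banded rates.

def interest_rate(balance):
    if balance <= 1_000.00:
        return 0.5
    if balance <= 2_000.00:
        return 1.0
    return 1.5

# One input chosen to generate each possible output.
assert interest_rate(500.00) == 0.5      # exercises the 0.5 per cent output
assert interest_rate(1_500.00) == 1.0    # exercises the 1 per cent output
assert interest_rate(3_000.00) == 1.5    # exercises the 1.5 per cent output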

Other Partitions


If we know enough about an application we may be able to partition other values instead of or as well as input and output. For example, if a program handles input requests by placing them on one of a number of queues we could, in principle, check that requests end up on the right queue. In this case a stream of inputs can be partitioned according to the queue we anticipate it will be placed into. This is more technical and difficult than input or output partitioning but it is an option that can be considered when appropriate.

Exercise 4.2

A mail-order company selling flower seeds charges £3.95 for postage and packing on all orders up to £20 value and £4.95 for orders above £20 value and up to £40 value. For orders above £40 value there is no charge for postage and packing.

If you were using equivalence partitioning to prepare test cases for the postage and packing charges what valid partitions would you define?

What about non-valid partitions?

Answers

The valid partitions would be: £0.00–£20.00, £20.01–£40.00, and >= £40.01. Non-valid partitions would include negative values and alphabetic characters.

Boundary Value Analysis

One thing we know about the kinds of mistakes that programmers make is that errors tend to cluster around boundaries. For example, if a program should accept a sequence of numbers between 1 and 10, the most likely fault will be that values just outside this range are incorrectly accepted or that values just inside the range are incorrectly rejected. In the programming world these faults coincide with particular programming structures such as the number of times a program loop is executed or the exact point at which a loop should stop executing.

This works well with our equivalence partitioning idea because partitions must have boundaries. A partition of integers between 1 and 99, for instance, has a lowest value, 1, and a highest value, 99. These are called boundary values. Actually they are called valid boundary values because they are the boundaries on the inside of a valid partition. What about the values on the outside? Yes, they have boundaries too. So the boundary of the non-valid values at the lower end will be zero because it is the first value you come to when you step outside the partition at the bottom end. (You can also think of this as the highest value inside the non-valid partition of integers that are less than one, of course.) At the top end of the range we also have a non-valid boundary value, 100.

This is the boundary value technique, more or less. For most practical purposes the boundary value analysis technique needs to identify just two values at each boundary. For reasons that need not detain us here there is an alternative version of the technique that uses three values at each boundary. For this variant, which is the one documented in BS 7925-2, we include one more value at each boundary when we use boundary value analysis: the rule is that we use the boundary value itself and one value (as close as you can get) either side of the boundary.

So, in this case lower boundary values will be 0, 1, 2 and upper boundary values will be 98, 99, 100. What does ‘as close as we can get’ mean? It means take the next value in sequence using the precision that has been applied to the partition. If the numbers are to a precision of 0.01, for example, the lower boundary values would be 0.99, 1.00, 1.01 and the upper boundary values would be 98.99, 99.00, 99.01.
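
A short Python sketch shows how the values might be generated; the boundary_values helper is an invented illustration, with step standing for the precision applied to the partition.

# A sketch of generating boundary values for a partition. The helper is an
# invented illustration; 'step' is the precision applied to the partition.

def boundary_values(low, high, step=1, per_boundary=3):
    if per_boundary == 3:
        # BS 7925-2 style: the boundary and one value either side of it.
        return sorted({low - step, low, low + step,
                       high - step, high, high + step})
    # Two-value style: the boundary and the nearest value outside it.
    return sorted({low - step, low, high, high + step})

print(boundary_values(1, 99))                    # [0, 1, 2, 98, 99, 100]
print(boundary_values(1, 99, per_boundary=2))    # [0, 1, 99, 100]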

When you come to take your exam you will find that the exam recognizes that there are two possible approaches to boundary value analysis. For this reason any questions about boundary value analysis will clearly signal whether you are expected to identify 2 or 3 values at any boundary. You will find that this causes no problems, but there are examples below using both the 2 value and the 3 value approach, just to be on the safe side and ensure that you do not get taken by surprise in the exam.

Exercise 4.3

A system is designed to accept scores from independent markers who have marked the same examination script. Each script should have 5 individual marks, each of which is out of 20, and a total for the script. Two markers' scores are compared and differences greater than three in any question score or 10 overall are flagged for further examination.

Using equivalence partitioning and boundary value analysis identify the boundary values that you would explore for this scenario.

(In practice, some of the boundary values might actually be in other equivalence partitions, and we do not need to test them twice, so the total number of boundary values requiring testing might be less than you might expect.)

Answers

The partitions would be: question scores 0–20; total 0–100; question differences: 0–3 and > 3; total differences 0–10 and > 10.

Boundary values would be: −1, 0, 1 and 19, 20, 21 for the question scores; −1, 0, 1 (again) and 99, 100, 101 for the question paper totals; −1, 0, 1 (again) and 2, 3, 4 for differences between question scores for different markers; and −1, 0, 1 (again) and 9, 10, 11 for total differences between different markers.

In this case, although the −1, 0, 1 values occur several times, they may be applied to different parts of the program (e.g. the question score checks will probably be in a different part of the program from the total score checks) so we may need to repeat these values in the boundary tests.

Decision Table Testing

Specifications often contain business rules to define the functions of the system and the conditions under which each function operates. Individual decisions are usually simple, but the overall effect of these logical conditions can become quite complex. As testers we need to be able to assure ourselves that every combination of these conditions that might occur has been tested, so we need to capture all the decisions in a way that enables us to explore their combinations. The mechanism usually used to capture the logical decisions is called a decision table.

A decision table lists all the input conditions that can occur and all the actions that can arise from them. These are structured into a table as rows, with the conditions at the top of the table and the possible actions at the bottom. Business rules, which involve combinations of conditions to produce some combination of actions, are arranged across the top. Each column therefore represents a single business rule (or just ‘rule’) and shows how input conditions combine to produce actions. Thus each column represents a possible test case, since it identifies both inputs and expected outputs.
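
A minimal Python sketch of the idea; the discount rules, conditions and actions below are a hypothetical example rather than one drawn from the text, and each rule (column) becomes one test case.

# A sketch of decision table testing; the 'discount' rules are a hypothetical
# example. Each rule combines condition outcomes and names the resulting
# action, so each rule is one test case.

def discount_action(existing_customer, order_over_100):
    # Hypothetical implementation under test.
    if existing_customer and order_over_100:
        return "give discount"
    return "no discount"

# Rules: (existing_customer, order_over_100) -> expected action.
rules = [
    (True,  True,  "give discount"),
    (True,  False, "no discount"),
    (False, True,  "no discount"),
    (False, False, "no discount"),
]

for existing, over_100, expected in rules:
    assert discount_action(existing, over_100) == expected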

STATE TRANSITION DIAGRAMS

A state transition diagram is a representation of the behavior of a system. It is made up from just two symbols.

The first is the symbol for a state, conventionally drawn as a labelled box or circle. A state is just what it says it is: the system is 'static', in a stable condition from which it will only change if it is stimulated by an event of some kind. For example, a TV stays 'on' unless you turn it 'off'; a multifunction watch tells the time unless you change mode.

The second is the symbol for a transition, conventionally drawn as an arrow, i.e. a change from one state to another. The state change will be triggered by an event (e.g. pressing a button or switching a switch). The transition will be labelled with the event that caused it and any action that arises from it. So we might have 'mode button pressed' as an event and 'presentation changes' as the action. Usually (but not necessarily) the start state will have a double arrowhead pointing to it. Often the start state is obvious anyway.

If we have a state transition diagram representation of a system we can analyze the behavior in terms of what happens when a transition occurs.

Transitions are caused by events and they may generate outputs and/or changes of state. An event is anything that acts as a trigger for a change; it could be an input to the system, or it could be something inside the system that changes for some reason, such as a database field being updated.

In some cases an event generates an output, in others the event changes the system's internal state without generating an output, and in still others an event may cause an output and a change of state. What happens for each change is always deducible from the state transition diagram.
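
A minimal Python sketch of testing from a state transition model; the two-state TV example follows the text, while the dictionary representation of the transition table is an assumption made for the illustration.

# A sketch of deriving tests from a state transition model. The two-state TV
# follows the text; the transition table is an assumed representation.

transitions = {
    # (current state, event): (next state, action)
    ("off", "power button pressed"): ("on", "picture appears"),
    ("on", "power button pressed"): ("off", "screen goes blank"),
}

def execute(state, event):
    if (state, event) not in transitions:
        raise ValueError(f"no transition for {event!r} in state {state!r}")
    return transitions[(state, event)]

# Each entry in the model yields one test: start state, event, expected result.
assert execute("off", "power button pressed") == ("on", "picture appears")
assert execute("on", "power button pressed") == ("off", "screen goes blank")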

Use Case Testing

Use cases are one way of specifying functionality as business scenarios or process flows. They capture the individual interactions between ‘actors’ and the system. An actor represents a particular type of user and the use cases capture the interactions that each user takes part in to produce some output that is of value. Test cases based on use cases at the business process level, often called scenarios, are particularly useful in exercising business rules or process flows and will often identify gaps or weaknesses in these that would not be found by exercising individual components in isolation. This makes use case testing very effective in defining acceptance tests because the use cases represent actual likely use.

Use cases may also be defined at the system level, with preconditions that define the state the system needs to be in on entry to a use case to enable the use case to complete successfully, and post-conditions that define the state of the system on completion of the use case. Use cases typically have a mainstream path, defining the expected behavior, and one or more alternative paths related to such aspects as error conditions. Well defined use cases can therefore be an excellent basis for system level testing, and they can also help to uncover integration defects caused by incorrect interaction or communication between components.

In practice, writing a test case to represent each use case is often a good starting point for testing, and use case testing can be combined with other specification-based testing.
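
A minimal Python sketch of a scenario test built from a use case, showing precondition, main path, alternative path and postconditions; the 'withdraw cash' use case and the Account class are hypothetical.

# A sketch of a scenario test derived from a use case, with its precondition,
# main path, alternative path and postconditions. The 'withdraw cash' use
# case and the Account class are hypothetical.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "insufficient funds"   # alternative path
        self.balance -= amount
        return "cash dispensed"           # main path

account = Account(balance=100)            # precondition: account holds 100

# Main path: the actor withdraws 40 and the system dispenses cash.
assert account.withdraw(40) == "cash dispensed"
assert account.balance == 60              # postcondition

# Alternative path: a withdrawal larger than the balance is refused.
assert account.withdraw(500) == "insufficient funds"
assert account.balance == 60              # state unchanged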

CATEGORIES OF TEST DESIGN TECHNIQUES

There are very many ways to design test cases. Some are general, others are very specific. Some are very simple to implement, others are difficult and complex to implement. The many excellent books published on software testing techniques every year testify to the rate of development of new and interesting approaches to the challenges that confront the professional software tester.

There is, however, a collection of test case design techniques that has come to be recognized as the most important for a tester to learn to apply, and these have been selected as the representatives of test case design for the Foundation Certificate, and hence for this book.

The test case design techniques we will look at are grouped into three categories:

  • Those based on deriving test cases directly from a specification or a model of a system or proposed system, known as specification-based or black-box techniques. So black-box techniques are based on an analysis of the test basis documentation, including both functional and non-functional aspects. They do not use any information regarding the internal structure of the component or system under test.
  • Those based on deriving test cases directly from the structure of a component or system, known as structure-based, structural or white-box techniques. We will concentrate on tests based on the code written to implement a component or system, but other aspects of structure, such as a menu structure, can be tested in a similar way.
  • Those based on deriving test cases from the tester's experience of similar systems and general experience of testing, known as experience-based techniques.
It is convenient to categorize techniques for test case design in this way (it is easier for you to remember, for one thing) but do not assume that these are the only categories or the only techniques; there are many more that can be added to the tester's ‘tool kit’ over time.

The category now known as specification-based was originally called ‘black-box’ because the techniques in it take a view of the system that does not need to know what is going on ‘inside the box’. Those of us born in the first half of the 20th century will recognize ‘black box’ as the name of anything technical that you can use but about which you know nothing or next to nothing. ‘Specification-based’ is a more descriptive title for those who may not find the ‘black-box’ image helpful. The natural alternative to ‘black box’ is ‘white box’ and so ‘white box’ techniques are those that are based on internal structure rather than external function.

Experience-based testing was not really treated as 'proper' testing in testing prehistory, so it was given a disdainful name such as 'ad hoc'; the implication that this was not a systematic approach was enough to exclude it from many discussions about testing. Both the intellectual climate and the sophistication of experience-based techniques have moved on from those early days. It is worth bearing in mind that many systems are still tested in an experience-based way, partly because the systems are not specified in enough detail or in a sufficiently structured way to enable other categories of technique to be applied, or because neither the development team nor the testing team has been trained in the use of specification-based or structure-based techniques.

Before we look at these categories in detail, think for a moment about what we are trying to achieve. We want to try to check that a system does everything that its specification says it should do and nothing else. In practice the ‘nothing else’ is the hardest part and generates the most tests; that is because there are far more ways of getting anything wrong than there are ways of getting it right. Even if we just concentrate on testing that the system does what it is supposed to do, we will still generate a very large number of tests. This will be expensive and time consuming, which means it probably will not happen, so we need to ensure that our testing is as efficient as possible. As you will see, the best techniques do this by creating the smallest set of tests that will achieve a given objective, and they do that by taking advantage of certain things we have learned about testing; for example, that defects tend to cluster in interesting ways.

THE IDEA OF TEST COVERAGE

Test coverage is a very important idea because it provides a quantitative assessment of the extent and quality of testing. In other words, it answers the question ‘how much testing have you done?’ in a way that is not open to interpretation. Statements such as ‘I'm nearly finished’, or ‘I've done two weeks' testing’ or ‘I've done everything in the test plan’ generate more questions than they answer. They are statements about how much testing has been done or how much effort has been applied to testing, rather than statements about how effective the testing has been or what has been achieved. We need to know about test coverage for two very important reasons:

  • It provides a quantitative measure of the quality of the testing that has been done by measuring what has been achieved.
  • It provides a way of estimating how much more testing needs to be done. Using quantitative measures we can set targets for test coverage and measure progress against them.
Statements like ‘I have tested 75 per cent of the decisions’ or ‘I've tested 80 per cent of the requirements’ provide useful information. They are neither subjective nor qualitative; they provide a real measure of what has actually been tested. If we apply coverage measures to testing based on priorities, which are themselves based on the risks addressed by individual tests, we will have a reliable, objective and quantified framework for testing.

Test coverage can be applied to any systematic technique; in this chapter we will apply it to specification-based techniques to measure how much of the functionality has been tested, and to structure-based techniques to measure how much of the code has been tested. Coverage measures may be part of the completion criteria defined in the test plan (step 1 of the fundamental test process, or FTP) and used to determine when to stop testing in the final step of the FTP.
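
A minimal Python sketch of a coverage measure of the kind quoted above ('I've tested 80 per cent of the requirements'); the requirement identifiers and test status data are invented.

# A sketch of a simple requirements-coverage measure; the data is invented.

requirements = {"R1", "R2", "R3", "R4", "R5"}
tested = {"R1", "R2", "R4", "R5"}

coverage = len(tested & requirements) / len(requirements) * 100
print(f"Requirements coverage: {coverage:.0f}%")         # 80%
print("Still to test:", sorted(requirements - tested))   # ['R3']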

THE TEST DEVELOPMENT PROCESS

The specification of test cases is the second step in the fundamental test process (FTP) defined in the Introduction. The terms specification and design are used interchangeably in this context; in this section we discuss the creation of test cases by design.

The design of tests comprises three main steps:

  1. Identify test conditions.
  2. Specify test cases.
  3. Specify test procedures.
Our first task is to become familiar with the terminology.

A test condition—an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

In other words, a test condition is some characteristic of our software that we can check with a test or a set of tests.

A test case—a set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

In other words, a test case: gets the system to some starting point for the test (execution preconditions); then applies a set of input values that should achieve a given outcome (expected result), and leaves the system at some end point (execution post condition).

Our test design activity will generate the set of input values and we will predict the expected outcome by, for example, identifying from the specification what should happen when those input values are applied.

We have to define what state the system is in when we start so that it is ready to receive the inputs and we have to decide what state it is in after the test so that we can check that it ends up in the right place.

A test procedure specification—a sequence of actions for the execution of a test.

A test procedure therefore identifies all the necessary actions in sequence to execute a test. Test procedure specifications are often called test scripts (or sometimes manual test scripts to distinguish them from the automated scripts that control test execution tools).

So, going back to our three step process above, we:
  1. decide on a test condition, which would typically be a small section of the specification for our software under test;
  2. design a test case that will verify the test condition;
  3. write a test procedure to execute the test, i.e. get it into the right starting state, input the values, and check the outcome.
In spite of the technical language, this is quite a simple set of steps. Of course we will have to carry out a very large number of these simple steps to test a whole system, but the basic process is still the same. To test a whole system we write a test execution schedule, which puts all the individual test procedures in the right sequence and sets up the system so that they can be run.
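
A minimal Python sketch of these artefacts as simple records, anticipating the surname-field example worked through below; the field names and layout are an assumed format, not a prescribed one.

# A sketch of the design artefacts as simple records; the field names are an
# assumed layout, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    condition: str        # the characteristic being checked (traceable to spec)
    preconditions: str    # the state the system must be in before the test
    inputs: list          # the input values to apply
    expected: str         # the expected result
    postconditions: str   # the state the system should end up in

@dataclass
class TestProcedure:
    name: str
    steps: list = field(default_factory=list)   # actions, in execution order

case = TestCase(
    condition="surname field accepts up to 20 alphabetic characters",
    preconditions="input screen displayed, title selected",
    inputs=["Hambling", "Brian"],
    expected="job input screen displayed",
    postconditions="system at job input screen",
)
procedure = TestProcedure(
    name="T1.2.3.1",
    steps=["navigate to input screen", "select title", "enter surname",
           "enter first name", "press Enter", "check job input screen shown"],
)
print(case.condition, "->", procedure.name)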

Bear in mind as well that the test development process may be implemented in more or less formal ways. In some situations it may be appropriate to produce very little documentation and in others a very formal and documented process may be appropriate. It all depends on the context of the testing, taking account of factors such as maturity of development and test processes, the amount of time available and the nature of the system under test. Safety-critical systems, for example, will normally require a formal test process.

The best way to clarify the process is to work through a simple example.

TEST CASE DESIGN BASICS

Suppose we have a system that contains the following specification for an input screen:

1.2.3 The input screen shall have three fields: a title field with a drop-down selector; a surname field that can accept up to 20 alphabetic characters and the hyphen (-) character; a first name field which can accept up to 20 alphabetic characters. All alphabetic characters shall be case insensitive. All fields must be completed. The data is validated when the Enter key is pressed. If the data is valid the system moves on to the job input screen; if not, an error message is displayed.

This specification enables us to define test conditions; for example, we could define a test condition for the surname field (i.e. it can accept up to 20 alphabetic characters and the hyphen (-) character) and define a set of test cases to test that field.

To test the surname field we would have to navigate the system to the appropriate input screen, select a title, tab to the surname field (all this would be setting the test precondition), enter a value (the first part of the set of input values), tab to the first name field and enter a value (the second part of the set of input values that we need because all fields must be completed), then press the Enter key. The system should either move on to the job input screen (if the data we input was valid) or display an error message (if the input data was not valid). Of course, we would need to test both of these cases.

The preceding paragraph is effectively the test procedure, though we might lay it out differently for real testing.

A good test case needs some extra information. First, it should be traceable back to the test condition and the element of the specification that it is testing; we can do this by applying the specification reference to the test, e.g. by identifying this test as T1.2.3.1 (because it is the first test associated with specification element 1.2.3). Secondly, we would need to add a specific value for the input, say ‘Hambling’ and ‘Brian’. Finally we would specify that the system should move to the job input screen when ‘Enter’ is pressed.

TEST CASE DESIGN EXAMPLE

As an example, we could key in the following test cases:

Mr Hambling Brian

Ms Samaroo Angelina

Ms Simmonite Compo

Mr Hyde-White Wilfred

All these would be valid test cases; even though Compo Simmonite was an imaginary male character in a TV series, the input is correct according to the specification.

We should also test some invalid inputs, such as:

Mr Thompson1 Geoff

Mr "Morgan" Peter

Mr Williams ‘Pete’

There are many more possibilities that infringe the rules in the specification, but these should serve to illustrate the point. You may be thinking that this simple specification could generate a very large number of test cases—and you would be absolutely right. One of our aims in using systematic test case design techniques will be to cut down the number of tests we need to run to achieve a given level of confidence in the software we are testing.
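
A minimal Python sketch expresses the valid and invalid examples above as data-driven checks; validate_name is a hypothetical implementation of specification element 1.2.3, covering only the surname and first name rules.

# A sketch of the example test cases as data-driven checks; validate_name is
# a hypothetical implementation of specification element 1.2.3.

import re

SURNAME = re.compile(r"^[A-Za-z-]{1,20}$")     # alphabetic plus hyphen
FIRST_NAME = re.compile(r"^[A-Za-z]{1,20}$")   # alphabetic only

def validate_name(surname, first_name):
    return bool(SURNAME.match(surname)) and bool(FIRST_NAME.match(first_name))

valid = [("Hambling", "Brian"), ("Samaroo", "Angelina"),
         ("Simmonite", "Compo"), ("Hyde-White", "Wilfred")]
invalid = [("Thompson1", "Geoff"), ('"Morgan"', "Peter"), ("Williams", "'Pete'")]

for surname, first in valid:
    assert validate_name(surname, first), f"should accept {surname} {first}"
for surname, first in invalid:
    assert not validate_name(surname, first), f"should reject {surname} {first}"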

The test procedure would need to add some details along the following lines:
  1. Select the option from the main menu.
  2. Select the ‘input’ option from the menu.
  3. Select ‘Mr’ from the ‘Title’ drop-down menu.
  4. Check that the cursor moves to the ‘surname’ field.
  5. Type in ‘Hambling’ and press the tab key once; check that the cursor moves to the ‘first name’ field.
  6. Type in ‘Brian’ and press the Enter key.
  7. Check that the Job Input screen is displayed.
That should be enough to demonstrate what needs to be defined, and also how slow and tedious such a test would be to run, and we have only completed one of the test cases so far!

The test procedure would collect together all the test cases related to this specification element so that they can all be executed together as a block; there would be several to test valid and non-valid inputs, as you have seen in the example.

In the wider process (the FTP) we would move on to the test execution step next. In preparation for execution the test execution schedule collects together all the tests and sequences them, taking into account any priorities (highest priority tests would be run first) and any dependencies between tests. For example, it would make sense to do all the tests on the input screen together and to do all the tests that use input data afterwards; that way we get the input screen tests to do the data entry that we will need for the later tests. There might also be technical reasons why we run tests in a particular sequence; for example, a test of the password security needs to be done at the beginning of a sequence of tests because we need to be able to get into the system to run the other tests.
