What is Ramp Testing? - Continuously raising an input signal until the system breaks down.

What is Depth Testing? - A test that exercises a feature of a product in full detail.

What is Quality Policy? - The overall intentions and direction of an organization as regards quality as formally expressed by top management.

What is Race Condition? - A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
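A minimal Python sketch of such a race and its fix (the counter and function names are illustrative, not from the text):

```python
import threading

# Shared counter accessed by several threads. With the lock, every increment
# survives; remove the `with lock:` line and updates can be lost, because two
# threads may read the same old value before either writes (a race condition).
counter = 0
lock = threading.Lock()

def deposit(n):
    global counter
    for _ in range(n):
        with lock:                 # moderates simultaneous access
            current = counter      # read ...
            counter = current + 1  # ... then modify-write, as separate steps

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 every run; without the lock, often less
```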

What is Emulator? - A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

What is Dependency Testing? - Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Documentation testing? - The aim of this testing is to help prepare the accompanying documentation (User guide, Installation guide, etc.) in as simple, precise and accurate a way as possible.

What is Code style testing? - This type of testing checks the code for conformance with development standards: the rules for code comments; the naming of variables, classes and functions; the maximum line length; the ordering of separator symbols; indentation and line-break placement, etc. There are special tools for automating code style testing.

What is scripted testing? - Scripted testing means that test cases are to be developed before tests execution and some results (and/or system reaction) are expected to be shown. These test cases can be designed by one (usually more experienced) specialist and performed by another tester.

Random Software Testing Terms and Definitions:

• Formal Testing: Performed by test engineers

• Informal Testing: Performed by the developers

• Manual Testing: That part of software testing that requires human input, analysis, or evaluation.

• Automated Testing: Software testing that utilizes a variety of tools to automate the testing process. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tools and the software being tested to set up the test cases.

• Black box Testing: Testing software without any knowledge of the back-end of the system, structure or language of the module being tested. Black box test cases are written from a definitive source document, such as a specification or requirements document.

• White box Testing: Testing in which the software tester has knowledge of the back-end, structure and language of the software, or at least its purpose.

• Unit Testing: Unit testing is the process of testing a particular complied program, i.e., a window, a report, an interface, etc. independently as a stand-alone component/program. The types and degrees of unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the programmers who are also responsible for the creation of the necessary unit test data.
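As a sketch of what such a programmer-written unit test might look like, here is a hypothetical stand-alone function tested with Python's unittest module (the function and test data are invented for illustration):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```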

• Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

• System Testing: System testing is a form of black box testing. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed.

• Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules/functions.

• System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-office integration).

• Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.

• Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

• Usability Testing: Usability testing is testing for 'user-friendliness'. A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.

• End-to-end Testing: Similar to system testing - testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.

• Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

• Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

• Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems.

• Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.

• Installation Testing: Testing with the intent of determining if the product is compatible with a variety of platforms and how easily it installs.

• Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

• Ad hoc Testing: Testing without a formal test plan or outside of a test plan. On some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed – usually done by skilled testers. Sometimes ad hoc testing is referred to as exploratory testing.

• Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

• Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

• Penetration Testing: Penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

• Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.

• Smoke Testing: A brief, non-exhaustive test of a build's major functions, typically run before deeper testing begins to confirm that the build is stable enough to test.

• Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)

• Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

• Exploratory Testing: Any testing in which the tester dynamically changes what they're doing for test execution, based on information they learn as they're executing their tests.

• Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large.

• Gamma Testing: Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks.

• Mutation Testing: A method to determine test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
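Real mutation-testing tools automate mutant generation; the idea can be shown with a hand-rolled sketch (the function names and test values are illustrative assumptions):

```python
# Run the same test suite against the original function and a "mutant"
# produced by one small change (>= weakened to >). A suite that passes on
# the original but fails on the mutant is said to "kill" the mutant.
def is_adult(age):
    return age >= 18          # original

def is_adult_mutant(age):
    return age > 18           # mutant: boundary operator altered

def run_suite(fn):
    # Each (input, expected) pair is one test case.
    cases = [(17, False), (18, True), (30, True)]
    return all(fn(age) == expect for age, expect in cases)

assert run_suite(is_adult)             # suite passes on the original
killed = not run_suite(is_adult_mutant)
print("mutant killed:", killed)        # True: the age=18 case discriminates
```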

• Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

• Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

• Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors' products.

• Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to reaching customers. Sometimes a selected group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

• Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.

• Closed Box Testing: Closed box testing is the same as black box testing. A type of testing that considers only the functionality of the application.

• Bottom-up Testing: A technique for integration testing in which low-level components are tested first. A test engineer creates and uses test drivers to stand in for higher-level components that have not yet been developed, so that the low-level components can be called and tested in isolation.

• Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.

• Error: A mismatch between the program and its specification is an error in the program.

• Defect: A variance from a desired product attribute (it can be wrong, missing or extra data). Defects are of two types – a variance from the product specification or a variance from customer/user expectations. A defect is a flaw in the software system and has no impact until it affects the user/customer or the operational system. By some estimates, up to 90% of defects can be traced to process problems.

• Failure: A defect that causes an error in operation or negatively impacts a user/customer.

• Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates and test readiness reviews.

• Quality Control: Also called quality engineering; a set of measures taken to ensure that defective products or services are not produced, and that the design meets performance requirements.

• Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.

• Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.

Testing Levels and Types

There are basically three levels of testing: Unit Testing, Integration Testing and System Testing.

Various types of testing come under these levels.

Unit Testing: To verify a single program or a section of a single program.

Integration Testing: To verify interaction between system components

Prerequisite: unit testing completed on all components that compose a system

System Testing: To verify and validate behaviors of the entire system against the original system objectives

Software testing is a process that identifies the correctness, completeness, and quality of software.

Reading: Defect Density

What are various ways of calculating defect density?

The formula itself is simple: Density = Total Defects Found / Size

If we look at defect density at a granular level (say, the code size of a particular functionality X in an application Y, along with the number of files), then we may draw some useful observations.
Taking an example: let's say we have an application ABC with three functionalities/modules A, B and C.
Code files for A = 10 and KLOC = 5
Code files for B = 5 and KLOC = 1
Code files for C = 1 and KLOC = 25
Bugs found in A = 40, B = 50, and C = 5

Defect density = Total number of defects/LOC (lines of code)

Defect density = Total number of defects/Size of the project

Size of Project can be Function points, feature points, use cases, KLOC etc
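Putting the formula and the example modules above together, a quick sketch (module data from the example; the variable names are mine):

```python
# Defect density = total defects found / size, here measured in defects/KLOC.
modules = {
    "A": {"files": 10, "kloc": 5,  "bugs": 40},
    "B": {"files": 5,  "kloc": 1,  "bugs": 50},
    "C": {"files": 1,  "kloc": 25, "bugs": 5},
}

densities = {}
for name, m in modules.items():
    densities[name] = m["bugs"] / m["kloc"]
    print(f"{name}: {densities[name]:.1f} defects/KLOC")
# A: 8.0, B: 50.0, C: 0.2 -- B is by far the most defect-dense module,
# even though C reported the fewest raw bugs.
```

This is the kind of observation the text hints at: raw bug counts alone would rank C as the "best" module, while density shows B is where quality effort is most needed.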

Defect Density can be used to:

1) Predict the remaining defects when compared to the expected defect density;

2) Determine if the amount of testing is sufficient;

3) Establish a database of standard defect densities.

What are you going to do with the defect density information you collect?
Depending on what you want / expect to discover, you could pilot some different measurements on different parts of the code base and see which versions of the metric were most measurable.

A study suggests software glitches cost the U.S. economy about US$59.5 billion a year, although better testing could prevent a third of that loss.

The cost of fixing a bug grows roughly tenfold with each later stage of the life cycle at which it is found. A bug found and fixed during the early stages – the requirements or product specification stage – can be fixed by a brief conversation with the people concerned and might cost next to nothing.

During coding, a mistake caught immediately may take very little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test.

During system testing it costs even more time and may postpone delivery. Finally, during operations it may cause anything from an inconvenience to a system failure, possibly with catastrophic consequences in a safety-critical system such as an aircraft or an emergency service.

Software bugs cost the U.S. economy an estimated US$59.5 billion annually, or approximately 0.6 percent of the gross domestic product, according to a study published by the National Institute of Standards and Technology (NIST), a national agency that develops and promotes measurements, standards, and technology across industries. More than half of the costs are shouldered by software users; the rest fall on software developers/vendors.

According to the study, not all software errors are preventable, but more than a third of the losses (approximately $22.2 billion) could be avoided by an improved testing infrastructure that lets developers and vendors identify and remove software defects earlier and more effectively -- closer to the development stages in which they're introduced. Currently, most errors are discovered only later in the development process, or during post-sale software use.

NIST funded the study, which was conducted by the Research Triangle Institute (RTI) in North Carolina.

"More than half of the costs are borne by software users, and the remainder by software developers and vendors," NIST said in summarizing the findings. "More than a third of these costs … could be eliminated by an improved testing infrastructure that enables earlier and more effective identification and removal of software defects."

You can read the full report at:

Remember: True Quality Begins Long Before Testing
More than 80% of software errors have their roots in the beginning stages of the product life cycle (the analysis and design planning phases), before a single line of software code is written. Unfortunately, most of these errors aren't found until the typical testing stage at the very end of the development cycle, when the cost of repair and rework is extremely high, up to 50 times more expensive.

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

A Test Case is:
- A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

- A detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan.

Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.

Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.

Detailed test cases are recommended when testing software because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. Detailed test cases are more time-consuming to develop and maintain. On the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.

When planning your tests, remember that it is not feasible to test everything. Instead of trying to test every combination, prioritize your testing so that you perform the most important tests — those that focus on areas that present the greatest risk or have the greatest probability of occurring — first.

Once the Test Lead has prepared the Test Plan, individual testers begin preparing Test Cases for each level of Software Testing – Unit Testing, Integration Testing, System Testing and User Acceptance Testing – and for each Module.

General Guidelines to Prepare Test Cases

As a tester, the best way to determine the compliance of the software to requirements is by designing effective test cases that provide a thorough test of a unit. Various test case design techniques enable testers to develop effective test cases. Besides implementing the design techniques, every tester needs to keep in mind general guidelines that will aid in test case design:

a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques - Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing i.e. the test case should show that the software does what it is intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything that it is not specified to do i.e. Negative Testing [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance, safety requirements and security requirements [Suitable techniques - Specification derived tests]
e. Further test cases can then be added to the unit test specification to achieve specific test coverage objectives. Once coverage tests have been designed, the test procedure can be developed and the tests executed [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-transition testing]

Test Case Template

To prepare these Test Cases each organization uses its own standard template; a sample template for preparing Test Cases is provided below.

Fig 1: Common Columns in Test cases that are present in all Test case formats

Low Level Test Case format

Fig 2: A very detailed Low Level Test Case format

The name of this Test Case Document itself follows a naming convention like the one below, so that from the name alone we can identify the Project Name, Version Number and Date of Release.

DTC_Functionality Name_Project Name_Ver No

DTC – Detailed Test Case
Functionality Name: The functionality for which the test cases are developed
Project Name: Name of the Project
Ver No: Version number of Software
(You can add Release Date also)

The placeholder words should be replaced with the actual Functionality Name, Project Name, Version Number and Release Date. For example: Bugzilla Test Cases 01_12_04

In the top-left corner we place the company emblem, and we fill in details like Project ID, Project Name, Author of Test Cases, Version Number, Date of Creation and Date of Release in this template.

And we maintain the fields Test Case ID, Requirement Number, Version Number, Type of Test Case, Test Case Name, Action, Expected Result, Cycle#1, Cycle#2, Cycle#3 and Cycle#4 for each Test Case. Each Cycle is further divided into Actual Result, Status, Bug ID and Remarks.
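The columns listed above can be pictured as a simple record; this is a sketch only, with hypothetical field values (the requirement number "REQ-01" and sample data are invented):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CycleResult:
    actual_result: str = ""
    status: str = ""              # "Pass" or "Fail"
    bug_id: Optional[str] = None  # null for passed test cases
    remarks: str = ""             # optional extra information

@dataclass
class TestCase:
    test_case_id: str
    requirement_number: str
    version_number: str
    test_type: str                # e.g. "GUI", "Functionality", "Regression"
    name: str                     # object under test, e.g. "Login form"
    action: str
    expected_result: str
    cycles: List[CycleResult] = field(default_factory=list)

tc = TestCase("M01TC001", "REQ-01", "1.0", "Functionality",
              "Login form", "Enter valid credentials and click OK",
              "Main window opens")
tc.cycles.append(CycleResult(actual_result="As Expected", status="Pass"))
```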

Test Case ID:

To design the Test Case ID we also follow a standard. If a test case belongs to the application as a whole and not to a particular Module, we number it TC001; if we expect more than one result from the same test case, we name the additional expectations TC001.1, and so on. If a test case is related to a Module we name it M01TC001, and if the module has a sub-module, M01SM01TC001, so that we can easily identify which Module and sub-module it belongs to. One more advantage of this convention is that we can easily add new test cases without renumbering all Test Cases, since the numbering is limited to each module.

Requirement Number:

It gives the reference of the Requirement Number in the SRS/FRD to which each Test Case belongs. The advantage of maintaining this in the Test Case Document is that if a requirement changes in the future, we can easily estimate how many test cases will be affected.

Version Number:

Under this column we specify the Version Number in which that particular test case was introduced, so that we can identify how many Test Cases exist for each Version.

Type of Test Case:

It provides the list of different types of Test Cases (GUI, Functionality, Regression, Security, System, User Acceptance, Load, Performance, etc.) included in the Test Plan, and while designing Test Cases we select one of these options. The main objective of this column is that we can see at a glance how many GUI or Functionality test cases there are in each Module, and based on this we can estimate the resources needed.

Test Case Name:

This gives a more specific name, such as the particular button or text box the Test Case belongs to; in other words, we specify the name of the object it targets. For example: OK button, Login form.

Action (Input):

This is a very important part of the Test Case because it gives a clear picture of what you are doing with the specific object – in effect, the navigation for the Test Case. Based on the steps written here, we perform the operations on the actual application.

Expected Result:

This is the result of the above action. It specifies what the specification or user expects from that particular action. It should be clear, and for each expectation we sub-divide the Test Case so that we can specify pass or fail criteria for each expectation.

We prepare the Test Case Document up to this point before seeing the actual application, based on the System Requirement Specification/Functional Requirement Document and Use Cases. We then send the document to the concerned Test Lead for approval. The Test Lead reviews it for coverage of all user Requirements in the Test Cases, and then approves the document.

Now we are ready for testing with this document, and we wait for the actual application. At that point we start using the Cycle #1 columns.

Under each Cycle we have Actual Result, Status, Bug ID and Remarks.

The number of Cycles depends on the organization: some organizations document three Cycles, others maintain the information for four. Here I have provided only one Cycle in this template, but you should add more cycles based on your requirements.


Actual Result:

We test the actual application against each Test Case, and if it matches the Expected Result we record it as "As Expected"; otherwise we write down what actually happened after performing those actions.

Status:

This simply indicates the Pass or Fail status of that particular Test Case. If the Actual and Expected results mismatch, the Status is Fail; otherwise it is Pass. For passed Test Cases the Bug ID should be null, and for failed Test Cases the Bug ID should be the corresponding Bug ID in the Bug Report.

Bug ID:

This gives the reference of the Bug Number in the Bug Report, so that the Developer/Tester can easily identify the Bug associated with that Test Case.


Remarks:

This part is optional. It is used for extra information.

Following are the most common types of software errors; knowing them helps you identify errors systematically and increases the efficiency and productivity of software testing.

This topic surely helps in finding more bugs more effectively :)

Types of errors with examples

- User Interface Errors: Missing/Wrong Functions, Doesn’t do what the user expects, Missing information, Misleading, Confusing information, Wrong content in Help text, Inappropriate error messages. Performance issues - Poor responsiveness, Can't redirect output, Inappropriate use of keyboard.

- Error Handling: Inadequate - protection against corrupted data, tests of user input, version control; Ignores – overflow, data comparison, Error recovery – aborting errors, recovery from hardware problems.

- Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases outside boundary.

- Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect conversion from one data representation to another, Wrong formula, Incorrect approximation.

- Initial and Later states: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; Incorrect initialization.

- Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.

- Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.

- Race Conditions: Assumption that one event or task finished before another begins, Resource races, Tasks starts before its prerequisites are met, Messages cross or don't arrive in the order sent.

- Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn't erase old files from mass storage, Doesn't return unused memory.

- Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.

- Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.

- Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.

A definition of Equivalence Partitioning from our software testing dictionary:

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not involve combinations of inputs, but rather a single value chosen per class. For example, a given function may have several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input of a class other than integer is provided, this would be considered a negative test assertion or condition.


Concepts: Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur.

In this method, the tester identifies various equivalence classes for partitioning. A class is a set of input conditions that are likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases in the class erroneously.


Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.


To use equivalence partitioning, you will need to perform two steps:

1. Identify the equivalence classes
2. Design test cases


Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class).

Following are some general guidelines for identifying equivalence classes:

a) If the requirements state that a numeric value input to the system must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs that are too low, and inputs that are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:

1. The valid class (QTY is greater than or equal to -9999 and less than or equal to 9999), written as (-9999 <= QTY <= 9999)
2. The invalid class (QTY is less than -9999), written as (QTY < -9999)
3. The invalid class (QTY is greater than 9999), written as (QTY > 9999)
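These three classes for the quantity field can be checked with one representative value per class; the `classify` helper below is a hypothetical sketch mirroring the classes, not anything from the text:

```python
def classify(qty):
    """Map a quantity to the equivalence class it belongs to."""
    if qty < -9999:
        return "invalid: too low"
    if qty > 9999:
        return "invalid: too high"
    return "valid"

# One representative test value per equivalence class suffices:
assert classify(0) == "valid"                  # member of (-9999 <= QTY <= 9999)
assert classify(-10000) == "invalid: too low"  # member of (QTY < -9999)
assert classify(10000) == "invalid: too high"  # member of (QTY > 9999)
```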

b) If the requirements state that the number of items input by the system at some point must lie within a certain range, specify one valid class where the number of inputs is within the valid range, one invalid class where there are too few inputs and one invalid class where there are too many inputs.

For example, the specifications state that a maximum of 4 purchase orders can be registered against any one product. The equivalence classes are:
- the valid class (the number of purchase orders is greater than or equal to 1 and less than or equal to 4), written as (1 <= no. of purchase orders <= 4)
- the invalid class (no. of purchase orders > 4)
- the invalid class (no. of purchase orders < 1)

c) If the requirements state that a particular input item match one of a set of values and each case will be dealt with the same way, identify a valid class for values in the set and one invalid class representing values outside of the set.
Suppose the specification says that the code accepts between 4 and 24 inputs, each a 3-digit integer:
- One partition: number of inputs
- Classes: "x < 4", "4 <= x <= 24", "x > 24"
- Chosen values: 3, 4, 5, 14, 23, 24, 25
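The chosen values straddle both boundaries of each class; a sketch (Python, with a hypothetical `accepts_input_count` check) makes the mapping explicit:

```python
# Partition: number of inputs x, valid when 4 <= x <= 24 (hypothetical check).
def accepts_input_count(x):
    return 4 <= x <= 24

# 3, 4, 5 straddle the lower boundary; 23, 24, 25 the upper; 14 is mid-range.
chosen = [3, 4, 5, 14, 23, 24, 25]
results = {x: accepts_input_count(x) for x in chosen}
print(results)
```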

What is boundary value analysis in software testing?

Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning does. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes: at the points where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis broadens the portions of the business requirement document used to generate tests; unlike equivalence partitioning, it takes into account the output specifications when deriving test cases.

How do you perform boundary value analysis?

Once again, you'll need to perform two steps:
1. Identify the equivalence classes.
2. Design test cases.

But the details vary. Let's examine each step.

Step 1: identify equivalence classes

Follow the same rules you used in equivalence partitioning. However, consider the output specifications as well. For example, if the output specifications for the inventory system stated that a report on inventory should indicate a total quantity for all products no greater than 999,999, then you'd add the following classes to the ones you found previously:
6. the valid class (0 <= total quantity on hand <= 999,999)
7. the invalid class (total quantity on hand < 0)
8. the invalid class (total quantity on hand > 999,999)

Step 2: Design test cases

In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence partitioning but the rules for designing test cases differ.

With equivalence partitioning, you may select any test case within a range; with boundary value analysis, you focus your attention on cases close to the edges of the range and just on either side of them. The detailed rules for generating test cases follow:

Rules for test cases

Rule 1. If the condition is a range of values, create valid test cases for each end of the range and invalid test cases just beyond each end of the range. For example, if a valid range of quantity on hand is -9,999 through 9,999, write test cases that include:

1. the valid test case: quantity on hand is -9,999
2. the valid test case: quantity on hand is 9,999
3. the invalid test case: quantity on hand is -10,000
4. the invalid test case: quantity on hand is 10,000

You may combine valid classes wherever possible, just as you did with equivalence partitioning, and, once again, you may not combine invalid classes. Don’t forget to consider output conditions as well. In our inventory example the output conditions generate the following test cases:

1. the valid test case: total quantity on hand is 0
2. the valid test case: total quantity on hand is 999,999
3. the invalid test case: total quantity on hand is -1
4. the invalid test case: total quantity on hand is 1,000,000
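The eight boundary cases above (input and output conditions) can be tabulated in one sketch; `quantity_ok` and `total_ok` are hypothetical checks standing in for the inventory system:

```python
def quantity_ok(q):
    # Hypothetical input condition: -9,999 <= quantity on hand <= 9,999.
    return -9999 <= q <= 9999

def total_ok(t):
    # Hypothetical output condition: 0 <= total quantity on hand <= 999,999.
    return 0 <= t <= 999999

# (value, check, expected) triples taken straight from the rules above:
boundary_cases = [
    (-9999, quantity_ok, True), (9999, quantity_ok, True),
    (-10000, quantity_ok, False), (10000, quantity_ok, False),
    (0, total_ok, True), (999999, total_ok, True),
    (-1, total_ok, False), (1000000, total_ok, False),
]

for value, check, expected in boundary_cases:
    assert check(value) is expected
print("all boundary cases pass")
```

Note how each boundary contributes a pair of tests, one just inside and one just outside the limit.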

Rule 2. A similar rule applies where the condition states that the number of values must lie within a certain range: select two valid test cases, one for each boundary of the range, and two invalid test cases, one just below and one just above the acceptable range.

Rule 3. Design tests that highlight the first and last records in an input or output file.

Rule 4. Look for any other extreme input or output conditions, and generate a test for each of them.

Definition of Boundary Value Analysis from our Software Testing Dictionary:

Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases", values just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.

Error guessing and exploratory testing are functional test techniques based on the test engineer’s knowledge, experience, and intuition. The skill in error guessing and exploratory testing is to derive a comprehensive set of tests without missing areas, and without generating redundant tests.

Error guessing and exploratory testing are typically viewed as unstructured approaches to software testing. Some people argue that error guessing is not a valid testing technique; however, highly successful testers are very effective at quickly evaluating a program and running an attack that exposes defects.

Error guessing is usually most productive in falsification type testing, but when coupled with exploratory testing these techniques can be used to design a set of tests that will uncover errors and successfully validate the product works as expected.

Exploratory testing is extremely useful when we are faced with software that is untested, unknown, or unstable. But after the product is more stable and settled, we would like to have a way to ease into a less labor-intensive, hopefully automated, mode of testing. Exploratory testing ventures into the product while it is still in great flux and not yet ready for automation. According to IEEE, exploratory testing is the most widely practiced testing technique. Tests are derived relying on tester skill and intuition, and on the tester's experience with similar programs. More systematic approaches are advised. Exploratory testing might be useful (but only if the tester is really an expert) to identify special tests not easily captured by formalized methods.

The fact is that every tester does exploratory testing. For example:

When the tester first gets the product, with or without a specification, and tries out the features to see how they work, and then tries to do something “real” with the product to develop an appreciation of its design, this is exploratory testing.

When a tester finds a bug, she troubleshoots it a bit, both to find a simple set of reproduction conditions and to determine whether a variant of these conditions will yield a more serious failure. This is classic exploration. The tester decides what to do next based on what she’s learned so far.

When the programmer reports that the bug has been fixed, the tester runs the original failure-revealing test to see if the fix has taken. But the skilled tester also varies the regression test to see whether the fix was general enough and whether it had a side effect that broke some other part of the program. This is exploratory testing.

Error Guessing is not in itself a testing technique but rather a skill that can be applied to all of the other testing techniques to produce more effective tests (i.e., tests that find defects).

Error Guessing is the ability to find errors or defects in the AUT (Application Under Test) by what appears to be intuition. In fact, testers who are effective at error guessing actually use a range of techniques, including:

- Knowledge about the AUT, such as the design method or implementation technology
- Knowledge of the results of any earlier testing phases (particularly important in Regression Testing)
- Experience of testing similar or related systems (and knowing where defects have arisen previously in those systems)
- Knowledge of typical implementation errors (such as division by zero errors)
- General testing rules of thumb or heuristics.

Improve your error guessing techniques.

# Improve your memory:

- List interesting error-types you come across
- Use existing bug lists (I like the huge one provided as an appendix in Testing Computer Software, 2nd Edition)

# Improve your technical understanding:

- Go into the code, see how things are implemented, understand concepts like buffer overflow, null pointer assignment, array index boundaries, iterators, etc.
- Learn about the technical context in which the software is running, special conditions in your OS, DB or web server.

# Remember to look for errors not only in the code

- Errors in requirements

- Errors in design

- Errors in coding

- Errors in build

- Errors in testing (we never make mistakes, do we?)

- Errors in usage

Part 1: Software Testing Techniques

1. Software Testing Fundamentals

  • Testing objectives
  • Test information flow
  • Test case design

2. White Box Testing

3. Basis Path Testing

  • Flow Graphs
  • Cyclomatic Complexity
  • Deriving Test Cases
  • Graphical Matrices

4. Control Structure Testing

  • Condition Testing
  • Data Flow Testing
  • Loop Testing

5. Black Box Testing

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Cause-Effect Graphing Techniques
  • Comparison Testing

6. Testing for Real-Time Systems

7. Automated Testing Tools

Part 2: Software Testing Strategies

1. A Strategic Approach to Software Testing

  • Verification and Validation
  • Organizing for Software Testing
  • A Software Testing Strategy
  • Criteria for Completion of Testing

2. Unit Testing

  • Unit test considerations
  • Unit test procedures

3. Integration Testing

  • Top-Down Integration
  • Bottom-Up Integration
  • Comments on Integration Testing

4. Validation Testing

  • Validation test criteria
  • Configuration review
  • Alpha and Beta testing

5. System Testing

  • Recovery Testing
  • Security Testing
  • Stress Testing

6. The Art of Debugging

  • The Debugging Process
  • Psychological Considerations
  • Debugging Approaches
  • Conclusion

Branch Testing

In branch testing, test cases are designed to exercise control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision coverage. To achieve branch coverage, you need to test both the IF and the ELSE branches. All branches and compound conditions (e.g. loops and array handling) within the branch should be exercised at least once.

Branch coverage (sometimes called Decision Coverage) measures which possible branches in flow control structures are followed. Clover does this by recording if the Boolean expression in the control structure evaluated to both true and false during execution.

Does branch testing come under white box testing or black box testing?
Branch testing is done as part of white box testing, where the focus is on the code. There are many other white box techniques, such as loop testing.
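A minimal illustration (Python; the function and both tests are hypothetical): branch coverage requires one test that drives each outcome of the decision.

```python
# Hypothetical unit with one decision point, hence two branches to cover.
def classify(stock):
    if stock == 0:
        return "out of stock"   # true branch
    else:
        return "in stock"       # false branch

# Branch coverage needs the Boolean expression to evaluate both ways:
assert classify(0) == "out of stock"   # exercises the true branch
assert classify(7) == "in stock"       # exercises the false branch
print("both branches exercised")
```

A single test (say, `classify(0)`) would give 100% statement coverage of the IF line while leaving the ELSE branch untested, which is exactly the gap branch coverage measures.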

Condition Testing

The object of condition testing is to design test cases to show that the individual components of logical conditions and combinations of the individual components are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit.

Condition testing is a test case design approach that exercises the logical conditions contained in a program module. A simple condition is a Boolean variable or a relational expression, possibly with one NOT operator. A relational expression takes the form:

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and the relational operator is one of the following: <, <=, =, != (nonequality), >, or >=. A compound condition is made up of two or more simple conditions, Boolean operators, and parentheses. We assume that the Boolean operators allowed in a compound condition include OR, AND and NOT.

The condition testing method concentrates on testing each condition in a program. The purpose of condition testing is to determine not only errors in the conditions of a program but also other errors in the program. A number of condition testing approaches have been identified. Branch testing is the most basic. For a compound condition, C, the true and false branches of C and each simple condition in C must be executed at least once.
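A sketch of the idea (Python; the compound condition is hypothetical): for C = (a > 0) AND (b < 10), condition testing forces each simple condition to drive the outcome, rather than only observing whether C as a whole is true or false.

```python
# Compound condition C = (a > 0) AND (b < 10).
def c(a, b):
    return a > 0 and b < 10

cases = [
    (1, 5),    # C true: both simple conditions true
    (-1, 5),   # C false via the first simple condition (a > 0 fails)
    (1, 20),   # C false via the second simple condition (b < 10 fails)
]
print([c(a, b) for a, b in cases])   # -> [True, False, False]
```

Plain branch testing would be satisfied by any one true case and any one false case; condition testing additionally demands a false case for *each* simple condition.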

Domain testing requires three or four tests to be produced for a relational expression. For a relational expression of the form:

E1 <relational-operator> E2

three tests are required to make the value of E1 greater than, equal to, and less than the value of E2, respectively.
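Sketched in Python for the hypothetical relation E1 < E2, the three domain tests look like this:

```python
# Domain testing for the relational expression E1 < E2: three tests make
# E1 greater than, equal to, and less than E2, respectively.
def rel(e1, e2):
    return e1 < e2

tests = [(5, 3),   # E1 > E2
         (3, 3),   # E1 == E2
         (1, 3)]   # E1 < E2
print([rel(e1, e2) for e1, e2 in tests])   # -> [False, False, True]
```

The point of the three cases is to expose an incorrect operator (e.g. <= written instead of <): the E1 == E2 test is the one that distinguishes them.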

Data Definition – Use Testing

Data definition-use testing designs test cases to test pairs of data definitions and uses. Data definition is anywhere that the value of a data item is set. Data use is anywhere that a data item is read or used. The objective is to create test cases that will drive execution through paths between specific definitions and uses.
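A small sketch (Python; the `checkout` routine is hypothetical) showing two definitions of the same data item and the tests that cover each definition-use pair:

```python
# 'total' is defined at d1, possibly redefined at d2, and used at u1.
# Definition-use testing drives execution through both (d1, u1) and (d2, u1).
def checkout(items, member):
    total = sum(items)          # d1: definition of total
    if member:
        total = total * 0.9     # d2: redefinition of total (member discount)
    return round(total, 2)      # u1: use of total

assert checkout([10, 20], member=False) == 30     # covers the (d1, u1) pair
assert checkout([10, 20], member=True) == 27.0    # covers the (d2, u1) pair
print("both def-use pairs covered")
```

Note that a single test would execute the `return` statement either way; it is the pairing of a *specific* definition with the use that the two tests distinguish.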

Unit Test Case Preparation Guidelines

The following are the suggested action points based on which a test case can be derived and executed for unit testing.

# Test case action which acts as input to the AUT

1. Validation rules of data fields do not match with the program/data specification.
2. Valid data fields are rejected.
3. Data fields of invalid class, range and format are accepted.
4. Invalid fields cause abnormal program end.
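Points 2 to 4 above can be sketched as a unit test for a hypothetical field validator: valid values must be accepted, invalid class/range/format values rejected, and no invalid value may crash the program.

```python
# Hypothetical field validator: age must be an integer in 0..130.
def parse_age(text):
    try:
        age = int(text)
    except ValueError:
        return None            # invalid format: reject cleanly, don't abort
    return age if 0 <= age <= 130 else None   # invalid range: reject

assert parse_age("42") == 42       # point 2: valid field must not be rejected
assert parse_age("abc") is None    # point 3: invalid format must be rejected
assert parse_age("-5") is None     # point 3: invalid range must be rejected
assert parse_age("200") is None    # point 4: no abnormal end on bad input
print("field validation cases pass")
```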

# Test case action point to check output from the AUT

1. Output messages are shown with misspelling, or incorrect meaning, or not uniform.
2. Output messages are shown while they are supposed not to be; or they are not shown while they are supposed to be.
3. Reports/Screens do not conform to the specified layout, with misspelled data labels/titles, mismatched data label and information content, and/or incorrect data sizes.
4. Reports/Screens page numbering is out of sequence.
5. Reports/Screens breaks do not happen or happen at the wrong places.
6. Reports/Screens control totals do not tally with individual items.
7. Screen video attributes are not set/reset as they should be.

# Test case action points to check File Access

1. Data fields are not updated as input.
2. “No-file” cases cause program abnormal end and/or error messages.
3. “Empty-file” cases cause program abnormal end and/or error messages.
4. Program data storage areas do not match with the file layout.
5. The first and last input record (in a batch of transactions) is not updated.
6. The first and last record in a file is not read while it should be.
7. Deadlock occurs when the same record/file is accessed by more than 1 user.
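Points 2 and 3 above ("no-file" and "empty-file") can be checked with a sketch like this (Python; `load_records` is a hypothetical loader):

```python
import os

# Hypothetical record loader: must not abort on a missing or empty file.
def load_records(path):
    if not os.path.exists(path):
        return []               # "no-file" case: report empty, don't crash
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# No-file case: the path does not exist.
assert load_records("no_such_file.dat") == []

# Empty-file case: the file exists but holds no records.
with open("empty.dat", "w"):
    pass
assert load_records("empty.dat") == []
os.remove("empty.dat")
print("no-file and empty-file cases pass")
```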

# Test case action points to check internal Logic of the AUT

1. Counters are not initialized, as they should be.
2. Mathematical accuracy and rounding do not conform to the prescribed rules.

# Test case action points to check Job Control Procedures

1. A wrong program is invoked and/or the wrong library/files are referenced.
2. Program execution sequence does not follow the JCL condition codes setting.
3. Run time parameters are not validated before use.

# Test case action point to check the program documentation

Supportive documentation (inline help, manuals, etc.) is not consistent with the program behavior. The information inside the operation manual is not clear and concise with respect to the application system. The operation manual does not cover all the operation procedures of the system.

# Test case action point to check program structure (through program walkthrough)

Coding structure does not follow coding standards.

# Test case action point to check the performance of the AUT

The program runs longer than the specified response time.

Sample Test Cases

1. Screen label checks.
2. Screen video checks with test data set.
3. Creation of record with valid data set.
4. Rejection of record with invalid data set.
5. Error handling upon empty file.
6. Batch program run with test data set.

Integration Test Case Preparation Guidelines

The following are the suggested action points based on which the test case can be derived and executed for integration testing.

# Test case action point to check global data (e.g. Linkage Section)

Global variables have different definition and/or attributes in the programs that referenced them.

# Test case action point to check program interfaces

1. The called programs are not invoked while they are supposed to be.
2. Any two interfaced programs have a different number of parameters, and/or the attributes of these parameters are defined differently in the two programs.
3. Passing parameters are modified by the called program while they are not supposed to be.
4. Called programs behave differently when the calling program calls twice with the same set of input data.
5. File pointers held in the calling program are destroyed after another program is called.

#Test case action point to check consistency among programs

The same error is treated differently (e.g. with different messages, with different termination status etc.) in different programs.

Sample Test Cases

1. Interface test between programs xyz, abc & jkl.
2. Global (memory) data file 1 test with data set 1.

You can read the following document by Cem Kaner: "What is a Good Test Case?"