For a reliable, high-performing information system, it is always better to involve the testing team right from the beginning of the requirement analysis phase. The active involvement of the testing team gives the testers a clear view of the system's intended functionality, from which we can expect a better-quality, lower-defect product.

Once the development team lead analyzes the requirements, he prepares the System Requirement Specification and the Requirement Traceability Matrix. He then schedules a meeting with the testing team (the test lead and the testers chosen for that project) and explains the project, the overall schedule of modules, the deliverables, and the versions.

The involvement of the testing team starts here. The test lead prepares the Test Strategy and the Test Plan, which serve as the schedule for the entire testing process. Here he plans when each phase of testing (unit testing, integration testing, system testing, and user acceptance testing) will take place. Organizations generally follow the V-model for their development and testing.

After analyzing the requirements, the development team prepares the System Requirement Specification, Requirement Traceability Matrix, Software Project Plan, Software Configuration Management Plan, Software Measurements/Metrics Plan, and Software Quality Assurance Plan, and moves to the next phase of the software life cycle, i.e., design. In this phase they prepare several important documents: the Detailed Design Document, an updated Requirement Traceability Matrix, the Unit Test Cases document (prepared by the developers if there are no separate white-box testers), the Integration Test Cases document, the System Test Plan document, and review and SQA audit reports for all test cases.

After preparing the test plan, the test lead distributes the work to the individual testers (white-box and black-box testers). The testers' work starts at this stage: based on the Software Requirement Specification/Functional Requirements Document, they prepare test cases using a standard template or an automation tool and send them to the test lead for review. Once the test lead approves them, the testers set up the test environment/test bed, which is used specifically for testing; typically the test environment replicates the client-side system setup. At this point we are ready for testing. While the testing team works on the test strategy, test plan, and test cases, the development team works on their individual modules in parallel. Three or four days before the first release, they give an interim release to the testing team, who deploy that software on the test machine; the actual testing starts there. The testing team handles configuration management of builds.
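A test case prepared from the SRS/FRD typically records an identifier, a traceability link, preconditions, steps, and an expected result. Below is a minimal sketch of such a template as a Python structure; all field names and example values are illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal test case template; field names are illustrative."""
    case_id: str           # unique identifier, e.g. "TC-LOGIN-001"
    requirement_id: str    # traceability link back to the SRS/FRD
    description: str
    preconditions: str
    steps: list = field(default_factory=list)
    expected_result: str = ""
    status: str = "Not Run"   # Not Run / Pass / Fail

# Example entry derived from a hypothetical login requirement.
tc = TestCase(
    case_id="TC-LOGIN-001",
    requirement_id="REQ-4.2",
    description="Valid user can log in",
    preconditions="User account exists",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="User lands on the dashboard",
)
print(tc.case_id, tc.status)
```

Keeping a `requirement_id` on every case is what makes the Requirement Traceability Matrix mentioned above mechanical to produce.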

The testing team then tests against the test cases already prepared and reports bugs using a bug report template or an automation tool (depending on the organization). They track each bug by updating its status at every stage. Once Cycle 1 testing is done, they submit the bug report to the test lead, who discusses the issues with the development team lead; the developers then work on and fix those bugs. After all the bugs are fixed, the next build is released. Cycle 2 testing starts at this stage: now all the test cases must be run again to check whether the bugs reported in Cycle 1 are actually fixed.
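Tracking a bug "by changing its status at every stage" amounts to a small state machine. Here is a hedged sketch in Python; the status names follow the ones used later in this article, and the allowed transitions are a simplification, since real workflows differ by organization:

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    REVIEWED_NOT_OK = "Reviewed-not-OK"
    CLOSED = "Closed"
    DEFERRED = "Deferred"

# Allowed transitions; simplified for illustration.
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.DEFERRED},
    BugStatus.ASSIGNED: {BugStatus.OPEN},
    BugStatus.OPEN: {BugStatus.FIXED, BugStatus.DEFERRED},
    BugStatus.FIXED: {BugStatus.CLOSED, BugStatus.REVIEWED_NOT_OK},
    BugStatus.REVIEWED_NOT_OK: {BugStatus.OPEN},
    BugStatus.CLOSED: set(),
    BugStatus.DEFERRED: {BugStatus.ASSIGNED},
}

class Bug:
    def __init__(self, bug_id, summary):
        self.bug_id = bug_id
        self.summary = summary
        self.history = [BugStatus.NEW]   # full audit trail of statuses

    @property
    def status(self):
        return self.history[-1]

    def move_to(self, new_status):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"{self.status.value} -> {new_status.value} not allowed")
        self.history.append(new_status)

bug = Bug("BUG-101", "Login fails for valid user")
bug.move_to(BugStatus.ASSIGNED)
bug.move_to(BugStatus.OPEN)
bug.move_to(BugStatus.FIXED)
print(bug.status.value)  # Fixed
```

Keeping the whole history rather than only the current status is what lets the cycle-by-cycle defect graphs described later be generated from the tracker data.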

Here we also do regression testing, which means checking whether the change in the code has any side effects on the already tested code. We then repeat the same process until the delivery date; generally, information for four cycles is documented in the test case document. At the time of release there should be no high-severity, high-priority bugs. The product may of course still have some minor bugs, which are fixed in the next iteration or release (generally called deferred bugs). At the end of the delivery, the test lead and the individual testers prepare reports. Sometimes the testers also participate in code reviews, which is a form of static testing: they check the code against a checklist of historical logical errors, indentation, and proper commenting. The testing team is also responsible for tracking change management, to deliver a high-quality, low-defect product.
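Regression testing, checking that a change has no side effects on already tested code, is commonly automated by re-running a saved suite of passing cases against each new build. A minimal sketch with plain assertions; the `discount` function and its cases are invented for illustration:

```python
# Hypothetical function that was changed in the new build.
def discount(price, percent):
    """Apply a percentage discount, never going below zero."""
    return max(0.0, price - price * percent / 100)

# Regression suite: cases that passed in earlier cycles are re-run
# verbatim, so a code change cannot silently break old behaviour.
regression_cases = [
    ((100.0, 10.0), 90.0),
    ((100.0, 0.0), 100.0),
    ((50.0, 200.0), 0.0),   # over-discount must clamp to zero
]

failures = []
for (price, percent), expected in regression_cases:
    actual = discount(price, percent)
    if abs(actual - expected) > 1e-9:
        failures.append((price, percent, expected, actual))

print("regression failures:", failures)  # an empty list means no regressions
```

In practice the suite lives in a test framework rather than a hand-rolled loop, but the principle is the same: old expectations are frozen and replayed on every cycle.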


Need for Software Testing

Software Testing Definition:

Software testing is a set of activities that can be planned in advance and conducted systematically to uncover defects or errors and to assess quality.

Why is testing necessary?

Without testing, we have no way of establishing the quality of an information system. This means that the product we are developing could have faults that damage our business when we implement it. Some faults cause minor disruptions, but others can be potentially life-threatening. Therefore, we need to establish which faults are lurking in an information system before it is released.

Some developers believe that the goal of testing is to ensure that an information system does what it is supposed to do. While this is certainly an important part of testing, it is not the whole picture. When we test an information system, we want to add value to it – testing costs money, so there must be a cost benefit in performing tests. This means adding quality through the identification of issues and reporting them to the development team responsible for fixing them. Therefore, the assumption we should start with is not that an information system works correctly, but that it contains faults which must be identified. Testing should aim to identify as many faults as possible. Looking at testing in this light gives you a very different mindset – you are looking for faults and failures instead of monitoring whether the information system performs as described. Our test cases then focus on what testers really want: finding the bugs.

Not all encountered failures have to be solved before an information system can be taken into production – instead of ‘unknown bugs’, they become ‘known errors’ and can be scheduled for debugging by a developer. Testers provide this information to the decision makers in an organization so that a well-founded risk assessment can be undertaken before the implementation decision is made.

  • Testing is always risk based – where there are no risks, we should not put any effort into testing an information system.
  • You cannot test everything – some risks will always remain, even in a fully tested system.
  • You can demonstrate that there are faults in an information system, but you can never say with certainty that there are none.
  • Bugs are social creatures – if you find errors in one area, you can expect to find more errors in that same area.
  • Testing is a job that is very distinct from development, requiring special skills, even if it is not always looked upon that way.

In previous years, testing was considered to be just another phase in the cycle of system development – testing followed coding as night follows day. This implied that a precondition to testing was that coding was largely finished, and that any code changes identified by testing would be minimal. Testing was always on a critical path, and every delay that the development team suffered meant less time for testing. Faults in an information system were identified in the testing phase only by the obvious method of executing the code.

It has since been recognized that faults cannot be found only by executing the code – potential defects can be identified much earlier in the development cycle. For example, every product developed during a project can be tested using techniques such as reviews, structured walkthroughs, and inspections.

Software Testing Objectives

Testing is a process of executing a program with the intent of finding an error.
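To make this concrete, here is a toy example, invented entirely for illustration: a deliberately buggy function, a "confirming" test that merely shows the happy path works, and a test written with the intent of finding an error, which probes the tricky boundary and exposes the fault:

```python
def is_leap_year(year):
    """Deliberately buggy: the 400-year rule is missing."""
    return year % 4 == 0 and year % 100 != 0

# A confirming test: demonstrates the code works on a common case.
assert is_leap_year(2024) is True

# A fault-finding test: probes the boundary the rule forgot.
# The year 2000 IS a leap year, so a correct function returns True;
# this buggy one returns False, and the error is exposed.
found_error = is_leap_year(2000) is False
print("bug exposed:", found_error)
```

The confirming test passes on both the correct and the buggy implementation; only the test designed around a likely failure point distinguishes them.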

Software testing is a critical element of software quality assurance and represents the ultimate review of system specification, design, and coding. Testing is the last chance to uncover errors and defects in the software, and it facilitates delivery of a quality system.

Software Testing Principles

The basic principles for effective software testing are as follows:

  • A good test case is one that has a high probability of finding an as-yet undiscovered error.
  • A successful test is one that uncovers an as-yet undiscovered error.
  • All tests should be traceable to the customer requirements.
  • Tests should be planned long before testing begins.
  • Testing should begin “in the small” and progress towards testing “in the large”.
  • Exhaustive testing is not possible.
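The first two principles favour test cases aimed at likely failure points rather than arbitrary inputs. A small boundary-value sketch; the rule that a quantity must lie between 1 and 100 is an assumed requirement, invented for illustration:

```python
def accepts(quantity):
    """Hypothetical business rule: order quantity must be 1..100."""
    return 1 <= quantity <= 100

# Boundary values have a higher probability of exposing off-by-one
# errors than mid-range values, so the edges are tested explicitly.
boundary_cases = {0: False, 1: True, 100: True, 101: False}
results = {q: accepts(q) == expected for q, expected in boundary_cases.items()}
print(results)
```

Four boundary cases here do more fault-finding work than dozens of mid-range values like 50 or 73, which is the sense in which a "good" test case has a high probability of finding an undiscovered error.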

How and when Testing starts

Software testing is not an activity to take up only when the product is ready. Effective testing begins with a proper plan from the user-requirements stage itself. Software testability is the ease with which a computer program can be tested, and metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.

Operability:

The better the software works, the more efficiently it can be tested.

• The system has few bugs (bugs add analysis and reporting overhead to the test process)
• No bugs block the execution of tests
• The product evolves in functional stages (allows simultaneous development and testing)

Observability:

What is seen is what is tested

• Distinct output is generated for each input
• System states and variables are visible or queryable during execution
• Past system states and variables are visible or queryable (e.g., transaction logs)
• All factors affecting the output are visible
• Incorrect output is easily identified
• Incorrect input is easily identified
• Internal errors are automatically detected through self-testing mechanisms
• Internal errors are automatically reported
• Source code is accessible
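Several of these points (distinct output per input, queryable current and past state, automatic error reporting) can be illustrated with a small sketch. The `Counter` component below is invented purely to show the properties:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

class Counter:
    """A toy component designed for observability."""
    def __init__(self):
        self._value = 0
        self._history = []   # past states stay queryable (a "transaction log")

    def add(self, amount):
        if amount < 0:
            log.error("invalid input: %d", amount)   # internal error auto-reported
            raise ValueError("amount must be non-negative")
        self._value += amount
        self._history.append(self._value)
        return self._value   # distinct output for each input

    @property
    def state(self):         # current state visible during execution
        return self._value

    @property
    def history(self):       # past states visible after the fact
        return list(self._history)

c = Counter()
c.add(2)
c.add(3)
print(c.state, c.history)
```

A tester exercising this component never has to guess what happened: every input produces an observable return value, the full state history is queryable, and bad input announces itself in the log.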

Controllability:

The better the software is controlled, the more the testing can be automated and optimized.

• All possible outputs can be generated through some combination of inputs
• All code is executable through some combination of inputs
• Software and hardware states can be controlled directly by the test engineer
• Input and output formats are consistent and structured
• Tests can be conveniently specified, automated, and reproduced
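Controllability often comes down to letting the test supply inputs that the module would otherwise take from its environment. A sketch using an injectable clock; the `greeting` function and `FakeTime` helper are invented for illustration:

```python
from datetime import datetime

def greeting(now_fn=datetime.now):
    """Time-of-day greeting; the clock is injectable so tests control it."""
    hour = now_fn().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# In production the real clock is used. In a test we control the state
# directly by injecting a fixed time, making the output reproducible.
class FakeTime:
    def __init__(self, hour):
        self.hour = hour

morning = greeting(lambda: FakeTime(9))
afternoon = greeting(lambda: FakeTime(15))
print(morning, "/", afternoon)
```

Without the injection point, this function could only be fully tested by running it at particular times of day; with it, both branches are reachable on demand, which is exactly what "all code is executable through some combination of inputs" asks for.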

Decomposability:

By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.

• The software system is built from independent modules
• Software modules can be tested independently
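Independent modules can be tested in isolation by replacing a collaborator with a stub. A minimal sketch; both the pricing module and the tax-lookup collaborator are invented for illustration:

```python
def compute_invoice_total(order_lines, tax_rate_lookup):
    """Pricing module: depends only on an injected tax-lookup collaborator."""
    total = 0.0
    for item, price in order_lines:
        total += price * (1 + tax_rate_lookup(item))
    return round(total, 2)

# The real tax service might be a database or a web call. For isolated
# testing we stub it, so a pricing fault cannot be masked (or caused)
# by a fault in the tax module.
def stub_tax_lookup(item):
    return 0.10  # flat 10% for every item

total = compute_invoice_total([("book", 10.0), ("pen", 2.0)], stub_tax_lookup)
print(total)  # 13.2
```

When a test against the stub fails, the problem is known to be in the pricing module, which is the quick problem isolation that decomposability buys.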

Simplicity:

The less there is to test, the more quickly it can be tested.

• Functional simplicity
• Structural simplicity
• Code simplicity

Stability:

The fewer the changes, the fewer the disruptions to testing.

• Changes to the software are infrequent
• Changes to the software are controlled
• Changes to the software do not invalidate existing tests
• The software recovers well from failures

Understandability:

The more information we have, the smarter we will test.

• The design is well understood
• Dependencies between internal, external, and shared components are well understood
• Changes to the design are communicated
• Technical documentation is instantly accessible
• Technical documentation is well organized
• Technical documentation is specific and detailed
• Technical documentation is accurate


Testing is a very important phase in the software development life cycle, so the work that the testers and the test lead do in this phase should be properly documented, and the testers should be able to provide the needed information in proper reports. Here are some important documents, their frequency, and the reason for each:


Test Documentation

• High Level Test Plan (each project): A test plan is produced for each project. It defines which functions will be tested and which will not, details any risks and contingencies, states assumptions, and defines the required resources.

• Test Specification (each stage): This document details the test conditions and test cases for each stage of testing. The testers use it when running their tests.

• Test Logs (each stage): Test logs are produced by each tester, enabling progress to be monitored and controlled. They also provide a suitable test audit trail at the end of the project.

• Test Progress Report (weekly): This report provides managers with weekly progress on the testing being carried out. ‘S’ curves are used to show test progress, together with faults found/fixed.

• Test Summary Report (when required): This summary report is produced when requested by senior management at any stage of the development life cycle. It is a condensed version of the Test Progress Report aimed specifically at senior management.

• Post Project Report (each project): At the end of each project a ‘Post Project Review’ is carried out, analyzing what went well and what did not go well in the project. A report is produced detailing the changes needed for process improvement.

Some or all of the following charts should be used to monitor progress. It is recommended that the graphs be displayed clearly on a wall so that everyone is aware of the current situation. Charts, rather than tables, are also recommended because they are easier to read and more likely to attract attention.

Faults Found:

[Chart: faults found on each day of testing]

This chart simply logs the number of faults found during each day of the test schedule. The severity of faults could be indicated by using different columns on the chart.

Faults Found vs. Faults Fixed:

[Chart: faults found vs. faults fixed]

Monitoring the number of faults found together with the number of faults fixed is a useful way to spot potential scheduling problems early. In the example above the increasing gap between the number of faults found and the number fixed suggests that more development effort is needed to fix more of the outstanding faults. Leaving these until the last moment invariably ends in disaster (delayed release, poor quality system, or both).
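The widening gap this chart reveals can be computed directly from daily counts. A sketch with made-up numbers, purely to show the bookkeeping:

```python
# Faults found and fixed per day of the cycle (illustrative data only).
found_per_day = [4, 6, 5, 8, 7]
fixed_per_day = [2, 3, 3, 3, 2]

cum_found, cum_fixed, gap = [], [], []
total_found = total_fixed = 0
for f, x in zip(found_per_day, fixed_per_day):
    total_found += f
    total_fixed += x
    cum_found.append(total_found)
    cum_fixed.append(total_fixed)
    gap.append(total_found - total_fixed)   # outstanding, unfixed faults

print("outstanding faults per day:", gap)
```

A gap that grows day after day, as in this made-up data, is the early scheduling warning described above: fixing effort is falling behind discovery.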

Tests Run and Tests Passed vs. Tests Planned:

[Chart: tests run and tests passed vs. tests planned]

Since we will know at the start of a test effort how many tests we are intending to run (defined by the scope) it is possible to plan the number of tests that should be completed each day of the test effort. This will normally look like an S-Curve, as shown on the example graph above (this is a third-order polynomial). Plotting both the number of tests actually run and the number of tests that have passed will quickly highlight any of a number of problems. For example, if the number of tests run falls below the number planned then either more testers or more time will be needed. If the number of tests passed falls much below the number run (i.e. if a large number of tests fail) then more faults than expected are being found and this will impact both the test team and the developers.
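A planned S-curve of this kind can be generated from a third-order polynomial that starts flat, speeds up mid-project, and flattens again. A sketch using the smoothstep cubic 3t² − 2t³; the totals of 200 tests over 10 days are invented:

```python
def planned_tests(day, total_days, total_tests):
    """Cumulative tests planned by `day`, on a cubic S-curve:
    fraction complete = 3t^2 - 2t^3 for t in [0, 1]."""
    t = day / total_days
    fraction = 3 * t**2 - 2 * t**3
    return round(total_tests * fraction)

# Daily cumulative plan for 200 tests over a 10-day test effort.
schedule = [planned_tests(d, 10, 200) for d in range(11)]
print(schedule)
```

The curve rises slowly at first (environment shakedown, early blocking faults), steeply in the middle, and slowly at the end, which matches the shape described above; any other third-order polynomial fitted to the team's history would serve equally well.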

Faults Per Owner:

[Chart: faults per owner]

It is important to know how many faults are being worked on at any one time and by whom. This chart will show whether there is a balance of faults being assigned to the team. Each “column” on the bar chart can show priority and/or severity for further analysis.

Number of Defects for each Cycle:

This graph shows the number of bugs in each status (new, assigned, open, fixed, reviewed-not-OK, closed, deferred) in each cycle for every release. Bug statuses differ from company to company, so testers should adjust the statuses shown in the graph to match their own organization's workflow.

[Chart: testing cycle vs. defect status]
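The per-cycle status counts behind such a graph can be tallied from a flat bug list exported from the tracker. A sketch with invented data:

```python
from collections import Counter

# (cycle, status) for each reported bug; data invented for illustration.
bugs = [
    (1, "New"), (1, "New"), (1, "Fixed"), (1, "Deferred"),
    (2, "Fixed"), (2, "Closed"), (2, "Closed"), (2, "New"),
]

# One Counter of statuses per cycle.
per_cycle = {}
for cycle, status in bugs:
    per_cycle.setdefault(cycle, Counter())[status] += 1

for cycle in sorted(per_cycle):
    print(f"Cycle {cycle}: {dict(per_cycle[cycle])}")
```

Each cycle's counter becomes one group of bars in the chart; the same tally, grouped by severity or by test level instead of status, produces the graphs described in the following sections.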

Number of different severity bugs in each Cycle:

This graph shows the number of bugs of each severity (critical, major, minor, cosmetic) in each cycle for every release. Severity levels also vary, since each organization maintains its own standards, so the graph should be adapted accordingly.

[Chart: test cycles vs. severity]

Number of different severity bugs in each Test Level:

This graph shows the number of bugs of each severity (critical, major, minor, cosmetic) at each test level (unit testing, integration testing, system testing, and user acceptance testing) for every release. Severity levels vary by organization, so the graph should be adapted to the organization's own standards.

[Chart: type of testing vs. bug severity]

Number of different severity bugs in each Version:

This graph shows the number of bugs of each severity (critical, major, minor, cosmetic) in each version. Severity levels vary by organization, so the graph should be adapted to the organization's own standards.

[Chart: version vs. bug severity]