Smoke Testing
Smoke testing originated in hardware testing, where a device was powered on for the first time and considered a success if no smoke or fire came out; hence the name. In software, it is done to check whether a received build can be accepted for further testing. You can consider it a shallow approach: in smoke testing, the most critical areas of the application are tested without going into too much detail. In other words, it is done to check the stability of the build. Only then is the build accepted for further testing; otherwise it is marked as failed and the QA team does not perform any tests on it. Smoke testing is also called build verification testing.
Some essential points about smoke testing:
# A smoke test is a subset of all the test cases that are part of the overall test plan.
# Smoke tests are good for verifying proper deployment or other non-invasive changes.
# Smoke tests can never replace actual functional testing.
# A smoke test is usually scripted, either as a written set of tests or as an automated test, since it is performed on each and every build.
# You can consider a smoke test a routine health check of a build before taking it into in-depth testing.
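The points above can be sketched as a minimal build-verification script. This is only an illustration, not a real application's API: the three check functions (`app_starts`, `login_page_loads`, `db_reachable`) are hypothetical stand-ins for whatever shallow checks cover the most critical areas of your application.

```python
# Minimal smoke-test sketch: a handful of shallow checks on the most
# critical areas, run against every build before deeper testing begins.
# The three checks below are hypothetical placeholders.

def app_starts():
    return True  # e.g. the process launches and reports "ready"

def login_page_loads():
    return True  # e.g. the login URL returns HTTP 200

def db_reachable():
    return True  # e.g. a trivial SELECT 1 succeeds

SMOKE_CHECKS = [app_starts, login_page_loads, db_reachable]

def verify_build():
    """Return True only if every critical check passes; otherwise the
    build is rejected and no further testing is performed on it."""
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        print("Build REJECTED, failed checks:", failures)
        return False
    print("Build ACCEPTED for further testing")
    return True
```

Because the checks are shallow and scripted, the whole suite can run automatically on every build, gating whether the QA team spends any time on it.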
Sanity Testing
After the smoke test has passed and the build has been certified, a subset of regression test cases is executed to check that no other defects have been introduced in the new build. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. If we are moving a build from the staging/testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server.
Some essential points about sanity testing are:
# A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
# Sanity test cases are usually not automated.
# A sanity test is used to determine that a small section of the application is still working after a minor change.
# Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
# Sanity testing verifies whether the requirements are met for the affected functionality, in contrast to smoke testing, which checks all features breadth-first.
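A narrow-and-deep sanity check might look like the sketch below. The discount function is a hypothetical example of a small section that was just changed; the point is that only that one area is re-tested, but thoroughly, including edge cases.

```python
# Sanity-test sketch: after a minor fix to a (hypothetical) discount
# function, re-test only that area, but in depth -- narrow and deep.

def apply_discount(price, percent):
    # the small section of the application that was just changed
    return round(price * (1 - percent / 100), 2)

def sanity_test_discount():
    # deep checks on the one changed area, including edge cases
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0     # no discount
    assert apply_discount(100.0, 100) == 0.0     # full discount
    assert apply_discount(200.0, 25) == 150.0    # quarter off
    return True
```

Note the contrast with the smoke test: instead of one shallow check per critical area, this is several checks on a single area.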
The intention of product development is to somehow go from the vision of a product to the final product. To do this a development project is usually established and carried out. The time from the initial idea for a product until it is delivered is the development life cycle.
When the product is delivered, its real life begins. The product is in use or deployed until it is disposed of. The time from the initial idea for a product until it is disposed of is called the product life cycle, or software life cycle, if we focus on software products.
Testing is a necessary process in the development project, and testing is also necessary during deployment, both as an ongoing monitoring of how the product is behaving and in the case of maintenance (defect correction and possibly evolution of the product).
Testing fits into any development model and interfaces with all the other development processes, such as requirements definition and coding. Testing also interfaces with the processes we call supporting processes, such as, for example, project management.
Testing in a development life cycle is broken down into a number of test levels, for example component testing and system testing. Each test level has its own characteristics.
Everything we do in life seems to follow a few common steps, namely: conceive, design, implement, and test (and possibly subsequent correction and retest).
The same activities are recognized in software development, though they are normally called:
- Requirements engineering
- Design
- Coding
- Testing (possibly with retesting, and regression testing).
The way the development processes are structured is the development life cycle or the development model. A life cycle model is a specification of the order of the processes and the transition criteria for progressing from one process to the next, that is, completion criteria for the current process and entry criteria for the next.
Software development models provide guidance on the order in which the major processes in a project should be carried out, and define the conditions for progressing to the next process. Many software projects have experienced problems because they pursued their development without proper regard for the process and transition criteria.
A number of software development models have been deployed throughout the industry over the years. They are usually grouped according to one of the following concepts:
- Sequential
- Iterative
- Incremental
The sequential model is characterized by including no repetition, other than perhaps feedback to the preceding phase. This structure is used to avoid expensive rework.
Sequential Models
The assumptions for sequential models are:
- The customer knows what he or she wants.
- The requirements are frozen (changes are exceptions).
- Phase reviews are used as control and feedback points.
- Stable requirements
- Stable environments
- Focus on the big picture
- One, monolithic delivery
The goals of the waterfall model are achieved by enforcing fully elaborated documents as phase completion criteria and formal approval of these (signatures) as entry criteria for the next phase.
The V-model is an expansion of the pure waterfall model, introducing more test levels and the concept that testing is not performed only at the end of the development life cycle, even though the diagram may suggest otherwise.
The V-model describes a course where the left side of the V reflects the processes to be performed in order to produce the pieces that make up the physical product, for example, the software code. The processes on the right side of the V are test levels to ensure that we get what we have specified as the product is assembled.
The pure V-model may lead you to believe that you develop first (the left side) and then test (the right side), but that is not how it is supposed to work.
A W-model has been developed to show that the test work, that is, the production of testing work products, starts as soon as the basis for the testing has been produced. Testing includes early planning and specification and test execution when the objects to test are ready. The idea in the V-model and the W-model is the same; they are just drawn differently.
When working like this, we describe what the product must do and how (in the requirements and the design), and at the same time we describe how we are going to test it (the test plan and the specification). This means that we are starting our testing at the earliest possible time.
The planning and specification of the test against the requirements should, for example, start as soon as the requirements have reached a reasonable state.
A W-model-like development model provides a number of advantages:
- More time to plan and specify the test
- Extra test-related review of documents and code
- More time to set up the test environment(s)
- Better chance of being ready for test execution as soon as something is ready to test
Iterative and Incremental Models
In iterative and incremental models the strategy is that frequent changes should, and will, happen during development. To cater for this, the basic processes are repeated in shorter cycles, called iterations. These models can be seen as a number of mini W-models; testing is, and must be, incorporated in every iteration within the development life cycle.
This is how we could illustrate an iterative or incremental development model.
The goals of an iterative model are achieved through various prototypes or subproducts. These are developed and validated in the iterations. At the end of each iteration an operational (sub)product is produced, and hence the product is expanding in each iteration. The direction of the evolution of the product is determined by the experiences with each (sub)product.
Note that the difference between the two model types discussed here is:
In iterative development the product is not released to the customer until all the planned iterations have been completed.
In incremental development a (sub)product is released to the customer after each iteration.
The assumptions for an iterative and incremental model are:
- The customer cannot express exactly what he or she wants.
- The requirements will change.
- Reviews are done continuously for control and feedback.
- Fast and continuous customer feedback;
- Floating targets for the product;
- Focus on the most important features;
- Frequent releases.
These models are suited for a class of applications where there is a close and direct contact with the end user, and where requirements can only be established through actual operational experience.
A number of more specific iterative models are defined. Among these the most commonly used are the RAD model and the Spiral model.
The RAD model (Rapid Application Development) is named so because it is driven by the need for rapid reactions to changes in the market. James Martin, consultant and author, called the "guru of the information age", was the first to define this model. Since then the term RAD has more or less become a generic term for many different types of iterative models.
The original RAD model is based on development in timeboxes, in a few (usually three) iterations, on the basis of a fundamental understanding of the goal achieved before the iterations start. Each iteration basically follows a waterfall model.
When the last iteration is finished, the product is finalized and implemented as a proper working product to be delivered to the customer.
Barry Boehm, TRW Professor of Software Engineering at University of Southern California, has defined a so-called Spiral Model. This model aims at accommodating both the waterfall and the iterative model. The model consists of a set of full cycles of development, which successively refines the knowledge about the future product. Each cycle is risk driven and uses prototypes and simulations to evaluate alternatives and resolve risks while producing work products. Each cycle concludes with reviews and approvals of fully elaborated documents before the next cycle is initiated.
The last cycle, when all risks have been uncovered and the requirements, product design, and detailed design approved, consists of a conventional waterfall development of the product.
In recent years a number of incremental models, called evolutionary or agile development models, have appeared. In these models the emphasis is placed on values and principles, as described in the "Manifesto for Agile Software Development." Its values are:
Individuals and interactions are valued over processes and tools
Working software is valued over comprehensive documentation
Customer collaboration is valued over contract negotiation
Responding to change is valued over following a plan
One popular example of these models is the eXtreme Programming model (XP). In XP, one of the principles is that the tests for the product are developed first; the development is test-driven.
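Test-driven development can be sketched as follows. The leap-year function is a hypothetical feature chosen only to illustrate the principle: the test class is written first, and the function is then implemented with just enough logic to make the tests pass.

```python
# Test-driven sketch in the XP spirit: the test is written before the
# code it exercises. is_leap_year is a hypothetical feature.

import unittest

def is_leap_year(year):
    # implemented after the tests below, just enough to make them pass:
    # divisible by 4, except century years not divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # conceptually, this class existed before is_leap_year did
    def test_leap_years(self):
        self.assertTrue(is_leap_year(2000))
        self.assertTrue(is_leap_year(2024))

    def test_non_leap_years(self):
        self.assertFalse(is_leap_year(1900))
        self.assertFalse(is_leap_year(2023))
```

Running `python -m unittest` before the function exists fails; the failing tests then drive the implementation.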
The development is carried out in a loosely structured small-team style. The objective is to get small teams (3–8 persons) to work together to build products quickly while still allowing individual programmers and teams freedom to evolve their designs and operate nearly autonomously.
These small teams evolve features and whole products incrementally while introducing new concepts and technologies along the way. However, because developers are free to innovate as they go along, they must synchronize frequently so product components all work together.
Testing is perhaps even more important in iterative and incremental development than in sequential development. The product is constantly evolved and extensive regression testing of what has previously been agreed and accepted is imperative in every iteration.
User acceptance testing is:
- the official tests carried out on the system to ensure that it meets the acceptance criteria before the system is put into production (most of the time it is performed by the users/clients);
- the incremental process of approving or rejecting the system during development and maintenance.
Acceptance testing checks the system against the user requirements. It is performed by real people using real data and real documents to ensure the ease of use and functionality of the system. Users who understand the business functions run the tests as specified in the acceptance test plans; the installation procedures and the hard-copy and on-line help of the user documentation are also reviewed for ergonomics and accuracy. The testers/users formally document the results of each test and provide error reports and correction requests to the developers.
A myth in user acceptance testing is that passing the UAT proves both that the system is fit for use and that the development process was good enough.
Nowadays agile and incremental software development models are widely used. In these models acceptance testing should be a continuous activity: it needs to be woven into the development process, and appropriate corrections need to be made whenever the product fails the acceptance criteria.
Performing acceptance testing during development allows:
- Early detection of software problems.
- Early attention to the needs of the users of the software being developed.
- Users to be involved in defining the acceptance criteria.
- Decisions to be based on the test results.
Performance testing
Performance testing is performed to evaluate the performance of individual system components in a specific situation. It is a very broad term and includes load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing, and so on. This type of testing usually does not have a simple pass or fail outcome. It is mainly done to set benchmarks and standards for the application in terms of concurrent users, throughput, server response time, latency, render response time, and so on. In other words, it is a technical and formal evaluation of the responsiveness, speed, scalability, and stability characteristics of an application.
Load testing
Load testing is a subset of performance testing. It constantly increases the load on the application under test until the application reaches its limit. The main purpose of load testing is to identify the upper limit of the system in terms of the database, hardware, network, and so on. A common goal of load testing is to define the SLAs for the application. An example of a load test might be:
Running multiple applications simultaneously on a computer: start with one application, then start a second, then a third, and so on, and observe how the performance of the computer changes.
Endurance testing is also part of load testing; it is used to calculate metrics such as mean time between failures (MTBF) and mean time to failure (MTTF).
Load testing helps to determine:
- Throughput
- The peak production load
- The adequacy of the hardware environment
- Load-balancing requirements
- How many users the application can handle with optimal performance
- How many users the hardware can handle with optimal performance
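The idea of stepping up the load until the system's limit is found can be sketched as below. Everything here is a made-up model: `SLA_MS` is a hypothetical service-level target, and `simulate_response_time` stands in for the real measurements a load-testing tool would take.

```python
# Load-test sketch: step up a simulated user count until the (made-up)
# response-time model breaches the SLA; the last passing step is the
# system's upper limit. In practice a tool would replace
# simulate_response_time with actual measurements under real load.

SLA_MS = 500  # hypothetical service-level agreement: 500 ms max

def simulate_response_time(users):
    # stand-in for a real measurement: latency grows with load
    return 100 + 0.05 * users ** 1.2

def find_upper_limit(start=100, step=100, max_users=10_000):
    """Increase the load step by step and return the highest user
    count at which the response time still meets the SLA."""
    limit = 0
    for users in range(start, max_users + 1, step):
        if simulate_response_time(users) > SLA_MS:
            break  # limit reached; stop ramping up
        limit = users
    return limit
```

The returned figure is exactly the kind of number load testing exists to produce: how many users the system can handle while still meeting its SLA.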
Stress testing
Stress testing is done to evaluate the behavior of the application beyond normal or peak load conditions. It essentially tests the functionality of the application under very heavy loads. The problems it uncovers are usually related to timing issues, memory leaks, race conditions, and so on. Some experts also call this type of testing fatigue testing. Sometimes it is difficult to set up a controlled environment before running the test. An example of a stress test is as follows:
A banking application may be rated for a maximum load of 20,000 concurrent users. Increase the load to 21,000 and make some transactions, such as deposits or withdrawals. Once the transactions are made, the application server synchronizes them with the ATM banking database server. Now check whether, with 21,000 users, the synchronization completed successfully. Then repeat the same test with 22,000 concurrent users, and so on.
Spike testing is also part of stress testing; it is executed by repeatedly loading the application with sudden bursts well beyond the normal production load for short periods.
Stress testing can help to determine:
- Errors and slowness at peak user loads
- Security loopholes that only appear under heavy loads
- How the hardware behaves under heavy loads
- Data corruption problems under heavy loads
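The banking example above can be sketched as a toy simulation. The `System` class and its rated capacity are entirely hypothetical; the point is the stress-testing question being asked: when pushed past its limit, does the system degrade gracefully (every transaction either committed or cleanly rejected) instead of corrupting data?

```python
# Stress-test sketch for the banking example: push a simulated system
# past its rated capacity of 20,000 concurrent users and check that it
# fails gracefully rather than losing or corrupting transactions.
# The System class is a stand-in, not a real API.

RATED_CAPACITY = 20_000

class System:
    def __init__(self):
        self.committed = 0   # transactions safely synced to the database
        self.rejected = 0    # overflow requests turned away cleanly

    def handle(self, concurrent_users):
        accepted = min(concurrent_users, RATED_CAPACITY)
        self.committed += accepted
        self.rejected += concurrent_users - accepted

def stress(levels=(21_000, 22_000, 23_000)):
    """Apply successively heavier loads and verify graceful degradation."""
    system = System()
    for users in levels:
        system.handle(users)
    # every request must be accounted for: committed or cleanly rejected
    assert system.committed + system.rejected == sum(levels)
    return system.rejected
```

A real stress test would, of course, drive an actual application and inspect the database for the data-corruption problems listed above; the simulation only shows the shape of the check.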