A solid testing strategy provides the framework necessary to implement your testing methodology. A separate strategy should be developed for each system being built, taking into account the development methodology being used and the specific application architecture.

The heart of any testing strategy is the master testing strategy document. It aggregates all the information from the requirements, system design and acceptance criteria into a detailed plan for testing. A detailed master strategy should cover the following:

Project Scope

Restate the business objective of the application and define the scope of the testing. The statement should be a list of activities that will be in scope or out of scope. A sample list would include:

* List of software to be tested
* Software configurations to be tested
* Documentation to be validated
* Hardware to be tested

Test Objectives

The system under test should be measured by its compliance with the requirements and the user acceptance criteria. Each requirement and acceptance criterion must be mapped to specific test plans that validate and measure the expected results for each test performed. The objectives should be listed in order of importance and weighted by risk.

Features and Functions to be Tested

Every feature and function must be listed for test inclusion or exclusion, along with a description of the exceptions. Some features may not be testable due to a lack of hardware, a lack of control, and so on. The list should be grouped by functional area to add clarity. The following is a basic list of functional areas:

* Backup and recovery
* Workflow
* Interface design
* Installation
* Procedures (users, operational, installation)
* Requirements and design
* Messaging
* Notifications
* Error handling
* System exceptions and third-party application faults

Testing Approach

The approach provides the detail necessary to describe the levels and types of testing. The basic V-Model shows what types of testing are needed to validate the system.
More specific test types include functionality, performance testing, backup and recovery, security testing, environmental testing, conversion testing, usability testing, installation and regression testing. The specific testing methodology should be described, and the entry/exit criteria for each phase noted in a matrix by phase. A project plan that lists the resources and schedule for each testing cycle should also be created, mapping the specific testing tasks to the overall development project plan.

Testing Process and Procedures

The order of test execution and the steps necessary to perform each type of test should be described in sufficient detail to provide clear input into the creation of test plans and test cases. Procedures should include how test data is created, managed and loaded. Test cycles should be planned and scheduled based on system availability and deliverable dates from development. All application and environmental dependencies should be identified along with the procedures necessary to gain access to all the dependent systems.

Test Compliance

Every level of testing must have a defined set of entry/exit criteria that are used to validate that all prerequisites for a valid test have been met. All mainstream software testing methodologies provide an extensive list of entry/exit criteria and checklists. In addition to the standard list, additional items should be added based on specific testing needs. Some common additions are environmental availability, data availability, and validated code that is ready to be tested.

Each level of testing should define specific pass/fail acceptance criteria, to ensure that all quality gates have been validated and that the test plan focuses on developing tests that validate the specific criteria defined by the user acceptance plan.

Testing Tools

All testing tools should be identified and their use, ownership and dependencies defined. The tools category includes manual tools, such as templates in spreadsheets and documents as well as automated tools for test management, defect tracking, regression testing and performance/load testing. Any specific skill sets should be identified and compared against the existing skills identified for the project to highlight any training needs.

Defect Resolution

A plan to address the resolution of failed tests needs to be created that lists the escalation procedures used to seek correction and retest of the failed tests, along with a risk mitigation plan for high-risk tests. Defect tracking should include basic metrics for compliance based on the number and type of defects found.

Roles and Responsibilities

A matrix listing the roles and responsibilities of everyone involved in the testing activities, along with the anticipated amount of their time allocated to the project, must be prepared.

Process Improvement

The entire testing process should be focused on process improvement. The strategy should list ways to monitor progress and provide constant feedback. This feedback can serve to enhance the process, deliverables and metrics used in the testing. Root cause analysis should be performed on all reported defects to help isolate the true nature of the problem and prevent the same defects from recurring.

Deliverables

All deliverables should be defined and their location specified. Common deliverables are test plans, test cases, test scripts, test matrix and a defect log.

Schedule

All testing activities should be combined into one master testing schedule. The schedule should include an estimate of time for each task and the dependencies between tasks. Testing resources should be assigned to each task, and quality gates should be listed to ensure oversight of the entire process.

Environmental Needs

All the requirements of the testing environment need to be listed. Common ones include a description of the environment's use, management, hardware and software, specific tools needed, data loading and security requirements.

Resource Management

The skills of all personnel involved in the testing effort need to be assessed and the gaps noted so that a comprehensive training program can be designed. Specialty roles that cannot be filled by in-house staff will require job descriptions and budgeting.

Risk and Contingencies

Planning for risk in advance and ways to mitigate it are essential for a robust strategy. A risk assessment that is prioritized by severity of risk and covers technology, resource, schedule and environmental issues should feed a detailed plan to mitigate each red flag.

Approvals and Workflow

All items on the critical path must go through an approval cycle. The procedures for approval and escalation must be well defined and assigned to resources prior to the start of the testing.

The above covers the main sections of a well-drafted and documented testing strategy. The more detail you include in the strategy document, the less ambiguity and chance for deviation there will be throughout the project.

The completion of the strategy signals the beginning of the test planning phase. For each type of testing identified in the master test strategy there should be a test plan identifying the components to be tested, the location of the test data, the test environment needs, the test procedures, the resources required, and the test schedule. For each plan a series of test conditions should be identified so that test cases with expected results can be generated for later execution.

The FileSystemObject (FSO) object model lets you use the familiar object.method syntax, with a rich set of properties, methods and events, to process folders and files.

Use this object-based tool with:

* HTML, to create Web pages
* Windows Scripting Host, to create batch files for Microsoft Windows
* Script Control, to provide scripting capability to applications developed in other languages

Because client-side use of the FSO raises serious security concerns by potentially exposing a client's local file system to unwanted access, this documentation assumes the FSO object model is used in scripts executed by web pages on the server side. Accordingly, the default Internet Explorer security settings do not allow client-side use of the FileSystemObject. Overriding those defaults can expose a local computer to unwanted file-system access, which could compromise the integrity of system files and cause data loss or worse.

The FSO object model lets your server-side applications create, edit, move and delete files, and detect whether particular folders exist and, if so, where. You can also find out information about folders and files, such as their names and the dates they were created or last modified.

The FSO object model also makes file processing straightforward. When processing files, the main objective is to store data in a format that is space- and resource-efficient and easy to access. You must be able to create files, insert and modify data, and output (read) data. Because storing data in a database such as Access or SQL Server adds a significant amount of overhead to your application, storing your data in a text or binary file may be the most effective solution. You may prefer not to incur that overhead, or your data-access needs may not require the extra functionality of a full-featured database.

The FSO object model, which is contained in the Scripting type library (Scrrun.dll), supports text file creation and manipulation through the TextStream object. It does not yet support the creation or manipulation of binary files, although binary support is planned.
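
As a minimal sketch of working with the TextStream object (the file path below is only an example, and WScript.Echo assumes the script runs under Windows Scripting Host; a server-side ASP page would use Response.Write instead):

Const ForReading = 1

Dim fso, ts
Set fso = CreateObject("Scripting.FileSystemObject")

' Create a text file and write a line to it (True = overwrite an existing file)
Set ts = fso.CreateTextFile("C:\Temp\sample.txt", True)
ts.WriteLine "Hello from the FileSystemObject"
ts.Close

' Read the file back line by line
Set ts = fso.OpenTextFile("C:\Temp\sample.txt", ForReading)
Do While Not ts.AtEndOfStream
    WScript.Echo ts.ReadLine
Loop
ts.Close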

The advantages of keyword-driven testing using QTP are:

1. In QTP, keyword-driven testing lets you design your tests at the business level rather than at the object level. For example, QTP may record a single option selection in your application as several steps: a click on a button object, a mouse operation on a list object, then a keyboard operation on a list sub-item. You can create one appropriately named business-level keyword that represents all of these lower-level operations in a single step (see the sketch after this list).

2. By pushing technical operations, such as a synchronization statement that waits for client-server communication to finish, down into the keyword level, tests become easier to read and easier for less technical application testers to maintain when the application changes.

3. Keyword-driven testing naturally leads to a more effective division of test asset maintenance. It allows automation experts to focus on maintaining the objects and functions, while application testers focus on maintaining the test structure and design.

4. When recording tests, you may not notice that new objects are being added to the local object repository. This can lead to many testers managing local object repositories that contain copies of the same objects. When you use a keyword-driven methodology, you select the objects for your steps from the existing object repository. When you need a new object, you can add it to your local object repository temporarily, but you are also aware that you must add it to the shared object repository for future use.

5. When you record a test, QTP enters the appropriate objects, methods and argument values for you. It is therefore possible to create a test with little preparation or planning. While this makes it easier to create tests quickly, such tests are harder to maintain when the application changes and often require re-recording large parts of the test.

6. When you use a keyword-driven methodology, you select from existing objects and operation keywords. Therefore, you must be familiar with both the object repositories and the function libraries that are available. You must also have a good idea of what you want your test to look like before you start building the steps. This usually results in well-planned and well-structured tests, which also makes long-term maintenance easier.

7. Automation experts can add objects and functions based on detailed product specifications even before a feature has been added to the product. Using keyword-driven testing, you can begin developing tests for a new product or feature earlier in the development cycle.
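
As a rough illustration of point 1, a business-level keyword can be a function that wraps several low-level QTP steps into a single readable call. The window, object and function names below are hypothetical placeholders, not part of any real application:

' Hypothetical business-level keyword wrapping several low-level steps
Public Function PlaceOrder(productName, quantity)
    Window("Orders").WinButton("New Order").Click
    Window("Orders").Dialog("Order Entry").WinList("Products").Select productName
    Window("Orders").Dialog("Order Entry").WinEdit("Quantity").Set quantity
    Window("Orders").Dialog("Order Entry").WinButton("OK").Click
End Function

' A test step then reads at the business level:
PlaceOrder "Laptop 15-inch", "2"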

Ans 1: Use the DHTML properties of the elements. Accessing a link's native object (for example object.attributes or object.currentStyle) gives you all the DHTML properties of that particular element in a collection.
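
A minimal sketch of that idea, assuming the Web add-in and Internet Explorer (the browser and page names are placeholders), uses ChildObjects to collect the links and the native currentStyle DHTML property to read their color and font:

' Enumerate all links on the page and print each one's color and font
Dim oDesc, allLinks, i
Set oDesc = Description.Create()
oDesc("micclass").Value = "Link"

Set allLinks = Browser("MyApp").Page("MyApp").ChildObjects(oDesc)
For i = 0 To allLinks.Count - 1
    Print allLinks(i).GetROProperty("innertext") & ": " & _
          allLinks(i).Object.currentStyle.color & " / " & _
          allLinks(i).Object.currentStyle.fontFamily
Next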

Ans 2: HKEY_LOCAL_MACHINE\SOFTWARE\Mercury Interactive\QuickTest Professional\Configuration_UI\ScriptMgr

Create a new String value with name AltProgID and value "Mercury.JSScriptMgr" and change the value of "Create" to 1

Create a new child key inside the key
HKEY_LOCAL_MACHINE\SOFTWARE\Mercury Interactive\QuickTest Professional\Configuration_UI\ScriptConstants
{0ECD5785-BE7D-11d4-8EAF-11135659EC56}

Repeat the same steps for the below key
HKEY_LOCAL_MACHINE\SOFTWARE\Mercury Interactive\QuickTest Professional\Configuration

Open
C:\Program Files\Mercury Interactive\QuickTest Professional\bin\QTEditor.ini
Add the below parameter to [General options]
AllowJavaScript = 1
DisableVBScript = 1
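
Separately from the registry change above, a commonly used workaround for running a piece of JavaScript against a web page, assuming Internet Explorer and the Web add-in (the browser and page names are placeholders), is to go through the page's native DOM:

' Run a JavaScript snippet through the IE DOM of the page under test
Browser("MyApp").Page("MyApp").Object.parentWindow.execScript "alert('Hello from JavaScript');"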

Ans 3: From a QTP point of view, we automate applications for which a proper add-in exists. In a few cases we can use the Win32 API and shell programming to achieve automation without one.

For example, use a Win32 query to get the handle or process ID of the application and then use an API call to act on it. This is a workaround, not a full solution.
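
One hedged sketch of that workaround uses QTP's built-in Extern object to declare and call Win32 API functions; the window class and title below are placeholders:

' Declare the Win32 API calls we need
Extern.Declare micHwnd, "FindWindow", "user32.dll", "FindWindowA", micString, micString
Extern.Declare micLong, "SetForegroundWindow", "user32.dll", "SetForegroundWindow", micHwnd

' Get the window handle and act on it through the API
Dim hWnd
hWnd = Extern.FindWindow("Notepad", "Untitled - Notepad")
If hWnd <> 0 Then
    Extern.SetForegroundWindow hWnd
End If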

Ans 4: Disadvantages of Smart Identification: there are many, chief among them that it is time consuming, and in a dynamic environment it sometimes yields wrong results.

If you enable Smart Identification and a particular object description is mismatched, QTP will take much more time to recognize that object, and to identify which object description was mismatched you have to review the QTP results.

If you disable Smart Identification, QTP will simply throw an error on that statement.

Ans 7: You can record, but not as a web application; Firefox is recorded as a standard Windows application.

ex:
Window("Mozilla Firefox").WinObject("MozillaWindowClass").Type "naveen"

Ans 8: Repository parameters enable you to specify that certain property values should be parameterized, but leave the actual parameterization to be defined in each test that is associated with the object repository that contains the parameterized test object property values.
Repository parameters are useful when you want to create and run tests on an object that changes dynamically.


RepositoriesCollection is a collection object that enables you to programmatically manage the run-time collection of shared object repository files associated with the current action.

For more details, refer to the QTP Help file.
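
As a short sketch of the difference in practice (the .tsr path is only an example): repository parameters are defined inside the shared object repository and filled in per test, whereas RepositoriesCollection is driven from code at run time:

' Attach a shared object repository to the current action at run time
RepositoriesCollection.Add "C:\QTP\Repositories\Login.tsr"
MsgBox "Shared repositories loaded: " & RepositoriesCollection.Count

' ... steps that use objects from Login.tsr ...

RepositoriesCollection.RemoveAll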

Ans 9: Check for the .VOT files in the dat folder of the installation.

Ans 10: NO

1. How do you find the color and font of all the links on a page?

2. How can you execute JavaScript in QTP?

3. How will you automate a window that does not return any property values when you spy on it?

4. What are the disadvantages of Smart Identification?

5. What are the disadvantages of a recovery scenario?

6. What are the basic prerequisites required to automate an application based on Infragistics controls?

7. Can I record a web-based application on Mozilla?

8. What is the difference between a Repository Parameter and the Repositories Collection?

9. How do you copy the Virtual Object Manager from one computer to another?

10. Can we use the GetROProperty method to get the index or location of a link in a web application?

The answers to these questions are given above.

The QTP script below displays a dialog box very similar to the MsgBox function, except that it closes automatically after a specified number of seconds.

The script is shown below:

Set WshShell = CreateObject("WScript.Shell")

WshShell.Popup "QTP", 10, "Hello"

In the script above, the title of the dialog box is "Hello", the text inside the dialog box is "QTP", and it closes after 10 seconds, as coded above.

The following example shows the use of the PressKey method to send keyboard input to an application using HP QuickTest Professional (QTP).

'A simple example that presses a key using Mercury DeviceReplay.

Set obj = CreateObject("Mercury.DeviceReplay")

Window("Notepad").Activate

obj.PressKey 63

The PressKey method takes a numeric code for the key (these values are keyboard scan codes rather than true ASCII values).
63 is the code for F5.

The codes for the other function keys are listed below:
F1 - 59
F2 - 60
F3 - 61
F4 - 62
F5 - 63
F6 - 64
F7 - 65
F8 - 66
F9 - 67
F10 - 68
F11 - 87
F12 - 88

1. QuickTest Professional 10.0 includes a large set of new Quality Center 10.00 integration capabilities, such as: 1) you can keep versions of assets and baselines; 2) an Asset Comparison Tool lets you compare versions of individual QTP assets; 3) an Asset Viewer lets you view an earlier version of a QTP asset; and there is much more, such as a tool to upgrade all QuickTest assets to use these new features.

[QuickTest assets include tests, components, application areas, and the resources associated with them, such as shared object repositories, function libraries, recovery scenarios, and external data tables.]

2. You can use the File > Save Resources with Test command to save a standalone local copy of your test together with all of its resource files and called actions. The benefit is portability.

3. The LoadAndRunAction statement allows you to load and run an action only when the step runs, so that these actions are not loaded every time you open a test.

4. You can centrally manage your work items and TODO tasks in the To Do pane, which lets you create and track user-defined tasks and view a compiled list of the TODO comments from your tests and their associated function libraries and components.

5. You can now develop your own bitmap checkpoint comparison algorithms. A comparator is a COM object that performs the bitmap checkpoint comparison to suit your testing requirements. QuickTest then receives and reports the results that the custom comparator returns.

6. There are also several improvements to run results analysis: QuickTest test results can now be exported to MS Word and PDF; an image file can be used as a fourth argument to the Reporter.ReportEvent method; you can select Go to Step in QuickTest when you right-click any node in the test results to show that step in the QuickTest test document; and the run results for tests and components executed as part of a Quality Center test set now include the Quality Center server and project name.

7. The new Delphi Add-in allows you to test Win32 Delphi VCL controls.

8. File > Settings > Local System Monitor allows you to track the local (client-side) computer resources used by the application instance that you are testing during a run session.

There are some other improvements too:

* You can upgrade from QTP 9.5 to QTP 10 without uninstalling 9.5 first.
* The IntelliSense feature is improved.
* The design and functionality of the debugger are improved.
* Maintenance Run Mode now includes new object identification solutions.
* Editing and management operations on actions can now be performed from automation scripts.
* A new look for some dialog boxes.
* Improved Web Add-in extensibility and much more.

HP QuickTest Professional (QTP) 10 has a powerful set of integration capabilities with HP Quality Center 10.0.

Some of these integration capabilities are:

* A new resources and dependencies model to store and manage shared assets
* Support for versioning of assets and baselines
* An Asset Comparison Tool to compare versions of individual QuickTest assets, and an Asset Viewer to display a previous version of a QuickTest asset
* A special tool for Quality Center administrators that upgrades all QuickTest assets to use these new features

QTP assets include tests, components, application areas, and the resources associated with them, such as shared object repositories, function libraries, recovery scenarios, and external data tables.

The new Quality Center resources and dependencies model lets you store your tests, components and resources in a way that lets you manage the relationships between them and analyze the impact on your assets when an asset changes.

If you work with Quality Center Premier Edition, you can also import and share resources between different projects and synchronize the assets in the two projects when changes are made. This feature also allows you to reuse your existing assets instead of creating new assets each time you start a new project. For example, you can create a set of template assets to use as a basis for new projects.

Using the resources and dependencies model

In previous versions of QTP and Quality Center, test resource files (such as shared object repositories, function libraries, recovery scenarios and external data tables) were stored in Quality Center as attachments, while the files associated with an application area were stored as independent resource files in the Business Components module.

Quality Center 10.00 introduces a new Test Resources module. This module allows you to store all of these resources as separate entities that are linked to their owning tests and marked as dependencies. Tests or actions that are called by other tests are also linked as dependencies.

When you select a QTP test, business process test, or individual component in the Quality Center test or component tree, you can access these resources and dependent tests in a new Dependencies tab. The Dependencies tab shows all the Quality Center entities used by your test document and all the entities that use it. For example, if your QTP test is associated with two function libraries and the actions in your test are associated with three object repositories, these entities are displayed in the Using table of the test's Dependencies tab. If any action in your test is called by another test, the calling test is displayed in the Used By table.

In QTP, you can also view the dependencies of a specific action in the Used By tab of the Action Properties dialog box.

Quality Center recognizes that assets are linked together in owner and dependency relationships and ensures that these important relationships are maintained when you import or create baselines, rename or move resources, delete resources, or perform other operations that may affect them.

Note: If necessary, you can continue to use the attachments model for all or some of your QTP assets. However, if you use the old model, you will not be able to take advantage of several features associated with the resources and dependencies model.

I have created a script in HP QuickTest Professional that automatically adds your defects to HP Quality Center.

Dim TDConnection
Set TDConnection = CreateObject("TDApiOle.TDConnection")

TDConnection.InitConnection "http://yovav/tdbin" ' URL for the DB
TDConnection.ConnectProject "TD76","bella","pino" ' Valid login information

If TDConnection.Connected Then
MsgBox("Connected to " + chr (13) + "Server " + TDConnection.ServerName _
+ chr (13) +"Project " + TDConnection.ProjectName )
Else
MsgBox("Not Connected")
End If

'Get the IBugFactory
Set BugFactory = TDConnection.BugFactory

'Add a new empty bug
Set Bug = BugFactory.AddItem (Nothing)

'fill the bug with relevant parameters
Bug.Status = "New"
Bug.Summary = "Connecting to TD"
Bug.Priority = "4-Very High" ' depends on the DB
Bug.AssignedTo = "admin" ' user that must exist in the DB's users list
Bug.DetectedBy = "admin" ' user that must exist in the DB's users list

'Post the bug to DB ( commit )
Bug.Post

The new local system monitoring functionality in QTP (File > Settings > Local System Monitor) allows you to track the local (client-side) computer resources used by the application instance that you are testing during a run session.

You can monitor a number of different system counters to see the effect your application has on the system. You can also set upper limits for the counters; if any of these counters exceeds its specified limit, the test fails.

In addition, you can export the data from the System Monitor tab to a variety of file types.

The results generated for the counters you are monitoring are displayed as a line chart in a dedicated System Monitor tab of the Test Results window.


The points in the chart are synchronized with the steps in the run results tree. When you select a step in the tree, the Current Step indicator (a red line) jumps to the corresponding location in the chart.

You can also export the data from the chart so that it can be analyzed further in other programs.

Improving portability by saving copies of tests and their resource files

Tests and their resource files are often stored on a network drive or in Quality Center. However, you may need to open or run a test when you cannot access the network drive or QC. For example, you may need to create a portable copy of a test for use when traveling to other sites.

You can save a standalone copy of your test and its resource files to a local drive or another storage device using the File > Save Resources with Test command. When you save a test in this manner, QTP saves a copy of your test, all of its associated resource files, and any called actions.

Usually, when you insert a call to an external action, the call becomes part of the test, and the called action is loaded each time you open the test in QTP.

In some situations, you may instead want to use the LoadAndRunAction statement to load an action only when the step runs, and then run it.

This is useful, for example, if you use multiple conditional statements that call external actions and you do not want to load all of these actions each time you open the test, because they may not be needed in a given run session.
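
A minimal sketch of such a conditional call follows; the test path, action name and condition are placeholders:

' Load and run an external action only when it is actually needed
Dim needsCleanup
needsCleanup = True    ' placeholder condition

If needsCleanup Then
    LoadAndRunAction "C:\QTP\Tests\MaintenanceTest", "CleanupEnvironment", oneIteration
End If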

Recently, Hewlett-Packard (HP) simplified the automation of functional and regression testing with the release of the HP QuickTest Professional 10 (QTP 10) functional test automation tool. An evaluation version of HP QTP 10 is now available for free download and training.

QuickTest Professional 10 works on Windows, VMware and Virtual PC, supports Internet Explorer 8 and Firefox 3, and offers additional add-ins for Java, .NET, Oracle, Delphi, Siebel, PeopleSoft and SAP.

You (or a third party) can now develop custom comparators for HP QTP bitmap checkpoints. A comparator is a COM object that performs the bitmap comparison for the QuickTest checkpoint to suit your testing requirements. For example, you could define a custom comparator that allows a bitmap checkpoint to pass even if the image in the application differs by up to a specified amount when the checkpoint runs.

After you install and register a comparator on the QTP computer, it becomes available for selection in the Bitmap Checkpoint Properties dialog box. The dialog box provides a place to specify any additional configuration preferences your comparator expects.

When QTP runs a bitmap checkpoint that uses a comparator, it passes the expected and actual bitmaps to the comparator object. It then receives and reports the results that the custom comparator returns.

If you use the Maintenance Run Wizard to detect updates needed for your steps or the object repository, you can use the To Do pane to monitor and manage the TODO comments added during the run session.

You can now also export your tasks and the TODO comments from the To Do pane to an Excel (XLS), CSV (comma-separated values) or XML file.
In the new To Do pane now available in HP QuickTest Professional (QTP), you can create and manage user-defined tasks and view a compiled list of the TODO comments from your tests and their associated components and function libraries.

For example, you can use the Tasks tab to give instructions to someone else during a handover, or to create reminders for yourself. The Tasks tab provides checkboxes so you can mark each task as you complete it. In the Comments tab, you can view and sort all your TODO comments. You can also go directly to a selected TODO comment in the test document.

Test Results Analysis with New Reporting Capabilities

QuickTest 10.0 includes a powerful set of new reporting options to help you analyze and manage your run results more thoroughly and effectively. These include:

* Jump to step. When you want to know more about a particular node in the Test Results window, right-click it and select Go to Step in QuickTest. The main QuickTest window comes into focus and the cursor moves to the relevant step. For more information, see Access a step.

* Test results export to Microsoft Word or PDF. Besides HTML, you can now choose to save your HP QuickTest Professional results in either Microsoft Word or PDF format. You can then share this information, along with all the local system monitoring data, with your developers and performance test teams. For more information, see Exporting test results.

* New Image Reporting Options:

* Add images to the QTP results. When you use the Reporter.ReportEvent method to add information to the run results, you can now specify an image file as the fourth argument. When you view the results, the image is displayed in the Result Details tab of the Test Results window. For example, you can include an image captured by a CaptureBitmap step in the run results (see the sketch after this list). For more information, see the ReportEvent method in the Utilities section.

* See bitmap differences in QTP bitmap checkpoints. In addition to the expected and actual bitmap images displayed in the Test Results window, you can also choose to see the differences between the two. The difference image is a black-and-white bitmap that contains a black pixel for each pixel that differs between the two images. For more information, see Analyzing QTP Bitmap Checkpoint results.

* Include images in exported and printed test results. When you select the Full Details option for printing or exporting, the document now includes all the images: screenshots of steps; the expected, actual and difference bitmaps for bitmap checkpoints; and the images sent to the run results using the ReportEvent method. For more information, see Printing test results.

* Additional details in Quality Center. The run results for tests and components executed as part of an HP Quality Center test set now include the Quality Center server and project name.
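
As a brief sketch of the image reporting option mentioned in the list above (the object names and file path are placeholders):

' Capture a bitmap of the page and attach it to the run results
Browser("MyApp").Page("MyApp").CaptureBitmap "C:\QTP\Results\login_page.png", True
Reporter.ReportEvent micDone, "Login page", "Screenshot captured after login", "C:\QTP\Results\login_page.png"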

Test Standard and Custom Delphi Objects Using the Delphi Add-in and Delphi Add-in Extensibility

The new Delphi Add-in allows you to test Delphi controls that were created in the Delphi IDE and are based on the Win32 VCL.

QTP Delphi Add-in Extensibility is an SDK (Software Development Kit) package that lets you develop support for applications that use third-party and custom Delphi controls not supported out of the box by the QuickTest Professional Delphi Add-in.

For more information, see Using the Delphi Add-in.

Versioning. In earlier versions of HP QTP and HP Quality Center, only a few version control options were available, and then only if your Quality Center server had the Version Control Add-in, which worked with third-party version control tools to check assets in and out. Now version control is fully integrated with Quality Center, and the site administrator can enable version control on a per-project basis.

When QTP is connected to a Quality Center project with version control support, you can check any QTP asset into or out of the version control database.

Baselining. In Quality Center, a project administrator can create baselines that provide "snapshots" of a project at different stages of development. In the Libraries tab of the Management module, the administrator first creates a library, which specifies the root folder whose data is to be included. The administrator then creates the actual baseline, which includes the last checked-in versions of all assets in the library. The administrator can also import and share baselined libraries with other Quality Center projects.

When a project reaches a milestone in its life cycle, the administrator can create a new baseline of the library's files.

In HP Quality Center, these baselines can be viewed and compared in their entirety. In QuickTest, you can view, check out, or compare the individual assets stored in any baselined library. This allows you to see an asset as it appeared at a particular stage in the project timeline.

Please Note:

Baselines are supported in HP Quality Center 10.00 Enterprise and Premier editions only. They are not supported in HP Quality Center Starter edition.

The QuickTest Asset Comparison Tool lets you compare two versions of a particular QuickTest asset, such as a test, function library, shared object repository, or recovery scenario. For example, baselined tests or shared projects may use different versions of the same resource. You can use the comparison tool to ensure that each test uses the correct version of its resources.

The QuickTest Asset Comparison Tool compares each element of the assets in a hierarchical view. The tool also lets you drill down to see a comparison of the assets associated with the asset. For example, when comparing two versions of a test, the comparison may indicate that two function libraries, a recovery scenario, and some action steps have changed between the two versions. You can then drill down to see a comparison of the two versions of any of these items.

The QuickTest Asset Viewer is very similar to the Asset Comparison Tool, but it is used to view the data for a single version of an HP QTP asset.

The QTP Asset Upgrade Tool for Quality Center allows you to upgrade, in one batch, all the QTP assets in a Quality Center project from an earlier QTP version to the current format and to convert test document attachments to the new resources and dependencies model. QTP assets include:

* QTP test documents, such as tests, components and application areas
* Test document attachments, such as function libraries, shared object repositories, and recovery scenarios

This tool is intended to be used only by the Quality Center administrator and must be run during the Quality Center upgrade process, before you start working with the Quality Center integration capabilities.

The QTP Asset Upgrade Tool for Quality Center is the only way to upgrade the QTP assets in a Quality Center 10.00 project from an earlier QTP version to the current format.