Enhancing Rational Functional Tester’s default synchronization can be as simple as adding a generic delay to your script. You can use the sleep() method (Sleep() function in the .NET version). This is available to you while recording or via coding.

When you are recording and notice that your application is slow to respond, you can turn to the Script Support Functions button on your toolbar. This is shown in the figure below.

Script Support Functions button on the Recording toolbar

Clicking this button gives you the option to place a sleep() method into your script, as shown in the figure below.

Script Support Functions Sleep tab

To use this function, you specify the number of seconds that you think your script needs to handle any latency issues with your application, and then click the Insert button. If you are using the Eclipse version of Rational Functional Tester, you see a line in your script that looks like the following (assuming a two-second delay):

sleep(2.0);
If you are using the .NET Studio version of Rational Functional Tester, you see:

Sleep(2.0)
Listing 3.1 shows what these lines look like in the context of a test script:


public void testMain(Object[] args)
{
    // Frame: ClassicsCD
    sleep(2.0);
}

Public Function TestMain(ByVal args() As Object) As Object
    ' Frame: ClassicsCD
    Sleep(2.0)
    Return Nothing
End Function

If you execute either of the scripts in Listing 3.1, execution pauses when it hits the sleep() method. The duration of the pause is determined by the number of seconds specified as the argument to the method (the number in the parentheses). In the case of the examples in Listing 3.1, execution pauses for two seconds.

You can also code these lines directly into your scripts. For instance, you might not have recorded these delays because you didn’t experience poor performance with your application while recording. However, upon playback of your scripts, you see periodic latency issues that surpass the default synchronization built into Rational Functional Tester. Using the playback logs, you can find out which object could not be found during playback. This enables you to go into your script and code in the necessary sleep() lines. Remember that you need to provide, as the argument to the method, the number of seconds to pause. This should be a floating-point number (for example, 2.0, 3.5, and so on).

The benefit of using the sleep() method is that you can now place extended synchronization capabilities right within your scripts, freeing you from the dependency of global playback settings. The downside of using the sleep() method is you are dealing with a static period of time. In other words, your script has to pause for the specified period of time regardless of whether or not the application responds faster than anticipated.
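One alternative to a static sleep() is a dynamic wait, which polls for a condition and resumes as soon as it holds; Rational Functional Tester provides waitForExistence() on test objects for this purpose. As a plain-Java sketch of the idea only, not RFT code, the class and method names here are illustrative:

```java
import java.util.function.BooleanSupplier;

/** Sketch of a dynamic wait: polls a condition instead of sleeping a fixed time. */
public class DynamicWait {

    /** Returns true as soon as the condition holds, false after timeoutMs. */
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs); // short poll instead of one long static delay
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 200 ms.
        boolean found = waitFor(() -> System.currentTimeMillis() - start > 200, 2000, 50);
        System.out.println(found); // true
    }
}
```

Because the loop wakes up every pollMs milliseconds, the script resumes almost immediately after the application responds, instead of always paying the full delay.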

The Storyboard Testing feature includes additional preferences you can modify. This section describes each preference and gives an example of how that preference affects your script when it is checked and when it is not.

Preferences for Application Visuals

The Storyboard Testing preference can be expanded in the left-hand navigation area of the Preferences dialog box. Expanding it shows a selection that enables you to change preferences for Application Visuals, as shown in the figure below.

The options within Storyboard Testing for Application Visuals

If this preference is unchecked, the other preferences are not available.

Enable Capturing of Application Visuals

By default, Rational Functional Tester captures application visuals while you are recording. If you click the check box to deselect this preference, application visuals are not recorded for any script you subsequently record. You need to rerecord the script to add application visuals to it. Deselecting this preference also causes the other preferences for Application Visuals to be grayed out.
Insert Data-Driven Commands

If you select to insert data-driven commands, a datapool is associated with your script during recording. As you interact with objects during recording, you can add the objects to the datapool as columns, and the statements include actions to retrieve the data from the datapool instead of using the data you type.

Show Verification Point Dialog

If you select this preference, the Verification Point wizard dialog box is presented when you select Data Verification Points using the Application View.

Enable Capturing of Verification on Test Data

Select this check box to capture verification on test data. During recording, Rational Functional Tester then captures and persists all the verification point data associated with the objects available on a page of the application under test. This allows you to insert Data Verification Points using the application visuals displayed in the Application View.

Storyboard Testing is enabled by default in Rational Functional Tester; you do not need to do anything to be able to use it unless it has been disabled in your installed copy of Rational Functional Tester.

How to Enable Storyboard Testing

To enable Storyboard Testing, first open the Preferences dialog box by clicking Window > Preferences. In the dialog box, expand Functional Test in the left navigation pane, and then click Simplified Scripting, as shown in the figure below. Finally, click the Apply button to save this preference. If you have no other preferences to change at this time, you can click the OK button to close the dialog box.

The option in Functional Test preferences to enable Storyboard Testing

Considerations for Enabling Storyboard Testing

When you check this preference, from that time forward, all new scripts you create (whether by recording or by editing an empty script) use Storyboard Testing. To return to the traditional perspective, uncheck this preference; all new scripts are then recorded and edited in the traditional manner until you change this preference again.

This preference has no effect on scripts that already exist; Rational Functional Tester stores the type of script (Storyboard Testing or traditional) with the script, and opens the appropriate views in the Functional Test Perspective. You can edit Storyboard Testing scripts alongside traditional scripts within the same running instance of Rational Functional Tester and as you switch between the different script types. As you select a different type of script, the Functional Test Perspective changes the views to match the type of script being edited.


Storyboard Testing is a new feature introduced in Rational Functional Tester version 8.1. Its purpose is to enable automated test creation by testers who might have significant subject matter expertise with the application under test but might not have programming skills.

Original look at Test Automation

Initially, all test automation was done by programming. To use those early tools, testers had to learn a programming language and become proficient at it. This was a barrier for testers who might have had expertise in the subject matter of the business, but who had no training in programming.

Record-and-playback technology enabled nonprogrammers to create automated test scripts and execute them. However, if the application being tested changed (as one would expect in the course of maintaining it), the test scripts often needed to be recorded again. Unless the automated script was executed frequently, the savings in a tester’s time decreased, because maintaining the test automation required almost as much time as executing the tests manually.

Wizards simplified some tasks that would otherwise require programming or reprogramming, but for other tasks the only recourse was to learn the language or assign the task to someone who already knew it. Again, the nontechnical user was shut out.

Whether the script was programmed or recorded, the fact that the script was represented as a program required the tester to visualize what the application looked like at each statement of the script, rather than by seeing the interface itself. This also tended to favor those with programming backgrounds because this kind of visualization is an essential part of a programmer’s training.

Overall, test automation required too much skill in programming. Subject matter experts needed an easier way to create test automation, one that would not require them to become coders.

Rational Functional Tester provides this easier way. Storyboard Testing enables nontechnical users to see their scripts in a human language instead of in a programming language.

This section describes the integrated technique called ScriptAssure®, Rational Functional Tester's object-recognition algorithm.

Property Value Weights

The success of an automated test is highly dependent on the robustness of the test script. Small changes to the application often force testers to adapt their scripts before they run successfully again. This reduces productivity, because maintenance must be applied to the test scripts before every successful run.

For example, the old record-and-playback tools recorded graphical interaction directly: the actual x, y coordinates of a selection were stored in the script. When a button was moved or a different screen layout was applied to the application, the script became useless. The result was a lot of rework and emergency maintenance to get scripts running again.

Rational Functional Tester instead recognizes objects in the application under test. This means that objects can be moved around or changed graphically. Recognition is based on all the properties of the object: not only the visible label (for example, OK) but the other object properties are recorded as well. When one or more properties change, Rational Functional Tester keeps running!

Sample 1: Login

An example is the GO button in a login screen, as shown in the figure below.

The login screen and GO button

The base GO button has the following properties:
  • ID = “GO”
  • TYPE = “submit”
  • NAME = “GO”
  • VALUE = “GO”
You can easily find these values in the HTML source of the button (similar properties exist in any programming environment):

<INPUT type="submit" name="GO" value="GO" id="GO">
When you record a script, the script contains the following line:

button_gOsubmit().click();
The button_gOsubmit is the object in the Script Explorer view, as shown in the figure below. A click is the action to be performed on that object.

Script Explorer view

Double-clicking the object in the tree opens the object map; here, you can interrogate the properties of the object, as shown in the figure below.

Test Object Map browser

When the script is run, it finds the object with all properties matching. The recognition algorithm is as simple as it is powerful: every mismatched property adds a penalty of 100 times that property's weight, and the object with the fewest penalty points is selected.

One Property Change

Suppose the visible label GO is changed to Login, as shown in the figure below. An automated test tool that recognizes only the visible labels of objects fails, resulting in the test process stopping and an urgent need for maintenance.

Changing one visible property

Rational Functional Tester uses the same object, or the one with minimal difference. In this case, Rational Functional Tester selects the Login button, despite the difference in label, position, and format. The Reset button is not selected because there is a difference in all properties.

Two Properties Changed

Now, in version 2 of the application, two properties are changed, as shown in the figure below. Does Rational Functional Tester find the button?

Changing two visible properties

With the default settings, Rational Functional Tester stores a message in the test log, as shown in the figure below. The Object Recognition is weak message indicates that Rational Functional Tester has found one comparable object, and it uses this object to continue execution. The log also shows a recognition (failing) score of 19000. The calculation follows the rule that every mismatch adds 100 times the property's weight: a miss on ID (weight 90) and on Value (weight 100) results in 90 × 100 + 100 × 100 = 19000. When no differences are found, the recognition score is 0.

Viewing the message in the test log
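The arithmetic behind this score can be sketched in a few lines of plain Java. This illustrates only the weighting rule, not RFT's implementation; the property names and weights mirror the example above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch of ScriptAssure-style penalty scoring. */
public class RecognitionScore {

    /** Each mismatched property adds 100 times its weight to the penalty. */
    static int penalty(Map<String, Integer> weights, Map<String, Boolean> matches) {
        int score = 0;
        for (Map.Entry<String, Integer> w : weights.entrySet()) {
            if (!matches.getOrDefault(w.getKey(), false)) {
                score += 100 * w.getValue();
            }
        }
        return score;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put(".id", 90);     // weights as reported in the test log
        weights.put(".value", 100);
        weights.put(".name", 100);

        Map<String, Boolean> matches = new LinkedHashMap<>();
        matches.put(".id", false);    // ID changed    -> 90 * 100 = 9000
        matches.put(".value", false); // VALUE changed -> 100 * 100 = 10000
        matches.put(".name", true);   // NAME still matches -> no penalty

        System.out.println(penalty(weights, matches)); // prints 19000
    }
}
```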

Three Properties Changed

In the next version of the application, three properties are changed, as shown in the figure below.

Changing three visible properties

Now the object to be found and the object available in the application differ too much. A window opens to ask the user for advice, as shown in the figure below.

Exception window

With the object browser, you can update the properties of the object to be found. The Playback Options become available just after you start a script. You can suppress the interactive window by deselecting the option Perform playback in interactive mode on the second screen of the Playback Options, as shown in the figure below.

Specify Playback Options

If you do not update the properties of the object, you get a similar error message, as shown in the figure below.

Test log with error message

Rational Functional Tester does find a candidate object, but it has a failing score of 28500, as described in the log. This is above the Last chance recognition score value, which is set to 20000 by default. If you increase this value to 50000, the script does find the Login button and logs a warning, as shown in the figure below.

Test log with warning message

Sample 2: Two Buttons

The previous examples clarify the behavior of ScriptAssure. The following example is somewhat more complex because there are two similar buttons, as shown in the figure below.

Sample 2: two buttons

When the script is run, which of the buttons is selected: the left GO button or the right one? Both have one changed property. Part of the HTML source of this screen is:

The object to be searched for is defined in the object map, as shown in the figure below.

Searched object in the object map

Calculate the penalty points of the first and the second GO button, as shown in Table 1.1 and Table 1.2.

Table 1.1: Table for the First Go Button

Table 1.2: Table for a Second Go Button

If you replay with the default ScriptAssure settings, you get the message AmbiguousRecognitionException, as shown in the figure below, because the two GO buttons are nearly identical.

Log file with two instances of the same test object

When you decrease the Ambiguous recognition scores difference threshold to 200, for example, the script continues, and the action is performed on the GO button with the lower penalty score. If you want to verify the properties of that object, look at what is created in the object map, as shown in the figure below.

Created object in the object map

When you execute the test script the next time, the order number is no longer 25. This results in penalty points, as shown in the figure below.

Log file displays a weak object recognition

By setting the weights of accessibleContext.accessibleName and text to 0, you get a full match, but the recognition power is weaker. A better approach is to apply a regular expression as the value. You can create one via the contextual menu on the value. In this case, use the digit pattern \d+, as shown in the figure below. Note that in a regular expression a literal period must be preceded by a backslash, because the period is a special character. For additional information about regular expressions, refer to Appendix B, “Regular Expressions in Rational Functional Tester.”

Property value with regular expression set

This sample is about the object to be searched for. For the changing value in the verification point, you can use a regular expression in the verification point.
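The same digit pattern can be exercised with standard Java regular expressions, which follow the syntax used in the object map (the class name here is illustrative):

```java
import java.util.regex.Pattern;

public class RegexValueMatch {
    public static void main(String[] args) {
        // \d+ matches any run of digits, so the property value may change
        // from run to run without breaking recognition.
        Pattern orderNumber = Pattern.compile("\\d+");
        System.out.println(orderNumber.matcher("25").matches());   // true
        System.out.println(orderNumber.matcher("1047").matches()); // true
        System.out.println(orderNumber.matcher("GO").matches());   // false
    }
}
```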

ScriptAssure Playback Settings

The recognition and the warning levels can be influenced with settings at Window > Preferences > Functional Test > Playback > ScriptAssure.

The standard visualization gives you two sliders to move in either direction, as shown in the figure below. To work directly with the scores from the error messages and the calculation described previously, you can use the advanced visualization.

ScriptAssure preferences

If you click the Advanced button, you get the page shown in the figure below.

Advanced ScriptAssure preferences

The ScriptAssure Advanced page has the following controls:
  • Maximum acceptable recognition score— Indicates the maximum score an object can have to be recognized as a candidate. Objects with higher recognition scores are not considered as matches until the time specified in Maximum time to attempt to find Test Object has elapsed.
  • Last chance recognition score— Indicates the maximum acceptable score an object must have to be recognized as a candidate, if Functional Tester does not find a suitable match after the time specified in Maximum time to attempt to find Test Object has elapsed. Objects with higher recognition scores are not considered.
  • Ambiguous recognition scores difference threshold— Writes an AmbiguousRecognitionException to the log if the scores of top candidates differ by less than the value specified in this field. If Rational Functional Tester sees two objects as the same, the difference between their scores must be at least this value to prefer one object. You can override the exception by using an event handler in the script.
  • Warn if accepted score is greater than— Writes a warning to the log if Rational Functional Tester accepts a candidate whose score is greater than or equal to the value in this field.
The Maximum time to attempt to find Test Object is defined in the general playback preferences of Rational Functional Tester.
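The interplay of these thresholds, once the search time has elapsed, can be sketched in plain Java. Only the last chance value (20000) comes from the text; the other thresholds here are assumptions for illustration, and this is not RFT's implementation:

```java
/** Sketch of the post-timeout ScriptAssure decision for the two best candidates. */
public class ScriptAssureDecision {
    static final int LAST_CHANCE = 20000;   // default per the text
    static final int AMBIGUOUS_DIFF = 1000; // assumed difference threshold
    static final int WARN_ABOVE = 10000;    // assumed warning level

    static String decide(int bestScore, int secondBestScore) {
        if (bestScore > LAST_CHANCE) {
            return "ObjectNotFoundException";       // nothing close enough
        }
        if (secondBestScore - bestScore < AMBIGUOUS_DIFF) {
            return "AmbiguousRecognitionException"; // two near-equal candidates
        }
        if (bestScore >= WARN_ABOVE) {
            return "accepted with warning";         // weak but usable match
        }
        return "accepted";
    }

    public static void main(String[] args) {
        System.out.println(decide(19000, 50000)); // accepted with warning
        System.out.println(decide(28500, 50000)); // ObjectNotFoundException
        System.out.println(decide(100, 150));     // AmbiguousRecognitionException
    }
}
```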

Which settings are useful? Start with the defaults, because they work well. If your user interface is more dynamic, you can increase the various values to acceptable levels. If you are doing acceptance testing, you can tighten the values and set the Warn if accepted score is greater than option to 1. You then always get a warning when something changes, but Rational Functional Tester continues to run.

You often record tests using specific values for input and expected values, which become hard-coded literals in the script. Even if you choose certain test data to be variable using the data-driven test wizard, you may not realize that other hard-coded values will later need to be changed. You can change static test data in a script into dynamic values by adding datapool variables. You can use datapools both for test input and for expected values (verification points).

You can add data-driven commands (commands that input test data from a datapool) using the Insert Data Driven Actions wizard as follows:

1. Get the application under test to the appropriate point where the test data should be added. You might want to play back the test script in debug mode, breaking (pausing) at the data input form or dialog.
2. Position the cursor on a blank line in the test script where you want to add the data driven commands.
3. Select the menu Script > Insert Data Driven Commands, which opens the Data Driven Commands wizard, as shown in the figure below. This is essentially the same as going into recording mode, except that you do not see the recorder monitor.

Insert data driven actions wizard

4. Complete the wizard to select the data input objects. You can find more about the Data Driven Commands wizard in the online Help.

If you had not already created or added a datapool for the test script, a new one is created. Note that this wizard adds new lines of code that set (input) datapool values into the test objects selected in the wizard. If you had already recorded typing or selecting values, the script then sets the values twice. In this case, you can either delete the redundant script lines or replace the literal test input values as described in the following paragraphs.

If you already have a shared datapool then you can add it to your test script as follows:

1. Right-click on the Test Datapool folder in the Script Explorer and select Associate with Datapool.
2. Select one datapool, and then click OK. You can then see the datapool listed in the Script Explorer. You can associate only one datapool with a test script.

At this point, you have associated only the datapool with the test script. You need to replace the literal strings (hard-coded values) with values from the datapool. You do this as follows:

1. Select the menu Script > Find Literals and Replace with Datapool Reference. This menu item is enabled only if the open script already has a datapool associated with it; otherwise, it is disabled.
2. In the Datapool Literal Substitution window, as shown in the figure below, select the variable (column) you want to use from the Datapool Variable drop-down list.

Replacing literals with datapool values

3. Click Find repeatedly until the correct value is highlighted in the script and then click the Replace button.
4. Repeat step 3 until you have replaced all occurrences in the script.
5. You can select another datapool variable from the drop-down list and repeat steps 3 and 4 for other values.
6. Click Close when you have finished replacing values.

You might already have a datapool with some substitutions for script values but realize that you need to replace additional literal values. You can repeat the preceding steps to add more datapool substitutions to your test script at any time.
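Conceptually, a datapool is an ordered set of named rows, and the substitution replaces a hard-coded literal with a lookup into the current row. The following self-contained sketch mimics RFT's dpString() call; the class itself is illustrative, not RFT code:

```java
import java.util.List;
import java.util.Map;

/** Minimal datapool sketch: rows of named values, as a stand-in for dpString(). */
public class DatapoolSketch {
    private final List<Map<String, String>> rows;
    private int current = 0;

    DatapoolSketch(List<Map<String, String>> rows) {
        this.rows = rows;
    }

    /** Analogous to RFT's dpString(variableName) for the current row. */
    String dpString(String variable) {
        return rows.get(current).get(variable);
    }

    /** Advances to the next row; returns false when the datapool is exhausted. */
    boolean dpNext() {
        return ++current < rows.size();
    }

    public static void main(String[] args) {
        DatapoolSketch dp = new DatapoolSketch(List.of(
                Map.of("CustomerName", "Jane", "OrderTotal", "25"),
                Map.of("CustomerName", "Raj", "OrderTotal", "47")));
        do {
            // Instead of the hard-coded literal setText("Jane"), the script
            // would call setText(dp.dpString("CustomerName")).
            System.out.println(dp.dpString("CustomerName"));
        } while (dp.dpNext());
    }
}
```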

How to Manually Add Script Code

So far, we have discussed different ways to add recorded lines and test objects, verification points, and datapool commands to a test script using wizards and other recording or capture techniques. If you are familiar with the test script syntax, then you might find it useful to manually add these things directly to the script.

Adding Test Steps

If you have already captured test objects in a map, you can add them to your script along with actions for the test to perform, as shown in the figure below, or with verification points to check. You do this as follows:

1. Position the cursor on a blank line in the script where you want to add the new test step.
2. Right-click either a verification point or a test object in the Script Explorer and select Insert at Cursor.

Adding an existing test element to the script

You now have an incomplete reference to either a test object or a verification point in the script. If you add an object, you then need to select an operation (test action) to perform. If you add a verification point, you also need a test object, and not just any object: it must be one capable of returning the expected value. Complete the test script statement for either a test object or a verification point:
  • For a test object, select an operation from the drop-down list of methods. For example, select click() to make the script click on the test object during test playback. This action is added after the object reference.
  • For a verification point, you must add a test object reference. You can either type or copy the object name, or you can add it from the Script Explorer using Insert at Cursor. This object must be added before the performTest operation.
With both Java and Visual Basic .NET scripting, if you position your cursor at the end of a test object call (class), just after the parenthesis, and type a period, you see the list of methods available for that class. If you add the object from the Script Explorer, you might have to press Backspace over the period and retype it. When you select a method from the list, the editor adds the operation to your script. The following two figures show this for Eclipse and Visual Studio.

Selecting an operation for a test object in Java

Selecting an operation for a test object in Visual Basic .NET

For Java, you also have to type the ending semicolon yourself.

Adding Programming Code to Your Tests

Rational Functional Tester scripts are implemented in either Java or Visual Basic .NET and are in fact just programs with specific testing functions. Therefore, in addition to adding test steps as described in the previous section, you can add virtually any programming devices or functions that you would develop for any other Java or Visual Basic program. This includes not only simple looping or conditional constructs, but also calls to more elaborate programming classes. The constraints are that the test script must extend com.rational.test.ft.script.RationalTestScript and that it must be executed from Rational Functional Tester's execution mechanisms.
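For example, an ordinary for loop and if statement can be wrapped around recorded steps. In this self-contained sketch, the test-object calls are replaced by hypothetical stand-in methods so that it runs outside Rational Functional Tester:

```java
/** Sketch: ordinary Java control flow wrapped around test steps.
 *  clickPlaceOrder() and orderCount() are stand-ins for real test-object calls. */
public class LoopedTestSteps {
    static int orders = 0;

    static void clickPlaceOrder() { orders++; } // stand-in for placeOrder().click()

    static int orderCount() { return orders; }

    public static void main(String[] args) {
        // Repeat a recorded step three times, checking a condition on each pass.
        for (int i = 0; i < 3; i++) {
            clickPlaceOrder();
            if (orderCount() != i + 1) {
                System.out.println("unexpected order count: " + orderCount());
            }
        }
        System.out.println(orderCount()); // 3
    }
}
```

In a real script, the loop body would contain the recorded statements themselves, and the condition could drive logging or early exit.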

Other Script Enhancements

So far, we have discussed several ways to edit test scripts to modify, complete, enhance, or extend their capabilities. Another purpose of editing tests is to improve their readability and potential reuse. This is done by adding comments and descriptions, naming or renaming test elements, and possibly restructuring test scripts into smaller modular tests.

Comments and Descriptions

In a given testing effort, you create many tests, object maps, datapools, and other test elements. There will most likely be other people who have to use or reference these same test artifacts. Comments and descriptions should be added to test artifacts and elements to explain their purpose, usage, and any other relevant information. This makes it much easier for someone other than the test’s creator to understand them, and it increases the value of the tests as software development artifacts.

You add comments directly into the test scripts. You can add as many as you like without affecting the execution, and in general, the more, the better. You can add comments during recording using the Script Support Functions, or at any time after recording. Here are examples of comments in each scripting language:

// This is a comment in Java
' This is a comment in Visual Basic

If you are using Rational Functional Tester with Java scripting, then you can also use Javadoc documentation in your test scripts. Some of this is generated automatically at the beginning of each test script in Java, and you can add more text or tags if you need. More information on Javadoc can be found in the Rational Functional Tester Help.

/**
 * Description : This is Javadoc content
 * @author You
 */

You can add descriptions for certain test elements, including test scripts, test objects, and verification points. Descriptions for test scripts are simply Javadoc comments. You can add a description for a test object by opening the object map, selecting the object, and then selecting Test Object > Description Property from the menu, as shown in the figure below.

Adding a description for a test object

You can add a description for a verification point by opening the verification point and editing the description property, as shown in the figure below.

Adding a description for a verification point

Naming and Reuse

In addition to adding comments and descriptions, you can improve the readability of a test by renaming test elements to more accurately reflect their meaning or purpose. This is perhaps most important for verification points, because you interpret test results largely from them. You have a much harder time understanding a test log or report that has a failure on _1695Text than one with a failure on OrderTotal. You might also consider renaming test objects, datapool variables (columns), and test scripts.

Names for Your Test Elements

Although this discussion is about renaming things, the best time to name your test elements, especially verification points, is when you first record or develop your test.

Renaming Objects and Verification Points

You can rename test objects and verification points by right-clicking the item in the Script Explorer and selecting Rename, as shown in the figure below. This automatically renames the reference to the object or verification point in the test script.

Renaming script elements in the Script Explorer

Renaming Datapool Variables

You may need to rename datapool variables, especially if they were generated by the test data wizard, which copies the test object names. The variables (column names) are used to reference the values in the script, and each name should reflect the real value or purpose. You can rename a datapool variable by opening the datapool (or the script, for a private datapool), clicking the variable name (column header), and entering a new name, as shown in the figure below. This automatically renames the references to the datapool variable in the test scripts.

Renaming datapool columns

If you rename a variable in a shared datapool, Rational Functional Tester automatically lets you know which scripts are updated with the new name, as shown in the figure below.

Updating all scripts with a new datapool variable name

Renaming Scripts

You might want to rename a test script to better reflect its function or purpose or to comply with project naming conventions. You can do this by right-clicking the item in the Project Explorer and selecting Rename. This will automatically rename all the hidden files associated with the test script, such as the helper and verification point files.

There are many reasons why you should edit and augment a test in Rational Functional Tester. You edit a test to:

  • Correct an error or unintended behavior in the test
  • Update tests to work with newer application builds
  • Separate longer tests into smaller modular tests
  • Integrate tests with other automated tests
  • Verify functionality or other system requirements
  • Associate test data with a test
  • Modify or manipulate playback timing
  • Modify or control test flow
  • Add logging and reporting of test results
  • Improve test readability and reuse
How to Correct or Update a Test Script

The most frequent type of editing you will probably perform to a test script is fixing or updating. After you finish reading this book and start employing the best test script development practices, these corrections and updates should be short and simple. The two general steps in this activity are removing unwanted test script lines and adding new lines. This section does not go into the details of debugging, but it does describe the general editing steps for doing this.
Removing Lines from a Test Script

You can remove unwanted lines of a test script by deleting the lines or by commenting them out (making them into comments). You should begin with the latter because there is always a chance that you might need to restore the original lines. You can comment out multiple lines of a test script as follows:

1. Select the multiple lines of script. This assumes that you know which lines of the test you want to disable.
2. Comment the lines: For Java in Eclipse, choose the menu Script > Toggle Comment or press Ctrl+/. For Visual Studio, choose the menu Edit > Advanced > Comment Selection or press Ctrl+K, and then press Ctrl+C.

Adding Lines to a Test Script

You can add new lines to a test script by inserting a recording into an existing script or by manually adding lines.

It is typically easier to add lines with the recorder, although you have to ensure that the newly recorded steps flow correctly with the existing recorded steps. You can record new lines into an existing script as follows:

1. Get the application under test to the initial state for the new recording. You can play back the test script in debug mode, breaking (pausing) at the spot where you want to add or rerecord steps.
2. Carefully position the cursor on a blank line in the test script where you want to add new steps.
3. Select Script > Insert Recording from the menu, or click the Insert Recording into Active Functional Test Script button from a toolbar. You immediately go into recording mode.
4. Click the Stop button to finish the recording.

Just as you must ensure the starting point of the new recording is carefully set, you must also ensure that the point that you stop recording flows correctly into the next steps of the test script.

How to Use Test Object Maps

The next most frequent type of editing you are likely to do on a test is updating and modifying test object maps. A test object map is normally created when you record a new test script, as shown in below Figure. You can also create an object map independently from script recording. Every test script has a test object map to use, and every test object map needs a test script to have a purpose.

Editing the test object map

Each test script also contains a list of test objects, visible in the script explorer. This is only a subset of all test objects, as shown in below Figure. The list contains only the objects required for this particular test script.

Script explorer test objects

The most common kinds of editing that you perform on a test object map are:
  • Adding new objects
  • Updating objects for a newer version of the application under test
  • Modifying object recognition properties
Test object maps have a separate window where you can view, edit, and manage the objects used by a test script. The test script itself contains a reference to an object and the action that is performed on it. The following is an example of a line of test script in Java that references a test object named placeOrder.
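A recorded line of this kind pairs a generated helper method, named after the entry in the test object map, with an action, in the general form placeOrder().click(). The following self-contained sketch models that pattern; the TestObject class and its logging are invented for illustration and are not the real Rational Functional Tester API:

```java
import java.util.ArrayList;
import java.util.List;

public class ScriptSketch {
    // Minimal stand-in for a GUI test object; the real class lives in
    // the Rational Functional Tester libraries.
    static class TestObject {
        static final List<String> log = new ArrayList<>();
        private final String name;
        TestObject(String name) { this.name = name; }
        void click() { log.add("click on " + name); }
    }

    // The recorder generates one helper method per mapped object,
    // named after the entry in the test object map (here: placeOrder).
    static TestObject placeOrder() {
        return new TestObject("placeOrder");
    }

    public static void main(String[] args) {
        // This is the shape of a recorded script line:
        placeOrder().click();
        System.out.println(TestObject.log);
    }
}
```

The point of the pattern is that the script line names the object by its map entry, while the recognition details stay in the map.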


If you want to learn more about this test object, you can open it from the Script Explorer, which opens the test object map and highlights the object as shown in below Figure.

Opening a test object from the script

Private Versus Shared

There are two types of test object maps: private and shared. The only difference between the two is that a private map is associated exclusively with one test script, whereas a shared object map can be used by many test scripts. You can open a private test object map from the Script Explorer. You can open a shared test object map from the Project Explorer or from the Script Explorer. The test object map editor and almost all editing capabilities are the same for both private and shared test object maps.

You can create a shared test object map from a private map, and then associate the original test script with the new shared map, changing the private map into a shared one. You can also merge objects from one shared map into another, combining and reducing the number of shared maps. You cannot revert a shared object map back into a private map, but you can merge objects from a shared map into an existing private map.

Ultimately, you should rely primarily on shared test object maps rather than private object maps. As a general rule, you should reduce the overall number of different maps.

Adding, Modifying, and Deleting Test Objects

Over time, the objects in the application under test that your test scripts interact with will change. You therefore need to add, modify, and delete test objects in Rational Functional Tester. These changes occur at two levels: in the map containing the test objects and in the test scripts that reference the test objects. The test object map is the primary storage location for a test object. This is where you ultimately maintain the test objects and their current recognition properties.

Adding Test Objects

You can add a new test object to a test script, which also adds it to the associated test object map, as shown in below Figure. The steps are as follows:

1. Get the application under test to the point with the object (graphical or other interface object) that you need to add to the test.
2. Position the cursor on a blank line in the test script where you want to add the reference to the new object. The reference is typically an action on the object, such as clicking or selecting from the object.
3. Select the menu Script > Insert Test Object, which opens the Insert a GUI Object into the Object Map dialog box. This is essentially the same as going into recording mode, except that you do not see the recorder monitor.
4. Use the Object Finder to select the object you want to add, and then click Finish.

Insert New Test Object toolbar button
Selecting an Object

Alternately, you could use the Test Object Browser to select the object you want to add. Refer to the product Help documentation for more information on using the Test Object Browser.
This adds the object to the test object map and to the list of test objects for the script, and it adds a reference to the object in the test script where you positioned your cursor. The initial object reference in the script is not complete because it does not contain any operation (for example, a click action). You can either manually add the desired operation, assuming that it works with the test procedure recorded in the script, or you can simply comment out or delete the line and add actions for the object at a later time.

Modifying Test Objects

You can modify a test object by double-clicking on an object in the script explorer, which will open the test object map and highlight the object. If you want to modify an object that is not in the list of test objects in the script explorer, then you can open the test object map and either browse or search to find the object. There are a number of reasons you might modify a test object.

One kind of modification that you can make to a test object that does not require opening the test object map is renaming. You can rename an object directly from the script explorer, as explained in this chapter. Note that you can rename objects from the test object map as well.

Deleting Test Objects

There are two reasons why you might want to delete a test object. First, you may want to remove an object from a test script but leave it in the test object map. You might do this if the object map is shared and the object is used by other test scripts, or you might simply want to leave the object in the map in case you need to add it back into the script at a later time. Second, you may know that the object is no longer needed by any script and want to delete it from the test object map.

For the first case, when you simply want to remove an object from a script, you can delete the object from the list of test objects in the script explorer. This does not affect the test object map or any other scripts that may use the same map. It also does not remove the reference to the object in the script; you have to delete or comment out the line of code referencing the deleted object yourself. Rational Functional Tester automatically indicates an error, which makes it easier to clean up the script. If you comment out the lines rather than deleting them, it is easier to add the object back later if needed.

For the second case, when you want to completely delete the object from a test object map, you can open the map and delete the test object. When you do this, Rational Functional Tester will run a short wizard to help ensure you are not deleting something that you need. The first step simply shows the name and recognition properties of the object. The second step, as shown in below Figure, shows all of the test scripts that will be affected by the deletion.

Deleting a test object from the map

If you realize that you do not want to delete the object, you can cancel. Otherwise, clicking Finish deletes the object from the map and all references to the object. As with deleting an object from the script explorer, this does not delete the line of code in the script that references the object, but the deletion is reflected in the object map, as shown in below Figure.

Object in script deleted from the map

How to Add Verification Points to a Test Script

Verification points are what make a test script a test, because they are primarily what provide the pass or fail results. You will find that it is generally easier to add verification points when you first record a script. However, you might want to record the user scenarios and steps first and then add verification points later. You might also identify additional or more effective verification points after recording the script. In these cases, you can add verification points as shown in below Figure using the following steps:
  • Get the application under test to the appropriate point for what you want to check. You might want to play back the test script in debug mode, breaking (pausing) at the spot where you want to add your verification.
Insert New Verification toolbar button
  • Position the cursor on a blank line in the test script where you want to add the verification point.
  • Select the menu Script > Insert Verification Point, which will open the Verification Point wizard. This is essentially the same as going into recording mode except that you will not see the recorder monitor.
  • Complete the wizard to create your verification point. You can find more about the verification point wizard in the online Help.
  • Click Finish to return to the test script.
Choosing a verification point

Your choice of verification points should not be arbitrary or made on the fly. You should always determine the best way to validate the test case or requirement that the test is implementing. You also need to consider whether the verification will hold up across all test environments and conditions.

By playing back a test script, you can detect differences between what you see in the current application build and the expected result. To start playback of a script, click the Run Functional Test Script button in the menu, or right-click a specific test script and then select Run. You are prompted to specify a log file name, as shown in below Figure.

The define log window

You can define the name of the test log in the Select Log window. After you select Finish, execution begins. Because Rational Functional Tester drives the mouse and keyboard, you cannot work in parallel during execution or lock the computer.

You can follow the progress of the playback on the screen and note any anomalies. In the playback monitor, you can see which statement Rational Functional Tester is executing. Normally, Rational Functional Tester waits for objects to display or to become active.

When execution ends, Rational Functional Tester returns to its normal state and shows a log file. If the HTML log type is selected, the web browser displays the execution log. Again, this assumes that the default settings are active for Rational Functional Tester.
View Results

This section discusses the analysis of the log file in the HTML variant, which is the default setting. After execution of a test script, the browser shows the log file. You can also double-click a log file in Rational Functional Tester to view it. The HTML log file shows you three types of information:
  • Failures
  • Warnings
  • Verification Points
You can select any of these options to quickly show you more detail in the main window, as shown in below Figure.

An example of a log file in HTML format

In the case of a failing verification point, you can view the difference between the expected and actual results by activating the Verification Point Comparator, as shown in below Figure, and selecting View Results for each verification point.

Verification Point Comparator

When a verification point fails, you can view the differences between expected and actual results, and you can update the baseline with the Verification Point Comparator's Replace Baseline with actual result option. It is also possible to start the Verification Point Comparator directly from the logs in Rational Functional Tester.

Rational Functional Tester is quite a flexible tool that enables various ways of working. This section describes the process of a basic recording in Rational Functional Tester, with some enhancements added using wizards. At various points in the scenario, options or additions are noted, but only in a limited way to keep the scenario simple.

Before Recording

Before you start recording, you need to take care of several things:

  • Be sure the application under test is available, including the correct setup of its environment, so that it behaves as expected.
  • The application under test must be configured for testing.
  • A Rational Functional Test project is created and available. This is the area where you store your work.
  • Before recording, you should already have a test script describing the interactions and the points of interest to verify. This can be a manual test script defined in Rational Quality Manager.
When recording, all interactions are captured and converted into the test script. Experience proves that it is wise to stop or disable any interfering programs, such as messaging programs.

Recording


To start recording, click the Record a Functional Test Script button in the Functional Test perspective, as shown in below Figure.

The Record a Functional Test Script button in the default Rational Functional Tester workbench

A new window opens, as shown in below Figure. You need to enter the test script name and the project where it is stored. With the exception of the dollar sign ($) and the underscore (_), spaces and special characters are not allowed in test script names.
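The naming restriction follows from the fact that a Java test script becomes a Java class, so the script name must be a valid Java identifier. A quick way to check a candidate name is sketched below; the isValidScriptName helper is my own for illustration, not part of the product:

```java
public class ScriptNameCheck {
    // A name is acceptable when the first character is a valid Java
    // identifier start (a letter, '$', or '_') and every remaining
    // character is a valid identifier part (letters, digits, '$', '_').
    // Spaces and other special characters fail the check.
    static boolean isValidScriptName(String name) {
        if (name.isEmpty() || !Character.isJavaIdentifierStart(name.charAt(0))) {
            return false;
        }
        for (int i = 1; i < name.length(); i++) {
            if (!Character.isJavaIdentifierPart(name.charAt(i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidScriptName("PlaceOrder_Test")); // valid
        System.out.println(isValidScriptName("Place Order"));     // invalid: space
    }
}
```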

Record a functional test script window where you define the project to store the script and define its name.

When you click Next, a second screen opens where some advanced settings can be defined.

When you click Finish, the Functional Test window disappears and the recording window becomes available, as shown in below Figure. From now on, all interactions are recorded! The recording window shows various icons that give access to wizards and functions, such as verification points and data pooling while recording. The recorded interactions are also displayed.

The recording window

Interactions with the Recording window itself are not part of the test script. You first have to start the application under test. Select the Start Application icon, and then select the application from the drop-down list, as shown in below Figure.

The Start Application window where you can select the application under test. This starts the application and generates the steps in the script.

Selecting OK starts the application. Remember that this action is recorded and results in an action statement added to the script. This is also visible in the recording window. It is normal for first-time users to perform actions that are not part of the intended test script, which results in erroneous recorded steps in the script. All these user errors can be corrected at a later time.

When the application under test is open, as shown in below Figure, you can perform the required test steps. First select the composer Bach, and then make the CD selection. Click the Place Order button and log in as the default user.

The ClassicsCD application

To validate expected execution of the application under test, the test script must be enhanced with check points called verification points. A verification point is a check that the current execution corresponds with your expectations, which is called the baseline. A difference between actual and baseline results in a fail status in the execution log. Differences in the consecutive application builds that are not checked with a verification point are not captured and do not result in failures. It is best practice to verify only what makes sense because verification points act as your eyes.
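The baseline idea reduces to a simple mechanism: a value captured at recording time becomes the expected result, later runs compare the actual value against it, and a difference means a failure. The following self-contained sketch models that mechanism, including the Replace Baseline operation; the class and method names are invented for illustration and are not the Rational Functional Tester implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class BaselineSketch {
    // Baselines keyed by verification point name. RFT persists these
    // in project files; this sketch just keeps them in memory.
    private final Map<String, String> baselines = new HashMap<>();

    // The first call for a name records the baseline and passes;
    // later calls compare the actual value against the baseline.
    boolean verify(String vpName, String actual) {
        String expected = baselines.putIfAbsent(vpName, actual);
        return expected == null || expected.equals(actual);
    }

    // Models "Replace Baseline with actual result": the new value
    // becomes the expected result from now on.
    void replaceBaseline(String vpName, String actual) {
        baselines.put(vpName, actual);
    }

    public static void main(String[] args) {
        BaselineSketch vp = new BaselineSketch();
        vp.verify("totalPrice", "$15.99");        // records the baseline
        System.out.println(vp.verify("totalPrice", "$17.99")); // fails: differs
        vp.replaceBaseline("totalPrice", "$17.99");            // accept new value
        System.out.println(vp.verify("totalPrice", "$17.99")); // passes again
    }
}
```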

The Verification Point icon, shown in below Figure, gives access to verification points. Let us say that we need to verify that the total price is $15.99. While recording, select the Verification Point icon on the recording window.

The Verification Point icon gives access to the verification points.

The Verification Point and Action Wizard is displayed as shown in below Figure.

The Verification Point and Action Wizard gives you the options to identify the object to be verified.

Drag the hand icon over the $15.99 value. As a preselection, a red square is drawn around the object, as shown in below Figure. When you release the cursor, this object is selected.

When you drag the hand icon to objects, Rational Functional Tester provides a preselection for easy identification.

When you release the cursor, the properties of the selected object become visible at the bottom of the wizard, so you can validate that you have selected the correct object. Click the Next button to advance to the next window. In this window, you define what kind of verification point to create; the available options follow:
  • Data Verification— Use this for validating the actual data in an object.
  • Properties Verification— Use this for validating one or more properties of an object (for example, whether it is selected, or its color).
  • Get a Specific Property Value— Use this to get a specific property value into a Java variable.
  • Wait for Selected Test Object— Rational Functional Tester waits until this object becomes available. Use this as an intelligent mechanism to synchronize with the application under test.
  • Perform Image Verification Point— A graphical verification point.
This scenario uses the Data Verification Point. Click Next. In the next screen, you can give the verification point an appropriate name and adjust the default wait-for settings. Click Next. You can see the data here and make modifications if necessary. Click Finish. The Data Verification Point is created and inserted as code in the script. You can continue recording the interactions and add various verification points as you go.

After closing the application under test, you have to stop recording by clicking the Stop Recording button in the Recorder window. When it is selected, Rational Functional Tester’s main screen displays and the test script is created.

After Recording

After recording, you can improve the recording by:
  • Adding comments where possible. Any tester should be able to read the test script and understand the logic.
  • Correcting the user’s mistakes, which were recorded and converted into statements.
  • Correcting the actions by removing the recorded errors and backspaces.
Validating Execution

A first validation of correctness must be done by executing the test script against the environment where it was recorded. Keep in mind that not only the application is under test, but so is the test environment, which should be reset to its original state. For example, creating the same customer twice probably results in an execution error.

Timing Issues

It is common for an application to be slower than Rational Functional Tester expects. For example, an interaction with a web page might be delayed by slow network traffic, which results in a problematic execution. In this case, you have to slow down the execution. Several options are available:
  • Get an overall slowdown using the Rational Functional Test parameter shown in below Figure.
The overall slowdown parameter for Rational Functional Tester; 30 is roughly 1 interaction per second. Reset it again when running in production.
  • Add hard sleep statements: sleep(2.0);
  • Add wait-for-existence: ObjectInApplication().waitForExistence();
  • Lengthen the wait-for parameters in waitForExistence or VerificationPoints: ObjectInApplication().performTest(ObjectSubmit_textVP(), 2.0, 40.0);
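The wait-for options above all come down to the same idea: poll for a condition repeatedly, up to a maximum time, pausing between checks. The real waitForExistence lives in the Rational Functional Tester API; the following self-contained sketch of the pattern takes an arbitrary condition instead of a test object:

```java
import java.util.function.BooleanSupplier;

public class WaitSketch {
    // Poll 'condition' until it is true or 'timeoutSeconds' elapses,
    // pausing 'intervalSeconds' between checks -- the same idea as
    // waitForExistence(timeout, interval) in Rational Functional Tester.
    static boolean waitFor(BooleanSupplier condition,
                           double timeoutSeconds, double intervalSeconds)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + (long) (timeoutSeconds * 1000);
        while (true) {
            if (condition.getAsBoolean()) {
                return true;   // the object (condition) showed up in time
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;  // gave up: a synchronization failure
            }
            Thread.sleep((long) (intervalSeconds * 1000));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate an object that appears after roughly 0.3 seconds.
        long appearsAt = System.currentTimeMillis() + 300;
        boolean found = waitFor(() -> System.currentTimeMillis() >= appearsAt, 2.0, 0.1);
        System.out.println(found);
    }
}
```

Compared with a hard sleep(2.0), this kind of wait continues as soon as the object appears, so lengthening the timeout costs nothing when the application is fast.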