Friday, September 26, 2008

Automation Testing

Successful automation mandates a testing process. Just as a developer needs a system development process, testers need a testing process to successfully use test tools. The testing process provides the steps, guidelines and techniques that will ensure practical, successful automation. To achieve the testing and risk management goals of the project, a solid testing process is essential to focus the test automation effort where it can do the most good.

The difficulty of software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging; its purpose can be quality assurance, verification and validation, or reliability estimation.

Testing can also serve as a generic metric of software quality. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects, or bugs, remain buried and latent until activated. Regardless of its limitations, testing is an integral part of software development, deployed in every phase of the software development cycle. Typically, more than 50% of the development time is spent on testing. Testing is usually performed for the following purposes:

  • To improve quality
  • For Verification & Validation (V&V)
  • For reliability estimation

Software testing can be very costly, and automation is a good way to cut down time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: to automate the process, we need some way to generate oracles from the specification, and to generate test cases to run against those oracles to decide correctness. Today we still don't have a full-scale system that has achieved this goal; in general, a significant amount of human intervention is still needed, and the degree of automation remains at the level of automated test scripts.

Testing is potentially endless. We cannot test until all the defects are unearthed and removed; that is simply impossible. At some point, we have to stop testing and ship the software. The question is when. Realistically, testing is a trade-off between budget, time and quality, driven by profit models. The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or all of the allocated resources (time, budget, or test cases) are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit of continued testing can no longer justify the testing cost. This usually requires reliability models to evaluate and predict the reliability of the software under test; each evaluation repeats the cycle of failure-data gathering, modeling, and prediction. This method does not fit well for ultra-dependable systems, however, because the real field failure data will take too long to accumulate.
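For example (one common reliability growth model, given here only as an illustration and not tied to any particular tool): in the Goel-Okumoto model, the expected number of failures observed by time t is

mu(t) = a * (1 - exp(-b * t))

where a is the total number of defects expected and b is the per-defect detection rate. After each failure-data-gathering cycle, a and b are re-estimated from the collected data, and testing stops once the predicted number of remaining defects, a - mu(t), falls below an agreed threshold.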

Manual Testing

What is Testing?
* An examination of the behavior of a program by executing it on sample data sets.
* A set of activities carried out to detect defects in the produced material.

Why Testing?
* To unearth and correct defects.
* To detect defects early and reduce the cost of fixing them.
* To ensure that the product works as users expect it to.
* To avoid users detecting problems.

Testing Techniques

Black Box Testing - testing a function without knowing the internal structure of the program.
White Box Testing - testing a function with knowledge of the internal structure of the program.

Regression Testing - to ensure that code changes have not had an adverse effect on other modules or on existing functions.

Functional Testing:
* Study SRS
* Identify Unit Functions
* For each unit function
- Take each input
- Identify Equivalence classes
- Form Test cases
- Form Test cases for boundary values
- Form Test cases for Error Guessing
* Form a Unit Function vs. Test Cases cross-reference matrix
* Find the coverage (see the sketch after this list for the equivalence-class and boundary-value steps)
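As an illustration of the equivalence-class and boundary-value steps, here is a minimal sketch in TSL (WinRunner's scripting language, used in the examples later in this post). The window and field names are hypothetical, and the field is assumed to accept values 1-100, so the test inputs sit just below, on, and just above each boundary:

# Boundary-value inputs for a field assumed to accept 1..100
qty[1] = "0";   qty[2] = "1";   qty[3] = "2";
qty[4] = "99";  qty[5] = "100"; qty[6] = "101";

for (i = 1; i <= 6; i++)
{
    set_window ("Order Entry", 5);   # hypothetical window
    edit_set ("Quantity", qty[i]);   # hypothetical edit field
    button_press ("OK");
    # verify the outcome here, e.g. wait with win_exists for the
    # expected error popup in the out-of-range cases (0 and 101)
}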

Unit Testing:
* The most ‘micro’ scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers.
* Unit - smallest testable piece of software.
* A unit can be compiled/ assembled/ linked/ loaded; and put under a test harness.
* Unit testing is done to show that the unit does not satisfy the functional specification and/or that its implemented structure does not match the intended design structure.

Integration Testing:
* Integration is a systematic approach to building, from unit-tested modules, the complete software structure specified in the design. Integration is performed in two ways, called Pre-test and Pro-test.
* Pre-test: testing performed in the module development area is called Pre-test. Pre-test is required only if development is done in a module development area.

Alpha testing:
* Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing:
* Testing when development and testing are essentially complete and the final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers.

System Testing:
* A system is the complete, integrated set of components.
* System testing is aimed at revealing bugs that cannot be attributed to any single component as such, but rather to inconsistencies between components or to the planned interactions between components.
* Concern: issues, behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

Volume Testing:
* The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.

Stress testing:
* This refers to testing system functionality while the system is under unusually heavy or peak load; it’s similar to the validation testing mentioned previously but is carried out in a “high-stress” environment. This requires that you make some predictions about expected load levels of your Web site.

Usability testing:
* Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective, the user's side of the system, into focus.

Security testing:
* If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site’s overall protection against unauthorized internal or external access.

Test Life Cycle

* Identify Test Candidates
* Test Plan
* Design Test Cases
* Execute Tests
* Evaluate Results
* Document Test Results
* Post Shipment Review

Test Plan:
* A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well-defined and understood project roadmap.

Test Specification:
* A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.

Tuesday, September 9, 2008

Approach for Test Automation

Description


The Approach for Test Automation is used to build an automation strategy spanning the Requirements phase through to Deployment. The approach begins with identifying the conditions and business rules given by the client and grouping similar conditions and business rules together as Test Cases or Test Scenarios.


The steps involved in the Approach for Test Automation are as follows:


· Requirement gathering from the client

· Understanding and Analyzing the Requirement

· Grouping the requirement into Test cases

· Preparing the Design for Automation

· Building Scripts depending on the Design

· Review of Scripts at Offshore

· Delivery to the Client

For example in the Unilever project:


Since it was a project for automation of SAP, the team was divided into two groups:

SAP Consultants.

Testing Team.


Requirement gathering from the client included knowledge transfer at onsite, where all the transactions, including the customized transactions, were explained to the SAP Consultants of Satyam. Understanding of the requirements was done at the client side. Depending on the flow of transactions, all the transactions were grouped together as different scenarios. While the SAP Consultants were onsite gathering requirements, the work done at offshore was to analyze the appropriate version of the QTP tool to be used and to arrange licenses for QTP, user accounts for SAP access, and installation of QTP and SAP.


All the identified scenarios, which consist of different transactions, were explained to the Testing Team. Then common functionalities were identified and grouped together as different Test Cases.


After defining the different Test Cases, the design for automation of the scripts was done. Reusability was one of the main design goals: all the transactions that are common across scenarios were identified as reusable transactions.

Automation Framework:

For quality deliverables, the important practices to follow are:

· Following common coding structure for all the scripts

· Consistent coding conventions

· Reuse of common code to keep scripts efficient and minimal

· Efficient use of Object Repository

· Use of Error and Exception handling Functions

· Use of Data table object, Environment variables for using data

For example in the Unilever project:
The coding structure used is that there is a main Action for every script, named after the corresponding Test Case. This Action in turn calls all the other actions in the script (each transaction used in the script is an action). Importing and exporting of sheets is also done in this main Action: sheets are imported at the beginning, before the other actions are called, and exported at the end. Sheets corresponding to all the actions in the script are imported, so that the data in the fields of the imported sheets serves as input data to the script; output data from the script is also collected and exported to the corresponding sheet.

There are many transactions which are reused in the scripts. Similar transactions (actions) are made reusable so that it is not necessary to record them again and again: whichever transaction was common among scripts was recorded in one script and reused in all the scripts where that transaction is required.

Error handling in the Unilever automation is done using Recovery Scenarios, where each Recovery Scenario calls a function which handles the recovery by exiting all the other transactions (actions) when the recovery fires. The major types of recovery used in this project were Popup Window, Object State and On Error recoveries. All the common types of exceptions or errors are given common Recovery Scenarios using regular expressions, which minimized the number of Recovery Scenarios and functions used.

Design for the structure of scripts used in the Unilever project:

[Diagram: the Main Action calls each of the other actions (transactions) in the script. The Main Action is named after the Test Case, e.g. S014_001; each called action is named after the SAP transaction it covers, e.g. ME21N.]

Monday, August 18, 2008

The WinRunner testing process involves six main stages

• Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
• Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
• Debug Test: run tests in Debug mode to make sure they run smoothly
• Run Tests: run tests in Verify mode to test your application.
• View Results: determine the success or failure of the tests.
• Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
2. What is Contained in the GUI Map?
WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.
There are 2 types of GUI Map files.
• Global GUI Map file: a single GUI Map file for the entire application
• GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
3. How Does WinRunner recognize objects on the Application?
WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

4. How does WinRunner evaluate test results?

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

5. Have You Created test Scripts and what is contained in the Test Scripts?

Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

6. Have you performed debugging of the scripts?

Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionality provided by WinRunner.

7. How do you run your test scripts?

We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

8. How do you analyze the results and report the defects?

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

9. What is the use of Test Director Software?

Test Director is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With Test Director you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

10. How you integrated your automated scripts from Test Director?

When you work with WinRunner, you can choose to save your tests directly to your Test Director database. Alternatively, while creating a test case in Test Director, we can specify whether the script is automated or manual; if it is an automated script, Test Director will build a skeleton for the script that can later be modified into one which could be used to test the AUT.

11. What is the purpose of loading WinRunner Add-Ins?

Add-Ins are used in WinRunner to load functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the Function Generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

12. What are the reasons that WinRunner fails to identify an object on the GUI?

WinRunner fails to identify an object in a GUI due to various reasons.
• The object is not a standard windows object.
• If the browser used is not compatible with the WinRunner version, GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
13. What do you mean by the logical name of the object?
An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
14. If the object does not have a name then what will be the logical name?
If the object does not have a name then the logical name could be the attached text.
15. What is the different between GUI map and GUI map files?
1) The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files.
• Global GUI Map file: a single GUI Map file for the entire application
• GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
2) GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description.
16. How do you view the contents of the GUI map?
GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.
17. When you create GUI map do you record all the objects of specific objects?
If we are learning a window, WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.
18. What is the purpose of set_window command?
Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.

Syntax: set_window ( window, time );
Here window is the logical name of the window, and time is the number of seconds execution waits for the given window to come into focus.
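For example, using the Flight Reservation sample application that ships with WinRunner:

set_window ("Flight Reservation", 10);   # wait up to 10 seconds for the window
button_press ("Insert Order");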
19. How do you load GUI map? What is the disadvantage of loading the GUI maps through start up scripts?
We can load a GUI Map by using the GUI_load command.
Syntax: GUI_load ( file_name );
• If we are using a single GUI Map file for the entire AUT, the memory used by the GUI Map may be very high.
• If there is any change in the object being learned then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in the memory. So we will have to learn the object again and update the GUI File and reload it.
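For example, a startup or test script might load a map stored alongside the test (the map file name here is illustrative):

GUI_load (getvar("testname") & "\\..\\my_app.gui");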

20. How do you unload the GUI map? What actually happens when you load GUI map?
We can use GUI_close to unload a specific GUI Map file or else we call use GUI_close_all command to unload all the GUI Map files loaded in the memory.
Syntax: GUI_close ( file_name ); or GUI_close_all;
When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory. So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.
21. What is the purpose of the temp GUI map file? What is the extension of GUI map file?
While recording a script, WinRunner learns objects and windows by itself and stores this information in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.
The extension for a GUI Map file is “.gui”.
22. How do you find an object in an GUI map?
The GUI Map Editor provides Find and Show buttons.
• To find a particular object of the GUI Map file in the application, select the object and click the Show button. The selected object blinks in the application.
• To find a particular object in a GUI Map file, click the Find button, which gives you the option to select the object in the application. When the object is clicked, if it has been learned into the GUI Map file it will be highlighted there.
23. What different actions are performed by find and show button?
• To find a particular object of the GUI Map file in the application, select the object and click the Show button. The selected object blinks in the application.
• To find a particular object in a GUI Map file, click the Find button, which gives you the option to select the object in the application. When the object is clicked, if it has been learned into the GUI Map file it will be highlighted there.
24. How do you identify which files are loaded in the GUI map?
The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into the memory.
25. How do you modify the logical name or the physical description of the objects in GUI map?
You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.
26. When do you feel you need to modify the logical name?
Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.
27. When it is appropriate to change physical description?
Changing the physical description is necessary when the property value of an object changes.

28. How does WinRunner handle varying window labels?

We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.
• The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
• The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
29. What is the purpose of regexp_label property and regexp_MSW_class property?
• The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
• The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
30. How do you suppress a regular expression?
We can suppress the regular expression of a window by replacing the regexp_label property with the label property.
31. How do you copy and move objects between different GUI map files?
We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
• Choose Tools > GUI Map Editor to open the GUI Map Editor.
• Choose View > GUI Files.
• Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
• View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
• In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
• Click Copy or Move.
• To restore the GUI Map Editor to its original size, click Collapse
32. How do you select multiple objects during merging the files?
Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All
33. How do you clear a GUI map files? How do you filter the objects in the GUI map?
We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor
GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options.
• Logical name displays only objects with the specified logical name.
• Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
• Class displays only objects of the specified class, such as all the push buttons.
34. What is the purpose of GUI map configuration? How do you make the configuration and mappings permanent?
GUI Map configuration is used to map a custom object to a standard object.
The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
35. What is the purpose of GUI spy?
Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.
36. What is the purpose of obligatory and optional properties of the objects?
For each class, WinRunner learns a set of default properties. Each default property is classified “obligatory” or “optional”.
• An obligatory property is always learned (if it exists).
• An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.
37. When the optional properties are learned?
An optional property is used only if the obligatory properties do not provide unique identification of an object.
38. What is the purpose of location indicator and index indicator in GUI map configuration?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
• A location selector uses the spatial position of objects.
The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.
• An index selector uses a unique number to identify the object in a window.The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.
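As an illustration, two otherwise identical OK buttons in one window might be distinguished in their physical descriptions by a location selector (the object names are hypothetical, and the exact numbering is assigned by WinRunner):

OK_upper: { class: push_button, label: "OK", location: 0 }
OK_lower: { class: push_button, label: "OK", location: 1 }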
39. How do you handle custom objects?
A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic “object” class. WinRunner records operations on custom objects using obj_mouse_ statements.
If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.
40. What is the name of custom class in WinRunner and what methods it applies on the custom objects?
WinRunner learns custom class objects under the generic “object” class. WinRunner records operations on custom objects using obj_ statements.
41. In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
• A location selector uses the spatial position of objects.
• An index selector uses a unique number to identify the object in a window.
42. What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore. ?
• Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
• Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
• As Object instructs WinRunner to record all operations performed on a GUI object as though its class were “object” class.
• Ignore instructs WinRunner to disregard all operations performed on the class.
43. How do you find out which is the start up file in WinRunner?
The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.
44. What are the virtual objects and how do you learn them?
• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.

To define a virtual object using the Virtual Object wizard:
• Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
• In the Class list, select a class for the new virtual object. If you select the list class, specify the number of visible rows displayed in the window; for a table class, select the number of visible rows and columns. Click Next.
• Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
• Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.
45. What are the two modes of recording?
• Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
• Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
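A rough sketch of the difference (hypothetical window name and coordinates): Context Sensitive recording produces object-level statements, while the same click on something WinRunner cannot identify as an object is recorded as a raw mouse click on the window:

# Context Sensitive: the button is identified as a GUI object
set_window ("Login", 5);
button_press ("OK");

# The same click recorded without object recognition
win_mouse_click ("Login", 88, 114);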
46. What is a checkpoint and what are different types of checkpoints?
Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.

You can add four types of checkpoints to your test scripts:

• GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
• Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an image captured in an earlier version.
• Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
• Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database
47. What are data driven tests?
When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.
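A minimal data-driven sketch using the ddt_* functions mentioned later in this post (the table path, window, field, and column names are illustrative):

table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                      # drive this iteration from row i
    set_window ("Flight Reservation", 10);
    edit_set ("Name", ddt_val (table, "Name"));  # "Name" column assumed
    button_press ("Insert Order");
}
ddt_close (table);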
48. What are the synchronization points?
• Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
• For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
49. What is parameterizing?

In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

50. How do you maintain the document information of the test scripts?

Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

51. What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?

You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
• button_check_info
• scroll_check_info
• edit_check_info
• static_check_info
• list_check_info
• win_check_info
• obj_check_info

Syntax:
button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
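For example, to verify that a button in the Flight Reservation sample application is currently disabled (a property value of 0 means disabled, 1 means enabled):

set_window ("Flight Reservation", 10);
button_check_info ("Insert Order", "enabled", 0);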

52. What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?
• You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.
• Creating a GUI Checkpoint using the Default Checks
• You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
• To create a GUI checkpoint using default checks:
• Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW soft key in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW soft key in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
• Click an object.
• WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
• Creating a GUI Checkpoint by Specifying which Properties to Check
• You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.
• To create a GUI checkpoint by specifying which properties to check:
• Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW soft key in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW soft key in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
• Double-click the object or window. The Check GUI dialog box opens.
• Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
• Select the properties you want to check.
• To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
• To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
• To change the viewing options for the properties of an object, use the Show Properties buttons.
• Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement.
• Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected results file, time );

53. What information is contained in the checklist file and in which file expected results are stored?
• The checklist file contains information about the objects and the properties of the object we are verifying.
• The gui*.chk file, stored in the test's exp folder, contains the expected results.
54. What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?
To create a GUI checkpoint for two or more objects:
• Choose Create > GUI Checkpoint > For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
• Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
• To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
• The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.
• Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.
• The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
• To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
• To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
• To change the viewing options for the properties of an object, use the Show Properties buttons.
• To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected results file, time );
55. What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?
• You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.
• When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
• Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW soft key or the CHECK BITMAP OF SCREEN AREA soft key to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.
• To capture a window or object as a bitmap:
• Choose Create > Bitmap Checkpoint > For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW soft key. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
• Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax:
win_check_bitmap ( object, bitmap, time );
• For an object bitmap, the syntax is:
obj_check_bitmap ( object, bitmap, time );
• For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be:
win_check_bitmap ("Flight Reservation", "Img2", 1);
• However, if you click the Date of Flight box in the same window, the statement might be:
• obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );
56. What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?
• You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).
• To capture an area of the screen as a bitmap:
• Choose Create > Bitmap Checkpoint > For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
• Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
• Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.
• The win_check_bitmap statement for an area of the screen has the following syntax:
win_check_bitmap ( window, bitmap, time, x, y, width, height );
57. What do you verify with the database checkpoint default and what command it generates, explain syntax?
• By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.
• You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.
• You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check ( checklist_file, expected_results_file );
• You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.

Syntax:
db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.
Success Conditions Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber An out parameter returning the number of records in the database.
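A hypothetical call (the wizard normally generates the checklist file, so the file name here is illustrative):

db_record_check ("dbrc1.cvr", DVR_ONE_MATCH, record_num);
# record_num receives the number of records in the database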
58. How do you handle dynamically changing area of the window in the bitmap checkpoints?
The "difference between bitmaps" option in the Run tab of the General Options defines the minimum number of pixels that constitute a bitmap mismatch.
59. What do you verify with the database check point custom and what command it generates, explain syntax?
• When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.
• You can create a custom check on a database in order to:
• check the contents of part or the entire result set
• edit the expected results of the contents of the result set
• count the rows in the result set
• count the columns in the result set
• You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.
60. What do you verify with the sync point for object/window property and what command it generates, explain syntax?
• Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
• You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
• You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:
Syntax:
obj_exists ( object [, time ] );
win_exists ( window [, time ] );
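For example, to suspend the run until the sample application's main window appears, waiting at most ten seconds:

if (win_exists ("Flight Reservation", 10) == E_OK)
    set_window ("Flight Reservation", 1);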
61. What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
• You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
• During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
• Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
62. What do you verify with the sync point for screen area and what command it generates, explain syntax?
• For screen area verification we actually capture the screen area into a bitmap and verify the application screen area with the bitmap file during execution
• Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);
63. How do you edit checklist file and when do you need to edit the checklist file?
WinRunner has an edit checklist file option under the create menu. Select the “Edit GUI Checklist” to modify GUI checklist file and “Edit Database Checklist” to edit database checklist file. This brings up a dialog box that gives you option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is Test specific or a shared one. Select the checklist file, click OK which opens up the window to edit the properties of the objects.
64. How do you edit the expected value of an object?
We can modify the expected value of the object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values, to change them.
65. How do you modify the expected results of a GUI checkpoint?
We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.
66. How do you handle ActiveX and Visual basic objects?
WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.
67. How do you create ODBC query?
We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file contains the connection string and the SQL statement.
68. How do you record a data driven test?
We can create a data-driven testing using data from a flat file, data table or a database.
• Using a flat file: we store the data in the required format in the file, access the file using the file-manipulation commands, read data from it, and assign the data to variables (see the sketch below).
• Data table: an Excel file. We can store test data in these files and manipulate them with the ‘ddt_*’ functions.
• Database: we store test data in a database and access it using the ‘db_*’ functions.
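A sketch of the flat-file variant, as a counterpart to the data-table example earlier (the file path, window, and field names are illustrative):

file = "c:\\testdata\\names.txt";
if (file_open (file, FO_MODE_READ) != E_OK)
    pause ("Cannot open data file.");
while (file_getline (file, line) == E_OK)
{
    set_window ("Flight Reservation", 10);
    edit_set ("Name", line);        # one input value per line
    button_press ("Insert Order");
}
file_close (file);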
69. How do you parameterize database check points?
When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.
70. How do you convert a database file to a text file?
You can use Data Junction to create a conversion file which converts a database to a target text file.

71. How do you create parameterize SQL commands?
• A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:

SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.
FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query.
Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.
• When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:

db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.

72. Explain the following WinRunner Commands?
• db_connect - to connect to a database
db_connect ( session_name, connection_string );
• db_execute_query - to execute a query
db_execute_query ( session_name, SQL, record_number );
[record_number is the out value]
• db_get_field_value - returns the value of a single field in the specified row_index and column in the session_name database session.
db_get_field_value ( session_name, row_index, column );
• db_get_headers - returns the number of column headers in a query and the content of the column headers, concatenated and delimited by tabs.
db_get_headers ( session_name, header_count, header_content );

• db_get_row - returns the content of the row, concatenated and delimited by tabs.
db_get_row ( session_name, row_index, row_content );
• db_write_records - writes the record set into a text file delimited by tabs.
db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );
• db_get_last_error - returns the last error message of the last ODBC or Data Junction operation in the session_name database session.
db_get_last_error ( session_name, error );
• db_disconnect - disconnects from the database and ends the database session.
db_disconnect ( session_name );
• db_dj_convert - runs the djs_file Data Junction export file. When you run this file, the Data Junction Engine converts data from one spoke (source) to another (target). The optional parameters enable you to override the settings in the Data Junction export file.
db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
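A minimal end-to-end sketch of these functions (the session name, the DSN "Flight32" and the query are assumptions, and row numbering here is assumed to start at 1):

# connect, query, walk the result set, then clean up
rc = db_connect("session1", "DSN=Flight32;UID=;PWD=");
if (rc != E_OK)
    pause("Could not connect to the database.");
db_execute_query("session1", "SELECT * FROM Flights", rec_count);
for (i = 1; i <= rec_count; i++)
{
    db_get_row("session1", i, row);
    report_msg(row);  # tab-delimited content of one row
}
db_disconnect("session1");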

73. What check points you will use to read and check text on the GUI and explain its syntax?
You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test, you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text. A short sketch follows the list below.
You can use a text checkpoint to:
1. Read text from a GUI object or window in your application, using obj_get_text and win_get_text
2. Search for text in an object or window, using win_find_text and obj_find_text
3. Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
4. Click on text in an object or window, using obj_click_on_text and win_click_on_text.
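A minimal sketch combining a read with a verification (the window and object names are assumptions):

# read the text of a GUI object and verify it
set_window("Flight Reservation", 5);
obj_get_text("Fly From:", from_text);
if (from_text == "Denver")
    tl_step("check_departure", PASS, "Departure reads Denver");
else
    tl_step("check_departure", FAIL, "Unexpected departure: " & from_text);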
74. Explain Get Text checkpoint from object/window with syntax?
• We use the obj_get_text ( object, out_text ) function to get the text from an object.
• We use the win_get_text ( window, out_text [, x1, y1, x2, y2] ) function to get the text from a window.
75. Explain Get Text checkpoint from screen area with syntax?
We use the win_get_text ( window, out_text [, x1, y1, x2, y2] ) function; the optional x1, y1, x2, y2 coordinates define the rectangular area of the screen to read.
76. Explain Get Text checkpoint from selection (web only) with syntax?

Returns a text string from an object.
web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);
• object The logical name of the object.
• table_row If the object is a table, it specifies the location of the row within a table. The string is preceded by the # character.
• table_column If the object is a table, it specifies the location of the column within a table. The string is preceded by the # character.
• out_text The output variable that stores the text string.
• text_before Defines the start of the search area for a particular text string.
• text_after Defines the end of the search area for a particular text string.
• index The occurrence number to locate (the default is 1).
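A minimal usage sketch (the object name and cell coordinates are assumptions):

# read the text of the cell in row 2, column 3 of a web table
web_obj_get_text("FlightsTable", "#2", "#3", cell_text, "", "", 1);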

WinRunner Coding Standards

1) No hard coded paths.

A WinRunner test should run without problems after being copied from one machine to another. Anything the test depends on (gui maps, text files, compiled modules, DLLs) should be in the same parent folder as the test.

Exception:

Paths to permanent files on the K drive can be hard coded if absolutely necessary. (Warning – you may run into problems with tests running simultaneously on different machines accessing the same files on the K drive)

Wrong:

reload("C:\\WR_TESTS\\Acceptance_6\\acceptance_functions");


Right:

reload(getvar("testname") & "\\..\\acceptance_functions");


2) Indent blocks of code for readability

Wrong:

for(counter = count - 24; counter < count - 1; counter++)
{
list_get_item("ListBox",counter,item);
str = str & item & "\r\n";
}

Right:

for(counter = count - 24; counter < count - 1; counter++)
{
    list_get_item("ListBox",counter,item);
    str = str & item & "\r\n";
}

Right:

for(counter = count - 24; counter < count - 1; counter++){
    list_get_item("ListBox",counter,item);
    str = str & item & "\r\n";
}



3) Avoid hard coding testing environment dependencies

Do not hardcode information which may change depending on the testing environment, such as installation directories, DSN names, database server names, database usernames, and database passwords. It is better to define these in variables at the beginning of the test, so you do not have to make multiple changes throughout the script when the environment changes.


Wrong:

set_window("SQL Server Login",10);
edit_set("Login ID:", "sa");
edit_type("Password:", "password");

Right:

db_username = "sa";
db_password = "password";

set_window("SQL Server Login",10);
edit_set("Login ID:", db_username);
edit_type("Password:", db_password);

Wrong (also violates coding standard 1):

invoke_application("C:\\iAvenue\\Windows\\UAdmin.exe","","c:\\Power_db",SW_SHOW);

Right:

install_dir = "c:\\iAvenue\\Windows";

invoke_application(install_dir & "\\UAdmin.exe","",getvar("testname") & "\\..\\Power_db",SW_SHOW);

4) Use text recognition as a last resort

Text recognition takes a lot of memory, can be unreliable, and can have varying results on different operating systems. It should therefore only be used if there is no other way to get the information from an object. Unfortunately, this is often the case, especially when the object is not recognized (class: object). In the following example, assume that "Assign Date" is class edit:


Wrong:

obj_get_text("Assign Date", text);

Right:

edit_get_text("Assign Date",text);



5) Do not use excessive wait statements. Try to use synchronization functions when waiting is required.


Wrong:

wait(40);


Right:

statusbar_wait_info("Status Bar","value","Sites processed = 20",40);



6) Do not use report_msg as a substitute for tl_step.

Nobody wants to read every line of the test results looking for a failure. It's much easier to look for green or red. It's OK to have a tl_step failure without a tl_step pass.

Wrong:

if(win_exists("Active Information Manager",1) == 0)
{
    set_window("Active Information Manager", 1);
    obj_get_text("AfxWnd42", text);
    my_gui_checkpoint(text,"AIM.log");
}
else
    report_msg("AIM failure! Window absent at startup");

Right:

if(win_exists("Active Information Manager",1) == 0)
{
    set_window("Active Information Manager", 1);
    obj_get_text("AfxWnd42", text);
    my_gui_checkpoint(text,"AIM.log");
}
else
    tl_step("AimReportRuns",FAIL,"AIM window absent at startup");







7) Use regular expressions to avoid multiple window instances in the gui map.

If the application opens several windows whose titles differ only in a changing detail (a document name, a record number), map them with a single gui map entry whose label is a regular expression, rather than adding one entry per window instance.
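A minimal sketch of the idea (the window name and label pattern are assumptions; in a WinRunner gui map, a leading "!" marks the label as a regular expression):

Wrong:

Order_Entry_1:
{
   class: window,
   label: "Order Entry - Record 1"
}
Order_Entry_2:
{
   class: window,
   label: "Order Entry - Record 2"
}

Right:

Order_Entry:
{
   class: window,
   label: "!Order Entry - Record [0-9]+"
}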

Monday, July 28, 2008

HTML Validators

RealValidator - Shareware HTML validator based on an SGML parser, by Liam Quinn. Unicode-enabled, supports documents in virtually any language; supports XHTML 1.0, HTML 4.01, HTML 4.0, HTML 3.2, HTML 3.0, and HTML 2.0; extensible - add proprietary HTML DTDs or change the existing ones; fetches external DTDs by HTTP and caches them for faster validation; HTML 3.2 and HTML 4.0 references included as HTML Help. For Windows.

CSE 3310 HTML Validator - HTML syntax checker for Windows from AI Internet Solutions. Supports a wide variety of standards; accessibility (508) checking; uppercase/lowercase converter. Free 'lite' version available.

Link Checking Tools

SiteAnalysis - Hosted service from Webmetrics, used to test and validate critical website components, such as internal and external links, domain names, DNS servers and SSL certificates. Runs as often as every hour, or as infrequently as once a week. Ideal for dynamic sites requiring frequent link checking.

HiSoftware Link Validation Utility - Link validation tool; available as part of the AccVerify Product Line.

ChangeAgent - Link checking and repair tool from Expandable Language. Identifies orphan files and broken links when browsing files; employs a simple, familiar interface for managing files; previews files when fixing broken links and before orphan removal; updates links to moved and renamed files; fixes broken links with an easy, 3-click process; provides multiple-level undo/redo for all operations; replaces links but does not reformat or restructure HTML code. For Windows.

Link Checker Pro - Link check tool from KyoSoft; can also produce a graphical site map of entire web site. Handles HTTP, HTTPS, and FTP protocols; several report formats available. For Windows platforms.

Web Link Validator - Link checker from REL Software checks links for accuracy and availability, finds broken links or paths and links with syntactic errors. Export to text, HTML, CSV, RTF, Excel. Freeware 'REL Link Checker Lite' version available for small sites. For Windows.

Site Audit - Low-cost on-the-web link-checking service from Blossom Software.

Xenu's Link Sleuth - Freeware link checker by Tilman Hausherr; supports SSL websites; partial testing of ftp and gopher sites; detects and reports redirected URL; Site Map; for Windows.

Linkalarm - Low cost on-the-web link checker from Link Alarm Inc.; free trial period available. Automatically-scheduled reporting by e-mail.

Alert Linkrunner - Link check tool from Viable Software Alternatives; evaluation version available. For Windows.

InfoLink - Link checker program from BiggByte Software; can be automatically scheduled; includes FTP link checking; multiple page list and site list capabilities; customizable reports; changed-link checking; results can be exported to database. For Windows. Discontinued, but old versions still available as freeware.

LinkScan - Electronic Software Publishing Co.'s link checker/site mapping tool; capabilities include automated retesting of problem links, randomized order checking; can check for bad links due to specified problems such as server-not-found, unauthorized-access, doc-not-found, relocations, timeouts. Includes capabilities for central management of large multiple intranet/internet sites. Results stored in database, allowing for customizable queries and reports. Validates hyperlinks for all major protocols; HTML syntax error checking. For all UNIX flavors, Windows, Mac.

CyberSpyder Link Test - Shareware link checker by Aman Software; capabilities include specified URL exclusions, ID/Password entries, test resumption at interruption point, page size analysis, 'what's new' reporting. For Windows.

Java Test Tools

EMMA - Open-source toolkit, written in pure Java, for measuring and reporting Java code coverage. Targets support for large-scale enterprise software development while keeping individual developer's work fast and iterative. Can instrument classes for coverage either offline or on the fly (using an instrumenting application classloader); supported coverage types: class, method, line, basic block; can detect when a single source code line is covered only partially; coverage stats are aggregated at method, class, package, and "all classes" levels. Reports support drill-down, to user-controlled detail depth; HTML reports support source code linking. Does not require access to the source code; can instrument individual .class files or entire .jars (in place, if desired). Runtime overhead of added instrumentation is small (5-20%); memory overhead is a few hundred bytes per Java class.

PMD - Open source static analyzer that scans Java source for problems. Capabilities include scanning for: empty try/catch/finally/switch statements; dead code - unused local variables, parameters and private methods; suboptimal code - wasteful String/StringBuffer usage; overcomplicated expressions - unnecessary if statements, for loops that could be while loops; duplicate code - copied/pasted code, which could indicate copied/pasted bugs.

Hammurapi - Code review tool for Java (and other languages with latest version) released under the GNU Lesser General Public License. Utilizes a rules engine to infer violations in source code. Doesn't fail on source files with errors, or if some inspectors throw exceptions. Parts of tool can be independently extended or replaced. Can review sources in multiple programming languages, perform cross-language inspections, and generate a consolidated report.

TestNG - A testing framework inspired by JUnit and NUnit; supports JDK 5 annotations, data-driven testing (with @DataProvider), parameters, distribution of tests on slave machines, plug-ins (Eclipse, IDEA, Maven, etc.); embeds BeanShell for further flexibility; uses default JDK functions for runtime and logging (no dependencies).

Concordion - An open source testing framework for Java developed by David Peterson. Requirements are written in plain English using paragraphs, tables and proper punctuation in HTML. Developers instrument the concrete examples in each specification with commands (e.g. "set", "execute", "assertEquals") that allow test scenarios to be checked against the system under test. The instrumentation is invisible to a browser, but is processed by a Java fixture class that accompanies the specification. The fixture is also a JUnit test case. Results are exported with the usual green and red indicating successes and failures. Site includes information on similarities to and differences from FitNesse.

DBUnit - Open source JUnit extension (also usable with Ant) targeted for database-driven projects that, among other things, puts a database into a known state between test runs. Enables avoidance of problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage. Has the ability to export and import database data to and from XML datasets. Can work with very large datasets when used in streaming mode, and can help verify that database data matches expected sets of values.

StrutsTestCase - Open source extension of the standard JUnit TestCase class that provides facilities for testing code based on the Struts framework, including validation methods. Provides both a Mock Object approach and a Cactus approach to actually run the Struts ActionServlet, allowing testing of Struts code with or without a running servlet engine. Uses the ActionServlet controller to test code, enabling testing of the implementation of Action objects, as well as mappings, form beans, and forwards declarations.

DDSteps - A JUnit extension for building data driven test cases. Enables user to parameterize test cases, and run them more than once using different data. Uses external test data in Excel which is injected into test cases using standard JavaBeans properties. Test cases run once for each row of data, so adding new tests is just a matter of adding a row of data in Excel.

JKool - A lightweight performance measurement and monitoring tool from Nastel Inc. for live J2EE, Web and Web service-based applications. It provides timing information for web sessions, including JSP/servlets, JDBC, JMS and Java method calls, to measure performance, detect bottlenecks and failures. Probes include a Web probe (JSP, Servlets), a Java probe (Byte Code Instrumentation), a JMS probe, and a JDBC probe.

JavaNCSS - A free Source Measurement Suite for Java by Clemens Lee. A simple command line utility which collects various source code metrics for Java. The metrics are collected globally, for each class and/or for each function.

Open Source Profilers for Java - A listing (from 2006) of about 25 open source code profilers for Java, from the Manageability.org web site.

SofCheck Inspector - Tool from SofCheck Inc. for analysis of Java for logic flaws and vulnerabilities. Explores all possible paths in byte code and detects flaws and vulnerabilities in areas such as: array index out of bounds, buffer overflows, race conditions, null pointer dereference, dead code, etc. Provides 100% path coverage and can report on values required for 100% unit test coverage. Patented precondition, postcondition and presumption reporting can help detect malware code insertion.

WindowTester - Test automation tool from Instantiations Inc. for Swing or SWT UIs. Quickly and easily record GUI tests by interacting with the application as you normally would; WindowTester watches your actions and generates a test case automatically. Generated tests are pure Java and easily customized using the power available in the Java language. Provides a rich GUI Test Library, hiding the complexities and threading issues of GUI test execution; test cases are based on the JUnit standard.

Squish for Java - Automated Java GUI testing tool for Java Swing, AWT, SWT and RCP/Eclipse applications. Record or create/modify scripts using Tcl, Python, JavaScript. Automatic identification of GUI objects of the AUT; inspect the AUT's objects, properties and methods at runtime using the Squish Spy. Can be run via a GUI front-end or via command line tools. Can execute tests in a debugger, allowing setting breakpoints and stepping through test scripts.

Klocwork K7 - Static analysis technology for Java, C, C++; analyzes defects & security vulnerabilities, architecture & header file anomalies, metrics. Developers can run Klocwork in Eclipse or various other IDEs. Users can select the scope of reporting as needed by selecting software component, defect type, and defect state/status.

Coverity Prevent - Tool from Coverity Inc. for analysis of Java source code for security issues. Explores all possible paths in source code and detects security vulnerabilities and defects in multiple areas: memory leaks, memory corruption, illegal pointer accesses, buffer overruns, format string errors, SQL injection vulnerabilities, multi-threaded programming concurrency errors, etc.

GUIDancer - Eclipse-based tool from Bredex GmbH for automated testing of Java/Swing GUIs. Tests are specified, not programmed - no code or script is produced. Test specification is initially separate from the AUT, allowing test creation before the software is fully functional or available. Specification occurs interactively; components and actions are selected from menus, or by working with the AUT in an advanced "observation mode". Test results and errors are viewable in a results view and can be saved as an html or xml file.

CMTJava - Complexity measurement tool from Verifysoft GmbH. Includes McCabe cyclomatic complexity, lines-of-code metrics, Halstead metrics, maintainability index.

JavaCov - A J2SE/J2EE Coverage testing tool from Alvicom; specializes in testing to MC/DC (Modified Condition/Decision Coverage) depth. Capabilities include: Eclipse plugin; report generation into HTML and XML; Apache Ant integration and support for test automation.

Jameleon - Open source automated testing harness for acceptance-level and integration testing, written in Java. Separates applications into features and allows those features to be tied together independently, in XML, creating self-documenting automated test cases. These test-cases can then be data-driven and executed against different environments. Easily extensible via plug-ins; includes support for web applications and database testing.

Agitator - Automated Java unit testing tool from Agitar Software. Creates instances of classes being exercised, calling each method with selected, dynamically created sets of input data, and analyzing results. Stores all information in XML files; works with Eclipse and a variety of IDEs.

PMD - Open source tool scans Java code for potential bugs, dead code, duplicate code, etc. - works with a variety of configurable and modifiable rulesets. Integrates with a wide variety of IDEs.

JLint - Open source static analysis tool will check Java code and find bugs, inconsistencies and synchronization problems by doing data flow analysis and building the lock graph.

Lint4j - A static Java source and byte code analyzer that detects locking and threading issues, performance and scalability problems, and checks complex contracts such as Java serialization by performing type, data flow, and lock graph analysis. Eclipse, Ant and Maven plugins available.

FindBugs - Open source static analysis tool to inspect Java bytecode for occurrences of bug patterns, such as difficult language features, misunderstood API methods, misunderstood invariants when code is modified during maintenance, and garden variety mistakes such as typos or use of the wrong boolean. May report some false warnings, though generally fewer than 50%.

CheckStyle - Open source tool for checking code layout issues, class design problems, duplicate code, bug patterns, and much more.

Java Development Tools - Java coverage, metrics, profiler, and clone detection tools from Semantic Designs.

AppPerfect Test Studio - Suite of testing, tuning, and monitoring products for Java development from AppPerfect Corp. Includes: Unit Tester, Code Analyzer, Java/J2EE Profiler and other modules.

GJTester - Java unit, regression, and contract (black box) test tool from TreborSoft. Enables test case and test script development without programming. Tests private and protected functions and server application modules without implementing test clients; supports regression testing for Java VM upgrades. Useful for testing CORBA, RMI, and other server technologies as well. GUI interface emphasizing ease of use.

QFTest - A cross-platform system and load testing tool from Quality First Software with support for Java GUI test automation (Swing, Eclipse/SWT, Webstart, Applets, ULC). Includes small-scale test management capabilities, capture/replay mechanism, intuitive user interface and extensive documentation, reliable component recognition that can handle complex and custom GUI objects, an integrated test debugger and customizable reporting.

Cactus - A simple open-source test framework for unit testing server-side Java code (Servlets, EJBs, Tag Libs, Filters, etc.). Intent is to allow fine-grained continuous testing of all files making up an application: source code but also meta-data files (such as deployment descriptors) through an in-container approach. It uses JUnit and extends it. Typically used within your IDE, or from the command line using Ant. From the Apache Software Foundation.

JUnitPerf - Allows performance testing to be dynamically added to existing JUnit tests. Enables quick composition of a performance test suite, which can then be run automatically and independent of other JUnit tests. Intended for use where there are performance/scalability requirements that need re-checking while refactoring code. By Mike Clark/Clarkware Consulting, licensed under the BSD License.

Koalog Code Coverage - Code coverage analyzer for Java applications from Koalog SARL. Includes: in-process or remote coverage computation, capability of working directly on Java method binaries (no recompilation), predefined (XML, HTML, LaTeX, CSV, TEXT) or custom report generation, and session merging to allow compilation of overall results for distinct executions. Integrates with Ant and JUnit.

Abbot Java GUI Test Framework - Testing framework by Timothy Wall provides automated event generation and validation of Java GUI components, improving upon the very basic functions provided by the java.awt.Robot class (Abbot = "A Better 'Bot'). The framework may be invoked directly from Java code or accessed without programming through the use of scripts via 'Costello', a script editor/recorder. Suitable for use both by developers for unit tests and QA for functional testing. Free - available under the GNU Lesser General Public License.

JUnit - Framework to write repeatable Java unit tests - a regression testing framework written by Erich Gamma and Kent Beck. For use by developers implementing unit tests in Java. Free Open Source Software released under the IBM Public License and hosted on SourceForge. Site includes a large collection of extensions and documentation.

jfcUnit - Framework for developing automated testing of Java Swing-based applications at the UI layer (as opposed to testing at lower layers, for which JUnit may be sufficient). Provides recording and playback capabilities. Also available as plugins for JBuilder and Eclipse. Free Open Source Software from SourceForge site.

JBench - Freeware Java benchmarking framework to compare algorithms, virtual machines, etc. for speed. Available as binary distribution (including documentation), source distribution, or jar file.

Clover - Code coverage tool for Java from Cenqua. Fully integrated plugin for NetBeans, JBuilder, and other IDEs. Seamless integration with projects using Apache ANT. View coverage data in XML, HTML, PDF, or via a Swing GUI.

JCover - Java code test coverage analysis tool from Codework Limited. Works with source or compiled files. Gathers coverage measures of branches, statements, methods, classes, file, package and produces reports in multiple formats. Coverage difference comparison between runs. Coverage API provided.

Structure101 - Java source code visualization tool from Headway Software. Lets user understand, measure, and control architecture, design, composition, and dependencies of code base. Analyzes byte code and shows all dependencies, at all levels and between all levels; method, class, package, application. Measures code complexity using a measurement framework called XS. For Windows, Linux and Mac OS X.

Java Tool Suite from Man Machine Systems - Includes JStyle, a Java source analyzer to generate code comments and metrics such as inheritance depth, Cyclomatic Number, Halstead Measures, etc; JPretty reformats Java code according to specified options; JCover test coverage analyzer; JVerify Java class/API testing tool uses an invasive testing model allowing access to internals of Java objects from within a test script and utilizes a proprietary OO scripting language; JMSAssert, a tool and technique for writing reliable software; JEvolve, an intelligent Java code evolution analyzer that automatically analyzes multiple versions of a Java program and shows how various classes have evolved across versions; can 'reason' about selective need for regression testing Java classes; JBrowser class browser; JSynTest, a syntax testing tool that automatically builds a Java-based test data generator.

JProbe Suite - Collection of Java debugging tools from Quest Software; includes JProbe Profiler and JProbe Memory Debugger for finding performance bottlenecks and memory leaks, JProbe Coverage code coverage tool, and JProbe Threadalyzer for finding deadlocks, stalls, and race conditions. Freeware version available.

Krakatau Professional for Java - Software metrics tool from Power Software includes more than 70 OO, procedural, complexity, and size metrics related to reusability, maintainability, testability, and clarity. Includes Cyclomatic Complexity, Enhanced Cyclomatic Complexity, Halstead Software Science metrics, LOC metrics and MOOD metrics. Has online advisor for quality improvement.

OptimizeIt - Profiler, thread debugger, and code coverage tool suite from Borland (formerly from VMGear).

Jtest - ParaSoft's Jtest is an integrated, automatic unit testing and standards compliance tool for Java. It automatically generates and executes JUnit tests, checks whether code follows 400 coding standards, and can automatically correct many violations.

DevPartner Java Edition - Compuware's (formerly NuMega) debugging/productivity tool to detect and diagnose Java bugs and memory and performance problems; thread and event analysis, coverage analysis. Integrates with several Java IDEs.

VTune - Intel's performance tuning tool for applications running on Intel processors; includes Java support. Includes suggestions for optimization techniques.

TCAT for Java - Part of Software Research's TestWorks suite of test tools; code coverage analyzer and code analysis for Java; written in Java.

Open Source code analyzers listing - A listing of open source Java code analysis tools written in Java.

Open Source code coverage tools listing - A listing of open source Java code coverage tools written in Java.

Open Source Java test tools listing - A listing of open source tools and frameworks for Java testing, written in Java.

Open Source web test tools listing - A listing of open source web test tools and frameworks written in Java.

(Note: some other tools in these listings also handle testing, management, or load testing of java applets, servlets, and applications, or are planning to add such capabilities. Check listed web sites for current information.)