Product overview

Performance Tester divides performance testing into five interrelated task categories: test creation, test editing, workload emulation with schedules, schedule execution, and evaluation of results.

The performance testing of an application begins with preliminary answers to two related questions:

- How will the application be used? That is, what categories of usage (for example, registering or shopping) make up its workload, and in what proportions?
- What individual user tasks, or transactions, occur within each of those usage categories?

Based on the answer to the second question, you perform each of these user tasks while Performance Tester records the transactions from your browser and generates performance tests from the recordings. Test recording and related tasks are explained in Creating tests > Recording tests.
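
Conceptually, a generated test is an ordered list of the recorded transactions that can be played back. The following sketch, in Python, illustrates only that idea; the URLs and the replay logic are invented for the example and do not represent the format of a test that Performance Tester generates.

```python
# Conceptual sketch: a recorded test as an ordered list of HTTP transactions
# that can be replayed. (Illustrative only; the URLs and replay logic are
# assumptions, not Performance Tester's generated test format.)

recorded_test = [
    {"method": "GET",  "url": "/store/home"},
    {"method": "POST", "url": "/store/search", "data": {"q": "Doe, John"}},
    {"method": "GET",  "url": "/store/results"},
]

def replay(test):
    # Print each transaction in order; a real test run would reissue the requests.
    for step in test:
        print(step["method"], step["url"], step.get("data", ""))

replay(recorded_test)
```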

Based on the answer to the first question, you create a Performance Test schedule, create a user group in the schedule for each of the application usage categories (registering, shopping, etc.), and add appropriate tests to each group to emulate that usage category. Workload emulation through schedules is explained in Representing workloads.
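
The following sketch illustrates the shape of such a workload. It is a conceptual illustration only, not the Performance Tester schedule format; the group names, percentages, and test names are invented for the example.

```python
# Conceptual sketch of a workload schedule: user groups, each with a share of
# the total virtual users and the tests that emulate that usage category.
# (Illustrative only; names, percentages, and test names are assumptions.)

schedule = {
    "total_virtual_users": 100,
    "user_groups": [
        {"name": "Browsers",    "percentage": 70, "tests": ["browse_catalog"]},
        {"name": "Shoppers",    "percentage": 20, "tests": ["browse_catalog", "add_to_cart", "check_out"]},
        {"name": "Registrants", "percentage": 10, "tests": ["register_account"]},
    ],
}

for group in schedule["user_groups"]:
    users = schedule["total_virtual_users"] * group["percentage"] // 100
    print(f'{group["name"]}: {users} virtual users run {group["tests"]}')
```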

After creating your tests, you will probably want to run them individually and inspect the results to make sure that they do what you expect; it is likely that you will then want to make some changes to your tests. The test editing tasks are explained in Editing tests. Running tests or schedules is explained in Running schedules. Evaluating the results of a test or schedule run is explained in Evaluating results.

Perhaps the most common change that you will want to make to a recorded test is to substitute recorded test values with variable test data. For example, in a test designed to test the performance of an employee database search function, you might have searched for "Doe, John." If you run hundreds of instances (Performance Tester calls these virtual users) of this test without modifying it, each virtual user searches for the same employee. To produce a more realistic test, you can substitute values in a recorded test with values contained in datapools. If you modify the employee database search test to use a datapool containing employee names and then run the test, each virtual user searches for a different employee. The section Editing tests > Providing tests with variable data explains datapool substitution. Creating test data explains datapool creation and editing.
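
The following sketch illustrates the idea of datapool substitution outside the product. The employee names and the in-memory datapool are invented for the example and stand in for a datapool that you would create and edit as described in Creating test data.

```python
import itertools

# Conceptual sketch of datapool substitution: instead of every virtual user
# searching for the recorded value "Doe, John", each simulated user draws a
# different name from a datapool. (Illustrative only; the names and the
# in-memory list stand in for a datapool created in the product.)

datapool = ["Doe, John", "Rivera, Ana", "Okafor, Chidi", "Larsen, Mia"]

def run_search_test(employee_name):
    # Stand-in for the recorded search request, now parameterized.
    print(f"Searching employee database for: {employee_name}")

# Emulate five virtual users, each taking the next row from the datapool.
for virtual_user, name in zip(range(5), itertools.cycle(datapool)):
    run_search_test(name)
```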

Tests generated by Performance Tester provide automated data correlation (the correlated values are sometimes referred to as dynamic data). To illustrate this concept with the employee database search example: if you substitute an employee name in a recorded test with employee names contained in a datapool, then on playback each search request returns information appropriate to the named employee. Without data correlation, the same data would be returned for every employee. Although data correlation is automated in this example, it cannot be automated in every situation. Data correlation, including how to manually correlate test values, is explained in Editing tests > Correlating response and request data in a test.
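
The following sketch illustrates the idea of correlating a value returned in one response with a subsequent request. It is a hand-written illustration only, not the automated correlation that Performance Tester performs; the response format and the regular expression are invented for the example.

```python
import re

# Conceptual sketch of data correlation: a value returned by one response
# (here, an employee ID) is extracted and fed into the next request, so the
# follow-up request matches the employee actually returned.
# (Illustrative only; the response format and field names are assumptions.)

def search_response(employee_name):
    # Stand-in for the server's response to the search request.
    fake_id = abs(hash(employee_name)) % 10000
    return f'<employee id="{fake_id}" name="{employee_name}"/>'

def details_request(employee_id):
    # Stand-in for the follow-up request that must use the correlated ID.
    return f"GET /employees/{employee_id}/details"

response = search_response("Doe, Jane")
employee_id = re.search(r'id="(\d+)"', response).group(1)   # correlate the value
print(details_request(employee_id))
```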

A third common test modification is to enable verification points, so that the test results show whether an expected behavior occurred. Verification points are explained in Editing tests > Adding verification points to a test.
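
The following sketch illustrates the effect of a verification point as an assertion on a response. The status code and expected text are invented for the example and do not represent how verification points are defined in the product.

```python
# Conceptual sketch of a verification point: after a simulated request, the
# test checks that the response shows the expected behavior and records a
# pass or fail verdict. (Illustrative only; the expected values are assumptions.)

def verify(response, expected_status=200, expected_text="Search results"):
    status_ok = response["status"] == expected_status
    text_ok = expected_text in response["body"]
    verdict = "PASS" if (status_ok and text_ok) else "FAIL"
    print(f"Verification point: {verdict}")
    return status_ok and text_ok

# Example response that a search request might return.
verify({"status": 200, "body": "<html>Search results for Doe, John</html>"})
```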

Parent topic: Product introduction

(C) Copyright IBM Corporation 2005. All Rights Reserved.