You represent a workload by creating a schedule and adding user groups, tests, and other elements to it.
Schedule overview
A schedule can be as simple as one virtual user running one test, or as complicated as hundreds of virtual users in different groups, each running different tests at different times.

Creating a schedule
Schedules let you accurately emulate the actions of individual users.

User group overview
User groups enable you to group tests in a logical order.

Adding elements to a schedule
A schedule needs only one user group and one test to run. However, to accurately represent a workload, you should add other elements.

Running tests at a set rate
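The rate-controlled loop that this topic describes can be sketched conceptually. The Python below is illustrative only; the function and parameter names are assumptions for the sketch, not the product's API. Each pass runs the loop's child tests in order, then sleeps off whatever remains of the pacing interval so that iterations occur at the set rate.

```python
import time

def run_at_set_rate(tests, iterations, rate_per_second):
    """Run the child tests of a loop at a fixed iteration rate.

    All names and parameters here are illustrative, not product settings.
    """
    interval = 1.0 / rate_per_second          # target time per iteration
    results = []
    for _ in range(iterations):
        start = time.monotonic()
        for test in tests:                    # children of the loop run in order
            results.append(test())
        remaining = interval - (time.monotonic() - start)
        if remaining > 0:                     # pause to hold the set rate
            time.sleep(remaining)
    return results
```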
To run a test at a set rate, you add a loop to control the iteration rate, and then add tests to the loop. The tests, which are children of the loop, are controlled by the loop.

Running tests in random order
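The random selection named above can be sketched as a weighted random choice over a set of tests. The Python below is purely illustrative (names and parameters are assumptions, not the product's API): each repetition picks one test according to its relative weight and runs it.

```python
import random

def random_selector(tests, weights, repetitions, seed=None):
    """Run `repetitions` randomly chosen tests, honoring relative weights.

    Illustrative only; the product configures this in the schedule editor.
    """
    rng = random.Random(seed)                 # seed for a reproducible demo
    picks = rng.choices(range(len(tests)), weights=weights, k=repetitions)
    return [tests[i]() for i in picks]
```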
A schedule that contains only user groups and tests will run each test in a user group sequentially. Adding a random selector lets you repeat a series of tests in random order, thus emulating the varied actions of real users.

Setting the number of users that start a run
Sets the initial number of users in a run. You can increase this number after the run starts.

Starting users at different times
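Staggering user start, as this topic describes, amounts to delaying each new virtual user's launch rather than launching everyone at once. The sketch below is illustrative only (names are assumptions, not the product's implementation): each user runs on its own thread, with a fixed delay between consecutive starts.

```python
import threading
import time

def staggered_start(user_actions, delay_between_users):
    """Start each virtual user `delay_between_users` seconds after the
    previous one, rather than all at once. Illustrative sketch only.
    """
    threads = []
    for action in user_actions:
        t = threading.Thread(target=action)
        t.start()                             # this user begins its workload
        threads.append(t)
        time.sleep(delay_between_users)       # stagger the next user's start
    for t in threads:
        t.join()                              # wait for all users to finish
```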
To avoid overloading the system (and causing connection timeouts), you can stagger the start of users rather than starting them all at once.

Running a user group at a remote location
Lets a user group run on a remote computer rather than on your local computer. We recommend that you run user groups at remote locations so that your workbench activity does not affect the ability to apply load.

Setting think time behavior
Lets you increase, decrease, or randomize think time, or play it back exactly as recorded.

Limiting think times to a maximum value
Lets you supply a maximum value for the think times of virtual users.

Setting the execution history collected during a run
The execution history is a single file that shows all events that occur during a schedule or test run. The level of history that you set determines whether you receive individual response time statistics for Percentile reports and information on verification points.

Setting the problem determination level
Lets you set the level of information logged during a run. By default, only warnings and severe errors are logged. Typically, you change this log level only when requested to do so by IBM Software Support.

Setting the statistics displayed during a run
Lets you set the type of data that you see during a run, the sampling rate for that data, and whether data is collected from all users or from a representative sample.

Creating WebSphere Studio Application Monitor reports
If you have WebSphere Studio Application Monitor (WSAM) installed, you can create and view WSAM reports.