The execution history is a single file
that shows all events that occur during a schedule or test run. The level
of history that you set determines whether you receive individual response
time statistics for Percentile reports and information on verification points.
You can set the history level and also whether the history is collected
from all users or from a sampling of users. Setting a sampling rate helps
decrease your log size. To set the execution history and the sampling rate:
- In the Test Navigator, expand the project until you locate the
schedule.
- Right-click the schedule, and then click Open.
- In the Schedule Contents section, click the name of the schedule
and scroll down to the Execution History section.
- Set Execution history log level to one of
the following:
| Option | Description |
| --- | --- |
| None | Collects no execution history events. |
| Schedule | Collects events that correspond to actions executed in the schedule. |
| Page | Typically, you set history at the Page level, which collects schedule items as well as page start and stop events. To produce a Percentile report, or to see any Page Title verification points that you have set, set the execution history at this level of detail or greater. |
| Request | Collects page information plus request-level events. To collect information about Response Code or Response Size verification points that you have set, set the execution history at this level of detail or greater. |
| All | Collects request information plus the actual request and response data. This option produces a large history file, especially if your tests are long or you are running a large number of users. To prevent the history file from growing too large, set a sampling rate rather than collecting all information from all users. |
- To set a sampling rate, select Only sample execution
history from a subset of users. The number or the percentage
that you select is applied to each user group. If you are running user groups
at remote locations, the number or percentage that you select is distributed
evenly among the remote locations.
| Option | Description |
| --- | --- |
| Fixed number of users | The number is applied to each user group. Assume that your schedule contains two user groups: one with 4 users and one with 1000 users. If you sample 2 users, two users are sampled from each group. |
| Percentage of users | The percentage is applied to each user group, but at least one user is sampled from each group. Assume that your schedule contains two user groups: one with 4 users and one with 1000 users. If your sampling rate is 10%, one user is sampled from the first group and 100 users are sampled from the second group. If your sampling rate is 25%, one user is sampled from the first group and 250 users are sampled from the second group. |
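Assuming the rounding behavior implied by the examples above (a fixed count cannot exceed the group size, and a percentage is rounded down but never yields fewer than one user per group), the per-group sampling can be sketched as follows. This is an illustrative sketch, not the product's actual algorithm:

```python
def sampled_users(group_sizes, *, fixed=None, percent=None):
    """Return how many users are sampled from each user group.

    Illustrative only: assumes a fixed count is capped at the group
    size, and a percentage is truncated but never drops below one
    user per group.
    """
    if fixed is not None:
        # A fixed number is applied to each group.
        return [min(fixed, n) for n in group_sizes]
    # A percentage is applied per group, with a minimum of one user.
    return [max(1, int(n * percent / 100)) for n in group_sizes]

# The examples from the table: groups of 4 and 1000 users.
print(sampled_users([4, 1000], fixed=2))      # [2, 2]
print(sampled_users([4, 1000], percent=10))   # [1, 100]
print(sampled_users([4, 1000], percent=25))   # [1, 250]
```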
The following information is collected at the
Schedule level:
- The test verdict. The verdict can be one of the following:
- Pass indicates that the verification point
matched or received the expected response. For example, a response code verification
point is set to pass when the recorded response code is received during playback.
If your test does not contain verification points, a Pass verdict means that
the connection succeeded.
- Fail indicates that the verification point
did not match the expected response or that the expected response was not
received.
- Error indicates that the primary request
was not successfully sent to the server, no response was received from the
server, or the response was incomplete or unparsable.
- The start and stop time of the schedule, each user group, each virtual
user, and each test.
- If you have set loops, the start and stop time of each loop, and the number
of iterations of each loop.
- If you have set selectors, the start and stop time of each selector.
The following additional information is collected at the Page level:
- The page verdict. You see a page verdict only if a connection problem
occurs or if you have set verification points. Any failures or errors are
rolled up to the test verdict level.
- The start and stop time of each page.
- If you have set loops within a page, the start and stop time of each loop,
and the number of iterations of each loop.
- The length of each think time.
- If you have set page-level transactions in your test, the start and stop
time of each transaction, and the duration of each transaction.
The following additional information is collected at the Request level:
- The time that the first byte and last byte were sent.
- The time that the first byte and last byte were received.
- The character set of the response data.
- Expected and actual values of page element verification points that you
have defined.
- If you have set request-level transactions in your test, the start and
stop time of each transaction, and the duration of each transaction.
The following additional information is collected at the
All level:
- The actual request data sent to the server.
- The actual response data received from the server.
You can export the statistics into a CSV file for further analysis.
To do so, click , and select Test Execution History.
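Once exported, the CSV file can be loaded into any analysis tool. A minimal Python sketch using the standard library; the column names below are invented placeholders, since the actual columns depend on the statistics you export:

```python
import csv
import io

# Placeholder data standing in for an exported execution-history CSV;
# the real file would be opened with open(path, newline="").
sample = "Page,Response Time\nLogin,120\nCheckout,340\n"

rows = list(csv.DictReader(io.StringIO(sample)))
avg = sum(int(r["Response Time"]) for r in rows) / len(rows)
print(f"{len(rows)} rows, average response time {avg} ms")
```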