Launch the benchmark wizard as described in Starting the benchmark wizard, and choose Configure a new benchmark. The
General Settings screen is displayed.
In the General Settings screen, you configure the basic properties of the
benchmark definition.
Figure 2. General Settings screen

- Type a value in the Benchmark Name field.
- Select the desired value from the Benchmark type list.
- Type a number in the Time to run field to specify the number of
minutes the benchmark executes.
- Type a number in the Number of samples field to specify how
many samples should be taken during the course of the benchmark's
execution.
- Type a number in the Stabilization time field to specify the
number of minutes the sample provider waits before it begins to take
samples.
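As a worked example (assuming the sample provider spreads its samples evenly
over the portion of the run that follows stabilization): with a Time to run
of 30 minutes, a Stabilization time of 5 minutes, and a Number of samples of
10, sampling begins at minute 5 and one sample is taken roughly every
(30 - 5) / 10 = 2.5 minutes.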
- (optional) If you want the results of the
benchmark execution to be written to a file, enter the path and name of a file
in the Output File field, or click the browse button to the right
of the field to navigate to a file. If the field is left blank, then
the results are written to the logging destination of WebSphere InterChange
Server.
- Click Next after specifying the proper values to advance to the
next screen.
In the Benchmark Components screen, you add components as participants to
the benchmark definition and configure their behavior during the
benchmark's execution.
Figure 3. Benchmark Components screen

- In the Select and configure pane, right-click and select
Add. In the new row that is inserted, click the empty
component box, then click the down arrow. Select a component from
the list.
If you add a component that depends on other components, then those other
components are automatically added as well; for instance, if you add a
collaboration object, then any connectors, grouped collaboration objects, or
access clients that it depends on are automatically added. The
wizard does not allow you to add components that are not valid participants
for the benchmark type; for instance, you cannot add a map definition to
the Benchmark Components screen.
When components are added to the pane, the Components column
lists the names of the components and the Type column lists the
type of component.
- Select the Work generator check box to specify a component as a
workload generator in the benchmark. Table 2 lists the valid workload generators for each type of
benchmark. Typically these are the clients that are the source of
business objects in the benchmark setup--for instance, the source
connector for a collaboration throughput benchmark, or the access client for
an access throughput benchmark.
- (optional) Type a value in the Application
response time column for a component to specify the number of
milliseconds it waits before replying to a service call request. This
value can be used to simulate anticipated application latency.
It is recommended that you perform tests with the assistance of application
experts at the site to determine the average latency for the application to
respond to business object requests that are sent to it by the
connector. Use that average value for the simulated latency to obtain a
more realistic set of numbers while still benefiting from the simplified
setup of simulated connectors. (Both simulation columns are illustrated in
the sketch that follows the next item.)
- (optional) Type a value between 1 and 100 in
the Consume success rate column for a component to specify the
percentage of requests it should process successfully. This value can
be used to simulate the average ratio of successful flows to failed
flows. The default value is 100, which means that the simulated
connector responds with success for 100 percent of the business object
requests it processes (provided that the flow does not fail for
other reasons, such as mapping problems).
Failed flows sometimes involve more collaboration processing than
successful ones do, depending on the business requirements. Many
collaborations are designed with logic that responds to an initial failure by
resending the business object with a different verb (this logic is typically
identifiable by use of the CONVERT_CREATE and CONVERT_UPDATE
properties). Other collaborations have error-handling routines that
perform transactional or administrative actions in response to a
failure. These execution paths affect performance, so performing a
benchmark that accurately simulates them is important.
It is recommended that you perform some tests with the assistance of the IBM
WebSphere InterChange Server development team to determine the average
percentage of successfully processed flows. Then specify that average
value for the consume success rate to determine the impact of failures on
throughput.
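To make the two simulation columns concrete, the following is a minimal
sketch of how a simulated connector might apply an Application response time
and a Consume success rate together; the class and method names are
hypothetical and are not part of the product API.

    import java.util.Random;

    // Hypothetical sketch only: applies the simulated application latency and
    // the consume success rate described in the two columns above.
    public class SimulatedConnector {
        private final long responseTimeMs; // Application response time column
        private final int successRate;     // Consume success rate column (1-100)
        private final Random random = new Random();

        public SimulatedConnector(long responseTimeMs, int successRate) {
            this.responseTimeMs = responseTimeMs;
            this.successRate = successRate;
        }

        // Returns true if this request is treated as a successful flow.
        public boolean handleServiceCallRequest() throws InterruptedException {
            Thread.sleep(responseTimeMs);             // simulate application latency
            return random.nextInt(100) < successRate; // for example, 90 -> ~90% succeed
        }
    }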
- (optional) Type a value in the Number of
objects per poll column for a connector component to specify the number
of events that it picks up with each poll call. This value can be used
to simulate the common connector-specific capability of polling multiple
objects with each poll call.
- (optional) Type a value in the Poll
frequency column for a connector component to specify the number of
milliseconds between poll calls. This value can be used to simulate the
standard connector capability of polling with variable frequency.
It is recommended that you perform some initial tests with the connector
that is being simulated to determine a good initial set of values for the
Number of objects per poll column and the Poll frequency
column. The behaviors of these common capabilities are closely related
and they affect throughput. Perform a benchmark with these values, then
modify them and perform the benchmark again. By testing a number of
combinations you can determine which combination provides the optimal
throughput.
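As a rough worked example (assuming every poll returns a full batch): a
Poll frequency of 1000 milliseconds combined with a Number of objects per
poll of 5 gives a theoretical maximum input rate of 5 events per second, or
300 per minute; halving the poll frequency to 500 milliseconds doubles that
ceiling to 10 events per second.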
- (optional) Type the path and name of a file in
the Input file column for a component that has been marked as a
workload generator. The file must contain sample data for the workload
generator and be in the standard IBM WebSphere InterChange Server business
object format (that is, the one in which business objects are written out
during system tracing operations, or are saved from the Test Connector
tool).
An input file of sample data can be produced by choosing Generate
workload to a file in the Action screen of the benchmark wizard;
this option is discussed in the section "Generating workload to a file".
Note: The input file must reside on the same computer where the connector agent
runs; if the connector agent is distributed on a computer other than the
one where WebSphere InterChange Server runs, then the input file must be
distributed with the agent.
- Click Next.
In the Object Properties screen, you configure business objects for the
benchmark execution.
Figure 4. Object Properties screen

- In the Select and configure pane, right-click and select
Add.
- In the Business Object column, click to select a business
object definition for the benchmark.
- In the Component column, select a benchmark participant that is
associated with the business object definition.
- In the Verb column, select the verb with which you want the
sample business objects to be submitted.
- (optional) In the Size column, type
the size in bytes you want for the sample business objects.
It is recommended that you perform tests with the assistance of the
application experts and the IBM WebSphere InterChange Server development team
at the site to determine the average size of business objects for the
transaction. The recommended procedure for doing this is:
- Set the AgentTraceLevel property of the connector to a level at
which it outputs the entire contents of the business objects it
processes.
- Generate a number of events that represent production data as closely as
possible.
- Start the actual application connector and have it poll and then process
the events.
- Extract the output of several business objects to individual text files
without including any of the other tracing messages.
- Open the individual text files containing the extracted business objects
in a text editor that can report the size of its contents in bytes, and
record the values.
- Add the sizes of all the files together and divide the sum by the number
of files to calculate the average business object size, then type that value
in the field of the Size column. (The sketch following this
procedure automates these last two steps.)
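The last two steps lend themselves to a small script. The following sketch
(assuming the extracted files sit in a hypothetical
extracted-business-objects directory) sums the file sizes and prints the
average:

    import java.io.File;

    // Sketch only: averages the sizes of the extracted business object files,
    // as in the last two steps of the procedure above. The directory name is
    // hypothetical; point it at the folder holding your extracted text files.
    public class AverageObjectSize {
        public static void main(String[] args) {
            File[] files = new File("extracted-business-objects").listFiles();
            if (files == null) {
                return; // directory not found
            }
            long totalBytes = 0;
            int count = 0;
            for (File f : files) {
                if (f.isFile()) {
                    totalBytes += f.length();
                    count++;
                }
            }
            if (count > 0) {
                // Type this value into the field of the Size column.
                System.out.println("Average size in bytes: " + totalBytes / count);
            }
        }
    }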
- Click Finish to complete the wizard.
If you are performing a Business Object Throughput benchmark, an additional
property is exposed in the Object Properties screen. The
Mapped property lets you specify whether mapping is included in the benchmark
execution; if mapping is not included, then only the business object and
its transmission across the transport protocol are benchmarked. If you
want mapping to be included in the benchmark, do the following:
Figure 5. Object Properties screen--Business Object Throughput-specific

- Select the Mapped check box.
- In the Map Direction list, click the appropriate value based on
the context you want the benchmark to test.
If the direction is set to the value GenericToApp, then generic
business objects are generated and mapped to the application-specific business
objects with a calling context of SERVICE_CALL_REQUEST; the
application-specific business objects are then placed on the transport
protocol.
If the direction is set to the value AppToGeneric, then
application-specific business objects are generated and mapped to generic
objects with a calling context of EVENT_DELIVERY; the generic objects are
then placed on the transport protocol.
The benchmark types dedicated to synchronous types of interfaces--the
Access Throughput and Access Response Time benchmarks--do not have the
Component and Type columns in the Object Properties
screen, but have unique properties.
Figure 6. Object Properties screen--Synchronous Interface-specific

To set the object properties for benchmarks of synchronous interfaces, do
the following:
- In the list in the Port column, click the name of the port on
the collaboration object specified in the Collaboration column to
which direct calls are made by access clients.
- Type a number in the Number of threads field to specify how
many threads are created to make direct calls to the collaboration
object.
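To illustrate what the Number of threads field controls, here is a minimal,
hypothetical sketch of a driver that creates that many concurrent callers;
makeDirectCall() is a stand-in for the actual direct call to the
collaboration port and is not a product API.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch: creates numberOfThreads concurrent callers, each
    // driving direct calls at the collaboration object's configured port.
    public class DirectCallDriver {
        public static void main(String[] args) throws InterruptedException {
            int numberOfThreads = 4; // value from the Number of threads field
            ExecutorService pool = Executors.newFixedThreadPool(numberOfThreads);
            for (int i = 0; i < numberOfThreads; i++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        makeDirectCall(); // hypothetical stand-in for the access client call
                    }
                });
            }
            TimeUnit.MINUTES.sleep(1); // run for the benchmark duration (example only)
            pool.shutdownNow();
        }

        private static void makeDirectCall() {
            // Placeholder: a real client would invoke the collaboration here.
        }
    }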
