Creating a new benchmark

Launch the benchmark wizard as described in Starting the benchmark wizard, and choose Configure a new benchmark. The General Settings screen is displayed.

Configuring general settings

In the General Settings screen, you configure the basic properties of the benchmark definition.

Figure 2. General Settings screen

Figure 2 displays the General Settings screen, where properties such as the benchmark name and type can be set.

  1. Type a value in the Benchmark Name field.
  2. Select the desired value from the Benchmark type list.
  3. Type a number in the Time to run field to specify the number of minutes the benchmark executes.
  4. Type a number in the Number of samples field to specify how many samples should be taken during the course of the benchmark's execution.
  5. Type a number in the Stabilization time field to specify the number of minutes the sample provider waits before it begins to take samples. (A sketch of how these three timing values interact follows this procedure.)
  6. (optional) If you want the results of the benchmark execution to be written to a file, enter the path and name of a file in the Output File field, or click the browse button to the right of the field to navigate to a file. If the field is left blank, then the results are written to the logging destination of WebSphere InterChange Server.
  7. After specifying the values, click Next to advance to the next screen.
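
The three timing values interact in a straightforward way. The following sketch is illustrative only; it assumes that samples are spaced evenly across the portion of the run that follows the stabilization period, which may not match the sample provider's actual scheduling.

    // Illustrative sketch only. Assumption: samples are spaced evenly
    // across the portion of the run that follows the stabilization
    // period; the sample provider's actual scheduling may differ.
    public class SampleIntervalSketch {
        public static void main(String[] args) {
            int timeToRunMinutes = 30;     // "Time to run" field
            int numberOfSamples = 10;      // "Number of samples" field
            int stabilizationMinutes = 5;  // "Stabilization time" field

            double samplingWindowMinutes = timeToRunMinutes - stabilizationMinutes;
            double minutesPerSample = samplingWindowMinutes / numberOfSamples;
            // With the values above: (30 - 5) / 10 = 2.5 minutes per sample
            System.out.printf("Roughly one sample every %.1f minutes%n",
                    minutesPerSample);
        }
    }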

Configuring benchmark components

In the Benchmark Components screen, you add components as participants to the benchmark definition and configure their behavior during the benchmark's execution.

Figure 3. Benchmark Components screen

Figure 3 displays the Benchmark Components screen, where you can add components as participants to the benchmark definition and configure their behavior during benchmark execution.

  1. In the Select and configure screen, right-click and select Add. In the new row that is inserted, click the empty component box, then click the down arrow. Select a component from the list.

    If you add a component that depends on other components, then those other components are automatically added as well; for instance, if you add a collaboration object, then any connectors or grouped collaboration objects or the access client that it depends on are automatically added. The wizard does not allow you to add components that are not valid participants for the benchmark type; for instance, you cannot add a map definition to the Benchmark Components screen.

    When components are added to the pane, the Components column lists the names of the components and the Type column lists the type of component.

  2. Select the Work generator check box to specify a component as a workload generator in the benchmark. Table 2 specifies the valid workload generators for each type of benchmark. Typically these are the clients that are the source of business objects in the benchmark setup--for instance, the source connector for a collaboration throughput benchmark, or the access client for an access throughput benchmark.
  3. (optional) Type a value in the Application response time column for a component to specify the number of milliseconds it waits before replying to a service call request. This value can be used to simulate anticipated application latency.

    It is recommended that you perform tests with the assistance of application experts at the site to determine the average latency for the application to respond to business object requests that are sent to it by the connector. Use that average value for the simulated latency to obtain a more realistic set of numbers while still benefiting from the simplified setup of simulated connectors.

  4. (optional) Type a value between 1 and 100 in the Consume success rate column for a component to specify the percentage of requests it should process successfully. This value can be used to simulate the average ratio of successful flows to failed flows. The default value is 100, which means that the simulated connector responds with success for 100 percent of the business object requests it processes (provided that the flow does not fail for other reasons, such as mapping problems).

    Failed flows sometimes involve more collaboration processing than successful ones do, depending on the business requirements. Many collaborations are designed with logic that responds to an initial failure by resending the business object with a different verb (this logic is typically identifiable by use of the CONVERT_CREATE and CONVERT_UPDATE properties). Other collaborations have error-handling routines that perform transactional or administrative actions in response to a failure. These execution paths affect performance, so performing a benchmark that accurately simulates them is important.

    It is recommended that you perform some tests with the assistance of the IBM WebSphere InterChange Server development team to determine the average percentage of successfully processed flows. Then specify the average value for the consume success rate to determine the impact of failures on throughput.

  5. (optional) Type a value in the Number of objects per poll column for a connector component to specify the number of events that it picks up with each poll call. This value can be used to simulate the common connector-specific capability of polling multiple objects with each poll call.
  6. (optional) Type a value in the Poll frequency column for a connector component to specify the number of milliseconds between poll calls. This value can be used to simulate the standard connector capability of polling with variable frequency.

    It is recommended that you perform some initial tests with the connector that is being simulated to determine a good initial set of values for the Number of objects per poll column and the Poll frequency column. The behaviors of these common capabilities are closely related, and they affect throughput. Perform a benchmark with these values, then modify them and perform the benchmark again. By testing a number of combinations you can determine which combination provides the optimal throughput. (A sketch of how these simulation settings interact follows this procedure.)

  7. (optional) Type the path and name of a file in the Input file column for a component that has been marked as a workload generator. The file must contain sample data for the workload generator and be in the standard IBM WebSphere InterChange Server business object format (that is, the one in which business objects are written out during system tracing operations, or are saved from the Test Connector tool).

    An input file of sample data can be produced by choosing Generate workload to a file in the Action screen of the benchmark wizard; this option is discussed in the section "Generating workload to a file".

    Note:
    The input file must reside on the same computer where the connector agent runs; if the connector agent is distributed on a computer other than the one where WebSphere InterChange Server runs, then the input file must be distributed with the agent.
  8. Click Next.
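
The following sketch models how the simulation settings from steps 3 through 6 might shape a simulated connector's behavior. The class, method names, and scheduling logic are illustrative assumptions for the sake of explanation; they are not the wizard's actual implementation of simulated connectors.

    import java.util.Random;

    // Illustrative model only; the names and scheduling logic below are
    // assumptions, not the wizard's actual implementation.
    public class SimulatedConnectorSketch {
        static int objectsPerPoll = 5;        // "Number of objects per poll" column
        static long pollFrequencyMs = 1000;   // "Poll frequency" column
        static long responseTimeMs = 200;     // "Application response time" column
        static int consumeSuccessRate = 95;   // "Consume success rate" column (1-100)
        static Random random = new Random();

        // Event side: each poll call delivers objectsPerPoll events, so the
        // injection rate is at most objectsPerPoll * (1000 / pollFrequencyMs)
        // events per second (with these values, 5 events per second).
        static void pollLoop(int polls) throws InterruptedException {
            for (int p = 0; p < polls; p++) {
                System.out.println("poll delivered " + objectsPerPoll + " events");
                Thread.sleep(pollFrequencyMs);
            }
        }

        // Request side: the simulated connector waits responseTimeMs before
        // replying, and succeeds for consumeSuccessRate percent of requests.
        static boolean handleServiceCall() throws InterruptedException {
            Thread.sleep(responseTimeMs);
            return random.nextInt(100) < consumeSuccessRate;
        }

        public static void main(String[] args) throws InterruptedException {
            pollLoop(3);
            System.out.println("service call "
                    + (handleServiceCall() ? "succeeded" : "failed"));
        }
    }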

Configuring object properties

In the Object Properties screen, you configure business objects for the benchmark execution.

Figure 4. Object Properties screen

Figure 4 displays the Object Properties screen, where business object properties, such as the verb, can be set.

  1. In the Select and configure screen, right-click and select Add.
  2. In the Business Object column, click to select a business object definition for the benchmark.
  3. In the Component column, select a benchmark participant that is associated with the business object definition.
  4. In the Verb column, click the verb with which you want the sample business objects to be submitted.
  5. (optional) In the Size column, type the size in bytes you want for the sample business objects.

    It is recommended that you perform tests with the assistance of the application experts at the site and the IBM WebSphere InterChange Server development team to determine the average size of business objects for the transaction. The recommended procedure is as follows:

    1. Set the AgentTraceLevel property of the connector to a level at which it outputs the entire contents of the business objects it processes.
    2. Generate a number of events that represent production data as closely as possible.
    3. Start the actual application connector and have it poll and then process the events.
    4. Extract the output of several business objects to individual text files without including any of the other tracing messages.
    5. Open the individual text files containing the extracted business objects in a text editor that can report the size of its contents in bytes, and record the values.
    6. Add the sizes of all the files together and divide the sum by the number of files to calculate the average business object size, then type that value in the field of the Size column. (A sketch of this calculation follows the procedure.)
  6. Click Finish to complete the wizard.
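
The averaging in the last step of the procedure above is simple arithmetic; the following sketch computes it for a set of extracted files. The file names are placeholders for the files extracted at your site.

    import java.io.File;

    // Computes the average size in bytes of the business object files
    // extracted in the procedure above. The file names are placeholders.
    public class AverageObjectSize {
        public static void main(String[] args) {
            String[] extractedFiles = { "bo1.txt", "bo2.txt", "bo3.txt" };
            long totalBytes = 0;
            for (String name : extractedFiles) {
                totalBytes += new File(name).length();  // size of one object
            }
            long average = totalBytes / extractedFiles.length;
            // Example: (12000 + 14500 + 13100) / 3 = 13200 bytes, so type
            // 13200 in the Size column.
            System.out.println("Average business object size: " + average + " bytes");
        }
    }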

Properties specific to Business Object Throughput benchmarks

If you are performing a Business Object Throughput benchmark, an additional property is exposed at the Object Properties screen. The Mapped property lets you specify whether mapping is included in the benchmark execution; if mapping is not included, then only the business object and its transmission across the transport protocol are benchmarked. If you want mapping to be included in the benchmark, do the following:

Figure 5. Object Properties screen--Business Object Throughput-specific

Figure 5 shows the Mapped property, which is available when performing a Business Object Throughput benchmark. The Mapped property enables the mapping of the business object to be included in the benchmark.

  1. Select the Mapped check box.
  2. In the Map Direction list, click the appropriate value based on the context you want the benchmark to test.

    If the direction is set to the value GenericToApp, then generic business objects are generated and mapped to the application-specific business objects with a calling context of SERVICE_CALL_REQUEST; the application-specific business objects are then placed on the transport protocol.

    If the direction is set to the value AppToGeneric, then application-specific business objects are generated and mapped to generic objects with a calling context of EVENT_DELIVERY; the generic objects are then placed on the transport protocol.

Properties specific to benchmarks for synchronous interfaces

The benchmark types dedicated to synchronous types of interfaces--the Access Throughput and Access Response Time benchmarks--do not have the Component and Type columns in the Object Properties screen, but have unique properties.

Figure 6. Object Properties screen--Synchronous Interface-specific

Figure 6 shows the Object Properties screen for benchmarks of synchronous interfaces. There are no Component or Type columns, but a Port column has been added; the Port column contains the port on the collaboration object. There is also an option to specify the number of threads that are created to make direct calls to the collaboration object.

To set the object properties for benchmarks of synchronous interfaces, do the following:

  1. In the list in the Port column, click the name of the port on the collaboration object (specified in the Collaboration column) to which the access clients make direct calls.
  2. Type a number in the Number of threads field to specify how many threads are created to make direct calls to the collaboration object.
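
As a rough mental model of the Number of threads setting, the following sketch starts the specified number of worker threads, each of which issues a direct call. The executeDirectCall method is a hypothetical placeholder for this illustration, not the actual access client API.

    // Rough mental model of the "Number of threads" setting: each worker
    // thread issues direct calls to the collaboration object's port.
    public class DirectCallThreadsSketch {
        // Hypothetical placeholder; not the actual access client API.
        static void executeDirectCall(int threadId) {
            System.out.println("thread " + threadId + " made a direct call");
        }

        public static void main(String[] args) {
            int numberOfThreads = 4;  // "Number of threads" field
            for (int t = 0; t < numberOfThreads; t++) {
                final int id = t;
                new Thread(() -> executeDirectCall(id)).start();
            }
        }
    }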
