About IBM WebSphere InterChange Server benchmarks

The IBM WebSphere InterChange Server benchmarking tool enables you to test various WebSphere InterChange Server components, interfaces, and systems to measure their throughput. Although the results of a benchmark cannot precisely predict the throughput in a production environment, they can give a useful estimate.

Important:
The benchmarking feature is designed for use in a development or testing environment only. Running a benchmark adds data to cross-reference tables, and the required clean-up actions afterward delete data from both work-in-progress tables and persistent messaging queues. Neither of these actions is acceptable in a production environment, so use the benchmarking feature only in development or testing environments.

Benchmark terminology

For a benchmark to be useful, the terms that describe it and the task or set of tasks that it measures must be well defined. The terms and tasks involved in benchmarking some technologies (such as graphics cards, CPUs, and databases) are well defined because those markets are mature and have established standards, but there are no universally accepted terms and tasks within the business process integration market. The following sections establish a set of definitions that are central to benchmarking the system.

Unit of work

A unit of work is a basic, complete and countable piece of work. The different types of WebSphere InterChange Server benchmarks evaluate different types of interfaces or components, so the definition of a unit of work varies with the type of benchmark. For one type of benchmark, a unit of work may consist of the completion of an entire business process where a business object request is submitted by a source connector, processed by a collaboration, processed by any destination connectors, and returned to the collaboration, with any requisite transformations and other operations in between. For another type of benchmark, a unit of work may consist only of the processing of a business object by a connector. The different types of benchmarks and their units of work are described in the section "Types of benchmarks".

Transaction

A transaction is one execution of a unit of work.

Response time

Response time is the amount of time it takes during a benchmark for the component or interface to complete a transaction.

Throughput

Throughput is the number of transactions completed in a unit of time. This figure is typically expressed in terms of the number of business objects processed per unit of time (such as second, minute, or hour).
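As a minimal illustration of these two metrics (the helper functions below are hypothetical, not part of the product), throughput and average response time can be computed from a transaction count, an elapsed time, and a list of per-transaction completion times:

```python
def throughput(num_transactions, elapsed_seconds, per="second"):
    """Transactions completed per unit of time (hypothetical helper)."""
    scale = {"second": 1, "minute": 60, "hour": 3600}[per]
    return num_transactions * scale / elapsed_seconds

def average_response_time(response_times):
    """Mean time taken to complete one transaction."""
    return sum(response_times) / len(response_times)

# Example: 1,200 business objects processed in 60 seconds.
rate = throughput(1200, 60)              # 20.0 objects per second
rate_per_minute = throughput(1200, 60, per="minute")
```

A result such as 20 objects per second and 1,200 objects per minute describes the same run; only the unit of time differs.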

Workload

The service demand placed on InterChange Server by the connector agents and access framework in the form of business object requests.

Interface

A set of software components that work together to automate a business process. The components may be IBM WebSphere InterChange Server components, such as connectors, maps, collaborations, and so forth, and may be external, such as servlets, triggers, and scripts.

Types of benchmarks

There are six types of WebSphere InterChange Server benchmarks available to profile the performance of particular components, interfaces, or systems.

Business Object Throughput

The Business Object Throughput benchmark measures the throughput of a connector as it manages business objects within the system.

A unit of work for this benchmark consists of the transmission of the business object between the connector controller and connector agent across the transport protocol and the transformation of the business object through mapping in one direction.

Agent

The Agent benchmark measures the interaction between a connector agent and the application it communicates with.

A unit of work for this benchmark comprises the agent posting a request to the application and receiving the response from the application. In between, the application performs whatever action its design and the metadata of the event dictate, such as invoking an API to respond to a create, update, delete, or retrieve operation.

Collaboration Throughput

The Collaboration Throughput benchmark measures the number of objects processed within a unit of time by a collaboration.

A unit of work for this benchmark begins when a business object is delivered asynchronously to a collaboration and ends when the collaboration processes the response business object returned by the destination application.

Access Throughput

The Access Throughput benchmark measures the system throughput for an IBM interface that is triggered by an external process (such as a web servlet) making a synchronous direct call.

A unit of work for this benchmark is identical to that for a Collaboration Throughput benchmark, except in that the collaboration is invoked through the Server Access Interface rather than through a connector controller.

Access Response Time

The Access Response Time benchmark measures the amount of time it takes for an external process to make a request of a collaboration and receive the response returned by it.

A unit of work for this benchmark is identical to that for an Access Throughput benchmark, though it also includes the time taken to return the response to the access client.

Business Process Throughput

The Business Process Throughput benchmark measures the throughput of the entire system; it might include any number of connectors, collaborations, and collaboration groups.

How benchmarks can be useful

Benchmarks can be very useful to compare the throughput of two systems that are identical except for a single variable. A CPU benchmark, for instance, might compare the performance of CPUs from competing manufacturers by installing the CPUs on identical computers, giving them the same kind of workload, and testing how many instructions each can process in the same amount of time. Table 1 shows the various variables that figure in IBM's benchmarks, and how they change to achieve different measurements.

Table 1. Benchmark variables

Product      Release      Workload     Number of interfaces   Available resources
Different    N/A          Same         Same                   Same
Same         Different    Same         Same                   Same
Same         Same         Different    Same                   Same
Same         Same         Same         Different              Same
Same         Same         Same         Same                   Different

By changing a single variable and keeping the others the same, an IBM benchmark can be used to:

- Compare the throughput of different products
- Compare the throughput of different releases of the same product
- Measure the throughput of a system under different workloads
- Measure the throughput of a system with different numbers of interfaces
- Measure the throughput of a system with different available resources
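The discipline of changing exactly one variable can be expressed as a small validity check. The sketch below is purely illustrative (the variable names and functions are invented, not part of the product): two benchmark configurations yield a meaningful comparison only when they differ in a single variable.

```python
VARIABLES = ("product", "release", "workload", "num_interfaces", "resources")

def differing_variables(config_a, config_b):
    """Return the variables on which two benchmark configurations differ."""
    return [v for v in VARIABLES if config_a[v] != config_b[v]]

def is_valid_comparison(config_a, config_b):
    """A comparison is meaningful only when exactly one variable changes."""
    return len(differing_variables(config_a, config_b)) == 1

base = {"product": "ICS", "release": "4.3", "workload": "1000 BOs",
        "num_interfaces": 2, "resources": "4 CPUs"}
upgraded = dict(base, release="4.3.1")   # only the release changes
```

Here `is_valid_comparison(base, upgraded)` holds because the two runs differ only in the release, matching the second row of Table 1.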

Benchmark core concepts

The basic concepts covered in this section must be understood to configure and run IBM WebSphere InterChange Server benchmarks.

Benchmark component roles

Different WebSphere InterChange Server components (such as collaborations, connectors, and so on) fill roles within a benchmark. The roles are described in the following subsections.

Participant

A participant is any component that is required in a benchmark.

You add components to a benchmark as participants when you define the benchmark. Table 2 shows which components are participants in each benchmark type.

A benchmark executes after it has been defined, InterChange Server has been restarted, and all of its participants have started up. Some participants are started automatically when InterChange Server starts (such as collaborations), but others must be started by you (such as connectors or the access client).

A component, besides being a participant in a benchmark, may also serve in other roles.

Coordinator

A coordinator is a component that keeps track of how many benchmark participants have started up so that it can initiate the benchmark, and shuts down all of the participants when the benchmark completes.

One benchmark participant is automatically chosen by the system to be a coordinator when a benchmark is created. Table 2 shows which type of components can be the coordinator for each benchmark type.
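The coordinator's bookkeeping can be sketched as follows. This is an illustrative model only, not the product's implementation; the class and method names are invented. It counts participant start-ups, initiates the benchmark once all participants are up, and issues shutdowns when the benchmark completes.

```python
class Coordinator:
    """Illustrative sketch of the coordinator role (not the product API)."""

    def __init__(self, participants):
        self.pending = set(participants)   # participants not yet started
        self.running = False

    def participant_started(self, name):
        """Record a start-up; begin the benchmark when all are ready."""
        self.pending.discard(name)
        if not self.pending and not self.running:
            self.running = True            # all participants up: start
        return self.running

    def benchmark_complete(self, participants):
        """Shut down every participant once the benchmark finishes."""
        self.running = False
        return [f"shutdown:{p}" for p in participants]

coord = Coordinator(["SourceConnector", "DestConnector", "OrderCollab"])
coord.participant_started("SourceConnector")   # still waiting on two
coord.participant_started("DestConnector")     # still waiting on one
coord.participant_started("OrderCollab")       # last one: benchmark starts
```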

Sample provider

A sample provider periodically profiles characteristics of the system throughput during the execution of a benchmark. You can specify how long the sample provider waits during the benchmark before it starts taking the samples, and you can specify how many samples are taken during the course of the benchmark. When the benchmark finishes, the sample provider analyzes the collection of samples it took and presents an overall profile of the benchmark performance.

One benchmark participant is automatically chosen by the system to be a sample provider when a benchmark is created. Table 2 shows which type of components can be a sample provider for each benchmark type.
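The sample provider's two settings, the initial wait and the number of samples, can be sketched as a simple profiling function. This is an illustrative approximation, not the product's implementation; the function and its summary fields are invented.

```python
def profile(readings, warmup, sample_count):
    """Illustrative sketch of the sample-provider role: skip readings
    taken during the warm-up period, keep at most `sample_count`
    samples, and summarize them as an overall performance profile."""
    kept = readings[warmup:warmup + sample_count]
    return {
        "samples": len(kept),
        "min": min(kept),
        "max": max(kept),
        "mean": sum(kept) / len(kept),
    }

# Per-interval throughput readings (business objects per second);
# the first two intervals are warm-up and are discarded.
readings = [5, 9, 20, 22, 18, 20]
result = profile(readings, warmup=2, sample_count=4)
```

Skipping the warm-up readings matters because early intervals, taken while caches fill and connections open, would otherwise drag down the overall profile.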

Workload generator

A workload generator provides the sample business objects that are processed during the benchmark.

You configure components as workload generators when you define the benchmark. The data used by the workload generator to provide the sample business objects can be generated by the system or supplied by you. Table 2 shows which type of components can be workload generators for each benchmark type.
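A workload generator's two data sources, system-generated values or user-supplied data, can be sketched as below. This is purely illustrative; the function and the "Customer" business object structure are invented, not part of the product.

```python
def generate_workload(count, verb="Create", supplied_ids=None):
    """Produce `count` sample business objects for a benchmark run.
    If `supplied_ids` is given, use that user-supplied data; otherwise
    generate identifiers automatically. (Hypothetical helper; the
    'Customer' structure is an invented example.)"""
    ids = supplied_ids if supplied_ids is not None else range(count)
    return [{"verb": verb, "Customer": {"Id": i}} for i in list(ids)[:count]]

batch = generate_workload(3)   # three system-generated sample objects
```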

Table 2. Benchmark components

Business Object Throughput (AppToGeneric directionality)
    Participants:       All user-selected connectors (agents and controllers)
    Coordinator:        One connector controller
    Sample provider:    Controllers of all participating connectors
    Workload generator: All participant connectors

Business Object Throughput (GenericToApp directionality)
    Participants:       All user-selected connectors (agents and controllers)
    Coordinator:        One connector controller
    Sample provider:    Agents of all participating connectors
    Workload generator: All participant connectors

Agent
    Participants:       One connector (controller and agent)
    Coordinator:        Participating connector controller
    Sample provider:    Participating connector agent
    Workload generator: Participating connector agent

Collaboration Throughput
    Participants:       One collaboration and all connectors bound to it
    Coordinator:        Collaboration
    Sample provider:    Collaboration
    Workload generator: One connector bound to a triggering port of the collaboration

Access Throughput
    Participants:       All user-selected collaborations, all connectors bound to them, and the access client
    Coordinator:        Collaboration
    Sample provider:    Access client
    Workload generator: Access client

Access Response Time
    Participants:       All user-selected collaborations, all connectors bound to them, and the access client
    Coordinator:        Collaboration
    Sample provider:    Access client
    Workload generator: Access client

Business Process Throughput
    Participants:       All user-selected collaborations and all connectors bound to them
    Coordinator:        Collaboration
    Sample provider:    All participating collaborations
    Workload generator: All source connectors

Exclusivity in benchmark participants

Although multiple benchmarks can be defined, no component can be a participant in more than one benchmark at the same time. If a component must participate in several benchmarks, define, run, and then delete the first benchmark before defining the next one.
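This exclusivity constraint amounts to a simple set check across benchmark definitions. The sketch below is illustrative only (the function and benchmark names are invented): it reports any component that appears as a participant in more than one benchmark.

```python
def exclusivity_violations(benchmarks):
    """Return components that are participants in more than one
    benchmark at the same time, which is not allowed.
    (Hypothetical helper; not part of the product.)"""
    seen, violations = {}, set()
    for name, participants in benchmarks.items():
        for component in participants:
            if component in seen and seen[component] != name:
                violations.add(component)
            seen[component] = name
    return violations

definitions = {
    "CollabThroughputBench": {"OrderCollab", "SAPConnector"},
    "AgentBench": {"SAPConnector"},   # SAPConnector appears twice: invalid
}
```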

Benchmark statistics

A benchmark produces the following statistics:

Copyright IBM Corp. 1997, 2004