The IBM WebSphere InterChange Server benchmarking tool enables you to test various WebSphere InterChange Server components, interfaces, and systems to measure their throughput. Although the results of a benchmark cannot precisely predict the throughput in a production environment, they can give a useful estimate.
For a benchmark to be useful, the terms used to describe it and the task or set of tasks it measures must be well defined. Although the terms and tasks involved in benchmarking some technologies (such as graphics cards, CPUs, and databases) are well defined because of the maturity of those markets and because of established standards, there are no universally accepted terms and tasks within the business process integration market. The following sections establish a set of definitions that are central to benchmarking the system.
A unit of work is a basic, complete, and countable piece of work. The different types of WebSphere InterChange Server benchmarks evaluate different types of interfaces or components, so the definition of a unit of work varies with the type of benchmark. For one type of benchmark, a unit of work may consist of the completion of an entire business process where a business object request is submitted by a source connector, processed by a collaboration, processed by any destination connectors, and returned to the collaboration, with any requisite transformations and other operations in between. For another type of benchmark, a unit of work may consist only of the processing of a business object by a connector. The different types of benchmarks and their units of work are described in the section "Types of benchmarks".
A transaction is one execution of a unit of work.
Response time is the amount of time it takes during a benchmark for the component or interface to complete a transaction.
Throughput is the number of transactions completed in a unit of time. This figure is typically expressed in terms of the number of business objects processed per unit of time (such as second, minute, or hour).
Workload is the service demand placed on InterChange Server by the connector agents and access framework in the form of business object requests.
An interface is a set of software components that work together to automate a business process. The components may be IBM WebSphere InterChange Server components, such as connectors, maps, and collaborations, or may be external components, such as servlets, triggers, and scripts.
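As a purely illustrative aside (not part of the benchmarking tool), the arithmetic behind the throughput and response time definitions above can be sketched in a few lines of Java; all names and figures here are hypothetical:

```java
// Hypothetical sketch: relates transactions, elapsed time, throughput, and
// mean response time as defined above. Not an InterChange Server API.
public class ThroughputArithmetic {
    public static void main(String[] args) {
        long transactions = 1200;              // units of work completed
        long elapsedMillis = 10 * 60 * 1000;   // a 10-minute benchmark run

        // Throughput: transactions completed per unit of time.
        double perSecond = transactions / (elapsedMillis / 1000.0);
        double perHour = perSecond * 3600;

        // Mean response time, assuming transactions are processed serially.
        double meanResponseMillis = (double) elapsedMillis / transactions;

        System.out.printf("Throughput: %.1f objects/sec, %.0f objects/hour%n",
                perSecond, perHour);
        System.out.printf("Mean response time: %.1f ms%n", meanResponseMillis);
    }
}
```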
There are six types of WebSphere InterChange Server benchmarks available to profile the performance of particular components, interfaces, or systems.
The Business Object Throughput benchmark measures the throughput of a connector as it manages business objects within the system.
A unit of work for this benchmark consists of the transmission of the business object between the connector controller and connector agent across the transport protocol and the transformation of the business object through mapping in one direction.
The Agent benchmark measures the interaction between a connector agent and the application it communicates with.
A unit of work for this benchmark comprises the agent posting a request to the application (whereupon the application performs whatever action is dictated by its design and the metadata of the event, such as invoking an API to respond to operations such as create, update, delete, retrieve, and so forth) and receiving the response from the application.
The Collaboration Throughput benchmark measures the number of objects processed within a unit of time by a collaboration.
A unit of work for this benchmark begins when a business object is delivered asynchronously to a collaboration and ends when the collaboration processes the response business object returned by the destination application.
The Access Throughput benchmark measures the system throughput for an IBM interface that is triggered by an external process (such as a web servlet) making a synchronous direct call.
A unit of work for this benchmark is identical to that for a Collaboration Throughput benchmark, except that the collaboration is invoked through the Server Access Interface rather than through a connector controller.
The Access Response Time benchmark measures the time it takes for an external process to make a request of a collaboration and receive the response returned by it.
A unit of work for this benchmark is the same as that for an Access Throughput benchmark, except that it also includes the time taken to return the response to the access client.
The Business Process Throughput benchmark measures the throughput of the entire system; it might include any number of connectors, collaborations, and collaboration groups.
Benchmarks are very useful for comparing the throughput of two systems that are identical except for a single variable. A CPU benchmark, for instance, might compare the performance of CPUs from competing manufacturers by installing the CPUs in identical computers, giving them the same workload, and testing how many instructions each can process in the same amount of time. Table 1 shows the variables that figure in IBM benchmarks and how they are varied to achieve different measurements.
| Product | Release | Workload | Number of interfaces | Available resources |
|---|---|---|---|---|
| Different | N/A | Same | Same | Same |
| Same | Different | Same | Same | Same |
| Same | Same | Different | Same | Same |
| Same | Same | Same | Different | Same |
| Same | Same | Same | Same | Different |
By changing a single variable and keeping the others the same, an IBM benchmark can be used to:
- Compare WebSphere InterChange Server with integration software developed by other software vendors.
- Determine the gain in throughput achieved by upgrading from one version of WebSphere InterChange Server to another.
- Determine the impact on the throughput of the system if the number of transactions processed by an interface increases.
- Determine whether the throughput of the system is affected when new interfaces are added to those already running.
- Determine whether the throughput of the system improves significantly by tuning performance or investing in new hardware.
You must understand the basic concepts covered in this section to configure and run IBM WebSphere InterChange Server benchmarks.
Different WebSphere InterChange Server components (such as collaborations, connectors, and so on) fill roles within a benchmark. The roles are described in the following subsections.
A participant is any component that is required in a benchmark.
You add components to a benchmark as participants when you define the benchmark. Table 2 shows which components are participants in each benchmark type.
A benchmark executes after it has been defined, InterChange Server has been restarted, and all of its participants have started up. Some participants are started automatically when InterChange Server starts (such as collaborations), but others must be started by you (such as connectors or the access client).
A component, besides being a participant in a benchmark, may also serve in other roles.
A coordinator is a component that keeps track of how many benchmark participants have started up so that it can initiate the benchmark, and shuts down all of the participants when the benchmark completes.
One benchmark participant is automatically chosen by the system to be the coordinator when a benchmark is created. Table 2 shows which type of component can be the coordinator for each benchmark type.
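The coordinator role can be pictured as a simple startup barrier: it waits until every participant has reported in, lets the benchmark run, and stops the participants when the run ends. The following sketch is purely illustrative and uses hypothetical names, not actual InterChange Server classes:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Hypothetical coordinator sketch; not an InterChange Server API.
public class BenchmarkCoordinator {
    private final CountDownLatch allStarted;
    private final List<Runnable> shutdownHooks;   // one per participant

    public BenchmarkCoordinator(int participantCount, List<Runnable> shutdownHooks) {
        this.allStarted = new CountDownLatch(participantCount);
        this.shutdownHooks = shutdownHooks;
    }

    // Each participant calls this once it has finished starting up.
    public void participantStarted() {
        allStarted.countDown();
    }

    // Blocks until every participant has started, then runs the benchmark
    // body and finally shuts all participants down.
    public void runBenchmark(Runnable benchmarkBody) throws InterruptedException {
        allStarted.await();
        try {
            benchmarkBody.run();
        } finally {
            shutdownHooks.forEach(Runnable::run);
        }
    }
}
```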
A sample provider periodically profiles characteristics of the system throughput during the execution of a benchmark. You can specify how long the sample provider waits during the benchmark before it starts taking the samples, and you can specify how many samples are taken during the course of the benchmark. When the benchmark finishes, the sample provider analyzes the collection of samples it took and presents an overall profile of the benchmark performance.
One benchmark participant is automatically chosen by the system to be a sample provider when a benchmark is created. Table 2 shows which type of component can be a sample provider for each benchmark type.
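The sample provider's timing behavior can be made concrete with a small sketch: wait out a configurable delay, then take a fixed number of samples at a fixed interval. The class and method names below are hypothetical, not part of the product:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongSupplier;

// Hypothetical sample provider sketch; not an InterChange Server API.
public class SampleProvider {
    // Waits warmupMillis, then records sampleCount readings of the
    // transaction counter, one every intervalMillis, and returns the
    // throughput observed in each interval (transactions per second).
    public static List<Double> collect(LongSupplier transactionCounter,
                                       long warmupMillis,
                                       int sampleCount,
                                       long intervalMillis) throws InterruptedException {
        Thread.sleep(warmupMillis);                   // skip start-up effects
        List<Double> samples = new ArrayList<>();
        long previous = transactionCounter.getAsLong();
        for (int i = 0; i < sampleCount; i++) {
            Thread.sleep(intervalMillis);
            long current = transactionCounter.getAsLong();
            samples.add((current - previous) / (intervalMillis / 1000.0));
            previous = current;
        }
        return samples;
    }
}
```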
A workload generator provides the sample business objects that are processed during the benchmark.
You configure components as workload generators when you define the benchmark. The data used by the workload generator to provide the sample business objects can be generated by the system or supplied by you (see the sketch following Table 2). Table 2 shows which types of components can be workload generators for each benchmark type.
| Benchmark type | Participants | Coordinator | Sample provider | Workload generator |
|---|---|---|---|---|
| Business Object Throughput (AppToGeneric directionality) | All user-selected connectors (agents and controllers) | One connector controller | Controllers of all participating connectors | All participant connectors |
| Business Object Throughput (GenericToApp directionality) | All user-selected connectors (agents and controllers) | One connector controller | Agents of all participating connectors | All participant connectors |
| Agent | One connector (controller and agent) | Participating connector controller | Participating connector agent | Participating connector agent |
| Collaboration Throughput | One collaboration and all connectors bound to it | Collaboration | Collaboration | One connector bound to a triggering port of the collaboration |
| Access Throughput | All user-selected collaborations, all connectors bound to them, and the access client | Collaboration | Access client | Access client |
| Access Response Time | All user-selected collaborations, all connectors bound to them, and the access client | Collaboration | Access client | Access client |
| Business Process Throughput | All user-selected collaborations and all connectors bound to them | Collaboration | All participating collaborations | All source connectors |
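To make the workload generator role described before Table 2 more concrete, the following hypothetical sketch produces sample business-object data either from user-supplied records or from system-generated placeholder values. The class, method, and field names are invented for illustration and are not InterChange Server APIs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical workload generator sketch; not an InterChange Server API.
// Produces the requested number of sample "business objects", either by
// cycling through user-supplied sample data or by synthesizing placeholders.
public class WorkloadGenerator {
    public static List<Map<String, String>> generate(int count,
                                                     List<Map<String, String>> userSamples) {
        List<Map<String, String>> workload = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            if (userSamples != null && !userSamples.isEmpty()) {
                // Reuse caller-supplied sample data in round-robin order.
                workload.add(userSamples.get(i % userSamples.size()));
            } else {
                // System-generated placeholder data (field names are invented).
                Map<String, String> generated = new HashMap<>();
                generated.put("CustomerId", "CUST-" + i);
                generated.put("OrderId", "ORD-" + i);
                workload.add(generated);
            }
        }
        return workload;
    }
}
```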
Although multiple benchmarks can be defined, no IBM component can be a participant in multiple benchmarks at the same time. If a component must participate in multiple benchmarks, then one benchmark must be defined, performed, and then deleted so that the next benchmark can go through the same series of steps.
A benchmark produces the following statistics:
- The maximum observed throughput for the benchmark
- The minimum observed throughput for the benchmark
- The 90th percentile figure, which is the response time in milliseconds within which 90% of the transactions completed
- The arithmetic mean of all the samples collected throughout the benchmark
- The amount of time between taking samples
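These statistics follow from the collected samples by straightforward arithmetic. The sketch below (hypothetical names, not the tool's implementation) computes the maximum, minimum, and mean throughput and the 90th percentile response time using the nearest-rank method:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical summary of benchmark samples; not an InterChange Server API.
// Assumes both sample lists are non-empty.
public class BenchmarkSummary {
    public static void print(List<Double> throughputSamples,
                             List<Double> responseTimesMillis) {
        // Maximum and minimum observed throughput across all samples.
        System.out.println("Max throughput: " + Collections.max(throughputSamples));
        System.out.println("Min throughput: " + Collections.min(throughputSamples));

        // Arithmetic mean of all throughput samples.
        double mean = throughputSamples.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        System.out.println("Mean throughput: " + mean);

        // 90th percentile response time: the time within which 90% of the
        // transactions completed (nearest-rank method on the sorted times).
        List<Double> sorted = new ArrayList<>(responseTimesMillis);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(0.9 * sorted.size()) - 1;
        System.out.println("90th percentile (ms): " + sorted.get(rank));
    }
}
```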