Modifying IBM WebSphere InterChange Server content if necessary

Some benchmarks may require that you modify IBM WebSphere InterChange Server content. This section describes those situations and how to modify the content.

Specifying the number of generated child objects

You can specify the number of child object instances that are generated for a top-level source object when the benchmark executes. This optional technique yields a more accurate simulation than generating a default number of instances for each child object. Follow these steps to implement this approach:

  1. Interview the experts in the source application at the site to find out the average number of each contained entity within the top-level source object.
  2. Use Business Object Designer (BOD) to modify the business-object-level application-specific information of the child business object definitions of the source top-level object. Create a parameter named BenchNumContainedObjs and set its value to the number of child object instances that you ascertained in step 1.

    Figure 11 shows how to add the BenchNumContainedObjs parameter to a business object definition.

    Figure 11. BenchNumContainedObjs parameter

    Figure 12 shows how to add the BenchNumContainedObjs parameter to a business object definition using Business Object Designer.
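To illustrate the effect of this parameter, the following sketch shows how a benchmark driver might read it from a child definition's application-specific information and decide how many child instances to generate. The name=value parsing, the semicolon delimiter, and the fallback default are assumptions for illustration only; they are not the actual InterChange Server implementation.

```python
# Hypothetical sketch, not the InterChange Server implementation.
DEFAULT_CHILD_COUNT = 1  # assumed fallback when no parameter is set

def parse_app_specific_info(info):
    """Parse assumed semicolon-delimited name=value pairs, for example
    'BenchNumContainedObjs=5;OtherParam=x'."""
    pairs = {}
    for part in info.split(";"):
        if "=" in part:
            name, value = part.split("=", 1)
            pairs[name.strip()] = value.strip()
    return pairs

def child_instance_count(info):
    """Return the number of child instances to generate for a child
    business object definition, honoring BenchNumContainedObjs."""
    params = parse_app_specific_info(info)
    try:
        return int(params["BenchNumContainedObjs"])
    except (KeyError, ValueError):
        return DEFAULT_CHILD_COUNT

# A child definition whose app-specific info requests 5 instances:
print(child_instance_count("BenchNumContainedObjs=5"))  # 5
# No parameter set: fall back to the default of 1.
print(child_instance_count(""))  # 1
```

Setting the value from the averages gathered in step 1 means the generated top-level objects carry a realistic number of children, so the benchmark load resembles production traffic.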

Special formatting operations

Another reason you may need to modify IBM WebSphere InterChange Server content in preparation for a benchmark is to handle the presence of any special formatting operations in the components. Some examples of this are:

Either of these situations typically occurs in mapping but can occur anywhere depending on the business requirements and how the interface is designed. Furthermore, a violation of any such transformation rule typically results in a failure of the flow, although this behavior also depends on the business requirements and how the interface is designed.

Special formatting operations such as these are likely to result in failed flows during a benchmark, because the system has no way of knowing that it must generate sample data from a specific set of values, or sample data that conforms to a specific format. A benchmark that measures the throughput of a system in which 100% of the flows fail does not accurately measure the system's throughput capabilities, so it is important to avoid this problem.
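The following sketch shows why blindly generated sample data fails such rules while rule-aware data passes them. The validation rules, field semantics, and allowed values below are hypothetical, chosen only to illustrate the point; real interfaces enforce whatever their business requirements dictate.

```python
import random
import re

# Hypothetical rules a map might enforce (illustrative only).
ALLOWED_CURRENCIES = {"USD", "EUR", "JPY"}       # value must come from a set
DATE_FORMAT = re.compile(r"^\d{4}-\d{2}-\d{2}$") # value must match a format

def flow_succeeds(currency, order_date):
    """Simulate a map that fails the flow when a value violates its rules."""
    return currency in ALLOWED_CURRENCIES and bool(DATE_FORMAT.match(order_date))

def naive_sample():
    """Arbitrary generated text: violates both rules, so the flow fails."""
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    return "".join(random.choices(letters, k=4)), "31/01/2004"

def rule_aware_sample():
    """Draw from the allowed set and emit the expected format."""
    return random.choice(sorted(ALLOWED_CURRENCIES)), "2004-01-31"

print(flow_succeeds(*rule_aware_sample()))  # True
print(flow_succeeds(*naive_sample()))       # False
```

A benchmark fed with the naive samples would report the throughput of a system that does nothing but fail flows, which is exactly the misleading measurement the section warns against.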

You can take one of two approaches to handle these requirements:

Copyright IBM Corp. 1997, 2004