Some benchmarks may require that you modify IBM WebSphere InterChange Server content. This section describes those situations and how to modify the content.
You can specify the number of child object instances that are generated for a top-level source object when the benchmark executes. This optional technique yields a more accurate simulation than relying on a default number of instances for each child object. Follow these steps to implement this approach:
Figure 11 shows how to add the BenchNumContainedObjs parameter to a business object definition.
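The effect of such a per-object count can be sketched in plain Java. The class and names below (`ChildObjectGenerator`, the placeholder child instances) are hypothetical stand-ins for illustration only, not InterChange Server code; the sketch only shows how a configured count, when present, overrides a single default:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: these names are hypothetical stand-ins,
// not the InterChange Server API.
public class ChildObjectGenerator {

    // Hypothetical default used when no per-object count is configured.
    static final int DEFAULT_CHILD_COUNT = 1;

    /**
     * Returns the number of child instances to generate for one child
     * attribute: the configured BenchNumContainedObjs value if present,
     * otherwise the default.
     */
    static int childCount(Integer benchNumContainedObjs) {
        return (benchNumContainedObjs != null) ? benchNumContainedObjs
                                               : DEFAULT_CHILD_COUNT;
    }

    /** Generates that many placeholder child instances. */
    static List<String> generateChildren(String childType, Integer configuredCount) {
        int n = childCount(configuredCount);
        List<String> children = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            children.add(childType + "#" + i);  // placeholder child instance
        }
        return children;
    }
}
```

With a configured count of 3, three child instances are generated; with no configured value, the default of one instance is used.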
Another reason you may need to modify IBM WebSphere InterChange Server content in preparation for a benchmark is to handle special formatting operations in the components. Some examples of these operations are:
For instance, a transformation that implements a lookup relationship requires that the value in an attribute match a value in the lookup relationship's database tables. A lookup relationship might be used to perform static cross-referencing between an application that uses text country names and another that uses numeric country codes.
This sort of transformation is typically achieved with a lookup relationship, but can also be done by evaluating the values in Java code.
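What such a cross-reference amounts to can be sketched with an in-memory table. The class and the country/code pairs below are hypothetical examples, not InterChange Server code or an actual lookup relationship:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a static cross-reference between text country names
// and numeric codes, similar in effect to a lookup relationship.
public class CountryLookup {

    private static final Map<String, Integer> NAME_TO_CODE = new HashMap<>();
    static {
        NAME_TO_CODE.put("United States", 1);
        NAME_TO_CODE.put("Germany", 49);
        NAME_TO_CODE.put("Japan", 81);
    }

    /**
     * Returns the numeric code for a country name, or null when the value
     * is outside the cross-referenced set. The null case is what would
     * make a flow fail during a benchmark run with arbitrary sample data.
     */
    static Integer toCode(String countryName) {
        return NAME_TO_CODE.get(countryName);
    }
}
```

A value inside the set resolves to its code; any generated sample value outside the set resolves to nothing, which is the failure mode the benchmark must avoid.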
In this situation, an operation expects the value in an attribute to be formatted in a particular way: for instance, a series of tokens separated by delimiters in a particular order, or a specific date format.
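As a sketch of the two format expectations just described, the checks below test for three hyphen-separated tokens and for a compact date. Both formats are assumptions chosen for illustration; a real interface would define its own:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Illustrative only: two format checks of the kind a transformation
// might apply to an attribute value.
public class FormatChecks {

    /** True if the value is exactly three tokens separated by hyphens. */
    static boolean isDelimitedTriple(String value) {
        return value != null && value.matches("[^-]+-[^-]+-[^-]+");
    }

    /** True if the value parses as a date in yyyyMMdd form. */
    static boolean isCompactDate(String value) {
        try {
            LocalDate.parse(value, DateTimeFormatter.ofPattern("yyyyMMdd"));
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```

Randomly generated sample data fails checks like these almost every time, which is why the flows fail unless the data is prepared to match.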
Either of these situations typically occurs in mapping but can occur anywhere depending on the business requirements and how the interface is designed. Furthermore, a violation of any such transformation rule typically results in a failure of the flow, although this behavior also depends on the business requirements and how the interface is designed.
Special formatting operations such as these are likely to result in failed flows when performing a benchmark, because the system has no way of knowing that it must generate sample data from a specific set of values, or that the data must conform to a specific format. A benchmark that measures the throughput of a system where 100% of the flows fail does not present an accurate measure of the system's throughput capabilities, so it is important to avoid this problem.
You can take one of two approaches to handle these requirements:
With this approach you change the behavior of the interface so that the special formatting operations do not apply.
If the operation is performed in a map, then you can comment out the operation and recompile the map. If the operation is performed in a collaboration, then you must comment out the operation in the collaboration template, compile the template, and restart any collaboration objects that are based on the template. In both cases, make sure that no other operations reference or depend on values derived from the operations that are commented out.
This is the less helpful of the two approaches because special operations (lookup relationships in particular) affect the throughput of an interface, and the benchmark should be as accurate as possible. If the transformations are not executed as part of the benchmark, then the benchmark presents an inaccurate picture of the system's behavior. There may be occasions, however, where the only alternative is to modify the content in this way.
With this approach, you prepare a custom file of sample data whose values belong to the required subset or conform to the required format.
This is the more helpful of the two approaches because it allows the benchmark to emulate "real" transactions as accurately as possible. This may not always be a viable approach, however, in which case you may have to modify the content as a workaround.
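Preparing conforming sample data can be sketched as drawing values from the required subset and formatting dates the way the transformation expects. The value set, date pattern, and class below are assumptions for illustration, not part of any InterChange Server tooling:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.Random;

// Illustrative only: generating sample values that stay inside a
// required subset and conform to a required date format, so that
// benchmark flows are not failed by the transformation rules.
public class SampleDataGenerator {

    // Assumed to be the set that the lookup relationship cross-references.
    static final List<String> ALLOWED_COUNTRIES =
            List.of("United States", "Germany", "Japan");

    static final DateTimeFormatter DATE_FORMAT =
            DateTimeFormatter.ofPattern("yyyyMMdd");

    private final Random random;

    SampleDataGenerator(long seed) {
        this.random = new Random(seed);  // seeded for repeatable runs
    }

    /** Picks a country value from the allowed subset. */
    String nextCountry() {
        return ALLOWED_COUNTRIES.get(random.nextInt(ALLOWED_COUNTRIES.size()));
    }

    /** Formats a date the way the transformation expects it. */
    String formatDate(LocalDate date) {
        return DATE_FORMAT.format(date);
    }
}
```

Seeding the generator keeps runs repeatable, so successive benchmark executions process the same sample data and remain comparable.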
The ways you can generate sample data and modify it are covered in greater detail in the section Prepare sample data.