One type of database benchmark involves choosing a configuration parameter and running the test with different values for that parameter until the maximum benefit is achieved. A single test should include executing the application through several iterations (for example, 10 times) with the same parameter value to get an average timing, which will better show the effect of parameter changes.
When running your benchmark, the first iteration (called a warm-up run) should be considered a separate case from the subsequent iterations (called normal runs). This is necessary because the results from the warm-up run will include some start-up activities (such as initializing the buffer pool). Consequently, the warm-up run will take somewhat longer than normal runs. Although the information from the warm-up run may be realistically valid, it will not be statistically valid. Therefore, when calculating the average timing or CPU for a specific set of parameter values, use the results from normal runs.
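The rule above can be sketched in a few lines. This is a minimal illustration, not part of the product: the timing values are hypothetical sample data, and the only assumption is that the first element of the list is the warm-up run.

```python
# Sketch: average only the "normal" runs, treating the first
# iteration as a warm-up run whose timing is discarded because it
# includes start-up activities such as initializing the buffer pool.
# The timings below are hypothetical, not measured results.

def average_normal_runs(timings):
    """Return the mean of all iterations after the warm-up (first) run."""
    if len(timings) < 2:
        raise ValueError("need a warm-up run plus at least one normal run")
    normal = timings[1:]          # drop the warm-up run
    return sum(normal) / len(normal)

# Example: 10 iterations; the warm-up run is noticeably longer.
timings = [2.90, 2.15, 2.10, 2.12, 2.14, 2.11, 2.13, 2.10, 2.12, 2.13]
print(round(average_normal_runs(timings), 2))   # prints 2.12
```

Note that the warm-up value (2.90) never enters the average, so the reported timing reflects steady-state behavior only.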
You may want to consider using the Performance Configuration wizard to create the warm-up run of the benchmark. The questions asked by the Performance Configuration wizard provide insight into the factors to consider when adjusting the configuration of your environment for the normal runs of your benchmark. To use the Performance Configuration wizard, enter db2cc to get into the Control Center and proceed from there.
If you are benchmarking individual queries, you need to minimize the potential effects of previous queries. This can be accomplished by flushing the buffer pool, which can be done by reading a number of pages that are irrelevant to your query until the buffer pool is filled.
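A flushing step can be scripted as follows. This is a hedged sketch, not a DB2-supplied mechanism: FILLER_TABLE is a hypothetical large table unrelated to the query under test, and the execute callable stands in for whatever routine your harness uses to run SQL against the database.

```python
# Sketch: flush the buffer pool between single-query tests by scanning a
# large "filler" table whose pages are irrelevant to the query under test.
# FILLER_TABLE and the execute callable are illustrative assumptions; in a
# real harness, execute would run the statement over a live connection.

FILLER_SQL = "SELECT COUNT(*) FROM FILLER_TABLE"   # forces a full table scan

def flush_buffer_pool(execute, passes=1):
    """Read enough irrelevant pages to displace the previous query's pages."""
    for _ in range(passes):
        execute(FILLER_SQL)

# Usage with a stand-in execute function that simply records the statements:
issued = []
flush_buffer_pool(issued.append, passes=3)
print(issued)
```

Multiple passes may be needed if the filler table is smaller than the buffer pool; sizing the filler table to exceed the buffer pool makes one pass sufficient.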
After completing the iterations for a single set of parameter values, a single parameter can be changed. However, between each iteration, the following tasks should be performed to restore the benchmark environment to its original state:
The following are additional considerations when benchmarking on OS/2:
Output from the benchmark program should include an identifier for each test, the iteration of the program execution, the statement number, and the timing for the execution. A summary of benchmarking results after a series of measurements might look like the following:
Figure 102. Benchmark Sample Results
Test   Iter.  Stmt   Timing          SQL Statement
Numbr  Numbr  Numbr  (hh:mm:ss.ss)
002    05     01     00:00:01.34     CONNECT TO SAMPLE
002    05     10     00:02:08.15     OPEN cursor_01
002    05     15     00:00:00.24     FETCH cursor_01
002    05     15     00:00:00.23     FETCH cursor_01
002    05     15     00:00:00.28     FETCH cursor_01
002    05     15     00:00:00.21     FETCH cursor_01
002    05     15     00:00:00.20     FETCH cursor_01
002    05     15     00:00:00.22     FETCH cursor_01
002    05     15     00:00:00.22     FETCH cursor_01
002    05     20     00:00:00.84     CLOSE cursor_01
002    05     99     00:00:00.03     CONNECT RESET
Note: The data in the above report is shown for illustration purposes only. It does not represent measured results.
Examining this report would indicate that the CONNECT (statement 01) took 1.34 seconds, the OPEN CURSOR (statement 10) took 2 minutes and 8.15 seconds, the FETCHES (statement 15) returned seven rows with the longest delay being .28 seconds, the CLOSE CURSOR (statement 20) took .84 seconds, and the CONNECT RESET (statement 99) took .03 seconds.
It might be beneficial for your program to output your data in a delimited ASCII format so that it could later be imported into a database table or a spreadsheet for further statistical analysis.
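A sketch of such delimited output follows. The rows mirror the sample report above; the field order (test number, iteration, statement number, timing, SQL text) and the output file name are assumptions for illustration.

```python
# Sketch: write benchmark rows as comma-delimited ASCII so they can later
# be imported into a database table or spreadsheet for statistical
# analysis. Field order and file name are illustrative assumptions.

import csv

rows = [
    ("002", "05", "01", "00:00:01.34", "CONNECT TO SAMPLE"),
    ("002", "05", "10", "00:02:08.15", "OPEN cursor_01"),
    ("002", "05", "99", "00:00:00.03", "CONNECT RESET"),
]

with open("bench_results.del", "w", newline="") as f:
    csv.writer(f).writerows(rows)

print(open("bench_results.del").read())
```

A file in this form can be loaded into a DB2 table with the IMPORT utility's delimited ASCII (DEL) format, or opened directly in a spreadsheet.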
Sample output for a benchmark report might be:
Figure 103. Benchmark Sample Timings Report
PARAMETER VALUES FOR EACH BENCHMARK TEST

   TEST NUMBER     001    002    003    004    005
   locklist         63     63     63     63     63
>> buffpage       1000   1175   1250   1325   1400 <<
   maxappls          8      8      8      8      8
   applheapsz       48     48     48     48     48
   dbheap          128    128    128    128    128
   sortheap        256    256    256    256    256
   maxlocks         22     22     22     22     22
   stmtheap       1024   1024   1024   1024   1024

   SQL STMT       AVERAGE TIMINGS (seconds)
   01           01.34  01.34  01.35  01.35  01.36
   10           02.15  02.00  01.55  01.24  01.00
   15           00.22  00.22  00.22  00.22  00.22
   20           00.84  00.84  00.84  00.84  00.84
   99           00.03  00.03  00.03  00.03  00.03
Note: The data in the above report is shown for illustration purposes only. It does not represent any measured results.
Examining the data in this example shows that changing the buffpage parameter successively lowered the OPEN CURSOR times from 2.15 seconds to 1.00 second. (The assumption is that there is only one (1) buffer pool with the size (NPAGES) set to -1. This means the size of the buffer pool is controlled by the buffpage parameter.)
In summary, the following steps may be followed to benchmark a database application:
Run your set of iterations for this initial case and calculate the average timing or CPU.
Note: If you were to graph the performance results, you would be looking for the point where the curve begins to plateau or decline.
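Locating that point can also be done programmatically. The following sketch is illustrative only: the 5% improvement threshold is an arbitrary assumption, and the sample values mirror the buffpage tests and OPEN CURSOR timings shown earlier.

```python
# Sketch: find the first parameter value after which the timing stops
# improving by a meaningful margin (the curve begins to plateau).
# The 5% threshold is an arbitrary assumption for illustration.

def plateau_point(values, timings, min_gain=0.05):
    """Return the first value whose relative improvement falls below min_gain."""
    for prev, cur, v in zip(timings, timings[1:], values[1:]):
        if (prev - cur) / prev < min_gain:
            return v
    return values[-1]   # still improving at the last measured point

# With the sample buffpage data, every step still improves timings by more
# than 5%, so the last tested value is returned (keep increasing buffpage):
print(plateau_point([1000, 1175, 1250, 1325, 1400],
                    [2.15, 2.00, 1.55, 1.24, 1.00]))   # prints 1400
```

If the timings had flattened out, say to [2.15, 2.00, 1.99, 1.99], the function would instead return the value at which the gain dropped below the threshold, signalling that further increases are not worthwhile.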
You can write a driver program to help you with your benchmark testing. This driver program could be written using a language such as REXX or, for UNIX-based platforms, using shell scripts.
This driver program would execute the benchmark program, pass it the appropriate parameters, drive the test through multiple iterations, restore the environment to a consistent state, set up the next test with new parameter values, and collect/consolidate the test results. These driver programs can be flexible enough that they could be used to run the entire set of benchmark tests, analyze the results, and provide a report of the final and best parameter values for the given test.
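As a rough illustration of the driver's control flow, here is a sketch in Python (the document suggests REXX or shell scripts; this is an equivalent outline). The run_benchmark and restore_environment callables are placeholders: a real driver would invoke the benchmark program (for example, via subprocess) and reset the database between iterations.

```python
# Sketch of a benchmark driver: run the benchmark over several parameter
# values, discard the warm-up run, restore the environment between
# iterations, and consolidate average timings per parameter value.
# run_benchmark and restore_environment are placeholder callables.

def drive(param_values, run_benchmark, restore_environment, iterations=10):
    results = {}
    for value in param_values:
        timings = []
        for _ in range(iterations):
            timings.append(run_benchmark(value))   # returns elapsed seconds
            restore_environment()                  # back to original state
        normal = timings[1:]                       # discard the warm-up run
        results[value] = sum(normal) / len(normal)
    return results

# Usage with stand-ins that simulate a timing for each parameter value:
fake_run = lambda value: 2000.0 / value
summary = drive([1000, 1175], fake_run, lambda: None, iterations=3)
print(summary)
```

The per-value averages returned by such a driver are exactly what feeds a summary report like Figure 103, one column per test.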