Performance measurements are relative: they tell you how a system behaves for a particular workload. Usually, a system is considered to perform well if it can complete a given workload faster than other systems or with fewer resources.
In a test system, you can control the workload by running the same set of tasks many times. During each iteration you can measure how fast the system completes the tasks and how many resources it uses.
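A minimal sketch of such a controlled test run might look like the following; the run_task function, its workload, and the iteration count are hypothetical placeholders, and the same pattern applies to recording resource counters as well as elapsed time.

```python
import time

def run_task():
    # Hypothetical placeholder for the task being measured; in a real test
    # this would be the workload you want to time.
    return sum(i * i for i in range(100_000))

iterations = 10
elapsed = []

for _ in range(iterations):
    start = time.perf_counter()
    run_task()
    elapsed.append(time.perf_counter() - start)

# With the workload held constant across iterations, the average elapsed
# time (and its spread) describes how the system behaved for this run.
print(f"average elapsed time: {sum(elapsed) / len(elapsed):.4f} s "
      f"over {iterations} iterations")
```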
However, in a production system it is difficult to compare measurements taken at different times because the workload is constantly changing. To obtain a performance measurement, you must compare the average performance of your system measured over a period of time to the workload it processed during that time. To make these comparisons, you need to calculate two types of relative measurements:
Load: Measured as a rate, in tasks per unit time (a sketch of this calculation follows the list). These measurements help you determine the load on the database manager or the operating system over a period of time. High load values in some areas and low values in others may suggest a bottleneck in the system. Also, while similar load measurements do not guarantee that two workloads are comparable, different load measurements show that the workloads are not comparable.
Effective use: Measured in a range from 0% to 100% (where 100% indicates optimal performance). These measurements help you estimate how effectively the various buffers in the DASD I/O system are performing.
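As a sketch of a load calculation, the rate over an interval can be derived from two snapshots of a cumulative task counter; the counter values and interval length below are invented for illustration.

```python
# Hypothetical snapshots of a cumulative task counter, taken at the
# start and end of a measurement interval.
tasks_at_start = 12_000
tasks_at_end = 15_600
interval_seconds = 600  # a 10-minute interval

# Load expressed as a rate: tasks completed per unit time.
load = (tasks_at_end - tasks_at_start) / interval_seconds
print(f"load: {load:.1f} tasks per second")  # 6.0 tasks per second
```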
Effective use is calculated by comparing the number of pages the system looks for in a buffer to the number it finds there. You can think of this as the number of successes compared to the number of attempts. For example, if the database manager looks in the local buffers for a page 100 times, and finds the page it is looking for 75 times, the local buffers are 75% effective.
This percentage can be calculated in several ways:
(Success / Attempt) x 100% = ((Attempt - Failure) / Attempt) x 100% = (1 - Failure / Attempt) x 100%
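Using the earlier example of 75 successes in 100 attempts (and therefore 25 failures), all three forms give the same result:

(75 / 100) x 100% = ((100 - 25) / 100) x 100% = (1 - 25 / 100) x 100% = 75%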
You can also express effective use as a hit ratio with the following calculation:
Attempt / Failure
In this case, the higher the value of the hit ratio, the better the performance; the lowest possible value for a hit ratio is 1.
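A small sketch computing both values from the counts in that example; in practice, the attempt and failure counts would come from whatever counters your system exposes.

```python
# Counts from the earlier example: the database manager looked in the
# local buffers for a page 100 times and found it 75 times.
attempts = 100
successes = 75
failures = attempts - successes

# Effective use: successes as a percentage of attempts.
effective_use = successes / attempts * 100
print(f"effective use: {effective_use:.0f}%")  # 75%

# Hit ratio: attempts per failure; 1 is the lowest possible value and
# higher values indicate better performance.
hit_ratio = attempts / failures
print(f"hit ratio: {hit_ratio:.1f}")  # 4.0
```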
An important factor affecting the accuracy of your performance measurements is the sampling interval. Most useful performance values, whether measured directly or calculated from other measures, are averages over time.
If the sampling interval is too long, you may miss short-lived but significant changes. For example, you would not see a 10-minute peak in DASD paging load or a 10-minute drop in the effective use of the local buffers if you looked at the VMDSS performance counters only once a day.
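As an illustration of how a long sampling interval can hide such a peak, the following sketch averages a set of invented per-minute load samples over two different window lengths.

```python
# One day of hypothetical per-minute load samples: a steady load of 10
# tasks per minute, with a 10-minute peak of 100 tasks per minute.
samples = [10.0] * 1440
for minute in range(300, 310):
    samples[minute] = 100.0

def window_averages(values, window):
    # Average the samples over consecutive windows of the given length.
    return [sum(values[i:i + window]) / window
            for i in range(0, len(values), window)]

# A 10-minute sampling interval shows the peak clearly...
print(max(window_averages(samples, 10)))    # 100.0
# ...but a single once-a-day average washes it out almost entirely.
print(max(window_averages(samples, 1440)))  # roughly 10.6
```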
If the interval is too short, your results may not be statistically valid. For example, if one checkpoint occurred during a single 30-minute interval, you could not confidently conclude that the database manager was performing two checkpoints per hour.
You also need to weigh the benefits of making performance measurements against the additional overhead involved. While recording performance numbers every 10 seconds may give you an excellent picture of how your database is working, the additional load on the operating system may reduce your database's performance, or consume large amounts of DASD space.