Improving testHash performance

The data Quantify reported indicates that much of the expense of the hashing operation comes from the strcmp comparisons that must be performed to find the requested entry. Avoiding these excessive strcmp comparisons would significantly improve the speed of the hash package.

There are several possible approaches you could take to improve the performance of the hashtable package. You could improve the hash key function itself to distribute items more uniformly in the hashtable, thereby shortening the hash buckets and reducing the number of items that must be inspected to retrieve the requested item. Or, you could double the size of the hashtable array. This would distribute items into more buckets and make better use of the information in the computed hash key.
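
To see why a larger table would help, consider how the bucket index is formed from the hash key. The following sketch is illustrative only, using a made-up key value rather than the package's real hash function: the modulo operation keeps just the low-order portion of the full hash key, so a table with more buckets lets more of that information select the bucket.

    #include <stdio.h>

    int main(void)
    {
        /* A made-up full hash key; in the real package, the hash
         * function computes this value from the key string. */
        unsigned long fullKey = 0x7f4a7c15UL;

        /* The modulo operation keeps only the low-order bits of the
         * full hash key: 8 bits with 256 buckets, 9 bits with 512.
         * Doubling the table lets more of the key's information select
         * the bucket and spreads entries over more, shorter buckets. */
        printf("index in a 256-bucket table: %lu\n", fullKey % 256);
        printf("index in a 512-bucket table: %lu\n", fullKey % 512);
        return 0;
    }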

The approach you will use in this example is based on the idea that the full hash key is effectively a compressed version of the key string itself. The modulo operation, however, uses only a small fraction of that compressed information to form the index into the hashtable. The rest of the information encoded in the hash key is ignored. Since the hashIndex function computes the same value for identical strings, the testHash program could save the full hash key on each hash entry, compare the full hash keys, and call the strcmp routine only when the hash keys match. This hash key comparison would be much quicker than calling strcmp.

The source code for the improved testHash program is in the file <quantifyhome>/example_quantify/improved_hash.c. It differs from the original program only in the code that implements the hash key comparison.
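
The listing below is a minimal sketch of that change, not the actual improved_hash.c; the structure layout, the hash function, and names such as fullHashKey and NUM_BUCKETS are assumptions. The hashIndex function saves the full hash key in a global variable and getHash compares the saved keys before falling back to strcmp:

    #include <string.h>

    #define NUM_BUCKETS 256                     /* hypothetical table size */

    typedef struct HashEntry {
        char             *key;                  /* the key string */
        unsigned long     hashKey;              /* full hash key, saved when the entry is added */
        void             *value;
        struct HashEntry *next;                 /* next entry in the same bucket */
    } HashEntry;

    static HashEntry    *hashTable[NUM_BUCKETS];
    static unsigned long fullHashKey;           /* set by hashIndex on every call */

    /* Compute the full hash key, save it in the global, and fold it
     * into a bucket index with the modulo operation. */
    static int hashIndex(const char *key)
    {
        unsigned long hash = 0;
        while (*key)
            hash = hash * 31 + (unsigned char)*key++;
        fullHashKey = hash;
        return (int)(hash % NUM_BUCKETS);
    }

    /* Look up a key: compare the saved full hash keys first, and call
     * strcmp only when the hash keys match. */
    void *getHash(const char *key)
    {
        HashEntry *entry = hashTable[hashIndex(key)];

        for (; entry != NULL; entry = entry->next) {
            if (entry->hashKey == fullHashKey &&    /* cheap integer comparison */
                strcmp(entry->key, key) == 0)       /* confirm only on a hash match */
                return entry->value;
        }
        return NULL;
    }

In the same way, putHash would copy fullHashKey into the hashKey field when it creates an entry, and remHash would apply the same comparison before removing one.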

Running the improved_testHash program

Run the improved_testHash program.

% improved_testHash.pure 500 test_words

Compare the program summary for the improved_testHash run with the program summary from the original testHash run.

The count for Time in your code has decreased by 1.26 million cycles, from 5.03 million cycles to 3.77 million cycles. The program is now 25 percent faster.

Display the Function Detail window for the getHash function.

You can see that the number of calls to strcmp from getHash has decreased dramatically, from 17,563 to 750.

The time for getHash has decreased because, even though the routine now compares hash keys before calling strcmp, the cost of those comparisons is far smaller than the cost of the strcmp calls it avoids. Overall, the function+descendants time for getHash has decreased from the first run to the second.

Verifying the performance improvement

Save the export data from the improved_testHash run, then exit Quantify.

You can use Quantify's qxdiff script to compare the performance of the original testHash run with that of the improved_testHash run. The qxdiff script compares two export data files and reports any performance changes. Since you are interested only in the time spent in the code itself, you can use the -i option to ignore functions that make system calls.

% qxdiff -i testHash.pure.20790.0.qx improved_testHash.pure.20854.0.qx

The qxdiff report confirms a 25 percent improvement in the performance of the testHash program.

The putHash, getHash, and remHash functions are faster because they now avoid unnecessary calls to strcmp. The hashIndex function is slightly slower because it is saving the full hash key into a global variable.

For more information about using the qxdiff script, read Comparing program runs with qxdiff.