The testHash tutorial shows how you can use Quantify to find one type of performance bottleneck: inefficient computation. Quantify can also help you find and resolve these other causes of slow software:
As applications evolve and algorithms are refined, or as data changes, portions of code that were needed in earlier versions can fall into disuse without ever being removed. As a result, many large programs perform computations whose results are never used. The time wasted on this dead code becomes a bottleneck.
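For example (a hypothetical sketch; the function names are illustrative), a routine might still compute a value that no remaining caller ever examines:

    #include <stddef.h>

    /* compute_checksum() was once used to validate records, but no
     * caller looks at its result any longer. */
    static unsigned int compute_checksum(const char *buf, size_t len)
    {
        unsigned int sum = 0;
        size_t i;
        for (i = 0; i < len; i++)
            sum += (unsigned char) buf[i];
        return sum;
    }

    void process_record(const char *buf, size_t len)
    {
        compute_checksum(buf, len);   /* result discarded: dead computation */
        /* ... the work whose results are actually used ... */
    }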
Other common useless computations are those made automatically or by default, even if they are not required. Applications that needlessly free data structures during a program's shutdown, or open connections to workstations even though there isn't a user for them, are examples of this type of bottleneck.
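As an illustration (hypothetical code), a shutdown routine might walk a large list and free each node individually just before the process exits, even though the operating system reclaims the process's memory in any case:

    #include <stdlib.h>

    struct node {
        struct node *next;
        /* ... payload ... */
    };

    /* Freeing every node one by one immediately before exit() adds
     * measurable time without changing the outcome. */
    void shut_down(struct node *head)
    {
        while (head != NULL) {
            struct node *next = head->next;
            free(head);                /* needless work at shutdown */
            head = next;
        }
        exit(0);
    }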
Quantify helps find the time that is spent in dead code. Once you're convinced that the results of a computation are useless, you can remove the code.
Any computation that is performed before there is a need for its results can cause a bottleneck. For example, there may not be a reason to sort a list of numbers if the user hasn't requested that the sort be performed. Quantify can't tell you if the computation can be delayed; however, it can tell you the cost of the computation, and you can decide whether to postpone it.
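A minimal sketch of postponing such work (the display routine and the request flag are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *) a, y = *(const int *) b;
        return (x > y) - (x < y);
    }

    /* Sort only when the user has actually asked for sorted output,
     * rather than sorting every list as it is built. */
    void display_numbers(int *numbers, size_t count, int user_wants_sorted)
    {
        size_t i;
        if (user_wants_sorted)
            qsort(numbers, count, sizeof numbers[0], cmp_int);  /* cost paid only on request */
        for (i = 0; i < count; i++)
            printf("%d\n", numbers[i]);
    }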
Programs sometimes recompute needed values rather than caching them for later use. For example, determining the length of a constant string can result in needless computation if the computation is embedded in a loop; the length of the string is recomputed many times, each time getting the same value. Quantify can tell you where the recomputation is taking place, and you can decide to store the value after one computation.
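A sketch of the pattern (hypothetical code): in the first version the loop bound is recomputed on every pass; in the second it is computed once and cached.

    #include <string.h>

    /* Before: strlen() is called on every iteration, always returning
     * the same value for the same string. */
    void to_upper_slow(char *s)
    {
        size_t i;
        for (i = 0; i < strlen(s); i++)       /* strlen() runs each pass */
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] = s[i] - 'a' + 'A';
    }

    /* After: compute the length once and reuse the cached value. */
    void to_upper_fast(char *s)
    {
        size_t len = strlen(s);               /* one computation */
        size_t i;
        for (i = 0; i < len; i++)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] = s[i] - 'a' + 'A';
    }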
A poor choice of algorithm or data structure layout can cause extra work for the program. Performance can appear acceptable on small datasets, yet degrade sharply on larger or more complex ones. This is what happened in the testHash program described earlier.
Quantify can tell you the cost of each computation at different scales so you can predict whether there will be a problem with still larger datasets. You can then use alternative algorithms and data structures that get the job done faster.
Bottlenecks can be caused by the way your own code uses operating system or third-party library services. Making library or system-call requests when you don't need the results is the same as performing needless computations.
Quantify shows you the time spent in the operating system or third-party libraries. You can see how much a request actually costs and make an informed decision about eliminating the request or pooling similar requests for more efficient service.
It is common with operating-system requests to make more requests than necessary. Quantify helps you identify excessive requests so you can design an alternative implementation.
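As an illustration (hypothetical code), writing output one character at a time issues one write() system call per byte, where a single buffered request delivers the same data:

    #include <string.h>
    #include <unistd.h>

    /* Before: one write() system call per character. */
    void emit_slow(const char *msg)
    {
        size_t i, len = strlen(msg);
        for (i = 0; i < len; i++)
            (void) write(STDOUT_FILENO, &msg[i], 1);   /* excessive requests */
    }

    /* After: a single system call for the same output. */
    void emit_fast(const char *msg)
    {
        (void) write(STDOUT_FILENO, msg, strlen(msg));
    }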
Some operating-system calls can vary in the amount of time they require. For example, opening and accessing files across a network can be slower when there is increased network traffic. On most UNIX file systems, opening or calling the stat function on a file using a fully qualified pathname requires the operating system to verify the existence of each intermediate directory. When stat is called using a relative pathname, the operating system starts checking from the current working directory, thereby reducing the cost of the system call. The elapsed time that Quantify reports for system calls helps you see when they slow down so you can explore less expensive implementations.
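For example (the pathnames are hypothetical), these two stat calls request the same information, but the fully qualified form makes the operating system verify every intermediate directory, while the relative form starts at the current working directory:

    #include <sys/stat.h>
    #include <unistd.h>

    void check_file(void)
    {
        struct stat sb;

        /* Fully qualified pathname: each intermediate directory is checked. */
        (void) stat("/net/server/home/user/project/data/records.db", &sb);

        /* Relative pathname: lookup starts at the current working directory. */
        (void) chdir("/net/server/home/user/project/data");
        (void) stat("records.db", &sb);
    }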
External or environmental factors, such as high network delay or a high load average on the machine, can cause slow performance. Your program can also exhibit large swapping and paging effects, which Quantify cannot measure directly. These factors show up in Quantify's reports as increased system-call times.