A note on optimizing code
Some say that the golden rule of optimizing is “don’t”. This “golden rule” is one of those saws that should be taken with a large grain of salt. It might be better to say: don’t optimize until you know what you are doing. Be that as it may, there are at least three major areas where one can optimize.
Choice of algorithm is an obvious path to improvement; indeed, “choosing the right algorithm” is often recommended as the key to optimization. Regrettably, this is an overrated panacea. It is quite true that poor choices of algorithm can have an enormous impact on the performance of programs. However, implemented algorithms are usually available as canned code. In those cases where algorithm choice matters, one selects (or should select) an appropriate package and uses it.
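As an illustrative sketch (in Python, a choice of language the note itself does not make), here is the point about canned code: rather than hand-rolling a search over a sorted list, one reaches for the standard library's binary search, which embodies the better algorithm already.

```python
import bisect

def linear_contains(sorted_items, x):
    # O(n): scans element by element, even though the list is sorted
    for item in sorted_items:
        if item == x:
            return True
        if item > x:
            return False
    return False

def bisect_contains(sorted_items, x):
    # O(log n): the "canned" binary search from the standard library
    i = bisect.bisect_left(sorted_items, x)
    return i < len(sorted_items) and sorted_items[i] == x

data = list(range(0, 1_000_000, 2))
assert linear_contains(data, 999_998) == bisect_contains(data, 999_998) == True
assert linear_contains(data, 3) == bisect_contains(data, 3) == False
```

The two functions agree on every input; the difference is that one of them was worth writing and the other was already written, tested, and faster.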
In most programs the bulk of the code, particularly that actually written by the programmer, does not consist of well-defined, clearly delimited algorithms. Rather, it consists of “program structure”, i.e., definitions of various sorts, and executive routines that do some work and delegate other work. In practice the most costly thing in large programs is inefficient program structure: work continually repeated because of poor choices in program layering and modularity.
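A hypothetical sketch of such repeated work (the function names are invented for illustration): a low-level helper rebuilds something expensive on every call because the layering hides the fact that the layer above invokes it in a tight loop.

```python
import re

def tokenize_slow(line):
    # Repeated work: the regex is recompiled on every single call,
    # invisible to the caller that loops over thousands of lines.
    pattern = re.compile(r"\w+")
    return pattern.findall(line)

# The same work hoisted out of the call: compile once, reuse everywhere.
_WORD = re.compile(r"\w+")

def tokenize(line):
    return _WORD.findall(line)

lines = ["the quick brown fox"] * 3
assert [tokenize(l) for l in lines] == [tokenize_slow(l) for l in lines]
```

The results are identical; the waste lives entirely in the structure, which is why no amount of tuning inside the regex engine would find it.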
Some of this useless work can be spotted with the aid of a profiler: a large fraction of execution time spent in service utilities, e.g., the storage allocator, often points to unnecessary work.
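A minimal sketch of that kind of profiling, again in Python using the standard-library profiler (the chatty string-building function is invented for illustration; allocation-heavy work of this sort is what a profile report tends to surface):

```python
import cProfile
import io
import pstats

def chatty_concat(parts):
    # Builds the result one piece at a time, allocating a fresh
    # string on every iteration -- the allocator does the real work.
    s = ""
    for p in parts:
        s = s + p
    return s

profiler = cProfile.Profile()
profiler.enable()
chatty_concat(["x"] * 20_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
# Heavy time attributed to service routines in the report flags
# work that the program structure is repeating needlessly.
assert "chatty_concat" in report
```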
However, much of it comes from an overly narrow view of the functional bits and pieces. Two things happen: the interface protocols explode, and the program elements drown in repetitive chatter as they talk back and forth, passing small bits of information.
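A hypothetical sketch of that chatter (the `Store` class and its methods are invented for illustration): an interface that accepts one record at a time pays its per-call overhead once per record, while a slightly wider interface pays it once.

```python
class Store:
    def __init__(self):
        self.rows = []
        self.calls = 0  # stand-in for per-call protocol overhead

    def add_one(self, row):
        # Chatty: one crossing of the interface per small bit of data.
        self.calls += 1
        self.rows.append(row)

    def add_many(self, rows):
        # Wider view of the function: one crossing, many rows.
        self.calls += 1
        self.rows.extend(rows)

chatty, batched = Store(), Store()
data = list(range(1000))
for r in data:
    chatty.add_one(r)
batched.add_many(data)

assert chatty.rows == batched.rows
assert chatty.calls == 1000 and batched.calls == 1
```

Both stores end up holding the same data; the difference is a thousandfold in how often the pieces had to talk to each other.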
This page was last updated June 1, 2004.