Knuth states the algorithm as follows: We create a list L of sublists, each with a length that is a power of 2, ordered by decreasing length. We loop through the records one at a time. We add each record at the end of L as a run of length 1, and then repeatedly merge the two smallest sublists as long as they are of equal length. This formulation suggests an iterative implementation; one in pseudocode is given in Appendix I. When I read the initial post I reinvented the algorithm on the spot and formulated it informally as follows:
Sort the first two elements. You now have a sorted list of length 2. Sort the next two elements to get a second sorted list; merge it with the first. You now have a sorted list of length 4. Sort the next four elements and merge to get a list of length 8. Repeat as needed. This formulation suggests a recursive implementation; one is given in Appendix II. My impression is that iterative implementations are more flexible.
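For concreteness, here is a runnable Python sketch of the iterative formulation, using Python lists in place of linked lists. The names (mccarthy_sort, slots) are mine; the pseudocode in Appendix I remains the reference.

```python
def merge(a, b):
    """Merge two sorted Python lists (stand-in for a linked-list merge)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def mccarthy_sort(items):
    slots = []  # slots[i] holds a sorted run of length 2**i, or None
    for x in items:
        run = [x]           # each new element enters as a run of length 1
        level = 0
        while level < len(slots) and slots[level] is not None:
            # two runs of equal length: merge them and carry upward
            run = merge(slots[level], run)
            slots[level] = None
            level += 1
        if level == len(slots):
            slots.append(run)
        else:
            slots[level] = run
    result = []
    for run in slots:       # final cleanup pass over the remaining runs
        if run is not None:
            result = merge(run, result)
    return result
```

The slot array behaves like a binary counter: adding an element is an increment, and a carry is a merge of two equal-length runs.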
The advantage of this scheme for sorting lists is that it eliminates excess scans of the data. In a recursive version we have to scan lists to find their midpoints. In an iterative version we have to scan the list to find the second of a pair of lists being merged. No such scans are required in McCarthy's algorithm; each element of the original list is extracted once, in sequential order.
If we count extracting sequential items from the data and the intermediate merges as fundamental operations, then McCarthy's algorithm performs exactly the same operations, in the same order, as a recursive mergesort that always makes the first partition as large a power of two as possible.
In the discussion that Schoen started, Eric Sosman pointed out that the partition sizes can be quite unbalanced, the extreme case being when N is a power of 2 plus 1. When they are unbalanced, the final merges can be quite inefficient. For example, if N = 1025, the final step is to merge a list of length 1024 with a list of length 1; on average this costs 512 comparisons. Sosman suggested the following modification:
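The unbalanced cleanup merges can be read directly off the binary representation of N: after the main loop has consumed N items, the runs awaiting the cleanup pass have lengths equal to the powers of two in N's binary expansion. A small check of the N = 1025 case (the function name is mine):

```python
def cleanup_run_lengths(n):
    # The slot array acts as a binary counter, so after n increments the
    # occupied slots correspond exactly to the set bits of n.
    return [1 << i for i in range(n.bit_length()) if (n >> i) & 1]

print(cleanup_run_lengths(1025))  # → [1, 1024]
```

A run of length 1 left over next to a run of length 1024 is precisely the wildly unbalanced final merge described above.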
"To avoid wildly unbalanced merges during cleanup, use two arrays alist and blist instead of the single array list. The "incrementing" process first tries to fill the NULL link in alist, but if it's non-NULL it uses blist instead. If both are non-NULL, it merges those two lists, deposits the new sub-list in alist and NULLs blist, then "carries" the merge result one place to the left:"
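One possible realization of this two-array scheme in Python, under my reading of the quote; the names (sosman_sort, merge) and the representation of runs as Python lists are mine:

```python
def merge(a, b):
    """Merge two sorted Python lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def sosman_sort(items):
    alist, blist = [], []          # up to two runs may rest at each level

    def ensure(level):
        while len(alist) <= level:
            alist.append(None)
            blist.append(None)

    for x in items:
        run, level = [x], 0
        while True:
            ensure(level)
            if alist[level] is None:       # first try the alist slot
                alist[level] = run
                break
            if blist[level] is None:       # if non-NULL, use blist instead
                blist[level] = run
                break
            # both full: merge the residents, deposit the new run in alist,
            # NULL blist, and carry the merged result one place to the left
            carry = merge(alist[level], blist[level])
            alist[level] = run
            blist[level] = None
            run = carry
            level += 1
    result = []
    for a, b in zip(alist, blist):         # cleanup pass
        for r in (b, a):
            if r is not None:
                result = merge(r, result)
    return result
```

Allowing two runs per level delays each carry until a third run of that size arrives, which keeps every merge between runs of equal size until the cleanup pass.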
An important feature of McCarthy's algorithm is that it does not require knowing the length of the list. The implication is that we can stop at any time during the sorting of a stream of input and return the stream sorted to date (after making the final cleanup pass). Sosman's modification retains this advantage. However, it is worthwhile considering modifications that do require knowing the length of the list.
The mergesort process has the tournament pairing structure, the difference being that the merger of a pair of entries advances rather than the "winner" of the pair. The tournament structure requires that the number of entries be a power of two. If the number is not a power of two, the tournament tree is filled out by adding dummy entries called byes. In a recursive mergesort the partitions are divided as evenly as possible; this has the effect of distributing the byes uniformly throughout the first round of the tournament. The basic McCarthy algorithm puts all of the byes at the tail of the tournament, which creates pairs of byes. It is this pattern of paired byes that creates the final unbalanced merges.
If we know the length we can compute the number of byes needed. If we use special-case code for extracting pairs initially (something we want to do in any case) then we can either extract a pair and merge it, or extract a single item, depending on whether we want a bye or not. This can be done simply enough by keeping a counter of the number of byes issued so far.
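Assuming the length N is known up front, here is one way such a bye counter might look in Python. The even spacing of byes via an integer counter is my own illustration, not code from the discussion; all names are mine.

```python
def merge(a, b):
    """Merge two sorted Python lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def bye_balanced_sort(items):
    n = len(items)
    if n < 2:
        return list(items)
    p = 1 << (n - 1).bit_length()   # tournament size: next power of two >= n
    pairs = p // 2                  # number of first-round pairs
    byes = p - n                    # dummy entries needed to fill the tree
    it = iter(items)
    slots = []                      # slots[i]: run covering 2**i leaves, or None
    issued = 0                      # byes issued so far
    for k in range(pairs):
        if (k + 1) * byes // pairs > issued:
            run = [next(it)]        # a single item paired with a bye
            issued += 1
        else:                       # special-case pair extraction
            a, b = next(it), next(it)
            run = [a, b] if a <= b else [b, a]
        level = 1                   # every first-round result covers 2 leaves
        while level < len(slots) and slots[level] is not None:
            run = merge(slots[level], run)
            slots[level] = None
            level += 1
        if level >= len(slots):
            slots.extend([None] * (level + 1 - len(slots)))
        slots[level] = run
    result = []
    for run in slots:               # cleanup pass
        if run is not None:
            result = merge(run, result)
    return result
```

Because every first-round result is treated as covering two tournament leaves, the byes are spread across the first round rather than piling up at the tail, and no cleanup merge is wildly unbalanced.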
Is the McCarthy algorithm just a minor refinement of the merge sort algorithm for sorting lists? No. It actually is a general pattern for processing the tournament pairs pattern. Instead of going through all pairs in one round and then their "survivors" in another round, etc., it works across the tournament tree from left to right, only accessing the next first-round pair when all previous "contests" have been completed, i.e., a stream of data can be accessed on an as-needed basis instead of being read in all at once. Likewise, pairs are not required for the pattern; it could use fixed-size tuples or even variable-sized tuples. All that is required is that there be some mechanism for combining the elements in a tuple.
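To make the generality concrete, here is a sketch (tree_reduce is my name) that applies an arbitrary binary combining function in McCarthy's left-to-right tournament order. With a parenthesizing combiner the tree shape becomes visible; with list concatenation the original order is preserved.

```python
def tree_reduce(items, combine):
    """Combine a stream of items in tournament order, one item at a time."""
    slots = []  # slots[i]: combination of 2**i consecutive items, or None
    for item in items:
        level = 0
        while level < len(slots) and slots[level] is not None:
            item = combine(slots[level], item)  # earlier data on the left
            slots[level] = None
            level += 1
        if level == len(slots):
            slots.append(item)
        else:
            slots[level] = item
    result = None
    for s in slots:  # cleanup: lower levels hold the later data
        if s is not None:
            result = s if result is None else combine(s, result)
    return result

# The byes-at-the-tail shape shows up in the parenthesization:
print(tree_reduce("abcde", lambda x, y: "(" + x + y + ")"))  # → (((ab)(cd))e)
```

Mergesort is the special case where the items are one-element runs and combine is merge; any other combining mechanism slots in unchanged.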
Appendix I - an iterative implementation

Here is pseudocode for an iterative implementation:

function msort(inlist)
    array lists
    maxlevel = 0
    outlist = nil
    while (inlist)
        item = inlist.head
        inlist = inlist.tail
        level = 0
        while (lists[level])
            item = merge(item, lists[level])
            lists[level] = nil
            level++
        end while
        lists[level] = item
        maxlevel = max(level, maxlevel)
    end while
    for level = 0 to maxlevel
        outlist = merge(outlist, lists[level])
    end for
    return outlist
Appendix II - a recursive implementation

Here is pseudocode for a recursive implementation. The extract function extracts an element from the input list, inlist; extracted items are removed from inlist. Function beta returns a two-list tuple.
function mergesort(inlist)
    (inlist, outlist) = alpha(inlist, nil, 0)
    return outlist

function alpha(inlist, outlist, level)
    if not (inlist) return (nil, outlist)
    (inlist, partial) = beta(inlist, level)
    outlist = merge(outlist, partial)
    return alpha(inlist, outlist, level+1)

function beta(inlist, level)
    (inlist, run_1) = cond (level == 0), extract(inlist), beta(inlist, level-1)
    if not (inlist) return (nil, run_1)
    (inlist, run_2) = cond (level == 0), extract(inlist), beta(inlist, level-1)
    return (inlist, merge(run_1, run_2))
This page was last updated February 6, 2007.