madhadron

How I think about algorithms

I was prompted by a discussion to try to articulate how I think about algorithms, that is, how I remember them, recruit them for tasks, and adjust or create them. The closest thing I’ve seen in writing is Skiena’s The Algorithm Design Manual, which takes pains to connect algorithms to the forms of problems where they arise, but there are certain principles I learned in pure mathematics which I haven’t seen anyone state for working programmers.

First, I don’t remember proofs of correctness. I remember recovery procedures. This is a distinction that I got from Borovik’s Mathematics Under the Microscope (which I highly recommend):

A recovery procedure is a set of heuristic rules which we vaguely remember to apply when we want to recover a mathematical fact. It is like a poster on the control panel of some serious machine, a submarine or a plane, which says what the crew should do when things have gone haywire. A rederivation is a semiheuristic argument made in accordance with the recovery procedure.

- Borovik, Mathematics Under the Microscope, p. 212

When I want to rederive depth first search in a graph, I do so by taking depth first search on a tree and adding a set where I record the nodes I have already visited, so that I never visit a node twice. This provides no proof of correctness, but it provides a very fast path back to the algorithm.
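As a minimal sketch of that recovery (assuming the graph is given as a dict from each node to a list of its neighbors; the names are my own):

def dfs(graph, start):
    # Tree DFS plus one addition: a set of nodes already visited.
    visited = set()

    def visit(node):
        if node in visited:   # the only change from the tree version
            return
        visited.add(node)
        for neighbor in graph[node]:
            visit(neighbor)

    visit(start)
    return visited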

Second, I try to construct my recovery procedures from techniques that cut as broadly as possible. The techniques involved in summing a list of integers and finding the longest path from a node in a graph may seem different, but my recovery procedures for both share the same piece: attaching an accumulation state to each step of the traversal.

If I were only thinking of summing lists, I might remember a single accumulator that I update from step to step. At some element of the list the accumulator has the value a, and after processing that element it has a new value a’. That doesn’t work for graphs, though, since a given node can have multiple successors, and thus a may have multiple new values, one for each next node. So I always think in terms of attaching an accumulator to each stage of the traversal, plus a second piece for lists: I remember that I don’t need any of the previous states, so I can collapse them into a single variable that I update in place. This adds complexity to the list case, but provides me with a piece of recovery procedure that I can share widely.
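In code, the two cases might look like the sketch below. The list case collapses its accumulators into one variable; for the graph case I assume a DAG (a dict from each node to its successors), since longest path is only tractable this way on graphs without cycles:

def sum_list(xs):
    # Per-step accumulators collapsed into a single variable updated in place.
    accum = 0
    for x in xs:
        accum = accum + x
    return accum

def longest_path_lengths(dag, start):
    # One accumulator per node: the best path length found so far.
    best = {start: 0}
    stack = [start]
    while stack:
        node = stack.pop()
        for succ in dag[node]:
            candidate = best[node] + 1
            if candidate > best.get(succ, -1):
                best[succ] = candidate   # a node's accumulator can be improved
                stack.append(succ)       # via several different predecessors
    return best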

Why do I care about sharing pieces of recovery procedures widely across problems? For those not trained as mathematicians, the obvious answer might be to have less to remember. That’s not it at all. You don’t really save much, to be honest. The important reason is that, when I am groping towards the solution to a new problem, I want to have as many pieces to hand as possible that may help me. If I am faced with a problem on trees, that same piece about attaching accumulators to traversal steps is ready to hand and hints at a solution.

Third, I organize my algorithms by the forms of traversal they do. There is really only one way to traverse a singly linked list or other sequence: one element after another, accumulating state as you go. You may modulate the traversal with a conditional, such as to skip elements or to do different things based on those elements, but in the end it is always

accum = ...                 # initial state
for x in xs:
    if p(x):                # predicates modulate the traversal
        accum = f(x, accum)
    elif q(x):
        accum = g(x, accum)
    ...
return accum

Continuing with linear structures, if we allow random access then we have a few new strategies, like binary search and working with subsequences. If we allow high-dimensional arrays then we get a whole bestiary of traversals from numerical analysis, largely related to multiplying matrices efficiently.
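Binary search is exactly the kind of strategy random access unlocks. A minimal sketch, using the half-open interval convention (my choice, not the only one):

def binary_search(xs, target):
    # xs must be sorted; returns an index of target, or None if absent.
    lo, hi = 0, len(xs)         # the interval [lo, hi) still to search
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1        # target, if present, is to the right
        elif xs[mid] > target:
            hi = mid            # target, if present, is to the left
        else:
            return mid
    return None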

When I have a problem on a linear structure, I first try to identify the weakest linear structure that I can work with. Could I do this on a singly linked list, or do I need to backtrack or revisit things? Once I have that, I know the traversal schemes available to me.

Also in this vein are structures with no sequence at all: maps, sets, and multisets (ignoring maps used to represent the adjacency lists of graphs). We still traverse them linearly, but there is no fixed order. We may want to sort them into some order before traversing, or we may insist that each step of our algorithm is commutative so that it doesn’t matter in what order we visit the elements.
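A sketch of both options (the function names are my own):

def total(numbers):
    # Addition is commutative and associative, so visit order doesn't matter.
    accum = 0
    for x in numbers:           # iteration order over a set is unspecified
        accum = accum + x
    return accum

def report(table):
    # Concatenation is order-sensitive, so impose an order before traversing.
    lines = []
    for key in sorted(table):   # fix a traversal order first
        lines.append(f"{key}: {table[key]}")
    return "\n".join(lines)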

When we get out of linear structures, we go straight into graphs. Trees are a special case of graphs, and most graph traversals consist of pruning the graph into a tree as we go. If we’re actually working with a tree, we may impose some additional machinery to keep the tree balanced, but that’s a modulation of the basic idea: running the algorithm on an unbalanced tree may worsen the complexity, but it doesn’t change the correctness. So I figure out my traversal ignoring balancing and then add balancing in a later step.
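As an illustration of deferring balance, here is plain binary search tree insertion, sketched without any balancing; rotations for an AVL or red-black tree could be layered on afterwards without changing what the traversal computes:

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Correct on any shape of tree; balance only affects the complexity.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root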

So, when I am exploring a new problem, I generally figure out the weakest structure I can represent it in without making the problem impossible. That sets my available traversals. Then I reach for cross-cutting pieces to generate heuristic guesses at a solution. I then try to prove or disprove them, usually completely informally, by constructing minimal counterexamples that I have to correct the algorithm to handle, or by hypothesizing an invariant that I need to maintain. This makes new problems far more familiar than not. After a while you stop hitting many problems that feel really new, though your collection of recovery pieces keeps shifting to give you a smoother reach as you see more.

And, to be brutally honest, I also do this for problems I already know the answer to. Do I remember how to do binary search on a sorted array or depth first search of a graph? Not at all. But I can get it back fast enough that it doesn’t matter, and doing so reinforces the neural pathways that I will need for novel problems.