Programming work has a tree structure: software consists of many files, which consist of many functions, which are themselves implemented with other functions, recursively.

There are two approaches to this work: depth-first and breadth-first.

Depth-first programming is implementing each new function you need on demand. This approach risks “blowing your mental stack”: finishing a sub-task only to forget what the point of it was. It can also lead to premature optimization, and to premature coding in general, when continued work reveals that a different architecture is required.

Breadth-first programming is “stubbing out” the functions you will eventually need, and solving your higher-level task in terms of those stubs. This keeps the overall architecture in view (revealing design flaws early) and reduces the mental burden of keeping a “task-stack” in memory.
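As a minimal sketch of what “stubbing out” might look like in Python (the function names here are hypothetical, not from any particular codebase):

```python
# Breadth-first: write the top-level task first, in terms of stubs.
# The architecture is visible before any helper is implemented.

def summarize_log(path: str) -> str:
    """High-level task, solved in terms of stubbed helpers."""
    entries = parse_entries(path)
    stats = compute_stats(entries)
    return format_report(stats)

def parse_entries(path: str) -> list:
    raise NotImplementedError  # stub: fill in once the design settles

def compute_stats(entries: list) -> dict:
    raise NotImplementedError  # stub

def format_report(stats: dict) -> str:
    raise NotImplementedError  # stub
```

Calling `summarize_log` immediately raises `NotImplementedError`, which is the point: the stubs act as a written-down task-stack, and each one can be filled in later without re-deriving the overall design.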

Hypothesis: The main value of TDD is that it encourages breadth-first programming.

At a certain point, the task becomes small enough to keep in working memory, and switching from breadth-first to depth-first is optimal, because abstraction incurs its own overhead. In terms of functional programming, this is the point at which breaking a function into multiple sub-functions increases your total lines of code. Likewise with OOP.
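A toy illustration of that switching point (the example and names are mine, chosen only to show the line-count effect):

```python
# Below the switching point: the whole task fits in working memory,
# and a direct implementation is shortest.
def mean(xs):
    return sum(xs) / len(xs)

# The same logic split into sub-functions. Total lines of code go up,
# a sign that abstraction has stopped paying for itself here.
def total(xs):
    return sum(xs)

def count(xs):
    return len(xs)

def mean_abstract(xs):
    return total(xs) / count(xs)
```

Both versions compute the same result; the second just spends more lines saying it.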

If I’m correct, the optimal switching point will vary from coder to coder, based on working memory capacity. That capacity could be augmented by tools that manage the task-stack for the programmer (e.g. a whiteboard next to the computer, with the task tree written on it).

Is depth-first or breadth-first an accurate characterization of how you program?