Why Working Complexity Must Grow From Simplicity


A complex system that works is invariably found to have evolved from a simple system that worked. — John Gall

Why does this line stick?

Gall’s Rule in Plain Terms

John Gall’s observation is less a mystical law than a recurring pattern in how real things get built: systems that succeed in the messy world usually didn’t begin as grand designs. They began as something small that already worked, and then they accumulated capability. In that sense, “simple” doesn’t mean trivial; it means understandable enough to run, test, and trust. From there, complexity becomes an outcome rather than a starting point. Gall’s rule highlights a practical asymmetry: it’s far easier to add parts to a functioning core than to debug a sprawling mechanism whose basics never stabilized.

Why Top-Down Complexity So Often Breaks

To see why complex systems fail when built all at once, consider what complexity does to feedback. In a large design, cause and effect are separated by layers, so errors hide. By the time the system is running, failures may look like “emergent behavior,” when they are simply untested assumptions colliding. Consequently, the initial version of a system must be legible enough that people can predict it, observe it, and repair it. When complexity arrives before reliability, you get a paradox: the system needs operation to learn from, yet it can’t operate long enough to teach you.

Evolution as a Design Strategy

Gall’s language—“evolved from”—invites a biological analogy: evolution doesn’t draft a perfect organism; it iterates from viable predecessors. In the same way, successful engineering and organizations tend to proceed by variation and selection: try a minimal structure, see what survives contact with reality, and keep what works. This doesn’t eliminate planning, but it reorders priorities. Instead of attempting to foresee every interaction, you build a small “survivor,” then let evidence guide which features are worth adding and which should be removed before they ossify into permanent liabilities.
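The variation-and-selection loop above can be sketched in a few lines of code. This is an illustrative toy, not anything from Gall: `evolve`, `mutate`, and `fitness` are hypothetical names, and the "reality" being survived is just a scoring function.

```python
import random

def evolve(candidate, mutate, fitness, steps=200, seed=0):
    """Start from a viable candidate; keep only variants that work better."""
    rng = random.Random(seed)
    best, best_score = candidate, fitness(candidate)
    for _ in range(steps):
        variant = mutate(best, rng)   # small change to a working system
        score = fitness(variant)      # contact with "reality"
        if score > best_score:        # selection: keep what survives
            best, best_score = variant, score
    return best

# Toy example: evolve a list of numbers toward a target sum.
target = 42
fitness = lambda xs: -abs(sum(xs) - target)

def mutate(xs, rng):
    ys = list(xs)
    ys[rng.randrange(len(ys))] = rng.randint(0, 10)
    return ys

best = evolve([1, 1, 1, 1, 1], mutate, fitness)
```

The point of the sketch is the shape of the loop, not the numbers: every iteration begins from something that already works, so the system never passes through a state where nothing runs.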

Learning Loops: The Hidden Engine of Progress

A simple working system creates tight feedback loops: you can measure outcomes quickly, attribute them to specific choices, and adapt. That learning loop is often more valuable than any single feature, because it turns uncertainty into information. As the system grows, those loops can slow down; therefore, starting small isn’t merely a convenience—it’s how you preserve learning while the stakes are low. In practice, this is why prototypes, pilots, and “minimum viable products” are not buzzwords but mechanisms for converting ideas into tested knowledge.

Software and Infrastructure: Familiar Proof Cases

Software offers obvious examples: a reliable kernel, a basic API, or a small service that does one job well can expand into a platform. By contrast, “big bang” rewrites often stall because they postpone real-world validation until the end, when changes are most expensive. The same principle appears in infrastructure and policy. A city transit improvement that begins with a workable bus-lane pilot can scale into a network redesign, whereas a sweeping overhaul that launches fully formed may collapse under operational surprises, political backlash, or maintenance realities discovered too late.
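The "small service that does one job well" pattern can be made concrete with a minimal sketch. The `Service` class and its handlers below are hypothetical, invented for illustration: the core dispatch mechanism stabilizes first, and new capability is added around it without modifying it.

```python
class Service:
    """A tiny dispatch core: one proven mechanism, many added capabilities."""

    def __init__(self):
        self._handlers = {}

    def register(self, name, handler):
        # Growth happens here: new capability, same proven core.
        self._handlers[name] = handler

    def handle(self, name, payload):
        if name not in self._handlers:
            raise KeyError(f"unknown operation: {name}")
        return self._handlers[name](payload)

# Version 1: one job, done well.
svc = Service()
svc.register("echo", lambda p: p)

# Later versions add handlers without touching the dispatch logic.
svc.register("upper", lambda p: p.upper())
```

Because each addition is a separate module plugged into an unchanged core, a failing feature implicates its own handler rather than the whole system, which is exactly the legibility the big-bang rewrite forfeits.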

How to Apply Gall’s Rule Deliberately

Applying Gall’s rule means identifying the smallest version of the system that delivers real value and can run end-to-end. You then guard that core: keep it observable, keep failure modes understandable, and resist “feature debt” that complicates fundamentals before they’re stable. Finally, you grow complexity only when the system earns it—when new parts solve demonstrated problems and integrate without obscuring how the whole behaves. In this way, complexity becomes a controlled accumulation of proven modules, rather than a fragile monument to untested ambition.
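One way to read "keep it observable" in code: the smallest end-to-end version logs every stage, so outcomes stay attributable to specific choices as stages accumulate. This is a hypothetical sketch, assuming a simple linear pipeline; `run_pipeline` and the stage names are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_pipeline(raw, stages):
    """Run the smallest end-to-end version, logging each stage so cause
    and effect stay visible as the system grows."""
    value = raw
    for name, stage in stages:
        value = stage(value)
        log.info("stage=%s output=%r", name, value)
    return value

def validate(n):
    if n < 0:
        raise ValueError(f"negative input: {n}")
    return n

# Minimal core: parse -> validate. Add a stage only when a demonstrated
# problem requires it, never speculatively.
stages = [
    ("parse", lambda s: int(s)),
    ("validate", validate),
]
```

New stages join the list one at a time, each visible in the logs from its first run, so complexity accumulates as proven modules rather than as opaque coupling.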