
Measure Fast, Iterate Faster: How Goals Are Won

Created at: August 24, 2025

Measurements and rapid iterations are essential for achieving a goal. - Doran Gao

Why Measurement Comes First

To begin, the claim that measurements and rapid iterations drive achievement asserts a pragmatic sequence: observe, adjust, and advance. Without numbers, we steer by opinion; with them, we steer by evidence. Measurement establishes a baseline, exposes trend lines, and turns fuzzy intentions into testable claims. It shrinks the feedback loop between action and insight, letting teams detect whether they are moving toward or away from the goal. Consequently, data does not replace judgment, but it does discipline it—making success less accidental and more repeatable, and setting the stage for purposeful iteration.

Iteration as Accelerated Learning

Historically, this rhythm of measuring and refining appears in the Shewhart–Deming cycle: Plan–Do–Study–Act. W. Edwards Deming’s Out of the Crisis (1982), building on Walter A. Shewhart’s statistical foundations (1939), frames improvement as a series of tightly scoped experiments. Toyota’s kaizen culture reinforced this logic on factory floors by favoring small, frequent changes over sweeping overhauls (Masaaki Imai, 1986). In each case, rapid cycles reduce the cost of being wrong: errors surface earlier, fixes are cheaper, and learning compounds. From here, the same logic naturally migrated to software and product development.

From Factories to Startups

In modern product work, Eric Ries’s The Lean Startup (2011) popularized the build–measure–learn loop, urging teams to ship small, measure real behavior, and iterate. Online experiments make this concrete: Ron Kohavi et al.’s Trustworthy Online Controlled Experiments (2020) shows that many confident product ideas fail A/B tests, while measured insights uncover unexpected wins. Companies like Microsoft and Booking.com scaled this approach, treating each release as a hypothesis rather than a foregone success. Thus measurement turns iteration into an engine of discovery, not just speed for its own sake.
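The A/B tests Kohavi et al. describe reduce to a simple statistical question: is the observed difference between two variants larger than chance would explain? A minimal sketch, using a standard two-proportion z-test on hypothetical conversion counts (the numbers and function name are illustrative, not from any cited study):

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical experiment: control converts 120/2400, variant 150/2400.
lift, p = z_test_two_proportions(120, 2400, 150, 2400)
print(f"lift={lift:.4f}, p={p:.3f}")
```

Run on these made-up numbers, a 25% relative lift still comes back with p ≈ 0.06, which is exactly the kind of humbling result Kohavi et al. report: intuitions about "obvious wins" often fail to clear the evidence bar.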

Engineering at Full Scale

Consider SpaceX’s Starship program, which embraced rapid prototyping and data-rich tests. Early high-altitude flights (SN8–SN10) ended in dramatic “rapid unscheduled disassemblies,” yet each yielded telemetry that informed design tweaks. The result was SN15’s successful high-altitude flight and landing on May 5, 2021—a milestone earned through cycles of measured failure and targeted refinement. Crucially, the cadence created momentum: short intervals between tests meant lessons were fresh, teams stayed aligned, and design bets could be retired or reinforced quickly. Hardware advanced by the same measure–iterate logic familiar in software.

Goals That Invite Measurement

Of course, iteration needs direction. Objectives and Key Results (OKRs), first championed by Andy Grove in High Output Management (1983) and popularized by John Doerr’s Measure What Matters (2018), tie aspirations to quantifiable outcomes. Similarly, SMART goals—Specific, Measurable, Achievable, Relevant, Time-bound—originated with George T. Doran in Management Review (1981). Both frameworks make goals legible to data: they define what to measure, the intended time horizon, and the threshold for success. As a result, rapid iterations converge rather than wander, because each cycle tests progress against a clearly named aim.
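Making a goal "legible to data" can be as concrete as scoring each key result against its start and target values. A minimal sketch, with an illustrative scoring scheme (OKR tooling varies; the function and numbers here are hypothetical):

```python
def okr_progress(key_results):
    """Average progress across key results, each scored 0.0-1.0.
    Each key result is a (start, current, target) triple; progress is
    the fraction of the distance from start to target, clamped to [0, 1]."""
    scores = [
        min(max((current - start) / (target - start), 0.0), 1.0)
        for start, current, target in key_results
    ]
    return sum(scores) / len(scores)

# Illustrative key results: raise activation 0.30 -> 0.45, cut cycle time
# 120 -> 80 minutes (a decreasing target also works), ship 4 experiments.
kr = [(0.30, 0.39, 0.45), (120.0, 100.0, 80.0), (0.0, 2.0, 4.0)]
print(f"objective progress: {okr_progress(kr):.2f}")
```

Because start, target, and horizon are written down up front, each iteration cycle can be checked against the same yardstick rather than a shifting definition of "done."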

Measure What Matters, Not What’s Easy

Yet not all metrics lead to progress. Goodhart’s Law—articulated by Charles Goodhart (1975) and later paraphrased by Marilyn Strathern (1997)—warns that when a measure becomes a target, it can stop being a good measure. Vanity metrics (page views, raw downloads) may rise without improving value, whereas actionable metrics tie cause to effect (activation, retention, cycle time). As Ries (2011) notes, the key is to prefer leading indicators connected to customer value and to guard them with counter-metrics (for example, quality or safety) so improvements don’t create hidden regressions.
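The counter-metric idea can be encoded directly in a release gate: accept a change only if the primary metric improved and no guardrail regressed past a tolerance. A minimal sketch with hypothetical metric names and thresholds:

```python
def passes_guardrails(primary_lift, guardrails, tolerance=0.01):
    """Accept a change only if the primary metric improved and no
    guardrail (counter-metric) regressed by more than `tolerance`.
    `guardrails` maps metric name -> observed delta (negative = worse)."""
    if primary_lift <= 0:
        return False
    return all(delta >= -tolerance for delta in guardrails.values())

# Hypothetical result: activation rose 3%, but a latency guardrail
# regressed 4% -- past tolerance, so the change is rejected.
result = passes_guardrails(
    primary_lift=0.03,
    guardrails={"crash_free_sessions": -0.002, "p95_latency": -0.04},
)
print(result)  # False
```

The design choice matters: the gate is deliberately asymmetric. A guardrail cannot make a change "win," it can only veto one, which keeps the primary metric a target and the counter-metrics honest measures, per Goodhart's warning.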

Building the Feedback Loop

Practically, teams can operationalize this mindset by instrumenting products, defining baselines, and writing hypotheses before changes ship. Short batch sizes, weekly or biweekly review cadences, and lightweight retrospectives sustain momentum. An experiment log preserves decisions and prevents reinvention; dashboards make learning visible; and pre-defined stop rules curb wishful thinking when results disappoint. With each cycle, insight compounds, risk declines, and execution sharpens—fulfilling the spirit of Doran Gao’s claim: measurement illuminates the path, and rapid iteration carries you along it.
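The practices above (hypothesis written before shipping, a recorded baseline, a pre-defined success threshold and stop rule) can be captured in a tiny experiment-log schema. A sketch under assumed field names; real logs will carry more context:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Experiment:
    """One entry in a lightweight experiment log (illustrative schema)."""
    hypothesis: str         # written down before the change ships
    metric: str             # the single metric the hypothesis predicts
    baseline: float         # measured before the change
    target: float           # pre-defined threshold for "confirmed"
    stop_after_days: int    # pre-defined stop rule to curb wishful waiting
    started: date = field(default_factory=date.today)
    observed: Optional[float] = None

    def verdict(self) -> str:
        if self.observed is None:
            return "running"
        return "confirmed" if self.observed >= self.target else "refuted"

exp = Experiment(
    hypothesis="Shorter onboarding raises activation",
    metric="activation_rate",
    baseline=0.41, target=0.45, stop_after_days=14,
)
exp.observed = 0.43
print(exp.verdict())  # refuted: below the pre-defined target
```

Because the target and stop rule are recorded before results arrive, a disappointing 0.43 is logged as "refuted" rather than argued into a win, and the entry remains in the log so the next cycle does not re-run the same bet.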