
How to Measure Anything

Douglas W. Hubbard

Nonfiction | Book | Adult | Published in 2007

Plot Summary

Douglas W. Hubbard, a management consultant and inventor of the Applied Information Economics (AIE) method, opens the third edition of his book with a bold claim: the widespread belief that certain business quantities are "intangibles," or things that cannot be measured, is a costly myth. Drawing on over two decades of consulting experience across industries ranging from insurance and cybersecurity to military logistics and environmental policy, Hubbard argues that anything relevant to a decision can be measured, provided one understands what "measurement" actually means and is willing to apply a few proven methods.


Hubbard begins by reframing the concept of measurement itself. He contends that most people mistakenly equate measurement with calculating an exact value, when in fact measurement is simply "a quantitatively expressed reduction of uncertainty based on one or more observations" (31), a definition rooted in information theory pioneer Claude Shannon's 1948 work. Under this definition, even imprecise observations count as measurements if they tell you more than you knew before. He further introduces the Bayesian interpretation of probability, in which probability represents a person's degree of belief rather than an objective property of reality, and argues this interpretation is more useful for real-world decisions than the competing frequentist view, which defines probability as an idealized frequency over infinite trials.
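
To make the Bayesian idea concrete, here is a minimal Python sketch (not from the book; every number is an invented assumption) of updating a degree of belief with Bayes' theorem:

    # Bayesian update: belief that a project is "troubled" after a warning sign.
    # All numbers here are hypothetical, chosen only to illustrate the mechanics.
    prior = 0.20               # initial degree of belief that the project is troubled
    p_sign_if_troubled = 0.85  # chance of seeing the warning sign if troubled
    p_sign_if_fine = 0.10      # chance of seeing the same sign if the project is fine

    # Bayes' theorem: P(troubled | sign) = P(sign | troubled) * P(troubled) / P(sign)
    p_sign = p_sign_if_troubled * prior + p_sign_if_fine * (1 - prior)
    posterior = p_sign_if_troubled * prior / p_sign
    print(f"Belief rises from {prior:.0%} to {posterior:.0%}")  # 20% -> 68%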


To demonstrate that difficult measurements are possible, Hubbard presents three "measurement mentors." The ancient Greek scholar Eratosthenes measured Earth's circumference in the third century B.C. using only shadow angles in two cities and simple geometry, arriving within roughly 3% of the true value. Nobel Prize-winning physicist Enrico Fermi showed how to estimate seemingly unknowable quantities by decomposing problems into smaller, more estimable components. And nine-year-old Emily Rosa designed a controlled experiment using a cardboard screen, coin flips, and randomization to test whether therapeutic touch practitioners could detect human energy fields; across 280 trials with 21 therapists, they performed no better than chance, and her results were published in the Journal of the American Medical Association. Together, these examples illustrate that clever indirect observations, decomposition, and simple scientific methods like sampling, randomization, and blind testing are accessible even on a small budget.
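
Fermi-style decomposition can be shown in a few lines. The sketch below works the classic piano-tuners estimate often attributed to Fermi; every input is a rough assumption, and the point is the structure of the decomposition, not the numbers:

    # Fermi decomposition: estimate piano tuners in a large city from rough,
    # independently guessable inputs (all values are illustrative assumptions).
    population = 3_000_000
    people_per_household = 2.5
    households_with_piano = 0.05             # fraction owning a piano
    tunings_per_piano_per_year = 1
    tunings_per_tuner_per_year = 4 * 5 * 50  # 4 a day, 5 days a week, 50 weeks

    pianos = population / people_per_household * households_with_piano
    tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
    print(f"~{tuners:.0f} piano tuners")  # roughly 60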


Hubbard then systematically dismantles objections to measurement. He addresses misconceptions about the concept (resolved by the uncertainty-reduction definition), failure to define the object of measurement (resolved by his "clarification chain," which holds that if something matters it must be detectable, and if detectable it can be measured), and ignorance of available methods. He introduces the "Rule of Five," which states there is a 93.75% chance that the median of a population falls between the smallest and largest values of any random sample of five, showing that even tiny samples are far more informative than most people realize. He also argues that economic objections are valid only when measurement costs exceed benefits (which is rarely the case), that the claim "you can prove anything with statistics" conflates persuasion with proof, and that ethical objections to measuring things like human life are self-defeating because refusing to measure forces worse allocation of limited resources.
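
The Rule of Five follows from basic probability: each random sample value has a 50% chance of landing above the median, so the chance that all five land on the same side of it is 2 × 0.5^5 = 6.25%, leaving a 93.75% chance that the median falls inside the sample range. A quick simulation (a sketch, not Hubbard's code) confirms the figure:

    import random

    # Rule of Five: the median lies between the min and max of a random
    # sample of five 93.75% of the time. Verify on an arbitrary population.
    population = [random.gauss(100, 15) for _ in range(100_001)]
    median = sorted(population)[len(population) // 2]

    trials, hits = 100_000, 0
    for _ in range(trials):
        sample = random.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    print(f"Median inside sample range: {hits / trials:.2%}")  # ~93.75%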


The core of the book presents Hubbard's five-step AIE framework. The first step is defining the specific decision a measurement supports; if no decision can be identified, the measurement has no value. The second step is quantifying current uncertainty through calibrated probability estimation, a teachable skill in which estimators learn to provide confidence intervals whose stated probabilities match their actual accuracy. Hubbard reports that among 927 training participants, most are initially overconfident: when asked for 90% confidence intervals (ranges with a 90% chance of containing the true answer), they capture only about 53% of true answers. However, approximately 80% reach ideal calibration after a half-day of training using techniques such as the "equivalent bet test," which asks estimators to compare betting on their stated range against betting on a random dial with matching odds, and to adjust the range until both bets feel equivalent.
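
Scoring calibration is straightforward in principle, as the hypothetical sketch below shows: check how many of an estimator's stated 90% intervals actually contain the true answers. (Hubbard's training uses batteries of trivia questions; the three items here are made up for illustration.)

    # Calibration score: what fraction of stated 90% confidence intervals
    # actually contain the true value? (Data below is invented for illustration.)
    answers_and_intervals = [
        # (true value, stated low, stated high)
        (1969,   1950,   1975),    # year of the first Moon landing (a hit)
        (8849,   7000,   9500),    # height of Mount Everest in meters (a hit)
        (299792, 150000, 250000),  # speed of light in km/s (a miss)
    ]

    hits = sum(low <= truth <= high for truth, low, high in answers_and_intervals)
    rate = hits / len(answers_and_intervals)
    print(f"Hit rate: {rate:.0%} (a calibrated estimator should hit ~90%)")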


The third step is computing the value of additional information. Hubbard defines Expected Opportunity Loss (EOL) as the chance of being wrong multiplied by the cost of being wrong, and the Expected Value of Information as the reduction in EOL a measurement provides. Across over 80 major decision analyses totaling more than 7,000 variables, he finds that most variables have an information value near zero, but typically one to four per model justify deliberate measurement. He calls this pattern the "measurement inversion": the variables with the highest information value are usually the ones organizations measure least, so the most valuable measurements frequently target the supposedly "intangible" variables rather than familiar cost and schedule items.
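
The information-value arithmetic can be shown with invented numbers. Suppose a decision turns out wrong 40% of the time at a cost of $500,000, and a proposed measurement would cut the chance of being wrong to 10%:

    # Expected Opportunity Loss (EOL) = chance of being wrong * cost of being wrong.
    # All figures are invented for illustration.
    p_wrong = 0.40           # chance the chosen course of action fails
    cost_if_wrong = 500_000  # loss if it fails

    eol_before = p_wrong * cost_if_wrong  # $200,000

    # A measurement that cuts the chance of being wrong to 10% is worth the
    # reduction in EOL (the Expected Value of Information).
    eol_after = 0.10 * cost_if_wrong      # $50,000
    evi = eol_before - eol_after
    print(f"EOL before: ${eol_before:,.0f}; after: ${eol_after:,.0f}; EVI: ${evi:,.0f}")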


The fourth step applies measurement methods where information value is high. Hubbard introduces Monte Carlo simulation, a computer method that generates thousands of random scenarios from probability distributions for uncertain inputs, as the fundamental tool for quantifying risk. He critiques popular risk-scoring methods using "high, medium, low" labels as meaningless and error-introducing, and notes a "risk paradox": organizations tend to apply quantitative risk analysis only to routine decisions while leaving the largest, riskiest choices to informal methods. He then surveys empirical techniques including small-sample statistics, controlled experiments, regression modeling for isolating individual variables, and Bayesian statistics for combining prior knowledge with new observations. Throughout, he argues that what decision makers need is not traditional statistical significance but sufficient uncertainty reduction to improve decisions.
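
A minimal Monte Carlo sketch in the spirit of the book's examples (the profit model, the ranges, and the (high - low)/3.29 conversion from a normal 90% interval to a standard deviation are illustrative assumptions): draw thousands of scenarios for the uncertain inputs and count how often the result falls below a loss threshold.

    import random

    # Monte Carlo risk analysis: propagate 90% confidence intervals for uncertain
    # inputs through a simple profit model. All ranges here are illustrative.
    # For a normal 90% CI (5th-95th percentile), std dev = (high - low) / 3.29.
    def sample_from_ci(low, high):
        mean = (low + high) / 2
        return random.gauss(mean, (high - low) / 3.29)

    trials, losses = 100_000, 0
    for _ in range(trials):
        units_sold = sample_from_ci(80_000, 120_000)
        margin = sample_from_ci(3.00, 5.00)           # profit per unit, dollars
        fixed_costs = sample_from_ci(250_000, 400_000)
        profit = units_sold * margin - fixed_costs
        if profit < 0:
            losses += 1
    print(f"Chance of losing money: {losses / trials:.1%}")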


The book's later chapters address measuring subjective preferences (including willingness-to-pay methods and organizational risk tolerance), the human mind as a flawed measurement instrument, and corrective methods. Hubbard documents cognitive biases such as anchoring, the halo/horns effect, and bandwagon conformity. He presents Paul Meehl's landmark finding that simple statistical models outperform expert judgment across over 150 studies, and describes corrective tools including Rasch models (which standardize evaluations across different judges and difficulty levels), the Lens Model (which uses regression on expert estimates to create a formula removing human inconsistency), and simple equally weighted scoring. He critiques methods like the Analytic Hierarchy Process for lacking empirical evidence of improving decisions. A case study from Life Technologies, Inc. demonstrates the Lens Model reducing revenue forecasting error by 76%. Hubbard also surveys emerging instruments including GPS tracking, Internet-based data analysis, and prediction markets.
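
The Lens Model can be sketched in a few lines: regress an expert's past judgments on the cues the expert saw, then apply the fitted linear formula, which reproduces the expert's judgment policy without the expert's inconsistency. The cues and forecasts below are fabricated for illustration; this is not Life Technologies' actual model.

    import numpy as np

    # Lens Model sketch: fit a linear formula to an expert's past estimates
    # so the formula can stand in for the expert's noisy judgment.
    cues = np.array([       # e.g., [market size, prior-year sales] per product
        [10.0, 4.0],
        [ 6.0, 3.0],
        [12.0, 5.5],
        [ 8.0, 2.5],
        [ 9.0, 4.5],
    ])
    expert_forecasts = np.array([7.1, 4.4, 8.6, 4.9, 6.8])  # expert's revenue calls

    # Least-squares fit with an intercept term.
    X = np.column_stack([cues, np.ones(len(cues))])
    weights, *_ = np.linalg.lstsq(X, expert_forecasts, rcond=None)

    # The fitted formula now scores new cases consistently.
    new_case = np.array([11.0, 3.5, 1.0])  # cues plus the intercept's constant 1
    print(f"Model forecast: {new_case @ weights:.2f}")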


The book concludes with three detailed case studies. For the Environmental Protection Agency's Safe Drinking Water Information System, a model of 99 variables showed only one required further measurement; a Bayesian review of existing analyses resolved the uncertainty, justifying all three proposed improvements. For U.S. Marine Corps fuel forecasting, road experiments with GPS and fuel flow meters revealed that the biggest forecast error came from whether convoy routes were paved or unpaved, and the resulting tool cut error roughly in half, saving at least $50 million per year. For the ACORD insurance industry standards valuation, a survey of 149 member organizations confirmed an average 23% reduction in implementation time, and the total estimated value of industry standards exceeded $1 billion annually. Hubbard closes by reiterating his central message: if something is important enough to care about, it is observable in some way, and even simple methods can achieve significant uncertainty reduction at a fraction of the value gained.
