How to measure anything: Finding the value of “intangibles” in business – Douglas W. Hubbard
Who should read this book?
Although many individuals stand to benefit from its groundbreaking ideas, the book’s dense content may pose a challenge for casual readers.
If you are a CFO or Program Manager responsible for evaluating decisions within your organization, Hubbard’s work introduces practical techniques and strategies that can enhance your decision-making processes. By applying concepts such as 90% confidence intervals and by breaking down abstract notions like security or quality into tangible components, you will gain a fresh perspective on evaluating the impact of your choices.
If you are an investor seeking to make informed decisions about potential opportunities, the book provides a toolkit to assess the worth of different investment options. With concepts like Monte Carlo simulations and Bayesian statistics at your disposal, you will develop a stronger ability to gauge the risks and rewards associated with various ventures.
Why should you read this book (or not)?
The book was a demanding read, but it contains several thought-provoking ideas that were definitely worth the effort, so I am very thankful it was recommended to me. For people with little background in statistics (or who simply don’t like the subject), many chapters will be hard to process. The first 100 pages are much less demanding, require no statistics, and contain great insights too.
The book contains several concepts that are definitely worth knowing, such as:
- The meaning & purpose of measurements
- Using 90% confidence intervals
- Breaking down an abstract thing (like security or quality) into tangible items
- The application of Monte Carlo Simulations and Bayesian Statistics in practical real-life examples
- Biases and flaws in our way of thinking and in decision making
- …
Sometimes, I was left empty-handed. For example, chapter 12 contains a paragraph stating: “suppose you wanted to evaluate the proficiency of project managers (…). Rasch developed a solution to this problem.” Unfortunately, the author then continues with a different example. More generally, the author offers a starting point that is not in itself sufficient to get the job done easily yourself. You will need additional resources to convert a spotted opportunity into a value-driven conclusion.
I am happy I read it in full. However, you might prefer acquiring that knowledge some other way rather than crawling through this book.
Interesting extracts
“There are 3 reasons for measurement:
- To inform a key decision
- It has its own market value (e.g. results of a consumer survey) and could be sold.
- To entertain or satisfy a curiosity”
“In some cases I’ve observed, the committees were categorically rejecting any investment where the benefits were “soft”. Important factors with names like “improved word-of-mouth advertising,” “reduced strategic risk,” or “premium brand positioning” were being ignored in the evaluation process because they were considered immeasurable. (…) In an equally irrational way, an immeasurable would be treated as a key strategic principle or “core value” of the organization. In some cases, decision makers effectively treat this alleged intangible as a “must have” so that the question of the degree to which the intangible matters is never considered in a rational, quantitative way.”
“Definition of Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.”
“I might try a “thought experiment.” Imagine you are an alien scientist who can clone not just sheep or even people but entire organizations. Let’s say you were investigating a particular fast food chain and studying the effect of a particular intangible, say, “employee empowerment.” You create a pair of the same organization calling one the “test” group and one the “control” group. Now imagine that you give the test group a little bit more “employee empowerment” while holding the amount in the control group constant. What do you imagine you would actually observe—in any way, directly or indirectly—that would change for the first organization? Would you expect decisions to be made at a lower level in the organization? Would this mean those decisions are better or faster? Does it mean that employees require less supervision? Does that mean you can have a “flatter” organization with less management overhead? If you can identify even a single observation that would be different between the two cloned organizations, then you are well on the way to identifying how you would measure it.”
“Suppose, instead, you just randomly pick five people. There are some other issues we’ll get into later about what constitutes “random,” but, for now, let’s just say you cover your eyes and pick names from the employee directory. Call these people and, if they answer, ask them how long their commute typically is. When you get answers from five people, stop. Let’s suppose the values you get are 30, 60, 45, 80, and 60 minutes. Take the highest and lowest values in the sample of five: 30 and 80. There is a 93.75% chance that the median of the entire population of employees is between those two numbers. I call this the “Rule of Five.” The Rule of Five is simple, it works, and it can be proven to be statistically valid for a wide range of problems. With a sample this small, the range might be very wide, but if it is significantly narrower than your previous range, then it counts as a measurement. Note that in this case the “population” is not just the number of employees but the number of individual commute times (for which there are many varying values even for the same employee). (…)
Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.”
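The 93.75% figure follows from a simple argument: each randomly sampled value independently has a 50% chance of landing above the population median, so the median falls outside the sample’s min–max range only when all five values land on the same side, with probability 2 × (1/2)^5 = 6.25%. Here is a minimal Python sketch that checks this by simulation, using a made-up population of commute times (the numbers are mine, not the book’s):

```python
import random

def rule_of_five_rate(population, trials=50_000):
    """Fraction of random 5-samples whose [min, max] range
    contains the population median."""
    pop = sorted(population)
    median = pop[len(pop) // 2]  # simple median for an odd-sized population
    hits = sum(
        min(s) <= median <= max(s)
        for s in (random.sample(population, 5) for _ in range(trials))
    )
    return hits / trials

# Hypothetical population of 1,001 commute times in minutes
# (log-normal, i.e. skewed, like real commutes).
random.seed(0)
commutes = [round(random.lognormvariate(3.5, 0.5)) for _ in range(1001)]

# The result lands close to the theoretical 1 - 2 * 0.5**5 = 0.9375.
print(rule_of_five_rate(commutes))
```

With discrete values, ties at the median can push the empirical rate slightly above 93.75%; for continuous measurements the 93.75% is exact.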
“The point is that when people say “You can prove anything with statistics,” they probably don’t really mean “statistics,” they just mean broadly the use of numbers (especially, for some reason, percentages). And they really don’t mean “anything” or “prove.” What they really mean is that “numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.” With this, I completely agree but it is an entirely different claim.”
“The most common objection I might hear for building Monte Carlos is, however, the practicality of modeling real world problems in what strikes some as an academic abstraction. (…) “Yes, but our problem really is uniquely complex.” To a certain extent, I agree. Problems managers deal with are all as unique as snowflakes – just like all the other snowflakes.”
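For readers who have never run one, a Monte Carlo simulation is less exotic than it sounds: express each uncertain input as a probability distribution (Hubbard typically starts from calibrated 90% confidence intervals), draw thousands of random scenarios, and read the risk off the resulting distribution of outcomes. A sketch with invented numbers (the inputs are my assumptions, not the book’s):

```python
import random
import statistics

def ci90_to_normal(low, high):
    """Interpret a 90% confidence interval as a normal distribution:
    the interval spans roughly +/- 1.645 standard deviations."""
    return (low + high) / 2, (high - low) / (2 * 1.645)

# Invented inputs for a cost-saving project, each given as a 90% CI.
UNITS = ci90_to_normal(15_000, 35_000)    # units affected per year
SAVING = ci90_to_normal(10, 20)           # dollars saved per unit
COST = ci90_to_normal(250_000, 400_000)   # one-off implementation cost

random.seed(0)
net = [
    random.gauss(*UNITS) * random.gauss(*SAVING) - random.gauss(*COST)
    for _ in range(100_000)
]

print(f"median net benefit: ${statistics.median(net):,.0f}")
print(f"probability of a loss: {sum(x < 0 for x in net) / len(net):.0%}")
```

Instead of a single point estimate, the output is a distribution, so the chance of losing money falls out of the model for free.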
“Everything we know from “experience” is just a sample. We didn’t actually experience everything; we experienced some things and we extrapolated from there.”
“The idea of “statistically significant” is often completely misremembered as some fixed minimum sample size or is invoked informally (i.e. without any calculation) as an objection to measurement. Even if the math for statistical significance is remembered and done correctly, the results are often misinterpreted. Even if the math is right and interpreted correctly, the mathematically precise meaning of statistical significance is not really what a decision maker wanted to know in the first place. (…) Statistical significance is not about whether the measurement was informative or economically justified.”
“But we probably wouldn’t be that interested in the probability that the market test was successful, given that there was a first-year profit. What we really want to know is the probability of a first-year profit, given that the market test was successful. That way, the market test can tell us something useful about whether to proceed with the product. This is what Bayes’ theorem does.”
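Bayes’ theorem does exactly this inversion: it turns P(test success | profit), which a market test’s track record can tell you, into P(profit | test success), which is what the decision actually needs. A small sketch with invented probabilities (the book works through its own numbers):

```python
def bayes(p_e_given_h: float, p_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) = P(E | H) * P(H) / P(E), with P(E) expanded
    via the law of total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Invented inputs: a 40% prior chance of a first-year profit; the market
# test succeeds 80% of the time for eventually profitable products and
# 30% of the time for unprofitable ones.
p = bayes(p_e_given_h=0.8, p_h=0.4, p_e_given_not_h=0.3)
print(f"P(first-year profit | successful test) = {p:.2f}")  # 0.64
```

A successful test lifts the probability of profit from the 40% prior to 64%, which is exactly the kind of update a go/no-go decision can use.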