Posted on April 20, 2024

Thinking more like an economist

We hear a lot about trying to “think like a scientist” as we develop a growth mindset and learn how to navigate uncertainty. But I’m starting to think we might be better off trying to “think like an economist.”

Establishing a laboratory-like setting, testing a null hypothesis, controlling for variables, crunching the data… those core capabilities of science still feel a little “over-the-top” for most of corporate strategy and product development. Creating a control group? Randomized tests? Conceptually, sure. But in reality, not so much.

But there is a group of professionals out there who have learned to answer questions in a complex space over which they have little direct control. A space where it’s less about physics and more about human behavior. Less physical nature, more human nature. And it affects our happiness, so it’s not trivial.

So maybe we should observe how economists approach their work?

The “dismal science” of economics has been forced to build models and explore hypotheses under complex and uncertain conditions since its birth. Economists operate within robust decision architectures that help them build and test beliefs about behaviors and trends in complex domains like the world economy.

What’s really interesting is how transparent much of their work is.

This week, I read a weekly newsletter from economist (and Nobel Prize winner) Paul Krugman that delivered a master class in good decision architecture. Granted, he’s more of an observer/pundit/analyst/advisor than a decision-maker here, but I still found myself thinking that if we could bring this mindset to our corporate settings, it would go a long way toward improving our organizational decision-making capabilities.

His newsletter, titled “Stumbling into Goldilocks”, provided an updated analysis of the current state of the US economy, tracking the efforts of the Fed as it aims to fight inflation while avoiding a recession. For a couple years now, economists (including Krugman) have been very vocal about their (competing) hypotheses on the best course of action. This post was Krugman’s check-in on the debate, and an assessment of what could be learned about beliefs and past decisions from new evidence.

What follows is a breakdown of what I observed in Krugman’s analysis, and a discussion of how we could aim to do something similar in our organizational contexts.

In the article, I saw several behaviors worth emulating. Let’s take a closer look at what he was doing:

  1. Explore alternative metrics to better seek truth

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

Metrics derive from models, and all models are wrong but sometimes useful. So it’s important to keep an open mind to additional metrics that can paint a richer picture of the fuzzy, emergent truth.

In this case, Krugman is questioning the standard metric, and using a comparison of two metrics to assess which one offers a better model of the true context. No one metric can tell the whole story, and we see weaknesses in our defined measures the more we use them. This isn’t a flaw; it’s a benefit of learning.

  2. Ask “Were the decisions good? or lucky?”

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

You couldn’t ask for a better example of “outcome fielding”. Here, Krugman is acknowledging that the decisions led to a desired outcome (i.e. bringing inflation down without triggering a recession), but still he wonders… is there clear causality here? Were the hypotheses behind the decisions and actions validated? Or did we just get lucky with the outcome? This is the main question that frames the article.

Even better, he presents his evaluation as “my take”... just an opinion, not fact. When you field an outcome, you’re just making a second bet (to paraphrase Annie Duke) on whether your decision actually caused (or contributed to) the result. It’s just another bet, with its own set of uncertainties.

  3. Revise a causal theory, with hindsight

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

To revisit the decision quality, we should remind ourselves of the information that we had available at the time the decision was made (hint: it was incomplete). We should also build (or revise) our story of causal connections to include events that played out after the decision.

Here, Krugman gives his interpretation of the causes and effects that produced the inflationary conditions, which is critical to discuss if we want to assess whether the decision accurately modeled the context, conditions, and domain.

At the time of the decision, with incomplete information, they were all guessing. With hindsight, we can build a better narrative of causality (with new facts) and answer the question: “If I knew then what I know now, would I make the same decision again?”

  4. Revisit the risks and identified possibilities

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

When decisions are made under conditions of uncertainty, risks will be identified and alternative possibilities will be considered. The risks (hopefully) get tracked and mitigated over time, but it’s also worth reflecting back on them after the probability of the risk has dropped to insignificant levels. We can ask:

  • Was the risk valid?
  • Did we misunderstand something?
  • Did we learn anything from watching the risk evaporate?
  • What beliefs should be challenged?

In this example, there was a healthy dialog and debate amongst leading economists on the best course of action. Different possibilities were explored. Once a choice was made and one action taken, the alternative possibilities were distilled into risks against the chosen path.

This highlights that when you make a difficult - but ultimately good - decision, it’s important to remember that you might easily have chosen the other path. Revisiting the risks is one way to explore this “bizarro” alternative universe (as Chris Butler might say), or counterfactual, and to revisit the beliefs you had at the time.

  5. Conduct a retrospective analysis of forecasts

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

When you get lucky, you achieve your outcome, but your forecasts were probably wrong in some ways. Consider holding retrospectives on your forecasts (it will be humbling).

Publishing forecasts is like making side bets, and we can use them to test our beliefs along the way.

In this example, the side bets exposed how the models (behind the rationale for the decision) did not hold up. Instead, they got lucky with a couple of offsetting factors, yielding the desired outcome, but in a different way than they had forecast.

  6. Show humility: be happy to be proven wrong

[Excerpt from “Stumbling into Goldilocks”, Paul Krugman]

When you commit to seeking the truth, being wrong feels different (in a good way). When you find out that your model, beliefs, assumptions, or hypotheses were wrong, it's just a step toward being more right (overall). This takes a healthy chunk of humility. You still must work hard to build a strong conviction, but then you hold it loosely, which makes it easier to toss out when the evidence doesn’t support it.

Here Krugman shows that humility, as he did throughout the debate, where he framed the two schools of thought as “Team Transitory” and “Team Persistent” (like a game or match) rather than making a dogmatic “I’m right, so you’re wrong” argument.

The whole commentary in the newsletter exhibits a kind of Bayesian thinking that we should aim to bring to more of our corporate decision architectures.

This kind of thinking requires us to:

  1. Document our prior beliefs, probabilistically. So at the time of the decision, we acknowledge the uncertainty by saying something like, “Based on what I know right now, I think this belief or hypothesis is 60% likely to be true.”
  2. Think about the conditions that would surface (as evidence) if this possibility becomes reality. We ask ourselves, “If the belief does turn out to be true, then what’s the probability that we’d see evidence for this condition?” We would capture our thoughts on this probability up-front as well.
  3. Think about the current (up-front) probability of the evidence appearing. Is it a long-shot? A coin-toss? Let’s capture this as well.
  4. Challenge our beliefs when the evidence appears. Know when we’ve got enough to reassess our initial belief (i.e. the probability in #1). When a long-shot hits (i.e. the odds in #3 were low), and we had previously said that this evidence was likely to be present if our belief is true (i.e. the odds in #2), then Bayes’ Theorem tells us it’s appropriate to increase our probability that the belief is true. After all, the long-shot hit, and we had said that it would be good evidence if it did! (A worked sketch of this arithmetic follows below.)
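
To make that update concrete, here’s a minimal sketch in Python of the four steps expressed as explicit probabilities. The specific numbers (a 60% prior, a 70% chance of seeing the evidence if the belief holds) are hypothetical, chosen only to illustrate the arithmetic - they are not drawn from Krugman’s analysis.

```python
# Hypothetical numbers: a sketch of the four-step Bayesian update described
# above, not Krugman's figures.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Updated probability that a belief is true, once the anticipated
    evidence has actually appeared."""
    # Step 3: the up-front probability of seeing the evidence at all,
    # averaged over both worlds (belief true / belief false).
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    # Bayes' Theorem: P(belief | evidence) = P(evidence | belief) * P(belief) / P(evidence)
    return p_evidence_if_true * prior / p_evidence

# Step 1: "Based on what I know right now, this belief is 60% likely to be true."
prior_belief = 0.60
# Step 2: "If the belief is true, we'd probably (70%) see this evidence."
p_evidence_if_true = 0.70
# And if the belief is false, the evidence would be a long shot (10%).
p_evidence_if_false = 0.10

# Step 4: the evidence appeared, so the belief gets revised upward.
posterior = bayes_update(prior_belief, p_evidence_if_true, p_evidence_if_false)
print(f"Updated belief: {posterior:.0%}")  # roughly 91%
```

Even if you never run the numbers, writing the three inputs down before the evidence arrives is what keeps step #4 honest: the update becomes mechanical rather than post-hoc storytelling.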

In his update, Krugman doesn’t go so far as to cast explicit probabilities like this, but he does:

  • Identify the new evidence (i.e. lower inflation, solid growth)
  • Re-surface the initial beliefs (and their risks) at the time of the decision
  • Revisit the causal models that were in play at the time of the decision
  • Enhance causal models for what has transpired since the decision

In your organization, what would it look like if you did a retrospective on the most significant decision you’ve made over the last 12 months?

Ask yourself, could you:

  • Compare actual outcomes to the desired outcomes from last year?
  • Re-surface the beliefs behind that decision, and the risks that emerged?
  • Revisit the causal arguments made in the rationale for the choice?
  • Improve the models for how your actions can drive results, based on what you’ve learned in the last 12 months?

If you can, then I’m eager to read your newsletter.