Posted on July 1, 2024

An Argument for Probabilistic Thinking

This is going to be an oversimplification, but bear with me:

Leaders are only interested in these four questions:

  1. “What do we see out there?”
  2. “How are we doing?”
  3. “What can we do?”
  4. “What should we do next?”

That’s it.

Any conversation boils down to some flavor of one of these four questions.

And that’s a good thing! These are great questions!

Where it goes off the rails is when these same leaders expect an unrealistic level of certainty in the answers. When the answers come back as either (1) an overly confident statement delivered with unsupported conviction, or (2) a shrug and the statement “it depends”... decision making suffers either way.

The world exists in shades of gray. But when we limit our palette to just black and white, we lose the ability to really see.

So what can we do instead?

We can start by prefacing our answers to any of those four questions with, “Well, I’m not sure, but…”

This is the acknowledgment of uncertainty, one of the core principles here at The Uncertainty Project. It’s still surprising how difficult this is to say in most of our business cultures.

From there, an enlightened leader can lean in and ask for help exploring the uncertainty. What do we know, with some certainty? What are the most important questions that we don’t yet have answers to? What unknowns are “keeping you up at night?” The severity of these questions (i.e., how scary they are) is what establishes the initial level of uncertainty. These kinds of discussions bring a laser focus to the risks you are carrying.

Leading with questions, when done in a thoughtful (and not flippant) way, can help frame discovery efforts, frame experimentation, or frame a conversation you need to have with an expert on your team (or another team).

But most importantly, it allows us to express our answers in shades of gray. That is, in probabilistic terms.

“What do we see out there?”

“Well, I think I’m seeing pricing pressure in the market due to new entrants, but there might be other forces at play as well, so I’m only 60% confident that the pricing pressure is real...”

“How are we doing?”

“This project has missed the last two milestones and lost a key member of the team last week. I’m dropping the likelihood of us delivering on the desired outcomes by the target date from 80% to 65%...”

“What can we do?”

“We are starting to hear about this pain point in the problem space, but it’s only from a couple of existing customers right now. We don’t yet have a theory on how solving for this pain can move the needle on much of anything. I’m capturing this opportunity with just a 40% chance of being valuable to both our customer base and the business...”

“What should we do next?”

“We’ve identified these five things as our top risks, and we think we can mitigate the biggest one if we drive this change quickly. We believe this change would drop the probability of this risk from 80% to 50%, which would avoid a big negative impact. The change only has a 70% chance of producing the desired outcome, though, given what we know so far. But that’s solid enough to commit some resources and make a decision.”

These examples show how these questions open the door to navigating uncertainty in different realms, and how probabilistic statements (expressed as a %) can enrich the dialog.
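
To make the arithmetic behind that last answer concrete, here’s a minimal sketch in Python. The probabilities come from the example above; the dollar impact is a hypothetical placeholder, not from the article:

```python
# Expected risk after attempting the mitigation, using the numbers
# from the answer above: the change has a 70% chance of working; if it
# works, the risk drops from 80% to 50%; if not, it stays at 80%.
p_change_works = 0.70
p_risk_before = 0.80
p_risk_if_change_works = 0.50

p_risk_after = (p_change_works * p_risk_if_change_works
                + (1 - p_change_works) * p_risk_before)
print(f"Expected risk after the change: {p_risk_after:.0%}")  # 59%

# With a hypothetical $1M loss if the risk materializes (a placeholder),
# the change is worth roughly the drop in expected loss.
loss_if_risk_hits = 1_000_000
expected_loss_avoided = (p_risk_before - p_risk_after) * loss_if_risk_hits
print(f"Expected loss avoided: ${expected_loss_avoided:,.0f}")  # $210,000
```

Even with rough numbers, writing the calculation down makes the commitment in that answer inspectable rather than a gut call.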

When we ask, “What do we see out there?”...

…we are exploring our external (or internal) environments or context, and expressing uncertain beliefs about what we are sensing. When two people see the same circumstance, but arrive at different views, they can express their relative confidence to convey how supported they feel in their viewpoint. Is it a hunch? Or is it based on a pattern they see, derived from years of experience? Or maybe both?

When we ask, “How are we doing?”...

…the question is relative to some concept of success. The real question (always) is “Are we going to be successful in this area?” Of course, that question is easier to tackle when the definition of success is clear. When desired outcomes are stated and connected to show expected support and causality, our ability to tie our actions, choices, and decisions to future conditions is enhanced. It’s still just a prediction, but it will be crisper.

When we ask, “What can we do?”...

…we are surveying the possibility space for opportunities. Some opportunities present stronger connections to future desired outcomes than others. This is the conversation we want. When we ask, “What is the likelihood that this opportunity for change will make an impact (somewhere)?”, we are unfolding our speculative map of the future terrain, and checking the navigability of possible paths.

When we ask, “What should we do next?”...

…we are recognizing that we have finite resources, and that we will have to make decisions (that allocate these scarce resources) without the luxury of certainty. Moving forward with one choice absorbs the opportunity cost of another. These opportunity costs can be expressed with a magnitude, and more importantly, with a confidence interval. We can use algorithms to help us sort and stack, but we should know where the inputs are shaky.
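
To sketch what that “sort and stack” might look like, here’s a minimal Python example. The opportunity names, probabilities, and value ranges are all made up, and the “shaky” threshold is just one arbitrary way to surface wide intervals:

```python
# Rank opportunities by expected value, but flag shaky inputs: each
# value estimate carries a range, and a range that is wide relative to
# the expected value signals an input worth firming up first.
opportunities = [
    # (name, probability of impact, low value estimate, high value estimate)
    ("Fix pricing page funnel", 0.70,  50_000,   150_000),
    ("New enterprise tier",     0.40, 100_000,   900_000),
    ("Onboarding revamp",       0.60,  20_000,    60_000),
]

def expected_value(p, low, high):
    # Midpoint of the range as a crude point estimate of the value.
    return p * (low + high) / 2

ranked = sorted(opportunities, key=lambda o: expected_value(*o[1:]), reverse=True)
for name, p, low, high in ranked:
    ev = expected_value(p, low, high)
    shaky = (high - low) > 2 * ev  # arbitrary threshold for "shaky"
    note = "  <- shaky input, buy information first" if shaky else ""
    print(f"{name}: EV ${ev:,.0f} (range ${low:,}-${high:,}){note}")
```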

So expressing uncertainty as a % confidence is a powerful way to enhance dialog in decision making, and to make sure that the supporting information is understood for what it really is.

But that’s not even the strongest advantage of adding this layer to your decision architecture.

The best part about this is that the % values can (and will) change over time. They change as you learn. They change as you watch plans get executed. They change as you observe market conditions shift. They change as you see competitors (or a key technology) debut something unexpected.

This ability to talk about how the probabilities are changing is the key to unlocking adaptability, or business agility, in your organization.
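
One mechanical way to update those percentages as you learn is Bayes’ rule. A minimal sketch, where the 80% prior echoes the project example above and the likelihoods are assumed for illustration:

```python
# Bayes' rule: update the belief in on-time delivery after observing
# a missed milestone. The two likelihoods below are assumptions.
prior = 0.80                   # belief in on-time delivery before the news
p_miss_given_on_time = 0.20    # assumed: on-time projects rarely miss milestones
p_miss_given_late = 0.60       # assumed: late projects miss them often

p_miss = prior * p_miss_given_on_time + (1 - prior) * p_miss_given_late
posterior = prior * p_miss_given_on_time / p_miss
print(f"Belief in on-time delivery: {prior:.0%} -> {posterior:.0%}")  # 80% -> 57%
```

You don’t need this much formality in every conversation, but it shows that “I’m dropping my confidence from 80% to 65%” can be a disciplined move, not a mood.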

How can you convince people that this probabilistic framing is valuable? Maybe start with a picture.

A great way to think about this evolution of uncertainty over time, for a specific value or result, is the “cone of uncertainty”.

You can draw one of these for any key variable you are tracking, like target dates, planned costs, monthly users, or new signups. Variables tied to risks or assumptions are good places to start.

As time progresses, you hope to learn enough to reduce the uncertainty around the impact of your ongoing efforts.
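
To make the cone concrete, here’s a minimal sketch for a cost estimate. The narrowing factors are loosely inspired by Boehm’s classic software-estimation cone and are assumptions, not data:

```python
# A cone of uncertainty for a cost estimate: the multiplicative error
# band around the point estimate narrows as the effort progresses.
point_estimate = 500_000  # current best guess, in dollars
cone = [
    # (fraction of effort elapsed, low factor, high factor)
    (0.00, 0.25, 4.00),
    (0.25, 0.50, 2.00),
    (0.50, 0.67, 1.50),
    (0.75, 0.80, 1.25),
    (1.00, 1.00, 1.00),
]

for elapsed, lo, hi in cone:
    print(f"{elapsed:>4.0%} elapsed: "
          f"${point_estimate * lo:>9,.0f} - ${point_estimate * hi:>9,.0f}")
```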

“In principle, the basis for assessing the value of information for decisions is simple. If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have high value.” - Douglas W. Hubbard, “How to Measure Anything”

We learn by collecting new information - for example, via measurements.

These occasional learnings are the oft-referenced-yet-elusive insights that drive adjustments to estimates, plans, forecasts, expectations, or roadmaps. An overlay of the cone of uncertainty can set some expectations around how, over time, we should be “buying information” to reduce the uncertainty around the results that are most important to us.
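
Hubbard’s point can be made concrete with an expected value of perfect information (EVPI) calculation: an upper bound on what a measurement that fully resolved the uncertainty would be worth. A minimal sketch with made-up numbers:

```python
# EVPI for a go/no-go decision: the most you should ever pay for
# information that resolves the uncertainty. Numbers are hypothetical.
p_success = 0.60
payoff_if_success = 500_000
loss_if_failure = -300_000

ev_go = p_success * payoff_if_success + (1 - p_success) * loss_if_failure
ev_no_go = 0

# Without new information, pick the better expected value and accept
# the downside risk; with perfect information, go only when it works.
best_without_info = max(ev_go, ev_no_go)
ev_with_info = p_success * payoff_if_success + (1 - p_success) * ev_no_go

evpi = ev_with_info - best_without_info
print(f"EV of going now: ${ev_go:,.0f}")  # $180,000
print(f"EVPI:            ${evpi:,.0f}")   # $120,000 cap on "buying information"
```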

It also suggests that insights should be happening at some frequency over time, to fuel the narrowing of the cone. If a few weeks go by, and nothing is changing - no new insights that support a reduction in uncertainty - then it’s time to pause and ask why.

In this way, the cone can be used as a set of control limits on this effort of “buying information”. That is, if some time passes and your confidence interval still exceeds the boundaries of the narrowing cone, then you may be “out of control” in terms of your ability to reduce the uncertainty.
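
As a sketch of that check, reusing the cone factors from the example above (the current interval is a hypothetical reading):

```python
# Treat the cone as control limits: compare the team's current
# confidence interval against the band the cone allows at this point.
point_estimate = 500_000
cone_factors = {0.00: (0.25, 4.00), 0.25: (0.50, 2.00), 0.50: (0.67, 1.50),
                0.75: (0.80, 1.25), 1.00: (1.00, 1.00)}

elapsed = 0.50
current_low, current_high = 200_000, 1_200_000  # hypothetical current interval

lo_factor, hi_factor = cone_factors[elapsed]
if current_low < point_estimate * lo_factor or current_high > point_estimate * hi_factor:
    print("Out of control: uncertainty has not narrowed as the cone expects.")
```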

Ask: “Why aren’t we more confident by now?” and “Are we applying our efforts in the best places to reduce the uncertainty around success?” If you’ve done your best, but the confidence is still lacking, maybe it’s time to revisit other opportunities, since the expected value of this one is still wavering.

And this perspective will differ depending on your role in the organization. As a product manager, you care about results related to adoption, usage, and retention. As a project manager, you care about results related to delivering changes with a set of resources by a target date. In both cases, you are interested in applying new information to evaluate the probabilities of success. But the definition of success will be local. And those definitions of success are interrelated. This is where belief networks can model the way one uncertain result can shape the uncertainty around a related result.
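
As a toy illustration of that interrelation, here’s a two-node belief network sketch; the conditional probabilities are assumptions, not data. As the project manager’s delivery belief moves, the product manager’s adoption belief moves with it:

```python
# A two-node belief network: the chance of hitting the adoption target
# (product view) depends on whether the release ships on time (project
# view). Conditional probabilities are assumed for illustration.
p_adoption_given_on_time = 0.50
p_adoption_given_late = 0.25

for p_on_time in (0.80, 0.65, 0.40):
    p_adoption = (p_on_time * p_adoption_given_on_time
                  + (1 - p_on_time) * p_adoption_given_late)
    print(f"P(on time) = {p_on_time:.0%} -> P(adoption target) = {p_adoption:.0%}")
```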

So the cone of uncertainty can be a powerful visual to help leadership teams bring probabilistic thinking into their dialog.

Compare this probabilistic approach to the typical way we try to answer “How are we doing?” for a project. When we say that a project is “90% done”, it’s a reflection of the execution of changes to produce different outputs. But if the 10% that remains is the most crucial to the valued changes, then we are in trouble, right?

If, at the same point, we ask, “What is the probability of success of this project?”, I bet we don’t feel that success is 90% likely just because the progress bar shows 90%. A question that focuses on probability of success (however it is defined) is a better opener for an honest conversation about risk.

But the cone of uncertainty is just another visualization of a learning loop. And when an organization sponsors multiple concurrent efforts (which is always the case), those four questions fold into a quarterly cycle.

The aim each quarter is to find answers to those questions that land like insights and feed learning, as they string together in this repeating sequence:

  1. “How are we doing?”
  2. “What can we do?”
  3. “What should we do next?”
  4. “What do we see out there?”

And when we update our probabilities as we learn, we improve our ability to navigate uncertainty.