Posted on
December 20, 2023

Uncertainty Project Year in Review

What a great year! I’ve been so happy to meet so many interesting people because of my involvement with The Uncertainty Project.

As we wind down the year (and as I try to fend off the “senioritis” I always get in December), I wanted to look back at what I’ve been seeing and what I expect in the coming year…


Probably the biggest thing I’ve learned and internalized is the way that we need to have conversations, arguments, discussions, and discourse about decisions. And this needs to be separate from the decision point itself.

Carving out this specific time is important for a few reasons:

  • It lets us figure out how we will actually make this decision.
  • It gives a specific time to gather options and refine them.
  • It sets the expectation that there is a difference between discussion and decision.

As I’ve started to hack the various decision processes I’ve been part of, I’ve found that there are varying levels of acceptability within orgs. It isn’t always possible within a culture to be overt about this. Sometimes we need to subvert the process itself to make it better in the end. This type of grassroots work shifts the Overton window of the org toward better practices.

I’ve probably done ten talks, podcasts, and salons on this one point: discourse vs. decision making. I always get great questions and concerns about adoption from the people I’ve met.

A book I’ve started to read that speaks to how we make decisions better in groups is The Enigma of Reason. I’ve really enjoyed the discussion of what reasoning really is.

Second, I’ve started to push toward the visualization of strategy and tooling that would allow teams to monitor that strategy over time. The discussions around 360° strategies are the most central to this.

Talking through with the team how we might build such a graph has resulted in things like this:

I’m really bullish on this idea of a dynamic graph of the org’s strategy. I think Wardley Mapping and Assumption Mapping are working in a similar vein.

The assumptions underpinning the strategy end up being tripwires for us to monitor. This could start with metrics but could eventually extend to generative summarization of ongoing news.
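As a rough sketch of what assumption tripwires might look like in code (all names, metrics, and thresholds here are hypothetical illustrations, not part of any existing tool):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Assumption:
    """A strategic assumption paired with a metric-based tripwire."""
    statement: str                # the assumption underpinning the strategy
    metric: Callable[[], float]   # how we measure it today
    threshold: float              # value below which the assumption is in doubt

def check_tripwires(assumptions: List[Assumption]) -> List[str]:
    """Return the statements of assumptions whose tripwires have fired."""
    return [a.statement for a in assumptions if a.metric() < a.threshold]

# Hypothetical example: a strategy resting on retention staying high.
tripped = check_tripwires([
    Assumption("Monthly retention stays above 90%", lambda: 0.87, 0.90),
    Assumption("Churn-driven support load stays manageable", lambda: 1.0, 0.5),
])
print(tripped)  # only the retention assumption has tripped
```

The interesting design question is what `metric` wraps: it could start as a simple dashboard query and later be swapped for something generative that summarizes external signals.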


I think with AI/ML being such a big topic in today’s tech world we are going to see more and more people think about what it means to bring virtual agents into this decision making process.

There are key questions I have in this world:

  • Can systems make decisions? Or do they just automate actions?
  • Can these systems provide partners to have discourse?
  • How do they distract from or aid others in this process?
  • What are the right channels (like chat) through which they should interact with a person or a group of people?
  • Where do different systems fit on the tool to teammate spectrum?

There is a lot of literature I’ve been reading that discusses the automation of decision making. Much of it ends up being about how we understand and trust those systems. The paper “AI, Algorithms, and Awful Humans” really hit home for me: we need to set standards for what is the right thing for a human vs. a system to do.

Over the coming year I think we will really start to tackle and experiment with these questions. This will happen both in the open through the open source community (through the models and frameworks it creates) and inside larger companies (which will experiment with their own tools that take on a larger part of the decision-making process).

Thank you

More than anything I want to thank this amazing community for coming together, supporting this project, and being such awesome collaborators.

Please don’t hesitate to reach out as you try the tools or the more experimental thinking we are exploring.