Posted on March 18, 2024

Autonomous agents in decision making

If you believe today's hype, all decision making will be taken over by autonomous agents in the coming weeks, perhaps restarting the discussions about sentience and self-awareness along the way.

Let’s take a step back and a deep breath. There are lots of interesting and fantastic innovations taking place. However, there is still plenty of work required to productionize them.

I'm particularly interested in the intersection of decision making, organizational dynamics, and autonomous agents. Over the last year I’ve been working through this thinking via my work, side projects like the employee manual of the future, various posts, and talks.

What is the reality of having autonomous agents as part of our decision making processes? Where would they help? Where would they make things worse?

Worst of both worlds

What makes automation bad for people? First, we should point to the important prior work by Data & Society, the Distributed AI Research Institute, and many others on the risks of automating decision making. Second, we can point to recent tests like the one showing that OpenAI GPT Sorts Resume Names With Racial Bias. Automation bias brings significant risks of its own.

When people can’t disagree, escalate, or opt out, we create systems that put them in horrible situations. We shouldn’t do that.

A recent paper that really struck me was AI, Algorithms, and Awful Humans. It makes the case that we should assign humans and machines the tasks each actually does best. Instead, we build systems that try to take over too much from humans and end up in dire straits. It is a great paper, so I’d recommend reading the whole thing.

What we don’t want is systems that remove the parts of decision making that humans do really well:

  • Using emotion, intuition, and tacit knowledge that is hard to quantify.
  • Making exceptions when we are trying to be empathetic or compassionate.
  • Revising or adding to the criteria we need to “satisfice” rather than assuming we get it perfect at first.
  • Only putting in enough effort to get something done - we are good at being lazy and great at “half-assing” things in just the right way.

Today’s systems, meanwhile, are good at many things that we should let them do:

  • Fitting a function to the data we have collected to infer things we can’t write explicit rules for.
  • Taking huge data sets and sifting through them based on known criteria.
  • Encoding biases that humans have into systems - for good and bad.
  • Generating content that may or may not be right but could be an extension of what we might have considered - this includes provocations and just enough confusion.

There is some overlap, but not much. There are clear differences in how we should use people and automation in decision making processes.

Cognitive artifacts

Don Norman started talking about “cognitive artifacts” that help humans in their work in 1991. Most things we do with technology end up mediating the world to us through these cognitive artifacts.

Then David Krakauer took this further to distinguish between complementary and competitive cognitive artifacts (quoted from Competitive Cognitive Artifacts and the Demise of Humanity: A Philosophical Analysis):

  • Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks and once the user has mastered the physical artifact they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.
  • Competitive Cognitive Artifacts: These are artifacts that amplify and improve our abilities to perform cognitive tasks when we have use of the artifact but when we take away the artifact we are no better (and possibly worse) at performing the cognitive task than we were before.

While Krakauer has said he is not making a normative judgment and is simply pointing out that there is a difference, I feel the terminology nudges us toward concluding that a tool that is “competitive” with humans is probably bad…

Yes, there will be tools that take over a certain capability, and we will no longer be good at it ourselves. But if that allows for higher-level abstractions that lead to even better outcomes for people, isn't that a good thing? Aren’t specialization and automation “good” if they help people?

This gets even murkier with autonomous agents. At a certain point they are no longer just an extension of the person (the way a hammer becomes one) but agents that go off on their own to take actions on that person’s behalf.

How is this different from having a team of people and delegating particular work to them? Even though the manager of the group can’t do the work being delegated, is that bad?

The key is that we should be wary of handing off aspects that make us lose touch with the important parts of decision making, and we should automate (or delegate) whatever extends what we are capable of. Simon Wardley recently wrote about how critical decision making doesn’t change when we use these new systems.

This ends up being a spectrum from a tool that is an extension of our bodies (or a mediator) to another agent entirely. But it doesn’t change the need for decision making.

Source: Designing Agentive Technology with Christopher Noessel

The reality of automation is that we will utilize tools across the spectrum.

Where are these systems best?

In a recent talk for ProductCamp Chicago I took a quick look at where in our meta-decision making process we might include AI, ML, generative AI, autonomous agents, and other automations.

I’ve started to think through the ways we might task humans and automation for each of these steps:

* See 'What is the right amount of Strategic Ambiguity'

A key question is whether a given decision is worthwhile and “good” to automate. In general, if there are edge or special cases that would be severely impacted, it isn’t a great idea.

I think Roger Martin said this well in a recent piece about LLM/AI use in organizations:

If you want to discover the median, mean, or mode, then lean aggressively into LLM/AI. For example, use LLM/AI if you want to determine average consumer sentiment, create a median speech, or write a modal paper. If you are looking for the hidden median, mean, or mode, LLM/AI is your ticket — and that will be the case plenty of the time. Being an LLM/AI denier is not an option.

He continues about special cases:

However, if your aspiration is a solution in the tail of the distribution, stay away. This is the enduring problem for autonomous vehicles. Tail events (like the bicyclist who swerves in front of you to avoid being sideswiped by another car) are critical to safety — and still aren’t consistently handled by automotive AI.

If we are aware of when we should split the job between humans and automation, we can make better decisions when setting up our systems. If we aren’t, we will create systems that hurt people and make worse overall decisions.
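To make that split concrete, here is a minimal, hypothetical sketch (in Python) of routing decisions so that automation handles only the routine, high-confidence middle of the distribution, while edge cases, and anyone who wants to disagree, escalate, or opt out, go to a human reviewer. The case fields, threshold, and routing labels are illustrative assumptions, not a prescription for any real system.

```python
# Hypothetical sketch: split a decision process between automation and humans.
# Automation only takes the routine cases it can score confidently; everything
# else escalates to a person. All names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class Case:
    case_id: str
    model_score: float      # automation's confidence that the routine path applies
    flagged_special: bool   # unusual circumstances noted on the case
    person_contests: bool   # the affected person asked for human review


def route(case: Case, confidence_threshold: float = 0.9) -> str:
    """Decide who handles the case: automation or a human reviewer."""
    # People must always be able to disagree, escalate, or opt out.
    if case.person_contests or case.flagged_special:
        return "human_review"
    # Tail events and low-confidence cases stay with humans too.
    if case.model_score < confidence_threshold:
        return "human_review"
    # Only the routine middle of the distribution is automated.
    return "automated"


if __name__ == "__main__":
    cases = [
        Case("a-101", model_score=0.97, flagged_special=False, person_contests=False),
        Case("a-102", model_score=0.55, flagged_special=False, person_contests=False),
        Case("a-103", model_score=0.98, flagged_special=True, person_contests=False),
    ]
    for c in cases:
        print(c.case_id, "->", route(c))
```

The design choice the sketch illustrates is that the human path is the default and the automated path is the exception that must be earned, rather than the other way around.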

Future going forward

We should not forget that a wise person at IBM put this statement in a manual back in 1979: “A computer can never be held accountable. Therefore a computer must never make a management decision.”

Source: https://twitter.com/SwiftOnSecurity/status/1385565737167724545

We, as humans, need to keep honing the skill of asking good questions and the critical thinking that leads to decisions. From there we can start to leverage automation that minimizes the errors of human decision making while maximizing the human-ness of those decisions.