
Multi-Choice Decision Matrix

A decision matrix is a tool for evaluating the options or choices in a decision-making process. It is a structured approach that lets you carefully weigh the potential risks and benefits of each option and compare them in a systematic way.

To create a decision matrix, you first need to identify the criteria or factors that you will use to evaluate each option; these could include things like cost, feasibility, impact, or alignment with your goals. Then the team identifies the different options or choices.

This is primarily for a multi-choice decision. For a binary decision, we can use a binary-choice decision matrix.

Identify the criteria

Why do we identify the criteria first? Two primary reasons.

  • The Framing Effect: We want to avoid starting with the potential options and then shaping our criteria to justify a preconceived decision. Free-form proposals also let us embellish a narrative that may be convincing; this often happens with pro/con lists.
  • Focus on what matters: The criteria help drive the dialog and give guardrails for information gathering. For example, we wouldn’t need data on anything that does not align with the criteria.

There are two ways to identify the criteria. The first option is to have the driver define the criteria by interviewing contributors separately and synthesizing what they believe is important.

The second option takes more time, but fosters stronger buy-in:

  1. Survey contributors - ask them to list out what they believe is important.
  2. Synthesize these responses into a list of ‘It’s important that...’ statements.
  3. Survey the contributors again and, for each criterion, ask them to rate on a scale of 1-7 how much they agree that the criterion is important.

You’ll end up with something like this:

[Image: example survey results showing each contributor’s 1-7 agreement score for Criteria A, B, and C]

From here, we can facilitate a conversation, either async or face-to-face, where we ask dissenting contributors to state their case. For Criteria A, we see strong disagreement that this criterion is important to the decision; we’ll take a few statements from both sides, but it’s important that this discussion follows the guidelines. Criteria B we don’t need to discuss, and Criteria C will likely be weighted low or needs a better definition.
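To make that concrete, here is a minimal sketch in Python of how a driver might summarize the survey; the criteria names, scores, and thresholds below are illustrative assumptions, not part of the process itself.

```python
from statistics import mean, stdev

# Hypothetical 1-7 agreement ratings, one score per contributor per criterion.
ratings = {
    "Criteria A": [7, 6, 2, 1, 7, 2],   # strong disagreement -> discuss
    "Criteria B": [6, 7, 6, 7, 6, 7],   # broad agreement -> keep as-is
    "Criteria C": [3, 4, 3, 2, 4, 3],   # low importance -> clarify or weight low
}

for name, scores in ratings.items():
    avg = mean(scores)
    spread = stdev(scores)
    if spread > 2:        # wide spread: ask dissenting contributors to state their case
        status = "discuss"
    elif avg >= 5.5:      # high average agreement: keep the criterion
        status = "keep"
    else:                 # low average: weight it low or sharpen the definition
        status = "weight low / redefine"
    print(f"{name}: avg={avg:.1f}, spread={spread:.1f} -> {status}")
```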

The driver will synthesize the criteria.

Weighted criteria

From here, we can weight the criteria so the averages reflect the relative importance of each criterion. Criteria can be weighted as an outcome of the feedback (meaning the driver assigns the weights), or contributors can be asked to weight each criterion on their own and then those responses are averaged.

There may be scenarios where a criterion is a ‘must have’. It’s important to identify and include those, but also to make sure this label is reserved for truly non-negotiable requirements. Of course, in these scenarios, options would have to meet the ‘must have’ criteria to be considered.
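A rough sketch of the weighting step, assuming contributors weight each criterion on a 1-5 scale (the criteria names, the scale, and the ‘must have’ flag here are illustrative assumptions):

```python
from statistics import mean

# Hypothetical 1-5 weights from three contributors for each criterion.
contributor_weights = {
    "Cost":        [5, 4, 5],
    "Feasibility": [3, 4, 3],
    "Impact":      [4, 5, 5],
}

# Average the weights and flag any truly non-negotiable criteria as 'must have'.
criteria = {
    name: {"weight": mean(w), "must_have": name == "Feasibility"}
    for name, w in contributor_weights.items()
}

for name, c in criteria.items():
    label = " (must have)" if c["must_have"] else ""
    print(f"{name}: weight={c['weight']:.2f}{label}")
```

Options that fail a ‘must have’ criterion would simply be excluded before any weighted scoring.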

Identifying options

Options can be collected from contributors async or drafted by the driver (if the driver has enough context), but regardless, this process should be done anonymously. Contributors should only be able to see the other options (and who proposed them) once they’ve provided their own.

The options should be presented similarly to avoid the framing effect. Teams can start with a simple template that asks contributors for:

  • The elevator pitch (the tl;dr for the option)
  • The details
  • An argument for this option as it relates to each criteria
  • A counter-argument (encourage contributors to steel-man their own proposal)

The template can be adapted to whatever works best for how the team communicates, but the options should be similar (if not identical) in format. For example, if Option A has a full visual mockup, the other options should as well. We tend to select the options we understand best over the most viable ones.
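One lightweight way to keep proposals identical in format is to capture each one in the same structure. Below is a minimal sketch; the field and criteria names are assumptions based on the prompts above.

```python
# Illustrative proposal template; every option gets the same fields so no option
# benefits from extra polish or framing.
option_template = {
    "elevator_pitch": "",      # the tl;dr for the option
    "details": "",             # the full write-up
    "arguments": {             # one argument per criterion
        "Cost": "",
        "Feasibility": "",
        "Impact": "",
    },
    "counter_argument": "",    # the steel-manned case against the option
}
```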

Optional: At this point, the team could decide to give long-form feedback on each option in relation to the criteria. The same rules apply: contributors should not be able to see others’ feedback until they have given their own (to avoid groupthink).

Scoring Options

At this point, we have a list of options with arguments for why/how they impact the criteria we identified as important.

Now the driver can ask contributors to give their feedback on the options in a very efficient way. For each option, contributors will give a 1-7 score for each criterion based on the perceived impact, and provide the argument for their score.


[Image: example of a contributor’s 1-7 scores and arguments for an option]


The feedback for each option should only be revealed once a contributor has given their own. Once the feedback is collected, the team ends up with a decision matrix. In this table we should be able to see:

  • Each option and each criteria with an average score across contributors
  • How close the team was to agreeing (in this visual, a green background means the team agreed, red means there were dissenting opinions)
  • The weighted average for each option as a whole (based on the weight of each criteria)

[Image: example decision matrix showing average scores per option and criterion, agreement highlighting, and weighted averages]
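Here is a minimal sketch of how that matrix could be computed; the options, scores, weights, and the spread-based definition of agreement are all illustrative assumptions.

```python
from statistics import mean

# Hypothetical 1-7 scores per option, per criterion, one score per contributor.
scores = {
    "Option A": {"Cost": [5, 6, 5], "Feasibility": [6, 6, 7], "Impact": [4, 5, 5]},
    "Option B": {"Cost": [6, 7, 6], "Feasibility": [2, 6, 3], "Impact": [7, 6, 6]},
}
weights = {"Cost": 4.67, "Feasibility": 3.33, "Impact": 4.67}   # from the weighting step

AGREEMENT_SPREAD = 2   # assumed: a max-min spread above this means dissenting opinions

for option, per_criterion in scores.items():
    weighted_total = 0.0
    cells = []
    for criterion, contributor_scores in per_criterion.items():
        avg = mean(contributor_scores)
        agreed = max(contributor_scores) - min(contributor_scores) <= AGREEMENT_SPREAD
        weighted_total += avg * weights[criterion]
        cells.append(f"{criterion}={avg:.1f}{'' if agreed else ' (discuss)'}")
    weighted_avg = weighted_total / sum(weights.values())
    print(f"{option}: {', '.join(cells)} | weighted avg={weighted_avg:.2f}")
```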

At this point, what’s important to look at is where we agree and where we disagree. If there is agreement on a clear winner, we can assume we can move quickly and make a decision, but the goal is not agreement. Disagreement just means we move slower and tease out important information.

It’s important to stress that we’re not seeking consensus; we’re seeking buy-in and consent. These scores do not determine a decision; they guide conversation, tease out information, and quantify expected value in ambiguous situations.

When reviewing the criteria scores where there was disagreement, each point of view should be stated, not debated. Once each case has been heard, contributors should have the ability to update their scores.

At this point, the decider should have a clear understanding of where the group disagrees or has varying levels of confidence in the options.

Why is it important that contributors work anonymously?

Even if we’re aware of our biases, we tend to adjust our viewpoints and withhold information when we discuss ideas/options in groups. For this reason, it’s better to operate as ‘nominal groups’, meaning members of the group formulate their opinions individually, document them, and then come together to discuss once they’ve locked in their viewpoints.

This process, adapted from the mini-Delphi Method, combats groupthink and various other biases involved when groups share and analyze information. In short, collaborators should always document their views individually, share those views with the group, then adjust their views accordingly.

Some teams may opt to do this process fully anonymously.