This technique was originally developed by Barry O'Reilly.

Value Engineering

Value Engineering is a method for managing initiatives as experiments; it creates optionality and makes tradeoff decisions explicit.

In an output-focused environment, backlogs of initiatives with heavy business cases often end up stack ranked and 'approved' with a fixed definition of their scope and expected value. This leaves no mechanism to shed what isn't providing value and double down on what is. As a result, losing bets fall victim to a self-reinforcing escalation of commitment and siphon resources from other, more promising investments.

The Value Engineering approach recognizes these initiatives as a portfolio of bets and uses an evidence-based approach to make decisions as new information is revealed. This approach follows the cycle of hypothesizing a future outcome, making bets on what will drive that outcome, and making decisions to pivot, persevere, or stop:

  • Hypothesize: Quantify your beliefs about what value is and how you will recognize it.
  • Bet: Test your hypothesis with experiments to gain knowledge to make better investment decisions.
  • Pivot and Persevere: Learn early and often at every level of the organization about what’s providing value and what’s not.

Adopting Value Engineering

The mindset change

It's often more difficult to make the mindset change than it is to introduce a technique. Value Engineering requires teams to shift from a feature factory mindset towards an outcome-based, experimental mindset.

Value Engineering is an evidence-based decision making model for identifying when to pivot, persevere, or stop investments.

To avoid the escalation of commitment that naturally occurs with ongoing investments, we want to weigh cost against value over time. We already build this model implicitly in our heads for ongoing projects, but when we're in the middle of one, it's difficult to give appropriate weight to the warning signs. Visualizing this value-vs-effort model helps us see whether the evidence supports the size of the bet.

Small bets create safe-to-fail experiments, enabled by faster feedback loops, limited risk, and keeping initiatives in a recoverable state: never too big to fail.

Above all, the currency of Value Engineering is learning. It is a mindset shift at all levels that favors discovering the harsh realities over sticking to a plan that may be out of date from the moment you think it’s complete.

Getting Started

To start, we ask three questions:

  1. What is the hypothesis?
  2. How will we know when we've achieved the desired outcome?
  3. What is the max 'bet' to achieve the desired outcome?

Simply put, we need to define the desired outcome, demonstrable measures or criteria that prove the hypothesis, and how much we'd be willing to pay for that outcome if we could buy it off the shelf.

We end up with a template for each initiative:

  • Hypothesis: We believe <this capability> will result in <this outcome>. We will have confidence to proceed when <we see a measurable signal>.
  • Desired outcome: An explanation of a future state, ideally measurable. This might be an OKR, a target for a KPI, or simply an articulation of the future.
  • Max bet: If you could buy the outcome off the shelf, how much would you pay? This can be in dollars, but it's often easier to define as people over time (e.g. 2 teams over 16 weeks).
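As a minimal sketch of the template above, assuming Python and illustrative field names (none of which come from the original technique), each initiative can be captured as a small data structure:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """One bet in the portfolio, following the Value Engineering template."""
    hypothesis: str          # "We believe <capability> will result in <outcome>..."
    desired_outcome: str     # A future state, ideally measurable (OKR, KPI target)
    max_bet_team_weeks: int  # The most we'd pay for the outcome, as people over time

# Hypothetical example, loosely based on the trading-platform scenario below
trading = Initiative(
    hypothesis=("We believe recurring deposits will result in increased "
                "deposits among millennials. We will have confidence to "
                "proceed when we see a measurable signal."),
    desired_outcome="15% MoM increase in deposits among millennials",
    max_bet_team_weeks=2 * 32,  # two teams over 8 months (~32 weeks)
)
```

Keeping the max bet in people-over-time units (rather than dollars) makes it easy to compare against actual staffing as the bet plays out.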

From this, we're able to plot initiatives without any need to normalize the measurement of outcomes or spend:

We can read this as:

  • Investment A: 5% confidence, 8% bet. Looks good; too early to tell, but we have a few positive signals.
  • Investment B: 85% confidence, 35% bet. Blowing away expectations; might be time to double down.
  • Investment C: 80% confidence, 75% bet. Getting about what we expected.
  • Investment D: 15% confidence, 80% bet. Reevaluate, potentially pivot.

Example:

Imagine we're a trading platform struggling to increase deposits among millennials. We have a signal that people in other demographics tend to invest more when they opt in to automatic deposits. This leads to a hypothesis: to increase deposits among millennials, we need to convince them to set up recurring deposits.

The desired outcome comes naturally: a 15% MoM increase in deposits among millennials. But what is that outcome worth to us? There may be situations with a clear monetary ROI, like closing a contract, but in this case it's much more abstract.

We'll say the most we'd be willing to invest is two teams over 8 months. We're not approving a budget or locking in resources here; we're just giving ourselves a point in the future to relate to the present.

Now we can say that we want to place an initial bet on this. We'll plan on investing one team next quarter. This is 25% of what we determined we're willing to bet.

In this model, we'd expect to raise our confidence (that our efforts will deliver the desired outcome) above 25% to persevere; above that line, we're likely seeing signals that we're doing something right. Below 25%, we may consider pivoting, or stopping altogether and reallocating resources to higher-confidence initiatives.
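The rule above reduces to a simple comparison: persevere while confidence exceeds the fraction of the max bet already spent. This is a hypothetical sketch; the function name and return labels are illustrative, not part of the original technique:

```python
def decide(confidence: float, spent_fraction: float) -> str:
    """Pivot/persevere heuristic: confidence (0-1) should stay above the
    fraction of the max bet already spent (0-1) to justify continuing."""
    if confidence > spent_fraction:
        return "persevere"
    # Below the line: consider pivoting, or stopping and reallocating
    return "pivot-or-stop"

# Trading-platform example: one team next quarter = 25% of the max bet
print(decide(0.30, 0.25))  # confidence above the line
print(decide(0.20, 0.25))  # confidence below the line
```

The threshold moves with every additional tranche: the more we've spent, the more confidence the evidence has to justify.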

Because our 15% lift in deposits is a lagging indicator, we decide to build an in-app waitlist signup. We assume that if 45% of our monthly active millennial users sign up, we can be confident of converting 30% of them.

Our y-axis is expected confidence, so if a successful experiment would have raised our confidence to 50% (justifying another 25%+ bet), this result brings us to roughly 25%: right on the deciding line.

Do we pivot, persevere, or quit?

That decision may be relative to other investments.

Because we can plot other initiatives across these axes, we can see whether longer-running initiatives are worth doubling down on or whether earlier initiatives show more promise.

And of course these initiatives are all different sizes: some are relatively small bets and some are much larger. When looking across the portfolio, plotting them by size or impact can bring another dimension to the conversation.

Using this model, we can make these decisions easier by creating clear guardrails. For example, maybe everything past 30% spend and below the dotted line is automatically cancelled, while tiers above the line automatically justify larger tranches.
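A guardrail like that is easy to automate. A sketch, assuming the 30% spend cutoff from the example and treating the dotted line as confidence equal to the fraction spent (both assumptions, not prescriptions from the original):

```python
def guardrail(confidence: float, spent_fraction: float) -> str:
    """Portfolio guardrail from the example: past 30% spend and below
    the line -> cancel; above the line -> justify a larger tranche."""
    above_line = confidence > spent_fraction
    if spent_fraction > 0.30 and not above_line:
        return "cancel"
    if above_line:
        return "fund next tranche"
    return "review"  # early and below the line: leave to human judgment

# The four investments from the earlier plot: (confidence, fraction spent)
portfolio = {"A": (0.05, 0.08), "B": (0.85, 0.35),
             "C": (0.80, 0.75), "D": (0.15, 0.80)}
for name, (conf, spend) in portfolio.items():
    print(name, guardrail(conf, spend))
```

Run against the earlier readings, B and C land above the line and justify further tranches, D trips the cancellation rule, and A is too early to call automatically.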

The alternative is often status updates that give no relative weight to how close we are to the desired outcome, or how confident we are of producing it. The flexibility of Value Engineering allows it to be scientific where needed (confidence correlated to measured progress toward the goal) or more of an art (confidence as a scorecard of signals the team weighs).

This mimics how we implicitly model these decisions in our minds, but making this model explicit allows us to adapt, share, and automate it as a model for decision making.

How this technique helps navigate uncertainty

Historically, in the traditional project portfolio environment, teams built heavy business cases with pipe dream ROI models to justify a locked-in budget over a long period of time. In an outcome-driven environment, organizations have shifted focus to funding teams and outcomes over work.

Though this transition has been profoundly impactful, it's still difficult to have effective pivot/persevere conversations, especially with larger initiatives. The reality is that many initiatives continue to run through 'completion' and are judged on their results.

With one checkpoint (at the end), the only potential scenarios are the first two below - luck or overcommitment. Ideally initiatives can look more like the third graph where we can make incremental adjustments or cancel before we're stuck in an escalation of commitment.

Y-axis is value and the X-axis is time (or spend)

Introducing a framework for pivot/persevere decisions enables adjustment through uncertainty with multiple checkpoints that enable us to navigate long-term initiatives and avoid overcommitment.
