Driving new conversations around whether to stay the course
Finding the courage to break free from the status quo is hard. Initiating change is one of the core skills of a leader, but there are many incentives to just “stay the course” in modern organizations. When we ask a team or an entire organization to help create change, we need a compelling story - one that makes it clear that the status quo is no longer good enough.
In this post, we will outline a way to build such a story. Imagine you are the leader of a business unit, a product organization, or a functional department. You are accountable for the performance of your organization, and you have identified and instrumented several KPIs (key performance indicators) to make that performance measurable, and visible to your teams. Crafting those KPIs forced you to ask hard questions about which measures truly reflect performance.
You also preferred metrics that are “leading indicators” - measures that move early enough to show whether your changes are making a difference. [Note: the “lagging indicators” might still be necessary to confirm the ultimate impact.]
Choose a single KPI to focus on first. The question we will pose is:
“If we assume things remain as they are now, with the current plan-of-record (i.e. the status quo), how will this metric change over the next few quarters?”
To answer this question, pull together a set of “experts” from both within and outside your organization. Try to find people with enough experience to be solid forecasters, but no incentives that might skew their forecast. Draw from your peers and other leaders, as well as experienced individual contributors with good knowledge of the “ground truth”. Aim for a group of 5-7 people who are capable of forecasting the specific metric for a target date.
Next, choose that future “target date” to build the forecast around. Pick a date that isn’t so soon that current trends won’t yet be visible, but also isn’t so far out that the uncertainty overwhelms the forecasters. If in doubt, pick a date 2 or 3 quarters out.
Ask each member of your “forecast group” to provide a specific kind of estimate: a 90% confidence interval for the value of the metric at the target date. Each forecaster sets an upper bound and a lower bound on the metric value such that, if 100 possible futures were to unfold, the actual value at that date would land inside their range in about 90 of them. It’s a probabilistic way to gauge the uncertainty in their forecast. Another way to think about it: 5% of the time the actual value should fall above your upper bound, and 5% of the time it should fall below your lower bound. That should make for a wide range, right?
Each forecaster should develop their 90% CI individually, and offer up some qualitative commentary as rationale for both their upper bound (“here are some of the conditions that might drive the value this high”) and lower bound (“here’s what might happen to drive the value this low”). After each expert has contributed their independent forecast, you might want to let them see the other forecasters’ estimates, to spark a good dialog.
But as the leader and sponsor of this exercise, you now must aggregate the independent estimates to arrive at a single 90% CI range that represents the inputs of the group. You can average the upper bounds and the lower bounds, or weight the individuals’ responses based on their forecasting skills (do this weighting in advance though…), or just scan the inputs and set your own range. You are accountable for the forecast, so you decide how to use their contributions.
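To make the aggregation mechanics concrete, here is a minimal sketch in Python. The forecaster names, bounds, and skill weights are made up for illustration; it shows both the simple average and the pre-agreed weighted average described above.

```python
# A minimal sketch of aggregating independent 90% CI forecasts.
# Forecaster names, bounds, and weights are illustrative, not from the post.

forecasts = {
    # name: (lower_bound, upper_bound) for the KPI at the target date
    "alice": (420.0, 560.0),
    "bob":   (400.0, 600.0),
    "carol": (450.0, 540.0),
}

# Option 1: simple average of the lower and upper bounds.
n = len(forecasts)
avg_lo = sum(lo for lo, _ in forecasts.values()) / n
avg_hi = sum(hi for _, hi in forecasts.values()) / n

# Option 2: weight each forecaster by skill (weights fixed in advance).
weights = {"alice": 0.5, "bob": 0.2, "carol": 0.3}
w_lo = sum(weights[name] * lo for name, (lo, _) in forecasts.items())
w_hi = sum(weights[name] * hi for name, (_, hi) in forecasts.items())

print(f"averaged 90% CI: ({avg_lo:.0f}, {avg_hi:.0f})")
print(f"weighted 90% CI: ({w_lo:.0f}, {w_hi:.0f})")
```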
With the contributions distilled down into a single 90% CI range, we will now assume that the value of the metric (at the target date) follows a normal distribution across that range. Under that assumption, the midpoint of the range is the most likely value, and the actual value is equally likely to land above or below that midpoint - a 50/50 split.
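The normal assumption also lets us turn the range into distribution parameters: the midpoint becomes the mean, and the width pins down the standard deviation, since a 90% interval on a normal spans roughly ±1.645 standard deviations. A small sketch, with illustrative bounds:

```python
# Sketch: fit a normal distribution to the aggregated 90% CI.
# 1.645 is the z-score that puts 5% in each tail of a normal.

lo, hi = 425.0, 565.0            # illustrative aggregated bounds

mu = (lo + hi) / 2               # midpoint of the range = mean
sigma = (hi - lo) / (2 * 1.645)  # half-width divided by z_0.95

print(f"mean = {mu:.1f}, std dev = {sigma:.1f}")
```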
Okay, so with these estimates, and a little statistical magic, we’ve been able to answer our question:
“If we assume things remain as they are now, with the current plan-of-record (i.e. the status quo), how will this metric change over the next few quarters?”
Now, we need to ask another question:
“Is this okay?”
That deceptively simple question forces us to define what success looks like.
So now we need to set expectations, by choosing a desired “target value” for the metric, at that same target date. Call it a goal or desired outcome if you’d like.
With the 90% CI estimates we’ve collected, we can assess the likelihood that the status quo will get us to this target value. Set the target value at the midpoint of the 90% CI range, and you’ve got a 50/50 chance of hitting that mark. Is that okay? Do you want better odds? Your call.
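And if you set the target somewhere other than the midpoint, the same fitted normal gives you the odds. A sketch, reusing the illustrative numbers from above (the target value here is hypothetical):

```python
from statistics import NormalDist

# Sketch: odds that the status quo reaches a hypothetical target value,
# using the normal distribution fitted from the aggregated 90% CI above.
mu, sigma = 495.0, 42.6
target = 540.0                                  # hypothetical desired outcome

p_hit = 1 - NormalDist(mu, sigma).cdf(target)   # P(metric >= target)
print(f"P(status quo reaches {target:.0f}) = {p_hit:.0%}")
```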
Sidebar: In some contexts, we might even be able to apply an economic model to our KPI, to forecast what is lost when we fall short of the target value at the target date. If you are able to translate your metric into dollars with a loss function like this, the shortfall region of the forecast comes to represent financial loss - a powerful lens for assessing the risks involved.
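As a sketch of what such a loss function might look like: assume (purely for illustration) a fixed dollar cost per unit of shortfall, and estimate the expected loss by simulating futures from the fitted normal.

```python
import random

# Sketch: expected dollar loss when the KPI falls short of the target.
# The per-unit shortfall cost is hypothetical; swap in your own model.
mu, sigma = 495.0, 42.6               # fitted normal from the forecast
target = 540.0
dollars_per_unit_short = 10_000.0

random.seed(7)
trials = 100_000
total_loss = 0.0
for _ in range(trials):
    outcome = random.gauss(mu, sigma)         # one simulated future
    shortfall = max(target - outcome, 0.0)    # only shortfalls cost money
    total_loss += shortfall * dollars_per_unit_short

print(f"expected loss vs target: ${total_loss / trials:,.0f}")
```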
Once you’ve defined success with a target value, and captured the expected performance of the status quo, choosing to do nothing expresses a belief that the status quo is “good enough”. You can document that belief, if desired, clarifying the uncertainty with the probabilities from the forecasts. Why do this? Because as things change, internally and externally, you might want to periodically challenge that belief, re-run the estimates, and re-answer the question, “Is the status quo (still) okay?”
The choice to “do nothing”, or to stick with the status quo, is a choice nonetheless. A forecast-based approach like this can help defend a leader’s decision to “stay-the-course”, or to motivate change. Regardless of the choice, the acknowledgment of uncertainty (that you get with the forecasts) makes for smarter dialogs about the decision.