Some Notes on Executive Dashboards
Command & Control & Confusion
Why are executive dashboards so bad?
In my consulting work, almost every company has lackluster reporting and dashboards. These days it’s less that reporting is missing entirely (though that still happens) and more that the things the executive team looks at regularly offer no real insight into the business.
Most of my consulting work revolves around putting together some kind of strategic plan. It usually boils down to a kind of basic equation, something like:
“If you invest $$ into activities X, Y and Z then over 2 years we can achieve $$$”
You secure buy-in from the key players, grab the money and get to work.
Oddly though - companies tend to measure only the right-hand side of the equation.
Company dashboards are designed around metrics and measurement of results - they’re trying to measure what has happened.
Measuring what happened is important, obviously. But it’s a bit like driving a car while only looking in the rear-view mirror…
It’s also important, however, to measure what is happening.
Unfortunately, in my consulting work most companies don’t have any kind of measurement in place for the left-hand side of the equation.
📈📈📈
Maybe there’s a blind spot in my consulting. When you put a plan together that says “If you invest $$ into activities X, Y and Z then over 2 years we can achieve $$$”, there’s some kind of assumption, either explicit or implicit, that activities X, Y and Z will produce results.
It’s kind of obvious that you have to find evidence for this historically - I like to show how investing in these activities has paid off previously, or how a similar situation worked out for a similar business.
But perhaps I could better articulate how this future investment will play out. Not just a business model showing X, Y and Z with revenue potential, but actually showing how you would measure progress on each initiative. It always feels implied to me that when you invest in a plan you need to measure progress, but I think I could be more proactive in bundling the measurement plan with the pitch… Hmm.
📈📈📈
The book Working Backwards explores this idea - Amazon calls them “input metrics” or “controllable input metrics”. From Cedric Chin’s writeup of the book:
The simple answer is that we are not taught to think like this. When people say “be more data driven”, we immediately assume, “oh, we have to measure our business outcomes”. And so we measure things like number of deals closed, or cohort retention rates, or revenue growth, or number of blog visitors. These are all output metrics — and while they are important, they are also not particularly actionable.
Amazon argues that it’s not enough to know your output metrics. In fact, they go even further, and say that you shouldn’t pay much attention to your output metrics; you should pay attention to the controllable input metrics that you know will affect those output metrics. It is a very different thing when, say, your customer support team knows that their performance bonuses are tied to a combination of NPS and ‘% of support tickets closed within 3 days’. If you have clearly demonstrated a link between the former and the latter, then everyone on that team would be incentivised to come up with process improvements to bring that % up!
Input metrics are like measuring the left-hand side of the equation! You’re measuring the things that supposedly impact revenue. Today’s measure of revenue is not a good measure of tomorrow’s revenue - input metrics are better.
Interestingly, it’s quite hard to find the right input metrics - it’s not always obvious which ones actually influence future revenue.
When we realized that the teams had chosen the wrong input metric—which was revealed via the WBR process—we changed the metric to reflect consumer demand instead. Over multiple WBR meetings, we asked ourselves, “If we work to change this selection metric, as currently defined, will it result in the desired output?” As we gathered more data and observed the business, this particular selection metric evolved over time from
- number of detail pages, which we refined to
- number of detail page views (you don't get credit for a new detail page if customers don't view it), which then became
- the percentage of detail page views where the products were in stock (you don't get credit if you add items but can't keep them in stock), which was ultimately finalized as
- the percentage of detail page views where the products were in stock and immediately ready for two-day shipping, which ended up being called 'Fast Track In Stock'.
I think the basic working model for people is that metrics measure the business, when in fact input metrics help you learn about the business. By iterating and refining your input metrics you actually become a stronger operator - you learn more specifically which levers actually get results. Dive into the full post from Cedric Chin here for a bit more.
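To make the arithmetic concrete, here’s a minimal sketch (in Python, with hypothetical field names - an illustration of the idea, not Amazon’s actual pipeline) of how that final “Fast Track In Stock” metric might be computed:

```python
from dataclasses import dataclass

@dataclass
class DetailPageView:
    page_id: str
    in_stock: bool            # was the product in stock at view time?
    two_day_eligible: bool    # immediately ready for two-day shipping?

def fast_track_in_stock(views: list[DetailPageView]) -> float:
    """Share of detail page views where the product was in stock
    AND immediately ready for two-day shipping."""
    if not views:
        return 0.0
    qualifying = sum(1 for v in views if v.in_stock and v.two_day_eligible)
    return qualifying / len(views)

# Hypothetical sample: 2 of 3 views qualify -> ~67%
views = [
    DetailPageView("p1", in_stock=True, two_day_eligible=True),
    DetailPageView("p2", in_stock=True, two_day_eligible=False),
    DetailPageView("p3", in_stock=True, two_day_eligible=True),
]
print(f"Fast Track In Stock: {fast_track_in_stock(views):.0%}")
```

Notice that each refinement in the list above is just a stricter filter inside that one counting expression - once you’re capturing view-level data, iterating on the metric definition is cheap.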
📈📈📈
But there’s something deeper here. Over the last few years I’ve basically worked only with the C-suite of organizations - supposedly the “people in charge”. But time and time again my point of contact is frustrated at the state of reporting internally, while also not doing anything about it. So why not fix it?
Output metrics feel neutral. They’re observations about what happened - so it’s hard to argue with them.
Input metrics, on the other hand, are more opinionated - as we just saw they’re not perfect measures or predictors of future revenue and in fact you might iterate and refine them over time. You might disagree with them!
This brings a power dynamic into play that I find interesting: senior executives - even CEOs or founders - who feel unable (or unwilling) to impose new dashboards and metrics on the business. Everyone is scared of micro-managing. Perhaps senior executives also don’t feel confident enough in the mechanics of the actual work to oversee the creation of input metrics?
📈📈📈
Dashboards are a battleground for power in other ways too.
I often see teams frustrated that the way they’re measured doesn’t accurately reflect the effort / nuance / expertise / care that they feel is necessary for their work to succeed.
But I rarely see teams advocating to change the measure!
This is the metrics mindset that only measures concrete outputs. But you have the freedom to make your own measures. I recall working with a content publishing business (think someone like Wirecutter) where we were trying to nail down some measure of “quality content”. Not a simple problem - and certainly one that’s hard to find an objective metric for. But eventually we got a few senior people around the table, created a simple 5-point scale on a few key areas, and then asked everyone to rate content subjectively on that scale.
If I remember correctly it was questions like:
- “Is the summary of the page clear within 30 seconds?”
- “Can you immediately tell that it’s written by experts?”
- “Have we demonstrated that we did hands-on testing of the products?”
Everyone scores the content, you average the scores, and you create a blended “quality score” for the content. This in turn gives you a metric that captures some of the intangible “content quality” ideas that the team felt were important but weren’t reflected in the existing dashboards.
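The blended score itself is trivial arithmetic - average each question across raters, then average the questions. A minimal sketch, with hypothetical rater names and shorthand keys for the rubric questions above:

```python
from statistics import mean

# Hypothetical ratings: each rater scores each rubric question from 1-5.
ratings = {
    "clear_summary_in_30s": {"alice": 4, "bob": 3, "carol": 4},
    "written_by_experts":   {"alice": 5, "bob": 4, "carol": 4},
    "hands_on_testing":     {"alice": 2, "bob": 3, "carol": 2},
}

def quality_score(ratings: dict[str, dict[str, int]]) -> float:
    """Average each question across raters, then blend the per-question
    averages into a single 1-5 quality score for a piece of content."""
    per_question = [mean(scores.values()) for scores in ratings.values()]
    return mean(per_question)

print(f"Blended quality score: {quality_score(ratings):.2f} / 5")
```

The point isn’t statistical rigor - it’s that a repeatable subjective rubric turns “content quality” into a number that can sit on a dashboard next to the output metrics.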
Once we got this quality score added to the dashboards it wasn’t long before the CEO was demanding that we increase the average content quality of our pages.
You manage what you measure. So think carefully about how to measure what you want to be managed by.
📈📈📈
I’m very interested in what Doubleloop is building. They’re basically a kind of strategy canvas where you can plug various input metrics into output metrics and measure them with live data.
I like this idea - that we should be questioning and exploring the dependencies of our strategic plan directly! We make a bunch of assumptions in the initial strategy pitch and then… never go back and check whether we were right? Seems kind of nuts.
📈📈📈
I’m also interested in what Variance is building. Starting with the opinionated thesis that product-led growth should let prospects engage, sign up, set up billing and then actually use the product, Variance is building a reporting product that lets you see prospects on an account-by-account basis as they move through various “milestones” of user action.
I’m interested in this because I think it’s smart and useful - but also because it’s building software around an embedded thesis or ideology. IF product-led growth, THEN here’s the CRM product for you… I think we’ll see more of this kind of opinionated software in B2B emerging.
📈📈📈
My brother is raving about the book The 4 Disciplines of Execution (summary). The book has a notion similar to Amazon’s around leading vs lagging indicators. But they also have this idea of scoreboards.
Discipline Three: Keep a Compelling Scoreboard
To remain engaged, the team should know at all times if they are winning. People play more seriously when they are keeping score. Without knowing the score, staff will be distracted by the whirlwind. A visible scoreboard helps the team to work out how to move forward.
There is a big difference between a manager’s scoreboard and a team’s scoreboard. The team’s scoreboard should be simple, visible, show both lead and lag measures (actions and results), and show at-a-glance if the team is winning. It can be motivating for the team to physically create their own board.
Mmm. I like this notion of “player scoreboards” vs “coach scoreboards”. It reminds me that every single dashboard is, implicitly, an exercise in incentive design. By choosing what goes into the dashboard you’re emphasizing what’s important, and what’s not.
The medium is the message, you manage what you measure etc etc.
📈📈📈
Talking of the “medium of dashboards” - I’ve been spending a lot of time in Google Data Studio recently and I really appreciate the idea of a blank canvas to design layout and reporting on top of. It implicitly encourages layout as a primary activity.
Yes, it’s technically possible to do all kinds of fancy design in a spreadsheet too - but any design or styling work you do is more fragile in a spreadsheet and, mostly, people don’t bother.
I mean - like any design tool I see plenty of Google Data Studio reports that make my eyes bleed! But on balance I like the notion of starting with a blank canvas. A Data Studio report feels very different to a spreadsheet report. It forces me to more clearly make tradeoffs between visual hierarchy, position and relation.
📈📈📈
Another nice thing about Google Data Studio - it allows you to separate access to the dashboard from access to the underlying data source. So you can safely circulate a Data Studio report to various stakeholders. This can be handy since a dashboard is only as powerful as it is shared.
The more teams use and rely on dashboards, the more their power becomes cemented.
I’ve written before about the age of permeable organizations: the idea that organizations increasingly have a series of orbital stakeholders with a blurring of the boundaries between “inside” and “outside” the organization.
Maybe DAOs are relevant here?
If quarterly reports are the traditional way of exposing data outside the organization - a realtime dashboard is the web3 way of doing it? DAOs are (optimistically) the modern business structure designed for orbital stakeholders.
And maybe (maybe!) a DAO that tokenizes input will naturally have a leg up - their dashboard by default will show input metrics and output metrics…
📈📈📈
But maybe dashboards don’t even need metrics or numbers on them at all! As we move towards an oral and visual culture with video, memes and social media, maybe dashboards need more rich context? Again, from Amazon:
Amazon employs many techniques to ensure that anecdotes reach the teams that own and operate a service. One example is a program called the Voice of the Customer. The customer service department routinely collects and summarizes customer feedback and presents it during the WBR, though not necessarily every week. The chosen feedback does not always reflect the most commonly received complaint, and the CS department has wide latitude on what to present. When the stories are read at the WBR, they are often painful to hear because they highlight just how much we let customers down. But they always provide a learning experience and an opportunity for us to improve.
This tracks nicely with what I see inside organizations. Too often user research is a one-off activity, buried deep inside the product or marketing org. It’s not a strategic activity. What would it look like to structure user research at a strategic, executive-dashboard level? Maybe something like Amazon’s Voice of the Customer.
📈📈📈
So, I know that every client I work with needs help setting up better dashboards. But I also know that a dashboard is a powerful object - changing it requires bravery and nuance. To recap these ideas, here are some ways to interrogate your own reporting setup and see what you might change:
1. Qualitative vs Quantitative
Is your dashboard raw data or is there some post-processing? Are you using expertise to create gradings or analysis on top of the data? Is there a voice of the customer segment for your reporting?
2. Input vs Output
Are you reporting only on what has already happened or are you showing what’s happening now? Obviously you need both, but in my experience companies rely too heavily on output metrics.
3. Flexible vs Fixed
How often do you update the metrics you’re reporting on? Are you explicitly designing your metrics to be updated? What’s your feedback loop for checking that inputs actually lead to the right outputs? (A simple version of that check is sketched after this list.)
4. Open vs Closed
Who gets to see the dashboard? How do you ensure that everyone is on the same page? Are you capturing the potential value in a world of orbital stakeholders from opening up the dashboard to a wider set of people?
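On point 3, the feedback loop can start crude: line up the input series against the (lagged) output series and check whether they move together at all. A minimal sketch with hypothetical numbers - correlation here is a first screen for a broken input metric, not proof that the input drives the output:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical weekly series: an input metric (say, % of page views
# that were "fast track in stock") and the output metric it's supposed
# to drive (say, revenue in $k), lagged by one week since inputs
# should lead outputs.
input_metric  = [0.61, 0.64, 0.66, 0.70, 0.72, 0.75]
output_metric = [102, 101, 108, 112, 118, 121]

r = correlation(input_metric, output_metric)
print(f"input -> output correlation: {r:.2f}")
# A persistently weak correlation is the signal to redefine the input
# metric - exactly the iteration in the "Fast Track In Stock" story above.
```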
📈📈📈
Maybe dashboards and strategic plans need to be more closely entwined?
–
Update #1: Some good links posted on Twitter:
- Daniel Schmidt, CEO of Doubleloop wrote about “Flat metrics dashboards”
- Consultant Andrew Bartholomew wrote Good Dashboard, Bad Dashboard
–
Update #2: Some great links here - Clare’s talk in particular is great:
It’s an emotional need rather than a functional one. I call it the executive safety blanket. https://t.co/rwoRZcjsrg
— Chris Butler (@chrizbot) May 6, 2022
Clare Gollnick’s talk about dashboards points to why: https://t.co/YTUDED0Ibn
–
Update #3: I recorded a podcast episode with my friend Nigel talking all about exec-level dashboards.
–
Update #4: Love this post from the head of data at Reforge, looking at the data(!) around which executives look at which dashboards, and how often:
For this report, I wanted to know what the exec team was looking at in the past week. I want to know how much they’re looking at the metrics and whether they’re looking at the dashboards I think they should be looking at.