Measuring what we care about, or caring about what we measure?

Do you have a step counter on your mobile? Do you use an app for setting goals and following up on your training? Do you look at how a product or a service is rated before you click Buy? Have you and your manager set up measurable objectives for your work this year… objectives that might affect your salary or bonus? Many of us are nearly obsessed with measuring, ranking and evaluating – both in our private and our professional lives. Cecilia Unell, business developer and enterprise architect at Combitech, reflects on the art of measurement.

Chris Dancy, who is often called “the most connected man on Earth”, was interviewed by Ny Teknik in March 2019, where he explained how he gathers data on both himself and his surroundings. It covers everything from pulse and blood pressure, to planned and completed activities, to the sound and light levels around him. He gathers all this data into an “internet of Chris Dancy” and automates it so that he gets an alert if certain values go outside set limits – for example, if he hasn’t moved enough lately.

During the interview Dancy touched on the fact that we sometimes get caught up in measurement itself; we’re measuring and evaluating without actually knowing if we are measuring something important or not. He captured it this way: “We haven’t learned to measure what we care about, so instead we are caring about what we measure.”

The philosopher Jonna Bornemark tackles the same theme in her book Renaissance for the unmeasurable – Coming to terms with global pedantry. She examines how measurement and evaluation are sometimes treated as more important than the reality being measured. The book draws on cases from healthcare where zealous measurement, and slavish conformance to and follow-up of established processes and routines, risk giving us poorer, not better, healthcare.

Direct and indirect measurement

What I have experienced in professional life is that we seem to operate under a management culture that is close to obsessed with measurement and evaluation. SMART goals, key ratios, KPIs and balanced scorecards – what’s next? The desire to measure is easy to understand: it’s hard to get where we want to go without setting goals, and hard to know if we’ve arrived in the right place if we can’t measure and follow up. The problem is that sometimes it’s hard to measure what we actually want, and we have to settle for measuring one or more things that are easier to check – things that are related to what we actually want to measure. This is the difference between direct and indirect measurement methods.

Traditionally, the most important key metric for a company is its profit. This is a good example of a direct measurement method. Most companies would also like to secure future profitability by being good at innovation – developing radically new services, products, business models and so on. But what’s the best way to measure and evaluate innovation power or potential? This is where we need indirect measurement methods.

Can we measure our innovation level by the number of ideas that our esteemed colleagues drop into a suggestion box? Surely that’s connected to innovation? Well yes, it might be, but what good is that measurement if it turns out that half of these ideas are irrelevant to the company? Maybe it’s better to measure the number of innovations that we launch in the marketplace each year? Or perhaps the number of successful (profitable?) innovations each year instead? The question is whether we will know after a year if a bright new innovation is successful or not. And how can we best measure the learning that pulses through the entire innovation process, regardless of whether the resulting innovation in itself is judged to be a success or not?

So measuring innovation potential requires us to rely on several different indirect measurements. Moreover, these need to be combined and weighted in a well thought-out way; otherwise we risk spending our time measuring something that isn’t actually important, and steering our activities by measurements that don’t lead us to our objectives.
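To make “combined and weighted in a well thought-out way” a little more concrete, here is a minimal sketch in Python of a composite innovation index. The metric names, targets and weights are entirely hypothetical – in practice they would have to be chosen, calibrated and regularly revisited for your own organisation.

```python
# A hypothetical composite "innovation index" built from indirect measurements.
# Metric names, target values and weights are illustrative assumptions only.

indirect_measurements = {
    "ideas_submitted":      {"value": 120, "target": 200, "weight": 0.15},
    "ideas_taken_to_pilot": {"value": 12,  "target": 20,  "weight": 0.30},
    "innovations_launched": {"value": 3,   "target": 5,   "weight": 0.35},
    "lessons_documented":   {"value": 25,  "target": 40,  "weight": 0.20},
}

def innovation_index(measurements):
    """Weighted average of each metric's progress towards its target (0..1)."""
    total_weight = sum(m["weight"] for m in measurements.values())
    score = sum(
        min(m["value"] / m["target"], 1.0) * m["weight"]
        for m in measurements.values()
    )
    return score / total_weight

print(f"Composite innovation index: {innovation_index(indirect_measurements):.2f}")
```

The point is not the formula itself, but that the weights force you to state explicitly how much each indirect measurement is assumed to say about the thing you actually care about.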

When you’re drowning in measurement data… AI to the rescue?

Besides the difficulty of identifying what we actually want to measure, and then guiding our actions in accordance with those measurements, in our era of digitalization we have the problem of excess data. We can measure and gather enormous amounts of data without great difficulty, and in a very short time. For example, we can measure how many transactions or inquiries a customer service representative handles daily, how many products a machine can spit out in a given time, and so on.

How can we handle this and navigate through the large amounts of data coming from our measurements? One way is to seek assistance from Artificial Intelligence (AI) to discover patterns and relationships between different measurement points – patterns that would be difficult or impossible to discover on our own. For example, several studies have shown that AI is better than healthcare specialists at identifying different types of cancer. These studies involve digitalized tissue samples, where an AI program is trained to detect patterns in the sample that indicate cancer.
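As a rough illustration – not a description of any particular AI product – here is a minimal Python sketch (using pandas) that screens a set of hypothetical measurement series for strongly correlated pairs. The column names, the simulated data and the 0.6 threshold are all assumptions for illustration; real pattern discovery might instead use clustering, anomaly detection or trained models.

```python
import numpy as np
import pandas as pd

# Hypothetical daily measurement series; in practice these would come from
# your own logging or business systems.
rng = np.random.default_rng(seed=1)
days = 90
handled_inquiries = rng.normal(100, 10, days)
data = pd.DataFrame({
    "handled_inquiries": handled_inquiries,
    "overtime_hours":    0.05 * handled_inquiries + rng.normal(0, 1, days),
    "machine_output":    rng.normal(500, 25, days),
})

# Screen for strongly correlated measurement pairs (threshold chosen arbitrarily).
corr = data.corr()
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) > 0.6:
            print(f"{a} <-> {b}: correlation {corr.loc[a, b]:.2f}")
```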

But whether or not you use AI to identify patterns, be cautious. Finding a pattern, or relationship, in the data is not the same thing as finding a cause-and-effect relationship. The fact that most engineers in a company are men doesn’t mean, for example, that men are better suited to engineering roles than women; the clear relationship between gender and role is not caused by gender itself, but by other factors.
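The toy simulation below (hypothetical data, Python/NumPy) shows how this can happen: two measurements correlate strongly only because both are driven by a hidden third factor, and once that factor is controlled for, the apparent relationship essentially vanishes.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 10_000

# A hidden third factor that drives both measurements below (hypothetical data).
hidden_factor = rng.normal(0, 1, n)
measure_a = 2.0 * hidden_factor + rng.normal(0, 1, n)
measure_b = 1.5 * hidden_factor + rng.normal(0, 1, n)

# The two measurements correlate strongly, although neither causes the other.
print("raw correlation:", round(np.corrcoef(measure_a, measure_b)[0, 1], 2))

# Remove the hidden factor's influence (residuals after a simple linear fit);
# what remains is essentially uncorrelated noise.
resid_a = measure_a - np.polyval(np.polyfit(hidden_factor, measure_a, 1), hidden_factor)
resid_b = measure_b - np.polyval(np.polyfit(hidden_factor, measure_b, 1), hidden_factor)
print("correlation after controlling for the hidden factor:",
      round(np.corrcoef(resid_a, resid_b)[0, 1], 2))
```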

What can we do, then?

So how are we going to measure “the right way”? There’s no simple answer to that question, unfortunately, but here are some fundamentals:

  • Identify what you really want to measure, and why. If this can’t be measured directly, think about what you can measure instead, and how you can weigh and combine these indirect measurements to get as close as you can to what you really want to measure.
  • Because you’re going to follow up and act on what you measure (otherwise it would be meaningless to measure), you’ll need to think about whether there’s a way to “fool the system”. Let’s go back to the suggestion that you could use the number of ideas coming into a suggestion box as a way of measuring innovation. What would happen if you measured, followed up, and rewarded your colleagues according to how many ideas they submitted? The odds are that you would get a greatly increased volume of ideas, but at the risk that their quality would plunge.
  • Finally, in the euphoria surrounding our ability to measure almost anything at all, make sure you don’t get stuck measuring inconsequential things. Make sure you don’t get so engaged in measuring that you have no energy left to follow up on the measurements. And remember that sometimes it’s actually that which can’t be measured that is the most important of all.

References

SMART stands for Specific, Measurable, Achievable, Relevant/Realistic, Time-based

Ottsjö, Petter (2019) Ny Teknik, interview with Chris Dancy (Swedish; paywall), https://www.nyteknik.se/premium/sa-blev-han-varldens-mest-uppkopplade-man-6950145

http://www.chrisdancy.com/ 

Bornemark, Jonna (2018) "Det omätbaras renässans – en uppgörelse med pedanternas världsherravälde" (Title translation: Renaissance for the unmeasurable – Coming to terms with global pedantry), Stockholm: Volante. ISBN 9789188659170

Tucker, Ian (2018) "AI Cancer Detectors", https://www.theguardian.com/technology/2018/jun/10/artificial-intelligence-cancer-detectors-the-five

Towers-Clark, Charles (2019) “The Cutting-Edge Of AI Cancer Detection” https://www.forbes.com/sites/charlestowersclark/2019/04/30/the-cutting-edge-of-ai-cancer-detection/#2afa1a347336