Are you measuring right? 4 reasons you’re probably not


Most of the measures we see in Product Management are flawed, and the better they look, the worse they usually are. Here is why, and four easy checks to fix them.

Whether you are setting up OKRs, monitoring your “product health”, or just running experiments, there are countless reasons why, as a product professional, you should measure what you do and set goals. And you should do it right!

Being able to rely on metrics (that is, having meaningful, well-defined metrics) is your best ally in making your product successful; but most importantly, it is a key element in understanding when success is not coming, and a mandatory tool for making (fast) fact-based decisions.

This article is NOT a guide to why measuring your product performance, experiments, or “key results” is important. Hopefully, if you made it here, it’s already clear why having metrics is the cornerstone of a successful product organization that relies on facts rather than opinions. However, having a set of “whatever” metrics in place is by no means a guarantee in itself, and in the next paragraphs we will look at some common pitfalls in dealing with measures and metrics.

A little spoiler: you should spend the majority of your time defining the proper measures, as much time as needed to collect solid data, and little time debating what the numbers you’ve seen mean. Note that I’m deliberately not giving exact numbers or ratios for the time spent defining/collecting/analyzing. The reality is more complex than that: the exact values that work for your organization will evolve over time, but the overall idea is that solid anticipation of the “what” and “why” of your measurements will set a healthy basis for a consistent decision-making process down the line. If you’re far from this setup, chances are that your metrics are flawed and that you’re building the rest of your strategy on sand. In the rest of this article, you will learn why.

Here are the 4 most common reasons why success metrics don’t work, and how to fix them:

1) The issue with relative changes:

This is probably the biggest blind spot I see over and over in organizations that are just starting to become “data-driven”: defining relative (change) metrics instead of looking at absolute numbers. While in theory looking at relative improvements seems like a great idea (wow, we’re growing our revenue by 30% this year!), there are several reasons why you should look at absolute numbers instead. Of course, taking relative and absolute numbers together is a great way to see the whole story. The main point I want to make here is that relatively expressed changes, taken alone, are at the root of many problems with your products.

First of all, what do I mean by “relative change metrics”? Note the word change. Strictly speaking, a relative metric is a number that depends on other numbers. In this sense, a conversion rate is a relative measure (e.g. 5% of the users entering the funnel make it to the end and finalize the purchase); nothing wrong with that, and it is not our concern here. We’re instead looking at relative numbers used to express a goal (or an expected result) as a change (increase, decrease…) in a metric: as in “Increase our traffic by 10%”, or “Reduce our churn rate by 20%”. Let’s be clear: there is nothing wrong with increasing traffic by 10% or reducing churn by 20%. However, there are several reasons why you cannot rely on a relative expression of your performance:

  • Most likely, you don’t know how to measure the underlying KPI: This is by far the number one reason why you should avoid setting relative change measures (alone). Nine times out of ten, teams who define relative goals feel reassured at the beginning, only to discover later that they don’t know how to measure the baseline at all! By defining your goal as “increase traffic by 10%” you’re not forcing yourself to check that you can measure that traffic in the first place. Do you (technically) have what is needed to measure it? Is this measure stable enough over time to serve as a baseline? Are you taking seasonality and similar effects into account? If you don’t master the baseline, you won’t master the definition of success! Let me clarify: seasonality and fluctuations will be there whether you look at absolute or relative values; the difference is that looking at absolute values forces you to understand the baseline before you define the goals. This way, you factor in whatever fluctuations you expect from the beginning, instead of taking a blind bet such as “let’s increase by 10%, whatever the baseline was and regardless of how it will evolve anyway”.
  • You’re prone to misunderstandings: In this example of increasing traffic by 10%, have you defined exactly which traffic you’re talking about (users or sessions? From SEO? From Advertising …)? Looking at the exact absolute value you’re talking about will ensure alignment! (“ahh you want to raise the 100K figure? You mean unique users coming from advertising then, right?”).
  • They don’t tell the full story: a 10% increase sounds like a good deal… but is going from 200 to 220 users a lot? Mmm, not really, right? If you don’t look at the numbers in absolute (and comparable) terms, you’re shooting yourself in the foot before you even start talking about prioritization! Is increasing traffic by 10% twice as valuable as retaining 5% of existing users?
  • They are often arbitrary: The reality is that relative numbers are too easy to throw around and don’t require the reflection and preparation needed to understand whether the goal makes sense in the first place. Why 10% and not 20%?
  • They prevent rational judgment: Looking at goals in absolute terms makes it easier to gauge solutions: is it realistic to add 10% more traffic through advertising? Probably. Is it realistic to pay for 10K additional users? Mmm, let’s do the math before we call it a goal 🙂

All in all, while relative indications can tell part of the story, I always strongly advise looking at absolute numbers (first), and ideally phrasing your metrics as a mix of both absolute and relative measures: e.g. “Increase organic search traffic by 10%, from 100K to 110K unique users/month”. You can (and probably should) look at relative numbers, but you definitely cannot call it a day until you’re solid on the absolute ones: baselines, additions, and targets.
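To make this concrete, here is a minimal sketch (in Python, with hypothetical numbers taken from the example above) of what “absolute first, relative second” can look like when you write a goal down: the baseline and the target are explicit absolute values, and the relative change is derived from them rather than declared on its own.

```python
from dataclasses import dataclass


@dataclass
class MetricGoal:
    """A goal expressed in absolute terms; the relative change is derived."""
    name: str
    baseline: float  # measured before the goal is set, e.g. 100K unique users/month
    target: float    # the absolute value we want to reach, e.g. 110K

    @property
    def relative_change(self) -> float:
        # Derived, not declared: "+10%" only means something once the
        # 100K baseline has actually been measured.
        return (self.target - self.baseline) / self.baseline


# "Increase organic search traffic by 10%, from 100K to 110K unique users/month"
goal = MetricGoal("organic search traffic (unique users/month)", 100_000, 110_000)
print(f"{goal.name}: {goal.baseline:,.0f} -> {goal.target:,.0f} ({goal.relative_change:+.0%})")
```

Writing a goal this way forces the “do we actually know the baseline?” conversation before the goal is agreed on, which is exactly the check that a purely relative phrasing lets you skip.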

2) The issue with the number of metrics:

Let’s make it easy: less is more. There are tons of reasons why your business is complex, why you can’t summarize all that you’re doing in one number, and why you would need a full dashboard of numbers to understand your product. Yet, there are many more reasons why you should keep the number of metrics you (really) look at very limited:

  • KISS (Keep It Simple, Stupid): Your metrics should not be a showcase of your product’s complexity, but something you can rely on to make efficient decisions. The more metrics there are, the harder it becomes to put them in perspective and agree on what they mean. The result? You will drift into a “gut-feeling” decision mode (at best), or into a “no-decision” mode (more likely), and will lose all the benefits of moving to a data-driven decision process. Deciding on the basis of more than a handful of metrics is likely to be a bad idea. That said, you can agree on observing a couple of additional metrics if you feel they may bring value in the future; this can help create the “baseline” you will need in future iterations.
  • Maintenance and soundness: The more metrics you have, the higher the likelihood of mistakes and errors among them. KPIs are part of the product and, as such, should have their own full “life cycle”: definition, implementation, validation, and maintenance. Needless to say, when the number of metrics gets out of control, “on-the-fly” measures start to appear. Done in a rush, out of context, and without the full life cycle in mind, chances are they will create more issues than they solve.
  • The non-choice: while having a good, contextual understanding of your product is important, agreeing on what matters most to you is key! If you agree at the beginning on a limited number of metrics that matter, the future you who will be deciding (or advocating for decisions) based on these metrics will be extremely grateful!
  • Ownership: There is a full section dedicated to ownership below, but it goes without saying: the more metrics you have, the less anybody can truly own them. More about this later.

Unfortunately, “simplicity” and reducing the number of metrics can often lead to another undesired effect: creating Frankenstein metrics and other fake metrics.

3) The issue with “getting the next best metric” or Frankenstein metrics (aka “fake metrics”)

Let’s say we made it to a limited number of metrics, ideally all measurable and expressed as absolute numbers. Wow, the worst is behind us! Still, you want to watch out for a few categories of what I call “fake” metrics, which will give you a comforting sense of control, right up until the moment they don’t anymore.

Frankenstein metrics: We’ve already seen how having too many metrics doesn’t really help. So, what do teams usually do in these cases? Well, a concept we all know too well in computer science is “compressing” information. In other words, if we still feel that all this information is important, yet we want to reduce the number of metrics, many teams simply fold the outputs of different metrics into one complex indicator. Does this help? Well, NO!! For instance, one of my product teams once had to look into a complex optimization problem involving some trade-offs. It went something like: “let’s improve X. But because we don’t know how to measure it, we will measure Y, W, and Z instead and say that our metric is (Y+W)*Z”.

I call these “Frankenstein metrics” because they’re nothing else than a complex meaningless mix of heterogeneous measures.
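To see why such composites backfire, here is a purely illustrative example (the numbers and the (Y+W)*Z shape are hypothetical, not taken from that team): two very different situations collapse into the same composite value, so the indicator can no longer tell you which underlying measure actually moved.

```python
# A hypothetical "Frankenstein" indicator of the (Y + W) * Z kind.
def frankenstein(y: float, w: float, z: float) -> float:
    return (y + w) * z


# Scenario A: Y is strong and W is weak; Scenario B: the opposite.
scenario_a = frankenstein(y=80, w=20, z=1.5)  # -> 150.0
scenario_b = frankenstein(y=20, w=80, z=1.5)  # -> 150.0

# The composite is identical, so it hides which driver moved (or regressed).
print(scenario_a, scenario_b, scenario_a == scenario_b)  # 150.0 150.0 True
```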

Disconnected proxies: In other cases, we’re interested in behaviors of our product that are just too difficult to single out and measure. Here, I’ve seen many teams come up with “proxy metrics” that are indeed simple but so far from the original problem and assumptions that they eventually tell us nothing about the impact we’re trying to make. I call these “disconnected proxies” because the metric we’re using has become disconnected from the original problem.

Diluted proxies: Similarly, some metrics are difficult to measure directly, so we take the closest “higher-level metric” we can find. In other words, we measure something so broad that it cannot single out the real drivers of, and contributors to, any change. I believe we’re all familiar with reasoning that goes like this: “our goal is increasing revenue, so let’s build this feature to do X, which should contribute positively to revenue”. It probably will, and it’s probably the right thing to do. Nevertheless, will we really be able to directly correlate the feature with revenue? Or will its positive contribution be “diluted” in seasonal fluctuations? Are we going too high up the food chain by measuring revenue directly?

I call these “diluted proxies” because looking at the metric at too broad a scope just dilutes the contribution that is actually under our control.
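A quick, purely illustrative simulation (the numbers below are made up, not from any real product) shows the dilution effect: if a feature genuinely adds about 1% to revenue while normal month-to-month fluctuations are around ±10%, measuring revenue directly will never tell you whether the feature worked.

```python
import random

random.seed(42)

baseline_revenue = 1_000_000
feature_uplift = 0.01   # the real effect we would like to detect
seasonal_noise = 0.10   # typical month-to-month fluctuation

for month in range(1, 7):
    noise = random.uniform(-seasonal_noise, seasonal_noise)
    observed = baseline_revenue * (1 + feature_uplift + noise)
    print(f"month {month}: {observed:,.0f}  (noise {noise:+.1%}, real uplift +1.0%)")

# Any single month moves far more because of noise than because of the feature,
# so "revenue" alone cannot confirm or refute the feature's contribution.
```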

4) Ownership & Rituals

Last but not least, now that you have a solid set of metrics defined, remember that this is not a “fire and forget” exercise.

  • Ownership: who will be monitoring these numbers? Ideally, you want at least the whole product team to be familiar with the metrics and involved in their follow-up. Will somebody be looking at them regularly? And who will ensure the data you’re retrieving is correct? Define this from the very beginning.
  • Rituals: something else you want to define from day one is the set of “rituals” around these metrics. Will you be looking at them weekly? Or daily? Will somebody summarize a common “understanding” of the story these numbers are telling? Will you review the results together, or rely on each team member to get the metrics directly from a dashboard? Define your strategy for looking at your metrics from the beginning: a regular portfolio review, based on sound data, can be the core of your product decision-making. It will make prioritization natural, support clear communication, and help you proactively sunset features and avoid product debt.
  • Decision making: the first reason why measuring is important is that you want to make informed decisions about the future of your product, the direction you’re taking with it, or just a small experiment. No matter how large or small the scope, define the decision-making process from the beginning: what the definition of success is, what will happen in case of success or failure, and who will make the call in case of doubt.
  • Communication: by making your organization a data-driven one, don’t forget that you’re raising expectations. Management, team members, sales, and designers will all be eager to know “where we stand with this new metric”. Factor communication in from the beginning, and be upfront about how and when updates will be shared.

Conclusions

By now it should be clear what I meant at the beginning by “spend more time defining the right metrics than reading them”. Simplicity doesn’t mean “simple to put in place”, but simple to use and understand. If you did a proper job in the definition and didn’t come up with “fake metrics”, understanding what the numbers are indicating is straightforward. Conversely, if you spent little time in preparation, ended up with metrics that seem fine but are in reality impossible to obtain, or are using metrics that are too far from what you’re actually moving, you will spend a lot of time debating what the metric is telling you, time you could have spent deciding what comes next and iterating faster.

So, when the time comes to set up your metrics, remember a few key points:

  • Absolute vs relative: Always go for both, but keep in mind that absolute is king.
  • Less is more. Don’t get overwhelmed and choose wisely what you’re looking at.
  • Better nothing than a fake proxy. If you can’t find the right metric, don’t make one up.
  • Ownership. Metrics don’t live alone, and your best bet is on people looking after them.

I think somebody once said: “Metrics are like a joke. If you have to explain them, they’re not funny”. Or probably he didn’t say metrics, but it doesn’t matter 🙂 

Before you go: I like to practice what I preach, and with this article I am starting to ask my readers for a very precious contribution: their honest feedback! Could you spend two minutes answering a couple of very simple questions here? https://forms.gle/WXQx7XCiHFjtsS3u7. Thanks!


A big thanks to Patrick Hauert and Mattia Albergante for challenging many of my thoughts, for their attentive review of my drafts, and for the countless suggestions for this article.


