How to start making data-driven decisions in 5 easy steps

Many claim to be data-driven in their decision making. But hardly anyone explains what that actually means in practice. Making data-driven decisions in product development sounds great in theory, but actually doing so is not always easy. The way to get there may seem straightforward, yet many struggle with the details.

Today, I want to help. This post provides a simple blueprint that you can implement to start making data-driven product decisions, along with common mistakes and pitfalls within each step.


The five steps to become more data-driven

These are the steps I recommend to teams that want to become more data-driven. They work for most teams out of the box. Nevertheless, adapt them to your circumstances if you deem it necessary. The five steps are:

  1. Set a goal for the product
  2. Define leading indicators for your goal
  3. Build the technical infrastructure for reporting
  4. Make data-driven product decisions
  5. Verify the correlation between leading and lagging indicators

1) Set a goal for the product

First of all, you need a goal that you want to achieve with the product. This gives direction to all development efforts. You can have more than one goal, but the fewer, the better. One is ideal.

The goal should fit the context of the entire company and ideally be derived from the top-level business goals. If you are using OKRs and the accompanying planning cycle, this is already the case.

OKRs are popular but there are plenty of other techniques for specifying the goal. You can use SMART, CLEAR, or whatever other acronym you may find for goal setting. BHAG is also fine – although this is more of a vision than a product goal. 

The technique doesn’t matter. The manner of specifying the goal does. It should be specific and measurable, aspirational but realistic (although some might argue unrealistic goals lead to more creative and better solutions). Most importantly, the goal should define a target state that is agnostic of HOW to get there. 

The goal should be an outcome, not an output (good: increase revenue; bad: build features X, Y, Z). This is key to empowering the product team. It will lead to better solutions as well as increased ownership and motivation.

If the organization doesn’t have a goal for the product (a bad sign, by the way), come up with your own, propose it to leadership, and make it real by having leadership officially sign off on it (using the regular decision-making meeting that most organizations have in place).

2) Define leading indicators for your goal

Great, now you have a well-written outcome for your product. The problem is that most business outcomes are lagging indicators. It takes time (the lag) until they react to product changes. Lagging indicators are retrospective or historical metrics. For example, you might only be able to see the increase or decrease in revenue once the month is over and the books are closed.

You need to find a number that correlates with your goal but changes (almost) immediately whenever you change something in your product. You need to find a leading indicator that you can track. Leading indicators are forward-looking and proactive. They are often based on human behavior and on interactions with your product. As such, they are much more sensitive to changes in your product.

Instead of revenue, this could be the median duration a user stays on your website. Pick one or a few leading indicators. This is what you will actually be measuring during product development.
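
To make this concrete, here is a minimal sketch of computing such an indicator in Python. It assumes your logs already give you session start and end times; the session records below are made up:

```python
from datetime import datetime
from statistics import median

# Hypothetical session records; in practice these come from your logs.
sessions = [
    {"start": datetime(2024, 5, 1, 9, 0), "end": datetime(2024, 5, 1, 9, 4)},
    {"start": datetime(2024, 5, 1, 9, 10), "end": datetime(2024, 5, 1, 9, 11)},
    {"start": datetime(2024, 5, 1, 9, 20), "end": datetime(2024, 5, 1, 9, 28)},
]

# Session length in seconds, then the leading indicator: the median.
durations = [(s["end"] - s["start"]).total_seconds() for s in sessions]
print(f"Median session duration: {median(durations):.0f}s")
```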

It’s important to remember that the correlation between leading and lagging indicators is an assumption you are making. You need to evaluate this assumption later on (see step five).

3) Build the technical infrastructure for reporting

Whatever you want to measure needs to be logged somewhere and stored for reporting. The data also needs to be accessible so that you can actually gather some insights from it. These are the two steps to creating the reporting infrastructure: logging and building reports. 

The engineers usually have experience in logging data. Rely on them when deciding what needs to be logged, when to log, and where to store the raw data. 

If you are creating the system for logging data from scratch, there is a trade-off between getting it ready quickly and making it flexible and robust so it can be adjusted to changing demands in the future. I advise keeping it simple here. I have worked with teams that spent weeks or even months building the one-stop shop for logging data. This premature optimization was almost never worth it.
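
To show what keeping it simple can look like, here is a minimal sketch of structured event logging in Python. The event names, fields, and the flat events.jsonl file are assumptions for illustration, not a prescription:

```python
import json
import time

def log_event(event_name: str, **properties) -> None:
    """Append one structured event as a JSON line to a flat file.

    A flat file is enough to get started; move to a proper event
    store once your reporting needs justify the effort.
    """
    record = {"event": event_name, "ts": time.time(), **properties}
    with open("events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Examples: record a page view and the end of a session.
log_event("page_view", user_id="u-123", page="/pricing")
log_event("session_end", user_id="u-123", duration_s=240)
```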

There are plenty of tools available to make the data accessible. Kibana, Tableau, and New Relic are ones I have worked with. They all have their strengths and weaknesses. Do some research and pick the one that seems most fitting, but don’t get too caught up in the differences in features. Trust me, you will only use 5% of them anyway.

Depending on the complexity of your situation, you may need to transform the data to make it usable by your tool of choice. This can get difficult. Start with the basics. You can already learn a lot with no or only very simple data transformations by looking at totals (number of clicks, total users…), averages, or percentiles (such as the median time spent on a site). Many of the tools include basic transformations anyway. You don’t need to be a data scientist to create a useful report.
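
As an illustration, here is a sketch of this kind of basic aggregation using only Python’s standard library. It builds on the hypothetical events.jsonl file and fields from the logging sketch above:

```python
import json
from collections import Counter
from statistics import mean, median

# Read the raw events logged earlier (one JSON object per line).
with open("events.jsonl") as f:
    events = [json.loads(line) for line in f]

# Totals: events per type and distinct users.
totals = Counter(e["event"] for e in events)
users = {e["user_id"] for e in events if "user_id" in e}

# Averages and percentiles over a numeric property, here the
# hypothetical "duration_s" field on "session_end" events.
durations = [e["duration_s"] for e in events if e["event"] == "session_end"]

print("Events by type:", dict(totals))
print("Distinct users:", len(users))
if durations:
    print("Mean session duration:", mean(durations))
    print("Median session duration:", median(durations))
```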

When building the reports, also try to keep it simple. Here, the people who need to make product decisions often overcomplicate things. Having so many options for building reports available often leads to the fanciest of dashboards, even if you only need to look at one number. As with logging, a lot of time and energy is spent prematurely optimizing the reports.

4) Make data-driven product decisions

You have a goal, and you have a leading indicator that you are able to track. You can now build small product increments and monitor how the leading indicator reacts. This way, you can run small experiments to test your assumptions about the product and make data-driven product decisions based on the results.

In my mind, these decisions fall into three categories.

The first is assessing the effect of changes in production. This is basically the final step in validating what you found out in qualitative research or what you assumed about user preference. It’s a final check of the impact of features. This matters because people don’t always do as they say. The most famous example of the difference between stated and revealed user preference is likely the change in Facebook’s feed from time-based to algorithmic: everyone said they hated it, yet they engaged much more with it.

The second category consists of actual small experiments to quantitatively test assumptions or solutions themselves and remove risk. The most notable examples of these experiments are A/B, fake door, and smoke tests.
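
For the A/B case, here is a minimal sketch of evaluating such an experiment with a standard two-proportion z-test. The counts are invented, and which significance threshold to apply remains a team decision:

```python
from math import sqrt
from statistics import NormalDist

# Invented example numbers: users and conversions per variant.
a_users, a_conv = 1000, 118  # A: control
b_users, b_conv = 1000, 141  # B: the change under test

p_a, p_b = a_conv / a_users, b_conv / b_users
p_pool = (a_conv + b_conv) / (a_users + b_users)

# Standard two-proportion z-test for the difference in conversion.
se = sqrt(p_pool * (1 - p_pool) * (1 / a_users + 1 / b_users))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

In this invented example the uplift looks promising but is not statistically significant (p ≈ 0.13) – exactly the kind of result that should keep you from shipping on gut feeling alone.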

Finally, you can do quantitative, unmoderated usability testing. In this case, you send one or multiple tasks to a large number of users. By monitoring the data gathered in these tests, you can find usability issues and make or evaluate design decisions.
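
Here is a sketch of the kind of numbers such a test yields, again with invented results: the task completion rate and the median time on task.

```python
from statistics import median

# Invented results from one task in an unmoderated usability test.
results = [
    {"completed": True, "time_s": 42},
    {"completed": True, "time_s": 67},
    {"completed": False, "time_s": 120},
    {"completed": True, "time_s": 55},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
times = [r["time_s"] for r in results if r["completed"]]

print(f"Completion rate: {completion_rate:.0%}")
print(f"Median time on task (completed runs): {median(times)}s")
```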

5) Verify the correlation between leading and lagging indicators

Congrats, at this point you are already making data-driven product decisions! There is only one thing left to check. Remember, you are tracking a leading indicator (median duration on site) which – you think – correlates with the actual outcome (revenue) you are trying to achieve.

You need to verify that this is true. So, after some time running tests and getting results, double-check that the leading and lagging indicators are moving in unison.
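
As a minimal sketch of that check, you can compute the Pearson correlation between the two series over a period. The monthly numbers below are invented:

```python
from statistics import correlation  # Python 3.10+

# Invented monthly values for illustration.
median_duration_s = [180, 195, 210, 205, 230, 240]  # leading indicator
revenue_k = [52, 55, 60, 58, 66, 70]                # lagging outcome

# A Pearson r close to 1 supports the assumed link; a value near 0
# suggests you have been optimizing a number the business ignores.
r = correlation(median_duration_s, revenue_k)
print(f"Pearson r = {r:.2f}")
```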

Teams sometimes forget this because they are so focused on optimizing their one leading indicator. This can have catastrophic consequences because they spend energy on things that move the indicator but have no impact on the business. 

The difference between good and great teams 

I am convinced that one of the main differences between exceptional teams and the rest is the exceptional teams’ rigorous focus on reaching a quantified goal. Many teams have goals written down somewhere, but for various reasons – I think mainly company culture or bad goal definition – they are not ingrained in the day-to-day processes.

The really good teams are constantly looking at the KPIs they are tracking and rigorously prioritizing accordingly. This is a mindset. 

If you implement the five steps to data-driven decision making in product development, you are automatically doing that. Thus, you are already a step ahead. Along the way, you will actually have learned what data-driven decision making means in practice and can justifiably claim that you are making data-driven decisions. That’s more than most. 😁