Removing risk from product development

Using Minimum Viable Data (MVD) and Minimum Viable Experiment (MVE)

As a data consultant, I’m often asked, “How do we measure success?” But, far too often, it’s asked at the point of product launch, which reveals that many product managers aren’t sure how customers will respond.

In this all-too-common scenario, data becomes a “prove me right” metric, surfaced only after all the money, time, and effort have been spent. It’s a high-risk, backwards approach to product launch.

Last year, we published “Thinking beyond minimum viable product”, a reframing of MVP through a lens of intention and outcomes. There we argued that MVP is too often treated as a finish line, when it should come after understanding. The article outlined how the Minimum Viable Outcome (MVO) is a balance of desired business and customer outcomes that the project needs to achieve to be deemed successful. And, most importantly, it stressed not forgetting this foundation when developing the MVP.

As a follow-up to support the MVO strategy, we’ve been exploring a deeper question: how do teams de-risk product development before committing to build?

Minimum Viable Data (MVD) asks:

What’s the minimum amount of data we need to evaluate our hypothesis, while understanding our business and customer needs?

A strong data strategy doesn’t mean collecting all the data; it means identifying the right data. This could be quantitative (product analytics, usage data) or qualitative (user research, interviews, surveys), and should be derived from a mix of methods like UX testing, market sizing, or industry benchmarking.

The goal of MVD is to streamline your data inputs so that you give real-world context to your problems, and can answer the questions that matter before committing to build.
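As an illustration, a minimal MVD plan can be as simple as pairing each hypothesis with the smallest dataset and the decision rule that would settle it. Here’s a quick sketch in Python, where every hypothesis, data source, and threshold is hypothetical:

```python
# A minimal, hypothetical MVD plan: each hypothesis is paired with the
# smallest data source and the decision rule that would settle it.
mvd_plan = [
    {
        "hypothesis": "Customers abandon quotes because the form is too long",
        "data_needed": "Funnel drop-off by form step (product analytics)",
        "method": "quantitative",
        "decision_rule": "Redesign the form if any single step loses >30% of users",
    },
    {
        "hypothesis": "Users don't understand the pricing breakdown",
        "data_needed": "Five moderated usability sessions",
        "method": "qualitative",
        "decision_rule": "Rework the pricing page if 3+ of 5 participants misread it",
    },
]

for item in mvd_plan:
    print(f"- {item['hypothesis']}\n  needs: {item['data_needed']} ({item['method']})"
          f"\n  decide: {item['decision_rule']}")
```

Note how each entry names the minimum data, not all the data: one analytics cut or a handful of interviews is often enough to move a decision.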

MVD is about creating decision confidence, not perfect knowledge. But to get there, data professionals need a seat at the strategy table. We need to be proactive partners in shaping product strategy from day one.

Minimum Viable Experiment (MVE) asks:

What’s the minimum experiment we need to run to validate core assumptions or reduce risk around our product, users, or market?

A strong experiment strategy doesn’t mean testing just for testing’s sake; it’s about learning fast with minimal effort and cost. The aim is to generate just enough evidence to either validate or adjust course before heavy investment. MVE builds directly on your MVD: once you know the key data you need, you can design targeted experiments to collect it. This might take the form of a fake door test, a concierge service, a landing page, or an A/B test. Opt for whichever method gives you the fastest, lowest-cost insights.
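To make “just enough evidence” concrete, here’s a minimal sketch of reading the signal from a hypothetical fake door test, where two landing-page variants invite visitors to click a “Get my quote” button for a product that doesn’t exist yet. The traffic and click numbers are invented for illustration; the analysis is a standard two-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical fake door test: visitors who clicked "Get my quote"
# on two landing-page variants, before anything has been built.
visitors_a, clicks_a = 1200, 96    # variant A: current value proposition
visitors_b, clicks_b = 1180, 142   # variant B: new value proposition

ctr_a, ctr_b = clicks_a / visitors_a, clicks_b / visitors_b

# Two-proportion z-test on the pooled click-through rate.
pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (ctr_b - ctr_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"CTR A: {ctr_a:.1%}, CTR B: {ctr_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
# A small p-value here is the "strong signal" worth building on;
# a weak one says adjust course before any heavy investment.
```

A couple of thousand visitors and an afternoon of analysis cost a fraction of building the wrong product.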

The goal is simple: go to market with confidence, backed by validated assumptions, not just hope. You don’t need to "prove" success after launch if your MVE has already shown that there’s a strong signal before you build.  

A real-world example

Imagine you’re a global insurance company and, over decades, you’ve built best-in-class quote engines, servicing platforms, and claims systems, each tailored to local market needs. But this strength has become a weakness: you now have multiple versions of the same business, operating across regions with zero shared architecture, strategy, or journey logic.

What’s the cost? 

You're not innovating. You're firefighting. You're not improving journeys. You're patching them. So, leadership makes the call: it’s time to streamline, to rebuild smarter, to create seamless customer journeys and systems that support continuous evolution, not slow it down.

But the question is, where do you start?

If you follow the MVP route, your discovery often starts with questions like: “What do we have now?” or “How do we replicate these journeys everywhere as a catch-all?” But if you start with Minimum Viable Data (MVD), the first question is completely different: “Which journeys are actually working best for our customers right now?” 

That shift matters, because one path is based on assumptions and the other is rooted in evidence. MVD gives you the insight you need to make that call.
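As a minimal sketch of what answering that question can look like (all regions, journeys, and figures below are hypothetical), imagine pulling one comparable funnel metric from each regional platform:

```python
import pandas as pd

# Hypothetical journey analytics pulled from each regional platform:
# one row per (region, journey), with starts and completions.
df = pd.DataFrame({
    "region":      ["UK", "UK", "DE", "DE", "AU", "AU"],
    "journey":     ["motor_quote", "claim", "motor_quote", "claim",
                    "motor_quote", "claim"],
    "starts":      [52_000, 8_100, 34_000, 6_400, 21_000, 3_900],
    "completions": [39_500, 5_200, 29_600, 3_300, 14_100, 2_900],
})

df["completion_rate"] = (df["completions"] / df["starts"]).round(3)

# "Which journeys are actually working best for our customers right now?"
best = df.sort_values("completion_rate", ascending=False)
print(best[["region", "journey", "completion_rate"]].to_string(index=False))
```

Even a rough table like this replaces “we assume our journeys are equivalent” with evidence about which implementation to treat as the reference.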

Then, you can apply MVE thinking: “What are the riskiest assumptions stopping us from scaling this?” You can break this down into steps, like in the example below:

1. Validate that what works in one region works elsewhere
We did this for motor vehicle insurance journeys, using service design methods to combine user intervention points and system processes for the current journeys. Then we validated that model in another market through interviews. This helped us quickly spot similarities, differences, and potential blockers before making any changes.

2. Test journey or UI changes before rollout
We now had the evidence to say: “These journeys work for most; these would cause problems if rolled out globally.” We didn’t build a thing in this scenario, just prototyped and validated. No development time wasted, and clarity achieved.

3. Overlay with business data
We then layered this with business metrics, for example, “How many policies were processed in each region?” This gave us a view of scale and impact; a minimal sketch of this overlay follows below.
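As that sketch (with hypothetical journeys and volumes), you can weight each region’s journey performance by the policy volume it actually carries:

```python
import pandas as pd

# Hypothetical inputs: validated journey performance per region,
# plus the business metric of policies processed in each region.
journeys = pd.DataFrame({
    "region": ["UK", "DE", "AU"],
    "completion_rate": [0.76, 0.87, 0.67],
})
volumes = pd.DataFrame({
    "region": ["UK", "DE", "AU"],
    "policies_processed": [410_000, 260_000, 95_000],
})

# Overlay: scale journey performance by volume, so rollout priorities
# reflect business impact, not just rates.
overlay = journeys.merge(volumes, on="region")
overlay["policies_at_risk"] = (
    (1 - overlay["completion_rate"]) * overlay["policies_processed"]
).round(0).astype(int)

print(overlay.sort_values("policies_at_risk", ascending=False).to_string(index=False))
```

A journey with a mediocre completion rate but huge volume can matter far more than a poor journey that hardly anyone uses; the overlay makes that visible.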

In this example, MVD and MVE not only reduced the risk of rollout; they also built the confidence to invest where it mattered most.

Defining your Minimum Viable Outcome (MVO)

While one team experiments, project leads can work with stakeholders to define the business and customer outcomes beyond tech delivery. They can use what they’ve learned from MVD and MVE to shape an aligned, value-driven outcome.
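For illustration, such an outcome definition could be as lightweight as a set of agreed targets with a pass/fail check. Every name and number below is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical MVO: paired business and customer outcome targets,
# agreed before the MVP is scoped.
@dataclass
class OutcomeTarget:
    name: str
    target: float
    measured: Optional[float] = None  # populated once the product is live

    def met(self) -> Optional[bool]:
        return None if self.measured is None else self.measured >= self.target

mvo = [
    OutcomeTarget("Quote completion rate", target=0.80),                     # customer outcome
    OutcomeTarget("Share of policies on the shared journey", target=0.60),  # business outcome
    OutcomeTarget("Post-purchase CSAT, out of 5", target=4.2),              # customer outcome
]

for t in mvo:
    status = "not yet measured" if t.met() is None else ("met" if t.met() else "missed")
    print(f"{t.name}: target {t.target} -> {status}")
```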

These metrics then become the North Star goal, so you're not just shipping a feature, but solving a problem. 

And then you get more than just confidence in the solution: you get alignment on the outcomes it needs to deliver.

Finally, it’s time to define the MVP and get to building. You’ll arrive at this phase knowing you’ve left no stone unturned and no question unanswered. Your teams will be more confident in the product they’re building and won’t waste time on endless iterations or pivots to the product plan.