Would you bet a year’s salary on the accuracy of your data?

Data is the lifeblood of many organisations. It drives strategy, informs investment, and reassures boards that decisions are evidence-based. But for something so vital, we rarely interrogate its quality with the same rigour we apply to financial reporting or compliance.
Many businesses fall into one of two traps. Some shy away from probing their data too deeply, fearing what flaws they might uncover. Others swing to the opposite extreme, treating data as an exact science – assuming every metric is a hard truth rather than a complex estimate. Both positions are dangerous.
The reality is that most data is messy, imperfect, and what I like to call consistently inconsistent. That doesn’t make it useless – far from it. Relative trends can still be powerful, provided we acknowledge their limits. The danger comes when inconsistent data is dressed up as precise measurement, or when directional signals are mistaken for absolutes.
The convenient belief test
Here’s something everyone in our industry has seen – when data tells a positive story, people accept it at face value. When the story is negative, suddenly the data is under fire – the sample is too small, the weighting questionable, the methodology flawed. Few areas show this more clearly than market share data. Countless hours have been spent debating whether the figures are ‘accurate’, rather than asking the harder question: what is this data actually telling us about direction and change?
This selective belief – trusting data when it flatters, distrusting it when it challenges – is one of the biggest cultural risks in how organisations consume evidence.
No single dataset holds the truth
One of the most persistent myths is that there’s a definitive dataset somewhere – the one source of truth that will end debate. It doesn’t exist. Every dataset has its strengths and weaknesses: some are broader but shallower, others narrower but richer. The real value comes not from choosing a winner, but from stitching multiple sources together into a coherent narrative.
Strong practice means understanding the reliability of each input, weighting it appropriately, and telling a story that acknowledges nuance. Data doesn’t deliver the answer – it frames the conversation. The better you are at weaving evidence, the more credible and actionable the story becomes.
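To make that concrete, here is a minimal sketch in Python of one simple convention for weighting inputs by reliability – inverse-variance weighting, where noisier sources contribute less to the blended view. The source names and figures are illustrative assumptions, not real data:

```python
# A minimal sketch of reliability-weighted blending. Source names, estimates
# and standard errors are illustrative assumptions, not real figures.

sources = {
    "panel_survey":  {"estimate": 23.4, "std_error": 1.8},  # broad but shallow
    "retail_audit":  {"estimate": 25.1, "std_error": 0.9},  # narrow but rich
    "web_analytics": {"estimate": 21.7, "std_error": 2.5},  # patchy coverage
}

# Inverse-variance weighting: the noisier a source, the less it contributes.
weights = {name: 1 / s["std_error"] ** 2 for name, s in sources.items()}
total = sum(weights.values())

blended = sum(s["estimate"] * weights[name] for name, s in sources.items()) / total

for name, w in weights.items():
    print(f"{name}: weight {w / total:.0%}")
print(f"Blended estimate: {blended:.1f}")
```

The point is not this particular formula – any sensible scheme works – but that the weights are explicit and defensible, rather than implicit in whichever source told the most convenient story.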
Bias isn’t rare, it’s commonplace
Every dataset carries bias – pretending otherwise only builds risk into the system. Quality means knowing where the bias sits, monitoring it over time, and demonstrating that mitigation steps work. Without this, bias isn’t an outlier – it’s the invisible hand shaping decisions.
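As a simple illustration of what ‘knowing where the bias sits’ can look like in practice – the categories and shares below are assumptions for illustration – each wave’s sample mix can be compared against a known benchmark and the gaps tracked over time:

```python
# A minimal sketch of bias monitoring: compare the sample mix against a
# known benchmark each wave. All categories and shares are illustrative.

benchmark = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g. census shares
sample    = {"18-34": 0.45, "35-54": 0.35, "55+": 0.20}  # this wave's respondents

for group, target in benchmark.items():
    gap = sample[group] - target
    if abs(gap) > 0.05:
        status = "over-represented" if gap > 0 else "under-represented"
    else:
        status = "within tolerance"
    print(f"{group}: sample {sample[group]:.0%} vs benchmark {target:.0%} ({status})")
```

Tracked wave on wave, gaps like these show whether mitigation – reweighting, targeted recruitment – is actually working, rather than being assumed to.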
Accuracy vs. interpretation
It’s easy to assume data fails because it’s ‘wrong’. More often, it fails because it’s misinterpreted. Averages mask subgroups, correlation is mistaken for causation, dashboards compress complexity into false clarity. High-quality practice means building safeguards – transparency notes, contextual explanations, and clear boundaries around what the data can and cannot support.
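To see how easily an average can mask subgroups, consider this toy sketch with made-up numbers: every segment improves, yet the headline figure falls, because the mix shifts towards the lower-scoring segment:

```python
# A toy illustration, with made-up numbers, of an average masking subgroups.
# Each tuple is (satisfaction score, share of the customer base).

segments = {
    "loyal buyers": {"last_year": (8.0, 0.7), "this_year": (8.4, 0.4)},
    "new buyers":   {"last_year": (6.0, 0.3), "this_year": (6.4, 0.6)},
}

def headline(period):
    # Mix-weighted average across segments for one period.
    return sum(seg[period][0] * seg[period][1] for seg in segments.values())

print(f"Last year: {headline('last_year'):.2f}")   # 7.40
print(f"This year: {headline('this_year'):.2f}")   # 7.20

# Both segments improved by 0.4 points, yet the headline average fell,
# because the base shifted towards the lower-scoring segment.
```

A dashboard showing only the headline number would report decline; the subgroup view tells the opposite, and truer, story.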
Audit yourself before others do
If your dataset were dropped into the hands of an independent auditor tomorrow, what would they question? Sampling representativeness? Weighting methodology? Treating this as a live exercise helps shift the mindset from ‘compliance’ to credibility management.
The salary test
Then there’s the sharpest test of all: if this dataset were used to set employee bonuses, would you put your name to it? Would you defend it in front of the people whose pay depends on it? If not, why are you letting it steer million-pound decisions on pricing, consumer-led innovation or market development?
From comfort blanket to compass
Too often, data becomes a corporate comfort blanket: something that looks neat, ticks compliance boxes, and reassures senior leaders that decisions are ‘evidence-based’. But evidence isn’t the same as truth.
Strong organisations don’t demand impossible perfection, nor do they ignore imperfection. They understand that data is rarely absolute, but that doesn’t mean it lacks power. Consistently inconsistent data, used wisely, provides a compass – one that can show direction, momentum and relative change.
The organisations that thrive are those that:
- Recognise that no single dataset holds the whole story
- Weave multiple sources into a narrative that balances strengths and weaknesses
- Use data to guide direction, not to claim flawless precision
- Ask tough questions of quality before decisions are made – not after.
The very best organisations go even further, asking themselves questions that keep their data culture honest:
- What’s the smallest signal we’d still stake our reputation on?
- When would we tell a client not to use this dataset for a decision that really matters?
- If a rival dataset told the opposite story, do we need to prove ‘ours’ is more reliable? Or is there a good reason why we see what we see?
In a follow-up article, I will further explore the types of questions that clients should be asking of suppliers – and suppliers should be asking themselves – to ensure data credibility.
Because ultimately, the question isn’t whether your data is perfect. It’s whether it is trusted, defensible, and strong enough to guide the future of your organisation.
So, would you bet a year’s salary or your reputation on it?
Alex Owens is a digital transformation leader who previously led Unilever’s global People Data Centre, focusing on transforming consumer and marketing