You know the meeting.
The dashboards are on the screen. Someone points to one metric. Someone else opens a different report. Another person questions the attribution model.
Within minutes the discussion shifts from what the data says to which version of the data counts.
That’s the moment metrics stop behaving like evidence and start behaving like arguments.
For most of my career, I’ve helped companies build measurement systems.
Customer data platforms. Event tracking frameworks. Attribution models. Analytics pipelines that connect marketing systems all the way into CRM and revenue reporting.
The goal is always the same: give teams a clear view of how customers actually behave.
Where they come from.
What actions lead to conversion.
What happens after someone becomes a customer.
In theory, if you have enough of that data, decision making should become easier.
At least that’s the hope.
Over the years I’ve worked on projects that stitched together increasingly detailed views of the customer journey. We unified identities across systems, standardized event definitions across products and marketing tools, and connected analytics platforms to sales and CRM systems so we could see the entire path from first interaction to closed revenue.
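To give a feel for what “unifying identities” means in practice, here is a minimal sketch in Python. The record shapes and identifier names are invented for illustration; real identity resolution has far more edge cases, but the core idea is just grouping records that share any identifier:

```python
# Minimal identity stitching via union-find. Record shapes and
# identifier names here are invented for illustration.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def find(self, key):
        self.parent.setdefault(key, key)
        while self.parent[key] != key:
            self.parent[key] = self.parent[self.parent[key]]  # path halving
            key = self.parent[key]
        return key

    def link(self, a, b):
        self.parent[self.find(a)] = self.find(b)

graph = IdentityGraph()
records = [
    {"email": "ana@example.com", "device_id": "d-123"},  # web analytics
    {"device_id": "d-123", "crm_id": "crm-9"},           # CRM export
    {"email": "ben@example.com"},                        # email tool
]
for record in records:
    ids = [f"{field}:{value}" for field, value in record.items()]
    for other in ids[1:]:
        graph.link(ids[0], other)

# The first two records now resolve to one customer; the third stays separate.
assert graph.find("email:ana@example.com") == graph.find("crm_id:crm-9")
assert graph.find("email:ben@example.com") != graph.find("crm_id:crm-9")
```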
Once those systems start working, the depth of insight they unlock can be remarkable.
You can compare attribution models. First click, last click, multi-touch. You can analyze return on ad spend across campaigns and channels. You can trace how specific customer behaviors correlate with long-term retention.
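To see why the choice of model matters so much, here is a small sketch, with invented numbers, of how the same customer journey earns completely different credit under first click, last click, and a simple linear multi-touch split:

```python
# Three attribution models applied to one (invented) customer journey.

journey = ["paid_search", "email", "organic", "email"]  # touchpoints in order
revenue = 100.0

def first_click(touches, value):
    return {touches[0]: value}

def last_click(touches, value):
    return {touches[-1]: value}

def linear_multi_touch(touches, value):
    credit = {}
    for touch in touches:
        credit[touch] = credit.get(touch, 0.0) + value / len(touches)
    return credit

for model in (first_click, last_click, linear_multi_touch):
    print(f"{model.__name__}: {model(journey, revenue)}")

# first_click: {'paid_search': 100.0}
# last_click: {'email': 100.0}
# linear_multi_touch: {'paid_search': 25.0, 'email': 50.0, 'organic': 25.0}
```

Same journey, same revenue, three different winning channels. None of the models is wrong; they simply answer different questions.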
The picture becomes richer and richer.
And yet something interesting happens in the meeting where the decision actually gets made.
Despite all the dashboards, reports, and carefully constructed attribution models, the room still ends up debating.
The same scene plays out. Someone points to one metric. Someone else references a different report. Another person questions how the attribution model works. Eventually someone asks whether the conversion event is even defined correctly.
The conversation slowly shifts from the numbers themselves to the interpretation of those numbers.
At that point, the original goal of being “data-driven” quietly begins to slip away.
Over time I’ve come to believe this happens for a simple reason.
Most organizations don’t actually have a measurement system.
They have measurement infrastructure.
Lots of tools.
Lots of dashboards.
Lots of reports.
But underneath it all, the foundation is inconsistent.
Customer identities are fragmented across platforms. Events are defined slightly differently across systems. Attribution models change depending on who built the report. Conversion optimization teams improve local metrics that may or may not connect to broader business outcomes.
When the underlying measurement system is inconsistent, the numbers don’t resolve debates.
They extend them.
In that environment, metrics slowly stop behaving like evidence. They become arguments.
Different teams defend different dashboards. Reports become tools for persuasion rather than tools for learning. And ironically, the more data an organization accumulates, the easier it becomes to find numbers that support almost any position.
Eventually something subtle happens.
Decisions drift back to the same forces that existed before all the analytics systems were built.
Experience.
Confidence.
Hierarchy.
Not because people dislike data, but because the data itself cannot clearly resolve the question.
The solution is not simply adding more dashboards or analytics tools. It is building a shared measurement foundation underneath them.
A unified customer identity.
Consistent event definitions.
A shared taxonomy for behaviors and traits.
Clear attribution models that everyone understands.
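To make the middle two concrete, here is a minimal sketch of what a shared event definition could look like, validated at the point of ingestion. The field names and the allowed event list are assumptions for illustration, not a standard:

```python
# One shared event schema, validated at ingestion. The field names and
# the allowed event list are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timezone

ALLOWED_EVENTS = {"signup", "trial_start", "purchase"}  # the shared taxonomy

@dataclass(frozen=True)
class Event:
    customer_id: str      # the unified identity, not a per-tool ID
    name: str             # must come from the shared taxonomy
    timestamp: datetime   # always UTC, by convention
    source: str           # which system emitted the event

    def __post_init__(self):
        if self.name not in ALLOWED_EVENTS:
            raise ValueError(f"unknown event name: {self.name!r}")

Event("cust-42", "purchase", datetime.now(timezone.utc), "crm")  # accepted
# Event("cust-42", "bought_thing", ...) -> raises ValueError: a system
# can no longer quietly invent its own definition of conversion.
```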
When those systems exist, data becomes incredibly clarifying.
When they don’t, data becomes political.
Data does not automatically produce truth. It only reveals truth when the system generating it is coherent.
Without that foundation, teams don’t really learn from numbers.
They simply argue with them.
