why data-driven everything is killing creativity
We’re Not in an Insight Economy — We’re in an Inference Crisis.
It’s funny. The more data we have, the less we seem to know what to do with it.
I say that as someone who’s spent years inside this world — building models, cleaning pipelines, training algorithms, re-explaining what correlation actually means to teams who just want a chart to back their pitch.
And I’ll say something I probably shouldn’t: The current state of “data-driven creativity” is neither creative nor particularly data-literate.
We’re not living in a golden age of insight. We’re living in a crisis of inference.
Because data isn’t neutral. Models aren’t objective. And most of what passes for “insight” is just a pattern someone already decided was valuable — repackaged in a new font.
what gets measured gets mimicked
Let’s be clear: the problem isn’t data. It’s how we use it. More specifically, it’s how we’ve let it replace the things it was supposed to support — taste, risk, vision, interpretation.
Marketing didn’t become smarter. It became safer. And in the process, we lost the ability to sit with ambiguity — to pursue the thing that feels promising even when it can’t be plotted.
There’s a name for this. It’s called model myopia — the tendency to assume that what the model can see is all there is to see. It’s rampant in finance, dangerous in medicine, and quietly corrosive in creative work.
Because when the brief starts with the dashboard, you’re no longer solving for meaning. You’re solving for measurability.
the dataset is never the whole story
Here’s what no one wants to admit in your average Monday-morning insights meeting:
Most attribution models are a guess. A useful one, maybe. But still a guess.
Predictive scores are shaped more by what’s been measured than by what matters.
And “the data” doesn’t say anything. People do — when they interpret it, filter it, choose what to report and what to leave out.
The dataset is always constrained by what can be measured. The model, by what can be modeled. Still, we treat their outputs as definitive — not because they are, but because they’re legible.
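To make “a guess” concrete, here’s a minimal sketch with an entirely hypothetical customer journey: two standard attribution rules, last-touch and linear, hand out completely different credit for the same path. Neither is wrong. Each just encodes a different assumption about what caused the purchase.

```python
# Minimal sketch: attribution is a modeling choice, not an observation.
# The journey below is hypothetical; the two rules are standard ones.

journey = ["search_ad", "social_post", "email", "purchase"]
touches = journey[:-1]  # every touchpoint before the conversion

def last_touch(touches):
    """Give all credit to the final touchpoint before purchase."""
    return {t: 1.0 if i == len(touches) - 1 else 0.0
            for i, t in enumerate(touches)}

def linear(touches):
    """Split credit evenly across every touchpoint."""
    return {t: 1.0 / len(touches) for t in touches}

print(last_touch(touches))  # email gets 100% of the credit
print(linear(touches))      # each touchpoint gets a third
```

Same data, two stories. Which one ends up in the deck is a choice someone made before the analysis began.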
Arrow warned us, but we didn’t listen
In 1951, economist Kenneth Arrow proved something that should haunt every strategist still clinging to the idea of a perfect scoring system.
His Impossibility Theorem, part of the work that later won him a Nobel Prize, shows that when you try to aggregate individual preferences into a collective decision, no rule can satisfy even a short list of perfectly reasonable fairness conditions at once, as soon as there are three or more options on the table. In other words: there’s no clean way to compress complexity without losing something that matters. And it doesn’t matter how clean your inputs are or how “neutral” your methodology claims to be.
Any attempt to reduce multiple, complex human judgments into a single coherent score — whether it's a creative brief, a campaign evaluation matrix, or your content performance dashboard — will leave something essential out.
So when marketing teams pretend they can weigh reach, originality, cultural relevance, and emotional tone with one composite KPI — they’re not being strategic. They’re reenacting a mathematical impossibility. With confidence.
We kill ideas too early. We greenlight only what looks familiar. We turn briefs into Frankenstein presentations: 20% benchmark, 30% platform constraints, 50% performance anxiety.
Averages aren’t alignment. And consensus isn’t creativity.
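To be fair, Arrow’s theorem is strictly about ranked preferences, not weighted scores. But the spirit of the problem is easy to demonstrate with a toy sketch (campaigns, metrics, and weights all hypothetical): the same two campaigns, scored on the same metrics, trade places depending on which “reasonable” weighting you choose.

```python
# Toy sketch: one composite KPI, two defensible weightings, two winners.
# All numbers are hypothetical.

campaigns = {
    "campaign_a": {"reach": 0.9, "originality": 0.2, "emotional_tone": 0.3},
    "campaign_b": {"reach": 0.3, "originality": 0.9, "emotional_tone": 0.8},
}

def composite(metrics, weights):
    """Collapse several separate judgments into a single score."""
    return sum(weights[k] * metrics[k] for k in weights)

weightings = {
    "media-first":    {"reach": 0.7, "originality": 0.2, "emotional_tone": 0.1},
    "creative-first": {"reach": 0.2, "originality": 0.5, "emotional_tone": 0.3},
}

for name, w in weightings.items():
    winner = max(campaigns, key=lambda c: composite(campaigns[c], w))
    print(name, "->", winner)
# media-first    -> campaign_a
# creative-first -> campaign_b
```

The composite KPI doesn’t summarize the judgment. The weights are the judgment.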
feedback loops aren’t foresight
The models themselves aren’t the enemy. Most were built to explore possibility — to surface patterns, simulate outcomes, reduce noise.
A model, in its purest form, is just a structured guess: a simplified abstraction of reality built to help us make decisions. It takes historical data, applies logic or math, and returns a score, a rank, a recommendation.
Useful? Sure. But also deeply limited.
Because models only see what they’re trained to see — and they optimize for what’s measurable, not necessarily what matters.
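Here’s what that limitation looks like at its smallest. A sketch of a toy scoring model (weights and feature names invented for illustration): anything that was never measured simply has no weight, so the model can’t even express a preference for it.

```python
# Toy scoring model: a "structured guess" learned from history.
# Weights and features are hypothetical; the point is what's missing.

WEIGHTS = {
    "click_rate": 0.6,
    "watch_time": 0.3,
    "shares":     0.1,
    # "cultural_relevance" was never logged, so it has no weight at all
}

def score(campaign):
    """Return a single rank-able number from the measured features."""
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in campaign.items())

risky_concept = {"click_rate": 0.2, "watch_time": 0.9, "cultural_relevance": 1.0}
proven_format = {"click_rate": 0.5, "watch_time": 0.5, "cultural_relevance": 0.1}

print(score(risky_concept))  # 0.39 -- the unmeasured feature vanishes
print(score(proven_format))  # 0.45 -- the familiar thing wins
```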
So when we hardwire marketing infrastructure around short-term wins, when we measure brand relevance in quarterly reports and reward teams for hitting performance KPIs that were never designed to measure cultural value, we don’t just misuse the model.
We shrink the entire creative space it was meant to support.
We stop asking what’s interesting and start asking what’s repeatable. We don’t track behavior to understand people — we track it to justify making the same thing again.
That’s how we trap ourselves.
Not failure — just one grand piece of theater, where repetition passes for relevance and everyone claps on cue.
so what now?
Use data to sharpen your curiosity — not to defend your assumptions. Build systems that tolerate the unproven, the unresolved, the not-yet-measurable. Listen for weak signals. Pay attention to the outliers. And above all, resist the instinct to resolve ambiguity before it’s had a chance to reveal something true.
The picture used as a header is uncredited, from an Informix ad in the November 1996 issue of the Italian computer magazine Bit: “Whatever form your data takes, there is only one technology that can handle it.”