Honesty in an Untruthful World

AI-Generated Consumer Panels Tell Us Everything About Our Relationship with Reality
— Philip

I grew up in a house of secrets.

It took decades to understand what they were: my father's affairs, the constant house moves, the circumstances of my adoption. (yeah, I know, there's a potential Netflix series in there…)

But those years taught me something: people are spectacularly good at presenting confident lies when the truth feels too uncertain to admit.

Right now, confident lies have become our default setting. The US President operates on his own unique definition of ‘truth’. GB News viewers believe any clickbait stat, such as the claim that net migration is rising despite statistical evidence showing it isn't. And TikTok influencers with millions of followers suggest microwaving chicken with Nytol. (Just don't.)

Somewhere between these absurdities, we've lost our ability to distinguish between what's real and what simply sounds plausible.

I’ve just come across research from PyMC Labs and Colgate-Palmolive that inadvertently reveals something profound about this crisis of truth.

The research explores AI-generated consumer panels: synthetic respondents that marketing departments increasingly use to test products. I work in advertising. I use these tools. And I'm acutely aware that most of my industry peers treat them with a mix of suspicion and desperation. The desperation is about cost savings; the suspicion is that they're building a beige future of 'good enough' rather than exceptional work.

When researchers asked these AI systems to rate products on a standard five-point scale, something revealing happened.

The AI models did what meeting attendees everywhere do: they gave safe, confident-sounding answers that didn't reflect reality. They consistently chose '3', the comfortable middle ground, occasionally venturing to '2' or '4', almost never '1' or '5'.

The researchers had a problem: how do you get honest uncertainty out of a system optimised for confident responses? Their solution became Semantic Similarity Rating (SSR), a method that would accidentally reveal something profound about truth itself.

The Unexpected Solution

Instead of forcing a rating, the researchers let the models answer in free text first, explaining their thinking, then mapped those words onto the scale using semantic similarity. The results were striking: the AI achieved 90% of human test-retest reliability whilst producing realistic, human-like distributions of responses.
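For the curious, the core move is simple enough to sketch. Below is a minimal illustration of the idea in Python, using the sentence-transformers library; the anchor phrases, embedding model, and softmax temperature are my own assumptions for illustration, not the configuration from the paper.

```python
# A minimal sketch of the idea behind Semantic Similarity Rating (SSR):
# let the model answer in free text, then map that answer onto a 5-point
# scale by comparing it with anchor statements. Anchors, embedding model,
# and temperature here are illustrative assumptions, not the paper's setup.
import numpy as np
from sentence_transformers import SentenceTransformer

ANCHORS = [
    "I would definitely not buy this.",   # rating 1
    "I would probably not buy this.",     # rating 2
    "I might or might not buy this.",     # rating 3
    "I would probably buy this.",         # rating 4
    "I would definitely buy this.",       # rating 5
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def ssr_distribution(free_text: str, temperature: float = 0.05) -> np.ndarray:
    """Map a free-text answer to a probability distribution over ratings 1-5."""
    vectors = embedder.encode([free_text] + ANCHORS)
    answer, anchors = vectors[0], vectors[1:]
    # Cosine similarity between the answer and each anchor statement.
    sims = anchors @ answer / (
        np.linalg.norm(anchors, axis=1) * np.linalg.norm(answer)
    )
    # Softmax turns similarities into a distribution rather than a point score
    # (subtracting the max keeps the exponentials numerically stable).
    weights = np.exp((sims - sims.max()) / temperature)
    return weights / weights.sum()

probs = ssr_distribution("I'd probably buy it if the price is right.")
print({rating: round(float(p), 2) for rating, p in zip(range(1, 6), probs)})
# The expected rating keeps the uncertainty: a weighted average, not a forced '3'.
print("expected rating:", round(float(probs @ np.arange(1, 6)), 2))
```

The output is the point: a distribution across the whole scale, not a single digit pretending to be certain.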

This isn't merely technical innovation. It's a different way of thinking about truth.

By allowing uncertainty, by acknowledging that "I'd probably buy it if the price is right" genuinely sits somewhere between "likely" and "very likely", the system became more honest. More accurate. More useful.

The synthetic consumers using SSR "appear less prone to the positivity bias common in human surveys" and provided "a broader dynamic range" offering "more discriminative signals."

Read that again. 

The AI system, properly configured to express uncertainty, exhibits less bias than human respondents.

What started as a marketing research methodology had accidentally become something far more significant: a framework for honest uncertainty in an age of confident lies.

Slightly uncomfortable, but it looks like we've accidentally built AI that achieves 90%+ correlation with human judgment whilst simultaneously being more resistant to certain cognitive biases. If so, we've stumbled onto something that exposes the architecture of our current crisis.

The problem isn't that AI can't tell us what's true. It's that we've built information systems that reward confident assertions over careful analysis, whilst simultaneously degrading the cognitive capabilities required to distinguish between them.

Consider the infrastructure:

18% of adults in England are functionally illiterate.

62% of UK workers score below OECD cognitive flexibility benchmarks.

90% of UK primary school children experienced negative literacy impacts during COVID-19, with recovery still stubbornly slow.

When people believe conspiracy theories contradicting official statistics, they're not making sophisticated epistemological choices. They're operating below the literacy threshold required to evaluate evidence.

We're not getting smarter. We're just getting louder.

The Architecture of Honest Uncertainty

The SSR framework does something architecturally significant: it builds honesty about uncertainty into system design from the start.

Rather than asking "What is the answer?" it asks "What is the distribution of plausible answers given available evidence?"
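The difference is easy to see in miniature. Here's a toy example, with numbers invented purely for illustration:

```python
# Same evidence, two kinds of answer. The distribution here is invented
# purely to illustrate what a point answer throws away.
import numpy as np

ratings = np.arange(1, 6)
dist = np.array([0.05, 0.15, 0.30, 0.35, 0.15])  # hypothetical plausibilities

point = int(ratings[dist.argmax()])                     # "The answer is 4."
mean = float(dist @ ratings)                            # 3.4
spread = float(np.sqrt(dist @ (ratings - mean) ** 2))   # roughly 1.1

print(f"confident answer: {point}")
print(f"honest answer: {mean:.1f}, give or take {spread:.1f}")
```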

Imagine if our information systems worked this way.

Instead of headlines screaming "MIGRATION CRISIS" or "MIGRATION SOLVED", what if news reported: "Based on ONS data, 73% probability net migration is decreasing year-on-year, with 27% chance current methodology misses informal flows, though confidence varies significantly by measurement approach and timeframe"?

This isn't sexy. Won't go viral. But it's truthful in ways that actively resist weaponisation.

The research suggests we could have:

Distributional Journalism: Stop presenting singular narratives. Provide probability distributions across plausible interpretations. (I know, unlikely to happen)

Uncertainty Interfaces: Instead of binary fact checks ("FALSE" vs "TRUE"), provide distributional assessments with explicit confidence levels. (Nice to have, again, unlikely in the real world)

Truthful AI Assistants: Optimise for accurate representation of uncertainty rather than appearing confident. "I'm 60% confident in this answer, based primarily on X source, but three competing frameworks suggest Y might be more accurate in certain contexts." (Actually possible now, with the right prompts; see the sketch after this list)

Cognitive Literacy as Infrastructure: Companies investing in cognitive literacy programmes see 22% higher AI success rates. This isn't optional anymore. It's feckin' essential for the wellbeing of the nation!
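On that point about prompts: here's a rough sketch of what an uncertainty-first assistant might look like, using the OpenAI Python client as one example. The model name and prompt wording are my assumptions, not a tested recipe from the research.

```python
# A rough sketch of an uncertainty-first assistant. The system prompt
# wording and model name are illustrative assumptions.
from openai import OpenAI

SYSTEM_PROMPT = (
    "When you answer, state your confidence as a rough percentage, "
    "name the main sources or assumptions behind it, and flag any "
    "competing interpretation that could change the answer. "
    "Never present an uncertain claim as settled fact."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is UK net migration rising or falling?"},
    ],
)
print(response.choices[0].message.content)
```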

What this is really about

What the researchers discovered, perhaps accidentally, is an architectural pattern for truth-seeking in an untruthful world.

Growing up with secrets taught me that people create confident stories to fill gaps where truth should be. The genius of SSR isn't that it makes AI smarter. It's that it makes AI honest about what it doesn't know.

In a world drowning in confident lies, perhaps the most radical act is admitting when you're uncertain.

Final thought

In an age that rewards confident lies, it's never been more important to seek the truth, whether nailing an inconsequential flavour preference or calling out state-sponsored genocide.


What drives you to seek truth in an age that rewards confident lies? I'd be interested in your thoughts.