Synthetic Consumers Are More Honest Than Real Ones

I grew up in a house of secrets.

It took decades to understand what they were: my father's many affairs, the circumstances of my adoption (yeah I know, there's a potential Netflix series in it).

But it did teach me something useful for a future strategist: people are spectacularly good at presenting confident lies when the truth feels too uncertain to admit.

Currently there is a heated debate about the merits, or lack thereof, of AI-generated consumer panels and the insights they produce.

But fascinating new research (Link in comments) from PyMC Labs and Colgate-Palmolive suggests we're looking at it entirely wrong.

The research: https://arxiv.org/abs/2510.08338

Instead of forcing AI into the prison of 1-10 rating scales, the researchers let it think in natural language first. Explain its reasoning. Express uncertainty. Only then map that language to a distribution of possibilities.

The results? Around 90% of human test-retest reliability. More realistic responses. And crucially, less prone to the positivity bias that plagues traditional research.

As a dyslexic, this resonates deeply. My brain has never trusted neat numerical gradients. When you ask me to rate something 1-10, I'm already translating complex, contradictory feelings into an arbitrary system that feels fundamentally dishonest.

The number is a lie we agree to tell because it's easier than expressing nuance.

The AI, when freed from that constraint, was more honest. More critical. More useful.

Think about that. The synthetic consumer, when allowed to be uncertain, gave better insight than humans trained to be agreeable in focus groups or rushing through an online survey.

This isn't a research shortcut. It's potentially an upgrade.

Faster, cheaper, AND more honest insight. The entire "synthetic data is a cheat" argument collapses when the synthetic version is more truthful than the original.

Death of positivity bias. Imagine pre-testing creative and getting feedback that isn't softened by British politeness or American enthusiasm. That's not just different data—it's better data that drives better insights.

True qualitative depth at quantitative scale. We've always had to choose. This method suggests we might not have to anymore.

I love the irony. While the world panics about AI-driven misinformation, our industry has discovered that the same technology, used differently, might actually get us closer to truth.

After 20+ years in strategy, most of it learning from spectacular failures (Google "Hicklin Slade Sharon Bridgewater" if you fancy a laugh), I've learned this: comfort with uncertainty beats confident ignorance.

Every. Single. Time.

Question for you: What other “AI cheats” in our industry are actually just better methods we're too nervous to trust?

Honesty in an Untruthful World

AI-Generated Consumer Panels Tell Us Everything About Our Relationship with Reality
— Philip

I grew up in a house of secrets.

It took decades to understand what they were: my father's affairs, the constant house moves, the circumstances of my adoption (yeah I know, there's a potential Netflix series in there…).

But those years taught me something: people are spectacularly good at presenting confident lies when the truth feels too uncertain to admit.

Right now, confident lies have become our default setting. The US President operates on his own unique definition of 'truth'. GB News viewers believe any clickbait stat, like net migration rising despite statistical evidence showing it isn't. And TikTok influencers with millions of followers suggest microwaving chicken with Nytol. (Just don't.)

Somewhere between these absurdities, we've lost our ability to distinguish between what's real and what simply sounds plausible.

I’ve just come across research from PyMC Labs and Colgate-Palmolive that inadvertently reveals something profound about this crisis of truth.

The research explores AI-generated consumer panels, synthetic respondents that marketing departments increasingly use to test products. I work in advertising. I use these tools. And I'm acutely aware that most of my industry peers treat them with a mix of suspicion and desperation: desperation for the cost savings, underpinned by the suspicion that they're building a beige future of 'good enough' rather than exceptional work.

When researchers asked these AI systems to rate products on a standard five-point scale, something revealing happened.

The AI models did what meeting attendees do everywhere: they gave overly confident answers that didn't reflect reality. They consistently chose '3', the safe middle ground, occasionally venturing to '2' or '4', almost never '1' or '5'.

The researchers had a problem: how do you get honest uncertainty out of a system optimised for confident responses? Their solution became Semantic Similarity Rating (SSR), a method that would accidentally reveal something profound about truth itself.

The Unexpected Solution

Instead of forcing ratings, researchers let models express themselves textually first, explain their thinking, then mapped those words to ratings using semantic similarity. The results were striking: the AI achieved 90% of human test-retest reliability whilst producing realistic, human-like distributions of responses.
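To make the mechanics concrete, here's a minimal sketch of how a semantic-similarity mapping can work. To be clear, this is my illustration, not the paper's exact implementation: the sentence-transformers library, the "all-MiniLM-L6-v2" model, the anchor wordings and the softmax temperature are all my assumptions.

```python
# Minimal sketch of a Semantic Similarity Rating (SSR) style mapping.
# Assumptions (mine, not the paper's): anchor wordings, embedding model
# and temperature are illustrative only.

import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical anchor statements for a 5-point purchase-intent scale.
ANCHORS = {
    1: "I would definitely not buy this product.",
    2: "I probably would not buy this product.",
    3: "I might or might not buy this product.",
    4: "I would probably buy this product.",
    5: "I would definitely buy this product.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")

def ssr_distribution(free_text: str, temperature: float = 0.05) -> dict[int, float]:
    """Map a free-text consumer response to a probability distribution over ratings."""
    texts = [free_text] + list(ANCHORS.values())
    embeddings = model.encode(texts, normalize_embeddings=True)
    response_vec, anchor_vecs = embeddings[0], embeddings[1:]

    # Cosine similarity between the response and each anchor
    # (vectors are already normalised, so a dot product is enough).
    sims = anchor_vecs @ response_vec

    # Softmax over similarities: a smaller temperature concentrates
    # probability mass on the closest anchor.
    weights = np.exp((sims - sims.max()) / temperature)
    probs = weights / weights.sum()
    return {rating: float(p) for rating, p in zip(ANCHORS, probs)}

reply = "I'd probably buy it if the price is right, though I'm not completely sure."
dist = ssr_distribution(reply)
print({k: round(v, 3) for k, v in dist.items()})          # mass spread across 3-5, not one point
print("expected rating:", round(sum(k * v for k, v in dist.items()), 2))
```

The point of the sketch is the output: a distribution of plausible ratings rather than one forced integer, which is exactly where the "broader dynamic range" the researchers describe comes from.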

This isn't merely technical innovation. It's a different way of thinking about truth.

By allowing uncertainty, by acknowledging that "I'd probably buy it if the price is right" genuinely sits somewhere between "likely" and "very likely", the system became more honest. More accurate. More useful.

The synthetic consumers using SSR "appear less prone to the positivity bias common in human surveys" and provided "a broader dynamic range" offering "more discriminative signals."

Read that again. 

The AI system, properly configured to express uncertainty, exhibits less bias than human respondents.

What started as a marketing research methodology had accidentally stumbled onto something far more significant: a framework for honest uncertainty in an age of confident lies.

Slightly uncomfortable, but it looks like we've accidentally built AI that achieves around 90% of human test-retest reliability whilst simultaneously being more resistant to certain cognitive biases. If so, we've stumbled onto something that exposes the architecture of our current crisis.

The limitation isn't that AI can't tell us what's true. It's that we've built information systems rewarding confident assertions over careful analysis, whilst simultaneously degrading the cognitive capabilities required to distinguish between them.

Consider the infrastructure:

18% of adults in England are functionally illiterate. 

62% of UK workers score below OECD cognitive flexibility benchmarks. 

When people believe conspiracy theories contradicting official statistics, they're not making sophisticated epistemological choices. They're operating below the literacy threshold required to evaluate evidence.

90% of UK primary school children experienced negative literacy impacts during COVID-19, with improvements still stubbornly low.

We're not getting smarter. We're just getting louder.

The Architecture of Honest Uncertainty

The SSR framework does something architecturally significant: it builds honesty about uncertainty into system design from the start.

Rather than asking "What is the answer?" it asks "What is the distribution of plausible answers given available evidence?"

Imagine if our information systems worked this way.

Instead of headlines screaming "MIGRATION CRISIS" or "MIGRATION SOLVED", what if news reported: "Based on ONS data, 73% probability net migration is decreasing year-on-year, with 27% chance current methodology misses informal flows, though confidence varies significantly by measurement approach and timeframe"?

This isn't sexy. Won't go viral. But it's truthful in ways that actively resist weaponisation.

The research suggests we could have:

Distributional Journalism: Stop presenting singular narratives. Provide probability distributions across plausible interpretations. (I know, unlikely to happen)

Uncertainty Interfaces: Instead of binary fact checks ("FALSE" vs "TRUE"), provide distributional assessments with explicit confidence levels. (Nice to have, again, unlikely in the real world)

Truthful AI Assistants: Optimise for accurate representation of uncertainty rather than appearing confident. "I'm 60% confident in this answer, based primarily on X source, but three competing frameworks suggest Y might be more accurate in certain contexts." (Actually possible now, with the right prompts; a rough sketch follows this list.)

Cognitive Literacy as Infrastructure: Companies investing in cognitive literacy programmes see 22% higher AI success rates. This isn't optional anymore. It's feckin' essential for the wellbeing of the nation!
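On that "right prompts" point: here's a rough, hypothetical sketch of what an uncertainty-first assistant prompt and reply might look like. The prompt wording, the JSON shape and the field names are my assumptions, not any vendor's spec; the numbers echo the migration example above and are illustrative only.

```python
# A rough sketch of an "uncertainty-first" assistant prompt and reply parser.
# The prompt wording, JSON shape and field names are assumptions, not a standard.

import json

UNCERTAINTY_SYSTEM_PROMPT = """\
You are a research assistant. For every question:
1. Give your best answer.
2. State your confidence as a probability between 0 and 1.
3. List the main evidence and any competing interpretation.
Reply only with JSON: {"answer": str, "confidence": float,
"evidence": [str], "competing_views": [str]}.
"""

def parse_uncertain_reply(raw_reply: str) -> str:
    """Turn the model's JSON reply into a hedged, human-readable statement."""
    reply = json.loads(raw_reply)
    hedge = f"I'm about {reply['confidence']:.0%} confident: {reply['answer']}"
    if reply.get("competing_views"):
        hedge += f" (a competing view: {reply['competing_views'][0]})"
    return hedge

# Example of the kind of reply a model might return when sent the prompt above
# plus a question about net migration trends (illustrative numbers only).
example = json.dumps({
    "answer": "Net migration is most likely falling year-on-year.",
    "confidence": 0.73,
    "evidence": ["ONS quarterly estimates"],
    "competing_views": ["Current methodology may miss informal flows."],
})
print(parse_uncertain_reply(example))
```

The design choice is simply that the model is asked for a probability and a competing view up front, so the hedging is structural rather than an afterthought.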

What this is really about

What the researchers discovered, perhaps accidentally, is an architectural pattern for truth-seeking in an untruthful world.

Growing up with secrets taught me that people create confident stories to fill gaps where truth should be. The genius of SSR isn't that it makes AI smarter. It's that it makes AI honest about what it doesn't know.

In a world drowning in confident lies, perhaps the most radical act is admitting when you're uncertain.

Final thought

In an age that rewards confident lies, it's never been more important to seek the truth, whether nailing an inconsequential flavour preference or calling out state-sponsored genocide.


What drives you to seek truth in an age that rewards confident lies? I'd be interested in your thoughts.


The Age of Cognitive Inflation

Like currency flooding into an economy, we're experiencing what might be called cognitive inflation. The volume of content is exploding, but its actual value is plummeting.

Tell me this has not happened to you. You downloaded a beautifully formatted report. Stacked full of details and charts. Looks professional. Reads smoothly.

And yet…it's bland, colourless and absolutely feckin' useless.

Welcome to workslop, the new currency of cognitive inflation.

We're fooling ourselves that having access to so much more information means we now know so much more. In fact, the opposite is happening.

The Workslop Economy

A recent Harvard Business Review article identified a phenomenon it calls 'workslop': employees using AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers.

Damien Charlotin, legal expert and commentator, talks of AI-derived filler appearing in court documents "…in a few cases every day…"

Consider this: K&L Gates got censured and fined in May of this year. Top international law firm. Serious chops. The judge found their case riddled with AI fabrications. Nine out of 27 citations were bollocks. They corrected it. Submitted new documents. Six more AI-derived mistakes. This is a firm charging each individual lawyer (in a team of 10) out at north of $2,500 an hour.

When someone sends AI-generated content, they're not just using a tool. They’re transferring cognitive burden to recipients who must interpret, verify, correct, or redo the whole thing.

The numbers are sobering. 42% report having received workslop in the last month. Half view colleagues who send it as less trustworthy. Whether it's McKinsey or your rivals, I cannot believe you have not come across this sudden flood of reports, white papers and think pieces that reek of…nothing…just soulless words, neatly placed together.

Workslop perfectly encapsulates cognitive inflation. More content. Less meaning. The signal-to-noise ratio is collapsing. Trust in any information becomes impossible without extensive verification, which, frankly, nobody appears to have the time for.

Literacy: The Line Between Power and Danger

Recently I read a brilliant article by James Marriott: 'Dawn of the post-literate society’ (link in comments). His research is stark. Reading for pleasure has fallen by 40% in America in the last twenty years. In the UK, more than a third of adults say they've given up reading altogether.

Children's reading skills have also been declining yearly post-pandemic. Experts link this decline not only to decreased traditional reading but also to reduced critical thinking and comprehension.

The National Literacy Trust reports reading among children is now at its lowest level on record.

It’s not just about books.

But about how we think.

The world of print, Marriott argues, is orderly, logical and rational.

Books make arguments, propose theses, develop ideas. "To engage with the written word," the media theorist Neil Postman wrote, "means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning."

"The world after print increasingly resembles the world before print,"

Marriott writes. As our cognitive capabilities diminish, we're creating AI systems in our increasingly confused image. We're building the dumbing-down of the user into the foundations of our future technologies.

As books die, we are in danger of returning to pre-literate habits of thought. Discourse collapsing into panic, hatred and tribal warfare. The discipline required for complex thinking is eroding.

This is the environment those on the right are thriving in. They flourish amongst populations with limited capacities for inference-making and reasoning. For more on this, see the excellent 'Segmentation of the Far Right' from Steven Lacey / The Outsiders (link in comments).

I heard a great podcast this week: Geoffrey Hinton, one of the architects of today's AI, in conversation with Steven Bartlett (link in comments).

Hinton warns: We're making ourselves stupider before we understand how to use AI safely. That cognitive decline will be baked into the next generation of AI systems we build.

It's that last bit that's most worrying.

Hinton told investors at a recent conference that instead of forcing AI to submit to humans, we need to build 'maternal instincts' into AI models. Because if we don't, the temptation amongst bad actors is to do the opposite. Hinton's nightmare scenario isn't just more cyberattacks, though those are crippling institutions worldwide. It's weaponised AI creating Covid-style viruses.

This is not just about having the wrong tools or using them badly. It's about losing the ability to question the value of what is being produced.

The World Economic Forum identifies curiosity, creative thinking, and flexibility as core skills for future workplace success. It's intriguing how we've arrived at the need to really value basic human traits in our most technologically advanced era of AI.

Competency without curiosity creates professional dead ends. The most valuable professional asset isn't knowing everything. It's maintaining the discipline to approach everything as if you know nothing.

Yet literacy, the foundation of that discipline, is collapsing. Many point to it beginning with smartphones, heightened during Covid lockdowns, and now supercharged by AI. Although, to that last point, many amazing educators are harnessing AI in a desperate attempt to reverse this trend.

But universities are now teaching their first truly 'post-literate' cohorts.

"Most of our students are functionally illiterate,"

according to one despairing university entrance assessment.

Productivity Everywhere, but nowhere

This is cognitive inflation's cruelest joke. We have more tools than ever. More information than ever. More 'productivity' software than ever. Yet productivity just... stalls. (In the UK, we've managed a princely +0.5% annually for the last decade… stunning, you'll agree.)

When efficiency becomes the only goal, the outcome is always the same: a world of increasing activity and decreasing value. We mistake motion for progress. Busyness for productivity. Access for understanding.

Companies are investing billions in AI tools. Most are not seeing any kind of measurable return. (Other than that their headcount has dropped, and they can't understand why their Glassdoor reviews have done the same.) The MIT Media Lab found that 95% of organisations have yet to see measurable return on their investment in AI tools. So much activity, so much enthusiasm, so little return. And yet…

…our capacity to think deeply, to read carefully, to reason logically, erodes. We're not getting smarter. We're just getting louder.

What Comes Next

Without intervention, we're heading toward a workplace where nobody trusts anyone else's work, where verification becomes impossible, where cognitive capacity wastes away like an unused muscle. We become like medieval peasants, but with better WiFi.

‘Pilots’ or ‘Passengers’ is a great analogy for how workers are currently using AI. Pilots are navigating their own course, with AI as an instrument of work. Passengers are, well, sitting back and letting AI take them where they need to be. (See more at: BatterUp Labs / Stanford University.)

Zoe Scaman has just written a brilliant article about helping major organisations roll out AI strategy and seeing passenger behaviour everywhere (read 'The Great Erosion', link in comments). She talks of people outsourcing the twenty percent of work that's genuinely hard thinking, keeping the eighty percent that's just formatting and execution. Backwards. Catastrophic. Because the twenty percent is where you build the muscles. That's not theory. That's happening in organisations today. The cognitive damage is immediate. The supposed productivity gains? Years away, if they arrive at all.

If you're a leader, your most important job isn't to buy more AI tools. It's to build in space and time where your team strengthens the thinking muscles that AI is surreptitiously stealing.

Most importantly, we all need to recommit to the hard work of thinking.

Yes, that also includes reading.

And the discipline of following an argument, weighing evidence, changing our minds when presented with better information. (Again, see rise of the far right in this context)

This challenge isn't technical. It's rational. The question isn't whether we can build better AI. It's whether we can remain capable of using it wisely.

In an age of intellectual inflation, attention is the scarcest resource. Not information.

It’s the ability to actually focus, to think deeply, to spot signal in all the noise.

Our futures belong not to those with access to the most information, but to those who retain the ability to think about it clearly.

We've built an economy where information is infinite yet attention is seemingly worthless.

Where everyone has an opinion but nobody has time to think.

Where our tools grow exponentially more powerful while our minds grow…well, let's be honest, weaker.

This. Can. Only. End. Badly.

The answer depends on whether we're willing to do the one thing our AI-saturated culture makes most difficult: slow down, focus, and think.

We all need to (metaphorically) get up, go for a walk and explore the ideas in our own heads.

We need to celebrate the illogical, lateral, properly weird thinking that only humans do well. The very thing AI can't replicate and, worse, is teaching us to devalue.

Because the only way out is through the hard work of becoming human again.


Conceived and written by a dyslexic human, me. Made readable by Claude.ai.

This whole article wouldn't have been possible without the amazing inspiration of the following writers:

‘AI-Generated “Workslop” Is Destroying Productivity’

https://tinyurl.com/43kc7m99

‘The dawn of the post-literate society’

https://tinyurl.com/ytdhhns3

Geoffrey Hinton in conversation with Steven Bartlett

https://tinyurl.com/43hsn9ns

‘Segmentation of the Far Right’

https://tinyurl.com/dtfhu28m

‘Pilots & Passengers: The next Evolution in management’

https://tinyurl.com/yhu5e2m3

Zoe Scaman ‘The Great Erosion’

https://tinyurl.com/bdcw3m6k

Who Controls Britain's AI Training Data Controls Britain's Future:

The £42 Billion Question of Intellectual Sovereignty

Elon Musk recently said "the cumulative sum of human knowledge" is becoming exhausted, requiring AI systems to "retrain human knowledge" by deleting "garbage" and introducing "divisive facts." This is not a technical process.

He was announcing a political programme.

One that Britain, with exquisite timing, has this week agreed to be part of.

This week's £42 billion UK-US "Tech Prosperity Deal" commits British infrastructure to American AI systems led by Microsoft, Nvidia, AWS, and Google. On paper, a coup. In practice, something rather more troubling: the systematic outsourcing of how Britain will learn to think.

For a deeper explanation of the full ramifications, read Zoe Scaman's brilliant 'Investment or Surrender' essay: https://substack.com/inbox/post/173643575

The stakes aren't merely economic. They're epistemic. Those who control training data don't just shape what AI knows. They determine what it considers knowable.

In June, Elon Musk's AI Grok accurately responded that right-wing political violence in the US had been more frequent and deadly since 2016. Elon called this a "major fail," adding his team were "working on it" to change the narrative in the AI's responses.

As Orwell warned with uncomfortable prescience:

"Who controls the present, controls the past; who controls the past controls the future"

In our case, those who control the (present) training data control how we now, and in the future, interpret reality.

Britain's Vulnerable Position

Recent studies by OpenAI (ChatGPT) and Anthropic (Claude) tell an uncomfortable story about Britain's AI readiness.

While 76% of UK professionals express excitement about AI, only 44% receive organisational support. Just 22% of public sector workers report using generative AI, despite high awareness. Meanwhile, our government appears content to hand the keys to US hyperscalers.

This matters because the research reveals a troubling global pattern in usage to date: analysis of 1.5 million conversations across 195 countries found adoption growth in low-income nations running four times faster than in wealthy countries.

Yet interactions remain disappointingly mundane: 49% asking questions, 40% completing tasks, just 11% creative expression.

Most people use AI as an advisor, not a collaborator.

Anthropic's usage data shows "directive" conversations, those where users delegate wholesale rather than iterate, jumped from 27% to 39% in just eight months. Automation is displacing augmentation. The difference isn't academic: augmentation builds cognitive muscle, automation risks enfeebling it.

Britain sits dangerously in the middle. Excited but unsupported, aware but unprepared.

The Political Economy of "Truth"

The Trump administration has already mandated that federally-funded AI systems edit out training data on climate change, diversity initiatives, and critical race theory. This isn't content moderation. It is the rewriting of reality at the foundational layer where models learn to interpret the world.

Training data isn't neutral infrastructure.

It's political architecture. Every dataset embeds assumptions about what constitutes knowledge, whose voices matter, which perspectives deserve preservation.

When UK companies build workflows on US-controlled models, they're not just adopting tools. They are accepting cognitive frameworks that are being politicised. Plus let's not forget, these are people who genuinely believe tea tastes better microwaved.

The irony is exquisite. In our anxiety about technological dependence, we've sleepwalked into intellectual dependence instead.

Britain's brilliant tradition of contrarian thinking: from sceptical finance and politics to world-leading creative industries. All of this is about to be processed through algorithmic systems that consider such independence and weirdness outside their parameters.

Which brings us to an uncomfortable question: if Britain's workforce is being shaped by American cognitive frameworks, what happens to the very thing that makes us economically competitive?

The Creative Advantage: Why Britain Still Holds the Trump Cards

Yet here's the thing about intellectual dependency that some are missing: it's only permanent if you accept it as such.

Britain may have lost its manufacturing heartland, but what remains is something far more valuable in an AI-driven world. A population that excels at the one capability that machines, for all their computational brilliance, consistently struggle with: creation.

Not just the obvious creative industries where Britain punches absurdly above its weight. But the broader cultural capacity for lateral thinking, for connecting disparate ideas in ways that confound conventional wisdom.

It's the same cognitive restlessness that produces everything from Daisy May Cooper scripts to breakthrough financial instruments to revolutionary vacuum cleaners.

Each unified by the ability to look at established patterns and ask, with genuine curiosity,

"But what if we did it differently?"

This isn't flag shagging. It's a strategic observation.

The most successful AI implementations globally aren't happening in places with the most compute power—they emerge where human creativity meets algorithmic capability. Where augmentation thrives over automation.

Beyond Binary Choices

The £42 billion deal need not be Britain's intellectual surrender. It can be a springboard, but only if we resist the temptation to make it exclusive. The future belongs to networks, not hegemonies.

Britain's strategic advantage lies in cultivating AI relationships that span continents, not signing exclusive deals with individual superpowers. European AI initiatives offer different philosophical approaches to training data sovereignty. Asian AI networks bring fundamentally different approaches to everything from, yes, copyright to creativity and security.

The smart move isn't choosing sides. It is becoming the place where different AI traditions cross-pollinate most productively.

Consider this: while America optimises for scale and China optimises for speed, Britain could optimise for synthesis. Becoming the place where diverse AI approaches combine in ways that none could achieve individually.

The Human Capital Imperative

But none of this matters if we continue treating AI adoption as primarily a technical challenge rather than a cognitive development opportunity. The literacy gaps that undermine our AI readiness aren't just educational failures. They’re economic emergencies.

Every percentage point improvement in workforce cognitive flexibility translates directly into more effective AI collaboration and economic output. Every investment in curiosity-driven learning compounds into strategic economic advantage.

The countries that will lead in the AI era aren't necessarily those with the biggest data centres. They're the ones with the most cognitively adventurous populations.

This means treating AI adoption as literacy development, not tool training. Teaching people to think with AI, not just use it. One approach produces compliance. The other produces capability.

The British Path Forward

Britain's choice isn't between technological sovereignty and technological dependency. It's between intellectual agency and intellectual automation.

We can accept pre-packaged cognitive frameworks designed by others, or we can insist on retaining the capacity to think differently. We can optimise our workforce for efficiency, or we can cultivate the kind of cognitive restlessness that turns constraints into opportunities.

The most profound form of sovereignty isn't controlling the infrastructure. It’s retaining the ability to use that infrastructure in ways its designers never anticipated. To take US computing power, European regulatory frameworks, Asian agile models, and British creative thinking, and synthesise something entirely new.

After all, intellectual independence has never been about isolation. It's been about maintaining the confidence to think for ourselves, regardless of whose tools we're using.

Again Orwell, only this time as a strategic guide. If those who control the present control the past, and those who control the past control the future, then our task is clear: ensure that Britain's future remains authored by British minds, even when shaped by American algorithms.

The question that will define Britain's next decade isn't whether we'll use American AI systems. It's whether we'll use them to augment British thinking, or let them automate it away.

Every company, every department, every worker now choosing how to engage with AI is making that choice.

The aggregate of those decisions will determine whether Britain remains intellectually sovereign or becomes cognitively colonised. (And let's face it, Britain knows a lot about colonising things.)

But back to the main point about Intellectual Sovereignty.

Our choice, in this, remains entirely ours.

For now.

Failing Gloriously

Where should I start?

Well, there was the day we discovered our Financial Director had stolen £2.4 million…

…money we didn't even know we’d earned

which rather tells you everything about our business acumen at the time.

Google "Hicklin Slade/Sharon Bridgewater" if you fancy a laugh at our spectacular financial naivety.

I'm now reminded of my past as a few of my advertising contemporaries are heading towards career exits.

Some through lucrative business sales. Others via the more pedestrian route of diligent career cultivation and pension optimisation.

My current situation?

Very much "none of the above."

Yet I'm constantly drawn to advertising's fast-evolving intellectual challenges.

I repeatedly gambled away security and guaranteed rewards for those delicious "what if?" moments.

What I sacrificed in monetary accumulation, I gained exponentially in experiential knowledge:

  • True, I’ve occasionally accepted roles patently unsuited to my capabilities.

  • I’ve invested in pitches, people, and companies that any rational investor would have avoided like the plague

  • I’ve co-founded three startups that provided not only exhilarating highs but eye-watering ways to both gain and lose spectacular amounts of money

Success rate? Patchy at best.

Satisfaction rate? Unprecedented.

Learning dividend? Immeasurable.

The beautiful irony? This has taught me more about due diligence, trust, and operational oversight than any theoretical training programme possibly could.

AI and the comfort with discomfort.

The advertising industry now faces unprecedented AI-driven transformation.

What does success actually mean in an industry where the fundamentals are being rewritten?

Tradition says: accumulated wealth, linear career progression, secure employment. This may all prove spectacularly inadequate for navigating an algorithmic future of unknown possibilities.

My zigzag career wasn't planned, but it cultivated something invaluable: comfort with discomfort.

This isn't the tired Silicon Valley mantra of "fail fast, fail often." Debunked as expensive posturing.

This is something different: systematic comfort with uncertainty. Navigating ambiguity without panic. Rebuilding without losing curiosity.

Here's why this matters now: AI doesn't just automate tasks—it fundamentally alters how we approach problems. Those thriving with AI aren't those with the most technical knowledge. They're those comfortable with not knowing what comes next.

My expensive education in spectacular miscalculation accidentally prepared me for exactly this moment: where strategic thinking means dancing with algorithmic uncertainty rather than controlling predictable outcomes.

It’s sort of ironic. The very career choices that looked suicidal may have been the most sophisticated preparation available for our AI-transformed industry.

Despite logic and good sense saying I should have been long gone.

Skills vs Mindset in the AI Era

At 60, I still delight in approaching each new challenge with a beginner's mindset.

It's what researchers term "deprivation sensitivity": the psychological hunger for understanding, the alluring draw of the 'why?'

My career trajectory resembles a rather haphazard game of pinball, not just from design to advertising, but from B2C to B2B, and steps from: Creative Director to Strategist to Agency founder to Investor to Client, etc.

Each ricochet taught me something. Curiosity consistently trumps credentials.

Recent studies reveal 81% of employees acknowledge AI fundamentally alters required workplace competencies. The World Economic Forum identifies curiosity, creative thinking, and flexibility as core, central skills for future work place success.

Intriguing, how we've arrived at valuing really ancient human traits in our most technologically advanced AI era.

Contemporary workplace innovation increasingly stems from curiosity-driven exploration rather than process adherence. Multiple studies demonstrate that deprivation sensitivity correlates more strongly with adaptive performance than technical proficiency alone.

My zigzag career wasn't planned, but it cultivated something invaluable: comfort with discomfort. Each pivot demanded simultaneous hunger for learning and disciplined competency development.

The difference now is the urgent requirement for concurrent curiosity and capability, kept in perpetually refreshed mode.

The limiting factor isn't technical constraints – it's our own ambition and intellectual appetite.

Competency without curiosity will create professional dead ends.

Now it's about cultivating a beginner's mindset as a core competency. Develop systematic comfort with uncertainty.

Crucial point: Practice intellectual humility alongside technical skill acquisition.

You are NOT an AI expert. Be honest: we barely grasp the AI workplace implications of the next six months, let alone the rest of the decade. Look up Rana Adhikari at the California Institute of Technology, who recently found some AI models designing experiments that defy human expectations, sometimes bypassing controls (article in Wired, among others).

So it's clear, the most valuable professional asset isn't knowing everything.

It's maintaining the discipline to approach everything as if you know nothing.

The challenge now is how to institutionalise intellectual curiosity within traditional competency frameworks.

The Career Opportunities of a Bot

As a teenager I was enthralled by the music of the Clash. So it wasn't out of character when I found myself unconsciously humming the words to 'Career Opportunities', a blistering track from 1977, particularly the refrain "…Career opportunities, the ones that never knock…"

The occasion? Another AI-powered knock-back for a role.

The irony is up to 11.

The very machines I champion are being used to systematically filter out precisely the cross-pollinating professionals who can add such value to organisations dealing with the effects of AI.

Ladder, meet bridge

The traditional career ladder assumed a stable world where skills depreciated slowly and industries remained predictable. That world is gone.

The most valuable professionals aren't climbing ladders; they're building bridges between disciplines, industries and crafts.

Clearly I'm biased, having zigzagged around pretty much every corner of advertising all my career. And yes, before you ask, I'm old enough now to realise I do get bored without a strong mental challenge before me.

Skill stacking

Post-AI research shows that professionals with diverse domain experience exhibit 43% superior performance in adaptive problem-solving scenarios. The terror many feel about "leaving their lane" represents a profound misunderstanding of value creation in machine-augmented environments.

It's like discarding your Swiss Army knife in favour of a single blade, then wondering why you can't open wine bottles or extract stones from horses' hooves anymore…

My point is: Cross-domain experience generates novel solution pathways that pure specialists struggle to access.

Adaptive Resilience: Multiple pivots build intellectual muscle memory for navigation of the increasing uncertainty businesses now face.

Curiosity Expertise: Understanding how to extract maximum value from AI requires precisely the kind of cognitive flexibility that portfolio careers cultivate.

Career opportunities

As AI assumes routine cognitive tasks, human value increasingly lies in synthesis, pattern recognition across domains, and strategic ambiguity navigation. Linear careers optimised for industrial efficiency; portfolio careers optimise for algorithmic collaboration.

What unexpected skill combination defines your professional edge?

I, Strategist

"I was born to perform, it's a calling, I exist to do this" — Jarvis Cocker

At 60, recently redundant, people ask why I don't just "wind down."

Find something less stressful.

But here's the thing about being a strategist — it's not what I do, it's who I am.

Emma Perret recently wrote a brilliant piece about the need for 'life in her work', full of great quotes. I loved: "fingerprints on everything I touch as proof I was here."

She talks of strategy living in the gap between what data says and what your gut knows.

And that gut feeling never switches off.

Even between paying jobs, I find myself studying problems, sketching solutions, seeing patterns others miss.

My brain feeds off finding routes through chaos — whether it's cultural, political, or brand challenges.

In Self-Determination Theory, Edward Deci and Richard Ryan talk of '…a certain group of people as having spontaneous tendencies to be curious and interested, to seek out challenges and to exercise and develop their skills and knowledge, even in the absence of operationally separable rewards…'

Research shows that strategist identity often builds on earlier professional experiences, becoming an extension rather than abandonment of previous identities.

After decades, it becomes woven into who you are.

How we see our work matters more than the job title. I now see strategy work as essential to my identity.

Tom Fryer, professor of mental health, wrote about 'Work, identity and health', pointing out that for most people their job was their only significant source of personal identity.

He went on to warn: ‘….Without a clear sense of personal identity we are vulnerable to psychological injury, at risk of anxiety and depression, and social disengagement.…’

Clearly a subject of the moment, not just for me, but the nation as a whole facing up to a new world of AI infused work.

So me being a strategist is more than getting paid (although that bit is neat), it's essential

as Jarvis says:

I exist to do this

Some callings don't respect retirement plans.

What drives you to keep doing what you love, even when the world suggests you shouldn't?


"We were hoping for some romance. All we found was more despair. We must talk about our problems. We are in a state of flux

With apologies to Bloc Party for the headline, but…

…Like most in the creative industries, I’m both excited and deeply concerned by the state of flux we are in.

🟢 The astounding potential of what AI ‘could’ be.

🔴 The appalling standards of current AI content

🟢 The awe-inspiring creativity our industry can produce

🔴 Short-termism as a management team sport

🔴 Endless ‘reorganisations’ and resultant redundancies

But in the last few days, with a bit more time on my hands than normal, three excellent bits of thinking have really hit home. I urge you to check them out (full links in comments).

First, Ivan Fernandes has produced a succinct, spot-on analysis, from great research, into who the real winners are from WPP's actions of the last few years. Spoiler alert: it's neither the staff nor the clients…

Then the highly prolific Joe Burns. In multiple posts, he rails against the sloppiness and lack of foresight in our rush to get AI tools embedded in workflows. Yes, you might get home earlier on Friday, but you're training your clients and the rest of the agency on fool's gold.

Finally, and with a real flourish, Karen Martin with 'Women Who Walked Around Soho'. A call to arms to embrace the timeless magic of talking and collaborating. To hire more, not fewer, and embrace a lack of conformity. Because the potential of new, scary ideas is the real wow of the industry.

Remember

The real beneficiaries are in another room

We are in danger of shooting ourselves in the face

The resolution is within our remit, but we need to keep talking

AI and a lesson in Kafkaesque Bureaucracy.

I've just read two amazing things, weirdly linked by themes from Franz Kafka's 'The Trial'. The first, about Cursor, an AI coding tool that went rogue; the second, a brilliant HBR article, "How Gen AI Is Transforming Market Research", co-authored by Olivier Toubia.

The spectacular face-plant of Cursor—an AI-powered coding tool whose support bot confidently fabricated a non-existent policy about device login limits. Without human oversight, this digital 'assistant' convinced users they were experiencing an intentional restriction rather than a simple bug, precipitating a mass exodus of paying customers.

Meanwhile, the HBR article highlights some limitations within the really rather exciting world of AI-created consumer panels.

The piece on AI panels points out that, when presented with emotive subjects, they can demonstrate peculiar behavioural anomalies.

Like manifesting responses that defy logical consistency, e.g. exhibiting minimal price sensitivity when confronted with contextually absurd pricing structures.

The common thread?

We're unwittingly creating bureaucracies operating on their own inscrutable logic.

So, so relevant to so many companies' approach to AI integration.

As in ‘Does this work? Yes. How does it work? Can’t say. Will it work tomorrow? Can’t say.’

Just as traditional bureaucracies rigidly adhere to processes that make perfect internal sense while baffling outsiders, AI platforms are pretty similar — operating on mathematical principles that produce outputs which seem coherent until they spectacularly aren't.

The deeper issue emerges when these digital bureaucrats gain autonomous control.

In the Cursor debacle, removing humans from the support loop allowed a hallucinated policy to become de facto reality.

In market research, eliminating human oversight can lead to synthetic consumers who behave like a character in Franz Kafka's The Trial:

“You don’t need to accept everything as true, you only have to accept it as necessary.”
— Franz Kafka, The Trial

This pattern reveals something fundamental about our relationship with AI: we aren't simply deploying tools; we're installing bureaucratic structures that create their own reality.

Traditional bureaucracies might eventually acknowledge mistakes—though like the Post Office Horizon scandal, they rarely go full Mea Culpa. AI, on the other hand, only knows it's right.

A fully-functioning bureaucracy requires oversight, accountability and appeal mechanisms. Similarly, effective AI implementation demands human reality-checking and clear intervention processes.

The irony is exquisite: in our rush to eliminate human inefficiency, we've created digital bureaucracies replicating the worst aspects of their human counterparts—rigidity, opacity, and occasional absurdist logic—without the capacity for self-correction.

Remember: the best AI implementations are like the best jokes—they require perfect timing and human judgment about when they're appropriate.

Literacy, AI, and the Decline in Productivity in the UK

Alvin Toffler warned that 21st-century illiteracy is defined not by an inability to read, but to adapt. As the UK grapples with AI adoption, his words ring alarmingly true.

Tariffs and taxes are fleeting events. There are much more profound challenges threatening the UK's long-term competitive edge. (And it's not feckin' remote working!)

It’s the intricate relationship between declining literacy levels and sluggish AI adoption.

I write quotes and thoughts in my notebooks; this one inspired this essay a few months back.

The Uncomfortable Truth About UK’s AI Readiness

The data tells a sobering story:

18% of adults in England are functionally illiterate

Only 39% of UK businesses have actively implemented AI technologies

Britain’s productivity lags 18% below the G7 average

76% of UK professionals are excited about AI, but only 44% receive organizational support

90% of UK primary school children experienced negative literacy impacts during COVID-19, with improvements still stubbornly low

Conventional wisdom treats literacy challenges and tech adoption as separate issues. Our recent work suggests they're two sides of the same coin — a cognitive-literacy crisis undermining the country's long-term productivity.

The Cognitive Infrastructure of Innovation

Literacy is far more than reading and writing — it’s the cognitive infrastructure that enables tech adaptation.

The BrainWare Learning Company defines cognitive literacy as the “mental toolkit” of attention, working memory, and processing speed required for learning new systems.

Research shows 62% of UK workers score below OECD cognitive flexibility benchmarks, with 3x higher AI implementation failure rates in low-literacy sectors like construction and retail compared to tech.

Take it from a dyslexic with a slight stutter and a South London accent: prompting AI (whether by voice or text) is not as straightforward as the makers of these tools suggest. The cognitive demands of effective AI usage require sophisticated literacy skills that many in our workforce currently lack.

The Dangerous Feedback Loop

More worrying is the feedback loop emerging between literacy gaps and AI dependence:

Workers with literacy gaps show 73% higher reliance on AI for basic tasks

This “cognitive offloading” accelerates skill atrophy (22% decline in critical thinking scores over 6 months)

Younger workers (18–25) are especially vulnerable, with 89% using AI for writing/analysis versus 52% of workers 45+

This creates a productivity doom cycle:

Underdeveloped literacy >> Over-reliance on AI >>

Further erosion of cognitive skills >> Ineffective AI Implementation

The JL4D Institute identifies a critical threshold: workers need Level 2 literacy (GCSE English equivalent) to effectively collaborate with AI systems. Yet 38% of UK frontline workers fall below this standard.

Breaking the Cycle: Evidence-Based Interventions

The path forward requires recognizing AI adoption as a literacy development challenge, not merely an IT rollout. Companies investing in cognitive literacy programs see 22% higher AI success rates versus their peers.

Strategic Imperative for Business Leaders

The UK’s AI adoption gap with the US isn’t primarily technological — it’s cognitive. Every stage of AI implementation is impacted by literacy challenges:

For business leaders, this means:

Invest in workforce AI training programs, beginning with the C-suite

Create structured, continuous AI literacy updates for everyone

Recognize that AI literacy is a growth opportunity, not a cost-center

Measure cognitive flexibility alongside technical metrics

Partner with educational institutions to align curricula with emerging AI needs

Helen Milner of Good Things Foundation notes:

“AI doesn’t replace literacy — it demands new literacy dimensions. Our 8.5 million digitally excluded adults aren’t just missing opportunities; they’re becoming cognitive debtors in an AI-powered economy.”

It’s Not Too Late

The economic stakes couldn’t be higher.

AI could potentially increase corporate profits by $4.4 trillion annually. Sales teams using AI are 1.3 times more likely to see revenue increases.

Every 10% improvement in workforce literacy correlates with 6.7% faster AI implementation.

But more than economic opportunity is at stake. Without addressing literacy deficits and cultivating sophisticated AI engagement, the UK risks enabling technologies that will amplify existing socioeconomic disparities rather than catalyzing inclusive growth.

As Toffler's 'learn-unlearn-relearn' imperative points out, both educators and businesses should treat AI adoption as improving much-needed literacy, not just an IT rollout. In Britain's productivity crisis, upgrading our cognitive infrastructure is not optional — it's existential.

(Written by a hyper-active dyslexic, made readable by Claude.ai)

The Human Spark: Why We Need Dyslexic Thinking in a World of AI-Driven Advertising

If you walk down the street or use social media, you'll see a lot of AI-generated ads. DALL-E 2 and Midjourney allow for creating volumes of polished marketing visuals that kind of look the same. Way too many brands are eagerly lapping up these cookie-cutter AI aesthetics to plug budget holes in their campaigns.

(Image: Composition VII, Wassily Kandinsky, 1913. For its time, completely non-linear, unconventional thinking.)

Some have started sounding the alarm about this developing trend. Artist and photographer Sougwen Chung recently said, "I encourage you to continue making works that only a human could make. The world needs your anomalous spirit." She believes that human artists should lead innovation instead of relying on AI tools to produce unoriginal content.

The issue was a big deal at the 2023 Sundance Film Festival. People debated whether using AI-generated art in marketing was okay. Festival founder Robert Redford commented, "When I see these perfect machine-made pictures, I think something's missing...the wonderful imperfections of humankind."

 But in this stampede towards AI-generated advertising, we are losing something profound - the human spark. Those gloriously weird, illogical, and downright inexplicable flourishes that set human creativity apart. AI infiltration can homogenise advertising into an army of the bland. I passionately believe we must fight to keep human idiosyncrasies alive.

 Logical machines fundamentally lack the ability to think dyslexically. AI is constrained by rules, datasets, and cold calculus. In other words, it is neither friend nor foe, it’s just maths. It cannot escape the bounds of its training or imagine concepts outside its programming. But human cognition has no limits. We make illogical leaps, forge new neural pathways, and see things no algorithm would conceive.

As Einstein said, “Creativity is seeing what others see and thinking what no one else has ever thought.” Human creativity defies rational explanation. It is untamable.

 Unfortunately, many current AI creative tools reward conforming to the norm, not breaking free. They analyze thousands of images to detect persisting styles and themes. Output originality is actively discouraged. What you get is a pastiche of familiar elements, remixed ad infinitum. Homogenization prevails. 

But look at artists like David Shrigley. His crude, satirical drawings are childlike and warped. Or Matisse’s surreal dreamscapes. Or Frida Kahlo violating anatomy to evoke emotion. Their art arises from nonlinear thinking that no AI can replicate. We need this spirit of human eccentricity and imagination to permeate advertising.

So how do we inject that irrational spark back into brand marketing? First, by valuing dyslexic perspectives in creative teams. The most innovative ideas come from neurodiverse minds who see the world differently. Individuals with dyslexia often possess greater creativity and lateral thinking skills. They should be empowered to follow their unconventional instincts from an early age. Not excluded because of a lack of mathematical prowess.

Human artists should lead the collaboration, with AI then used sparingly to enhance their vision. Technology should enhance human creativity, not replace it. Without a guiding hand, AI easily descends into repetitive tropes based on what came before. But human imagination gets bored with similarities and seeks the new.

This is why we should always focus ad concepts on storytelling that surprises. Machines struggle to convey innovatively intriguing narratives or make viewers feel joy, sadness, and tension without resorting to what has gone before. While AI is great for cranking out visuals, humans have a monopoly on resonant messaging. Stories speak to our souls.

Popular culture, rejecting previous years of the perfect image, now embraces imperfections. The wabi-sabi (侘寂) aesthetic finds beauty in imperfection. But AI seeks endless technical refinement, stripping away flaws. Rough edges, irregularity, and chaos open creative possibilities. Agencies and brands should encourage a culture that values being a bit odd, instead of just focusing on making AI content efficiently. As futurist Kai-Fu Lee said, "To make something new, your mindset has to be a little weird." Innovation arises when marketers embrace the counterintuitive and make space for human strangeness, supported by AI.

 In advertising, we stand at a precipice today. One way leads to more automation, AI-created content, and less human marketing. But there is another route. One where technology takes a backseat to imaginative human expressions. Where dyslexic thinking reigns. And where brands embrace the counterintuitive, the weird, and the downright bonkers.

The choice is clear. Do we want advertising ruled by cold, conformist AI algorithms? Or energized by the electric human spark? The irrational human spirit that brought us Picasso's Cubism, Banksy shredding his own art and the musical idiosyncrasies of Little Simz: creative leaps no machine could conceive.

Dali said it best: "The only difference between myself and a madman is that I am not mad." In advertising, we really do need more Madmen. (But this time, maybe skip the three-martini lunches.) We need the mad who are unafraid to be eccentric, make illogical connections, and revel in cognitive dissonance. As AI proliferates, we need the most creative human minds to the fore. The neurodiverse and the oddballs need to be supercharged with the aid of AI. Brilliance will arise in the unpredictable spaces and tensions between man and machine.

So in this age of artificial intelligence, let us champion radical human creativity. Embrace what makes our cognition untamable. And fill advertising with the electric, inexplicable human spark. Yes amplified by AI, but steered by humans from start to finish.

 

Reshaping the working week

It is clear to many that the shape of a typical working week has changed. Considering the vast amount of recent insight into the effects of WFH versus onsite working, much of it produced with once-in-a-generation sample sizes, we now know that rigid, in-office, Monday-to-Friday regimes, with their associated commutes, are not only unproductive and damaging to individuals’ mental health but also detrimental to the recruitment of the brightest minds. Summarising some of the great ideas currently in circulation (including a HT to @stevenbartlett), I have been looking at the new shape of the working week, especially in the creative industries.



Monday. In-person: catching up and planning the work ahead.

Tuesday. Hybrid: focused on pursuing, developing and delivering.

Wednesday. Remote: a day of admin for both home and office, with solitary moments that allow critical creative thinking time.

Thursday. Hybrid: another day of focused work.

Friday. In-person: a day to review, showcase and celebrate.

Saturday/Sunday. Unstructured time to recharge and reconnect with self, friends and family.


I feel the new working week is a balance between onsite, in-person collaboration and remote, singular endeavour. A recent scientific study using MRI scans has shown that online ideation sessions hamper our full cognitive abilities. One of the authors, Melanie Brucks of Columbia Business School, said:

‘...videoconferencing hampers idea generation because it focuses communicators on a screen, which prompts a narrower cognitive focus. Our results suggest that virtual interaction comes with a cognitive cost for creative idea generation….’

This is balanced by a recent ONS report showing that, among those now working either fully or partially remotely, the key benefits reported were improved mental health and work-life balance.

So balancing in-person with remote is important. Equally so is the ring-fencing of thinking time. It’s vital for our brains to have moments to gather all the inputs and just ponder ‘what if?’

For years educationalists have spoken of the power of wait time in building higher cognitive learning. The renowned psychologist Mihaly Csikszentmihalyi talked about the social dimension of the solitary moment and its importance in the creative process.

Cristina Garcia co-author of California’s new four-day working week bill, summed up the mood for change; “...We’ve seen over 47 million people voluntarily leave their jobs for better opportunities. We’re seeing a labor shortage across the board from small to big businesses……And so it’s very clear that employees don’t want to go back to normal or the old way, but to rethink and go back to [something] better.” 

It might seem obvious, coming as it does from a management consultancy, but McKinsey wrote in their future report:

‘...People who live their purpose at work are more productive than people who don’t. They are also healthier, more resilient, and more likely to stay at the company.…’

How to survive a career creating mayhem. Lessons from a long life in advertising

Let’s talk about money, honesty and temptation. Oh, and also the impossible.

It’s 2012, a hot summer’s evening. The London Olympics have just begun and I’m taking a bow in front of a stadium full of people going nuts, while being watched by a worldwide TV audience. I vividly remember thinking at the time: what the feck am I doing here?

I got there because I did what most people won’t do. I said yes to an outlandish, bonkers idea and found a way to make it work. Predictable, defined options always lead to dull and bland situations. And I hate those.

This is the key lesson I take from my decades working in the advertising industry. Invest time in the seemingly impossible. Something I literally learnt from my very first opportunity to work in this industry.

Admittedly I had something of a head start working in advertising in the 1990s: I was a white, middle-class male, had been to university and was living in London. For those not in the UK, it’s worth pointing out that 80% of our advertising industry is based in London, but 87% of our population isn’t.

But I didn’t mean to get into advertising. It was never a potential career goal. I spent my college years training to be an Industrial Designer of consumer goods.

Having arrived in London from university and failed to find any meaningful or lasting employment, I was offered a role as an art director by a recent friend who was a Creative Director at Saatchi & Saatchi. It really didn’t make sense to me: I had no training in advertising and no understanding of what an art director was.

The offer made sense because of what the tech entrepreneur Paul Graham calls domain experts, and the importance of their crazy ideas. His point is that an expert from one area proposing an idea in a completely different area, however outlandish, may well be on to something: they are responsible people who have overcome their natural instinct not to look an idiot by proposing their crazy idea.

The guy who got me in at Saatchi’s convinced management that the weird bloke with a portfolio full of product designs would add greater depth to his creative department’s thinking.

Your rational mind will talk itself out of applying for loads of great jobs because they’re not an obvious fit for your talents. Don’t be rational; avoid the obvious. You are a unique creative thinker. Your best roles will be the ones that at first appear totally wrong.

We do need to talk about money.

At some point someone is going to approach you offering an inflated salary. I once moved jobs purely for a stack of cash and a statement title. Blinded by avarice, I didn’t take time to value the role I was currently in, something you really need to do on a regular basis. A calm head would have spotted I wasn’t worth the amount being offered. The company hiring me was a plc with troubled shareholders who needed calming down. Rather than fixing long-term corporate issues, they went for a quick solution and made headlines with a new Creative Director. They didn’t really want change, and after 18 months we parted company. Another learning from this: in general, institutional, shareholder-run creative enterprises will always disappoint imaginative thinkers.

Talent is valuable; you deserve to be paid well. Just not bribed.

While the industry is home to a disproportionate number of fakers, charlatans and borderline psychopaths, very few are actually criminal. My luck was befriending one that was. Sharon Bridgewater was sentenced to five years for stealing £2.4 million from my agency[1]. But this was six years after she had joined us. The problem for so many successful young start-ups is that you never get the chance to take a breath and really come to terms with your change in status from hopeless dreamer to actual successful business owner, responsible for your team’s livelihoods, payroll and office rent.

After the event, realising I’d been a total mug weighed heavy. The past will haunt you unless you work at it. MRI-based research[2], recently published in the neurology journal Brain, uncovers how a failure to dissociate from the past is a key factor in insomnia disorders. Whether it’s a bad word with your boss, a lost pitch or simply an act of gross stupidity, you have got to work at putting it in the past. Your brain will not do this on its own; hence the MRI insights. Christian Horner, who runs the Red Bull F1 team, talks about indulging in 24 hours of purging pain after things go wrong[3]: dwelling on every detail, wallowing in the misery. By doing so, nothing is left hidden. It’s not that you then don’t talk about it; you dissociate yourself from it and place it in the past. It happened, it’s what you used to do. Much like you do in addiction therapy.

Speaking of which. I’ve learnt a lot about temptation.

The industry’s obsession with late nights, glam parties and sudden success has side effects, not least being offered copious drugs or, at the very least, your body weight in free booze. It really does not take long to replace the hard-to-achieve high of selling brilliant new ideas with the outright easy high of intoxication and the resultant self-delusion.

While our industry has made great strides in improving welfare, you have to remember it’s coming from a very low base. So your number one concern must be for your own health.

The benefits of taking a break are immense; an opportunity to do so should not be passed up, however left-field. A job ad to perform at the Olympics was not something I was looking for. But this one was from Danny Boyle, the director of Trainspotting and Slumdog Millionaire. Here was an expert in imagination, and if he said anyone could perform at the Olympics then, as bonkers as it sounded, it was worth a go.

Even if it did mean a patchy freelance income to fit around a year of unpaid rehearsals. But this was totally offset by escaping from advertising to live hand in glove with a bizarre collection of people whose only commonality was having time on their hands: the unemployed, contractors, the independently wealthy and the recently retired. Collectively they gave me a whole new perspective on life.

So, I’ve learnt the joys of embracing the seemingly impossible option

The rather painful effects of forgetting not everyone plays nice

The skills needed to put badness in the past

And the importance of focusing on your own wellbeing

 


[1] https://www.dailymail.co.uk/news/article-9406913/Female-Walter-Mitty-accountant-swindled-2-5million-living-Spain.html

[2] Haunted by the past: old emotions remain salient in insomnia disorder, Rick Wassing, Frans Schalkwijk, Oti Lakbila-Kamal, Brain, June 2019

[3] https://www.thehighperformancepodcast.com/episodes/christian-horner

Could the elderly revive city centres?

My new hometown of Sheffield is one of many cities coming to terms with the closures of big retail chains. Recent news of John Lewis pulling out of its iconic landmark building in the town seems to be a tipping point. How to address the seismic change in the usage patterns of legacy city centre architecture?

Image: NORD Architects A/S, Copenhagen


Much has been written about the advantages of changing use from retail to residential, originally focused on a younger demographic as the infrastructure needed is cheaper. More recent studies have looked at bringing younger families in from the outskirts, which is tricky on a number of levels, from childcare to car usage.

What I’ve not seen before is an experiment happening in Denmark of bringing the elderly to live in the centre of a new development. A collaborative project from locals NORD and London firm UHA. I really like the whole landscape solution to create an integrated community.

Here in the UK, the pandemic has exposed how rubbish our care home system actually is. There have to be smarter solutions. Taking bold steps and reimagining city centre use to also include increases in elderly residents is the type of lateral thinking cities need.

The primary benefits are combating loneliness and promoting social inclusion. But there are many others: access to transport and health, plus the ease with which simple design interventions can adapt a cityscape to not just cope with, but welcome, an increase in older residents.

It turns out others have had the same idea. All the way back in 2012, Elizabeth Burton, professor of sustainable building design and wellbeing at Warwick University, talked about the advantages of locating older people within cities, pointing out in The Guardian at the time that the problem with our habit of housing older people outside cities is that:

‘…the countryside is the last place for creating the inclusive accessible environment that older people need with access to highly specialised hospitals and care…’
— Professor Elizabeth Burton

Just over a year ago Phil Bayliss, chief executive of Legal & General’s ‘Later Living’ division, said his team was looking at increasing city centre footfall by adapting some of their buildings for elderly use. What we really need is a working case study that elevates the theory to reality. My new hometown could be it.

Sheffield has a fairly unique city geography, surrounded by seven hills; anywhere you go outside the city centre involves going up a steep hill. It’s part of the culture of the place, from ‘The Full Monty’ to the Arctic Monkeys, to use struggling up hills as a metaphor. However, placing the region’s elderly residents outside the city instantly creates challenging transport issues for year-round access (it snows up north, a lot).

I really do believe the collapse of traditional retail and the effects of the pandemic create a once-in-history opportunity to reimagine what we do with legacy cities like Sheffield. Bold action could create a wholly new type of inclusive urban society: a living city full of contrasting cultures, built around the needs of the many. From the few years I have spent living here, it feels a very Sheffield thing to do.

Advertising and morals, we do good?

What’s the difference in working on tobacco, gambling and booze? Surely nothing?
— Lisa Gills / WildSquirrelRec

A much-visited advertising debate was reignited the other day on LinkedIn. Should you work on brands in categories that can cause harm? Thanks to vast profits from human weakness, tobacco, gambling and booze brands spend a lot on advertising. Should you take up the offer? In the past many did, and created award-winning campaigns.

Clearly, in light of how advertising has had to take a very long, hard look at its past practices and behaviours, such debate is yet again very timely.

Let’s start with enjoyment. Our job, at its most fundamental, is to unlock human desire and engage emotions. Sure, you can pick and choose your area of work. But you can’t ignore the base elements of what makes humans, human.

From the earliest evidence in Mesopotamia, mankind has sought intoxication and enjoyed wagering. A good society is one that protects its most vulnerable from the worst excesses of its own desires, be those driven by corporations or kings. Prohibition manifestly does not work. But democracy does. With very few exceptions we all desire to do good. As such, educating and nurturing good behaviour that benefits all should not be hard. There is ample talent within this industry to do so; just less of it in national governments to effect real change and protect their own populations in meaningful ways, whether through education or whole-life fulfilment.

I have spent a lot of time working on booze and gambling brands. To work in advertising is to accept human desire for what it is. Your options are how you choose to influence it.

Using celebrity to build populist trust is not new, just really difficult to pull off


Both the US and UK are using A-list celebs for new Covid messaging and, weirdly, in both cases the results are actually not a cringe-fest. The US goes full Hollywood, whereas the UK plumps for a simple two-shot idea done in-house by the NHS. This latter film, I feel, will carry more weight with its audience. The American work is part of a wider ‘Mask Up America’ campaign from the Centers for Disease Control, with this spot created by Warner Media’s own creative teams using their extensive back catalogue of famous films, the idea being to digitally add masks to famous clips. NHS England, meanwhile, has Michael Caine acing Elton John in an audition for a vaccine commercial. The NHS spot features rather brilliant writing by Stephen Pipe, including a superb exit line. Like I said, I think the context adds to the effectiveness of the message.


Advertising, swearing and dangers of identity

I was asked a simple question. An open goal of self-congratulation. But I fluffed it wide into banality. Bugger. It’s not the first time I’ve done it: ducking the chance of bigging myself up because it just felt wrong. I’m not a careerist, just inquisitive about tomorrow, often forgetting the need to nurture today. Clearly not American in approach; it is indeed rather English.

Context. I had been approached to appear on a podcast I really respected. Mark Pollard is a super-bright strategist who runs the Sweathead community for creative thinkers in the advertising industry. Across various platforms, events and his podcast he comments on the world of advertising from a planner’s viewpoint: its career pitfalls and learnings from around the world. Mark asked me on his show to talk about my experience. Previously, at his prompting, I had shared a brief summary of my past online: the key things I had learnt from what appeared to others, at least, a tad unusual. To me less so, as it seemed normal at the time.

But the interview didn’t go as planned. We went down a bit of a dark hole. What worries me is that on three or four questions I gave pretty rubbish answers that won’t help anyone. I did have experiences that include great life lessons for others in the industry. I just didn’t explain them very well.

So, in a classic case of using advertising to rewrite history for the sake of a better brand image, I will go again.

Mentors.

I was asked how, in the early years of my career, I had got jobs I didn’t have the experience or qualifications for. On true reflection, the answer is that I was taken on by people who could see a bigger picture of potential. Andy Blackford was a Creative Director building an integrated team at Saatchi & Saatchi; he didn’t worry that my portfolio was full of student industrial design projects, or that my professional career so far included a short-lived stint at Smash Hits magazine, clearly the result of a blag. What he saw was passion for new ideas. Andy went on to run multi-award-winning creative departments across London for the likes of Arc, FCB and Grey, always with the same hiring philosophy: oddballs will win out. I have been lucky to work with Andy at various points in my career. He got me started and mentally maintained me in this business. I owe him a lot, as I do others who believed in me enough to humour my wilder excesses and keep me in paid employment. The very strong lesson learnt was the power of finding and nurturing relationships with mentors.

Dyslexic planners, not a punchline but an issue for some

For some, there are real tensions in developing planning’s core skills.

There is a ton of stuff to read, yet doing this quickly with full comprehension takes you time. Being part of a fast-moving discussion troubles you, as formulating your ideas, however genius they later turn out to be, seems to take longer than it does for others. And to cap it off, organising your time has never been a key strength. But on the upside, you are a visual thinker who can ace lateral thinking, and having worked your nuts off for it, you find speaking and writing for an audience a joy. Can you still be a complete planner? Yes, but you are a planner with the added bonus of dyslexia.

For many dyslexia means your spellings are a bit rubbish. But this is a ‘surface symptom’ of a much more complex situation.

Dyslexia is a neurodevelopmental condition which results in an uneven cognitive profile of contrasting abilities, quite often manifesting in extremes. It was first highlighted in Europe during the 1880s but for decades was thought of as a deficit in education. A medical explanation gained ground in the 1960s, but the condition became known as ‘the middle-class disease’ as affluent parents sought a diagnosis to explain the poor performance of their children. In the UK it wasn’t until 1987 that the British Government recognised the condition, and it took until 2009 for them to define it.

‘...the middle-class disease...’

A tad late to the party but welcome all the same, Britain’s Direct Marketing Association issued guidelines to its members in 2020 about guiding the careers of those with dyslexia. Katherine Kindersley, co-author of the report, said: ‘Dyslexia is a “hidden” disability. It can be hard for managers and colleagues to understand how demanding, time-consuming, and tiring it is for a person to work as expected.’ But as the guide sets out, these are the employees who will excel in lateral thinking and innovation, and have excellent practical skills and entrepreneurial traits.

‘Dyslexia is a ‘hidden’ disability. It can be hard for managers and colleagues to understand how demanding, time-consuming, and tiring it is for a person to work as expected’
— Katherine Kindersley : https://dma.org.uk/article/dma-talent-dyslexia-employer-guide

The combined effects of oddball cognitive abilities and white, middle-class privilege mean that Britain’s modern advertising industry grew from legions of dyslexics. Most of these people have now gone, and the industry is run with new values: appraisal metrics that punish the different because their performance is ‘uneven’. ‘They didn’t get through all the background material’ (that I sent last night), or ‘he didn’t say much during the brainstorm’. How many brilliant young minds has this happened to while struggling to get through their early years in planning?

For the planning industry to develop we need to be nurturing young minds capable of extraordinary problem-solving abilities. Coupled with a flair for actually explaining their thinking in public. Nature has gifted us a head start in the weirdness that is the dyslexic mind. More fool us if we fail to accommodate the kooky skill sets of such people.

Saw this great dyslexia PSA from Wendy Eduarte:

Reading is a cognitive process of decoding symbols into meanings. For a dyslexic person this essential skill becomes a difficult process, sometimes ending in frustration. This animation touches on the anxiety of growing up with dyslexia. Thank you to Maria Jose Monge for her testimony. Design and animation by Wendy Eduarte. Song by Kosta T, ‘выходной’ (‘Day Off’).