Philip Slade

The Age of Cognitive Inflation

Like currency flooding into an economy, we're experiencing what might be called cognitive inflation. The volume of content is exploding, but its actual value is plummeting.

Tell me this has not happened to you. You downloaded a beautifully formatted report. Stacked full of details and charts. Looks professional. Reads smoothly.

And yet…it's bland, colourless and absolutely feckin' useless.

Welcome to workslop, the new currency of cognitive inflation.

We're fooling ourselves that having access to so much more information means we now know so much more. In fact, the opposite is happening.

The Workslop Economy

A recent Harvard Business Review article identified a phenomenon it calls 'workslop': employees using AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers.

Damien Charlotin, legal expert and commentator, talks of AI-derived filler appearing in court documents "…in a few cases every day…"

Consider this: K&L Gates got censured and fined in May of this year. Top international law firm. Serious chops. The judge found their case riddled with AI fabrications. Nine out of 27 citations were bollox. They corrected it. Submitted new documents. Six more AI-derived mistakes. This is a firm charging each individual lawyer (in a team of 10) out at north of $2,500 an hour.

When someone sends AI-generated content, they're not just using a tool. They’re transferring cognitive burden to recipients who must interpret, verify, correct, or redo the whole thing.

The numbers are sobering. 42% report having received workslop in the last month. Half view colleagues who send it as less trustworthy. Whether it's McKinsey or your rivals, I cannot believe you haven't come across this sudden flood of reports, white papers and think pieces that reek of…nothing…just soulless words, neatly placed together.

Workslop perfectly encapsulates cognitive inflation. More content. Less meaning. The signal-to-noise ratio is collapsing. Trust in any information becomes impossible without extensive verification, which, frankly, nobody appears to have the time for.

Literacy: The Line Between Power and Danger

Recently I read a brilliant article by James Marriott: 'The dawn of the post-literate society' (link in comments). His research is stark. Reading for pleasure has fallen by 40% in America in the last twenty years. In the UK, more than a third of adults say they've given up reading altogether.

Children's reading skills have also been declining yearly post-pandemic. Experts link this decline not only to decreased traditional reading but also to reduced critical thinking and comprehension.

The National Literacy Trust reports reading among children is now at its lowest level on record.

It’s not just about books.

But about how we think.

The world of print, Marriott argues, is orderly, logical and rational.

Books make arguments, propose theses, develop ideas. "To engage with the written word," the media theorist Neil Postman wrote, "means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning."

"The world after print increasingly resembles the world before print,"

Marriott writes. As our cognitive capabilities diminish, we're creating AI systems in our increasingly confused image. We're building the dumbing-down of the user into the foundations of our future technologies.

As books die, we are in danger of returning to pre-literate habits of thought. Discourse collapsing into panic, hatred and tribal warfare. The discipline required for complex thinking is eroding.

This is the environment those on the right are thriving in. They flourish amongst populations with limited capacities for inference-making and reasoning. For more on this, see the excellent 'Segmentation of the Far Right' from Steven Lacey / The Outsiders (link in comments).

I heard a great podcast this week. Geoffrey Hinton, one of the architects of today's AI, in conversation with Steven Bartlett (link in comments).

Hinton warns: We're making ourselves stupider before we understand how to use AI safely. That cognitive decline will be baked into the next generation of AI systems we build.

It's that last bit that's most worrying.

Hinton told investors at a recent conference that instead of forcing AI to submit to humans, we need to build 'maternal instincts' into AI models. Because if we don't, the temptation amongst bad actors is to do the opposite. Hinton's nightmare scenario isn't just more cyber attacks, though those are crippling institutions worldwide, but weaponised AI creating Covid-style viruses.

This is not just about having the wrong tools or using them badly. It's about losing the ability to question the value of what is being produced.

The World Economic Forum identifies curiosity, creative thinking, and flexibility as core skills for future workplace success. It's intriguing that we've arrived at the need to value such basic human traits in our most technologically advanced era.

Competency without curiosity creates professional dead ends. The most valuable professional asset isn't knowing everything. It's maintaining the discipline to approach everything as if you know nothing.

Yet literacy, the foundation of that discipline, is collapsing. Many point to it beginning with smartphones, heightening during Covid lockdowns, and now being supercharged by AI. Although, to that last point, many amazing educators are harnessing AI to try desperately to reverse this trend.

But universities are now teaching their first truly 'post-literate' cohorts.

"Most of our students are functionally illiterate,"

according to one despairing university entrance assessment.

Productivity Everywhere, but Nowhere

This is cognitive inflation's cruellest joke. We have more tools than ever. More information than ever. More 'productivity' software than ever. Yet productivity just... stalls. (In the UK, we've managed a princely +0.5% annually for the last decade…stunning, you'll agree.)

When efficiency becomes the only goal, the outcome is always the same: a world of increasing activity and decreasing value. We mistake motion for progress. Busyness for productivity. Access for understanding.

Companies are investing billions in AI tools. Most are not seeing any kind of measurable return (other than that their headcount has dropped, and they can't understand why their Glassdoor reviews have done the same). The MIT Media Lab found that 95% of organisations have yet to see measurable return on their investment in AI tools. So much activity, so much enthusiasm, so little return. And yet…

…our capacity to think deeply, to read carefully, to reason logically, erodes. We're not getting smarter. We're just getting louder.

What Comes Next

Without intervention, we're heading toward a workplace where nobody trusts anyone else's work, where verification becomes impossible, where cognitive capacity wastes away like an unused muscle. We become like medieval peasants, but with better WiFi.

'Pilots' or 'Passengers' is a great analogy for how workers are currently using AI. Pilots are navigating their own course, with AI as an instrument of work. Passengers are, well, sitting back letting AI take them where they need to be. (See more at: BatterUp Labs/Stanford University.)

Zoe Scaman has just written a brilliant article about helping major organisations roll out AI strategy and seeing passenger behaviour everywhere (read 'The Great Erosion', link in comments). She talks of people outsourcing the twenty percent of work that's genuinely hard thinking, keeping the eighty percent that's just formatting and execution. Backwards. Catastrophic. Because the twenty percent is where you build the muscles. That's not theory. That's happening in organisations today. The cognitive damage is immediate. The supposed productivity gains? Years away, if they arrive at all.

If you're a leader, your most important job isn't to buy more AI tools. It's to build in space and time where your team strengthens the thinking muscles that AI is surreptitiously stealing.

Most importantly, we all need to recommit to the hard work of thinking.

Yes, that also includes reading.

And the discipline of following an argument, weighing evidence, changing our minds when presented with better information. (Again, see the rise of the far right in this context.)

This challenge isn't technical. It's rational. The question isn't whether we can build better AI. It's whether we can remain capable of using it wisely.

In an age of cognitive inflation, attention is the scarcest resource. Not information.

It’s the ability to actually focus, to think deeply, to spot signal in all the noise.

Our futures belong not to those with access to the most information, but to those who retain the ability to think about it clearly.

We've built an economy where information is infinite yet attention is seemingly worthless.

Where everyone has an opinion but nobody has time to think.

Where our tools grow exponentially more powerful while our minds grow…well, let's be honest, weaker.

This. Can. Only. End. Badly.

The answer depends on whether we're willing to do the one thing our AI-saturated culture makes most difficult: slow down, focus, and think.

We all need to (metaphorically) get up, go for a walk and explore the ideas in our own heads.

We need to celebrate the illogical, lateral, properly weird thinking that only humans do well. The very thing AI can't replicate and, worse, is teaching us to devalue.

Because the only way out is through the hard work of becoming human again.


Conceived and written by a dyslexic human: me. Made readable by Claude.ai.

This whole article wouldn't have been possible without the amazing inspiration of the following writers:

'AI-Generated "Workslop" Is Destroying Productivity'

https://tinyurl.com/43kc7m99

‘The dawn of the post-literate society’

https://tinyurl.com/ytdhhns3

Geoffrey Hinton in conversation with Steven Bartlett

https://tinyurl.com/43hsn9ns

‘Segmentation of the Far Right’

https://tinyurl.com/dtfhu28m

'Pilots & Passengers: The Next Evolution in Management'

https://tinyurl.com/yhu5e2m3

Zoe Scaman ‘The Great Erosion’

https://tinyurl.com/bdcw3m6k

Who Controls Britain's AI Training Data Controls Britain's Future:

The £42 Billion Question of Intellectual Sovereignty

Elon Musk recently said "the cumulative sum of human knowledge" is becoming exhausted, requiring AI systems to "retrain human knowledge" by deleting "garbage" and introducing "divisive facts." This is not a technical process.

He was announcing a political programme.

One that Britain, with exquisite timing, has this week agreed to be part of.

This week's £42 billion UK-US "Tech Prosperity Deal" commits British infrastructure to American AI systems led by Microsoft, Nvidia, AWS, and Google. On paper, a coup. In practice, something rather more troubling: the systematic outsourcing of how Britain will learn to think.

For a deeper explanation of the full ramifications, read Zoe Scaman's brilliant 'Investment or Surrender' essay: https://substack.com/inbox/post/173643575

The stakes aren't merely economic. They're epistemic. Those who control training data don't just shape what AI knows. They determine what it considers knowable.

In June, Elon Musk's AI, Grok, accurately responded that right-wing political violence in the US had been more frequent and deadly since 2016. Elon called this a "major fail," adding that his team were "working on it" to change the narrative in the AI's responses.

As Orwell warned with uncomfortable prescience:

"Who controls the present, controls the past; who controls the past controls the future"

In our case, those who control the (present) training data control how we, now and in the future, interpret reality.

Britain's Vulnerable Position

Recent studies by OpenAI (ChatGPT) and Anthropic (Claude) tell an uncomfortable story about Britain's AI readiness.

While 76% of UK professionals express excitement about AI, only 44% receive organisational support. Just 22% of public sector workers report using generative AI, despite high awareness. Meanwhile, our government appears content to hand the keys to US hyperscalers.

This matters because the research reveals a troubling global pattern. Analysis of usage to date (based on 1.5 million conversations across 195 countries) found adoption growth in low-income nations running four times faster than in wealthy countries.

Yet interactions remain disappointingly mundane: 49% asking questions, 40% completing tasks, just 11% creative expression.

Most people use AI as an advisor, not a collaborator.

Anthropic's usage data shows "directive" conversations, those where users delegate wholesale rather than iterate, jumping from 27% to 39% in just eight months. Automation is displacing augmentation. The difference isn't academic: augmentation builds cognitive muscle; automation risks enfeebling it.

Britain sits dangerously in the middle. Excited but unsupported, aware but unprepared.

The Political Economy of "Truth"

The Trump administration has already mandated that federally-funded AI systems edit out training data on climate change, diversity initiatives, and critical race theory. This isn't content moderation. It is the rewriting of reality at the foundational layer where models learn to interpret the world.

Training data isn't neutral infrastructure.

It's political architecture. Every dataset embeds assumptions about what constitutes knowledge, whose voices matter, which perspectives deserve preservation.

When UK companies build workflows on US-controlled models, they're not just adopting tools. They are accepting cognitive frameworks that are being politicised. Plus, let's not forget, these are people who genuinely believe tea tastes better microwaved.

The irony is exquisite. In our anxiety about technological dependence, we've sleepwalked into intellectual dependence instead.

Britain has a brilliant tradition of contrarian thinking, from sceptical finance and politics to world-leading creative industries. All of it is about to be processed through algorithmic systems that consider such independence and weirdness outside their parameters.

Which brings us to an uncomfortable question: if Britain's workforce is being shaped by American cognitive frameworks, what happens to the very thing that makes us economically competitive?

The Creative Advantage: Why Britain Still Holds the Trump Cards

Yet here's the thing about intellectual dependency that some are missing: it's only permanent if you accept it as such.

Britain may have lost its manufacturing heartland, but what remains is something far more valuable in an AI-driven world. A population that excels at the one capability that machines, for all their computational brilliance, consistently struggle with: creation.

Not just the obvious creative industries where Britain punches absurdly above its weight. But the broader cultural capacity for lateral thinking, for connecting disparate ideas in ways that confound conventional wisdom.

It's the same cognitive restlessness that produces everything from Daisy May Cooper scripts to breakthrough financial instruments to revolutionary vacuum cleaners.

Each unified by the ability to look at established patterns and ask, with genuine curiosity,

"But what if we did it differently?"

This isn't flag-shagging. It's a strategic observation.

The most successful AI implementations globally aren't happening in places with the most compute power. They emerge where human creativity meets algorithmic capability. Where augmentation thrives over automation.

Beyond Binary Choices

The £42 billion deal need not be Britain's intellectual surrender. It can be a springboard, but only if we resist the temptation to make it exclusive. The future belongs to networks, not hegemonies.

Britain's strategic advantage lies in cultivating AI relationships that span continents, not signing exclusive deals with individual superpowers. European AI initiatives offer different philosophical approaches to training data sovereignty. Asian AI networks bring fundamentally different approaches to everything from, yes, copyright to creativity and security.

The smart move isn't choosing sides. It is becoming the place where different AI traditions cross-pollinate most productively.

Consider this: while America optimises for scale and China optimises for speed, Britain could optimise for synthesis. Becoming the place where diverse AI approaches combine in ways that none could achieve individually.

The Human Capital Imperative

But none of this matters if we continue treating AI adoption as primarily a technical challenge rather than a cognitive development opportunity. The literacy gaps that undermine our AI readiness aren't just educational failures. They’re economic emergencies.

Every percentage point improvement in workforce cognitive flexibility translates directly into more effective AI collaboration and economic output. Every investment in curiosity-driven learning compounds into strategic economic advantage.

The countries that will lead in the AI era aren't necessarily those with the biggest data centres. They're the ones with the most cognitively adventurous populations.

This means treating AI adoption as literacy development, not tool training. Teaching people to think with AI, not just use it. One approach produces compliance. The other produces capability.

The British Path Forward

Britain's choice isn't between technological sovereignty and technological dependency. It's between intellectual agency and intellectual automation.

We can accept pre-packaged cognitive frameworks designed by others, or we can insist on retaining the capacity to think differently. We can optimise our workforce for efficiency, or we can cultivate the kind of cognitive restlessness that turns constraints into opportunities.

The most profound form of sovereignty isn't controlling the infrastructure. It’s retaining the ability to use that infrastructure in ways its designers never anticipated. To take US computing power, European regulatory frameworks, Asian agile models, and British creative thinking, and synthesise something entirely new.

After all, intellectual independence has never been about isolation. It's been about maintaining the confidence to think for ourselves, regardless of whose tools we're using.

Again Orwell, only this time as a strategic guide. If those who control the present control the past, and those who control the past control the future, then our task is clear: ensure that Britain's future remains authored by British minds, even when shaped by American algorithms.

The question that will define Britain's next decade isn't whether we'll use American AI systems. It's whether we'll use them to augment British thinking, or let them automate it away.

Every company, every department, every worker now choosing how to engage with AI is making that choice.

The aggregate of those decisions will determine whether Britain remains intellectually sovereign or becomes cognitively colonised. (And let's face it, Britain knows a lot about colonising things.)

But back to the main point about Intellectual Sovereignty.

Our choice, in this, remains entirely ours.

For now.

Failing Gloriously

Where should I start?

Well, there was the day we discovered our Financial Director had stolen £2.4 million…

…money we didn't even know we’d earned

which rather tells you everything about our business acumen at the time.

Google "Hicklin Slade/Sharon Bridgewater" if you fancy a laugh at our spectacular financial naivety.

I'm reminded of my past now that a few of my advertising contemporaries are heading towards career exits.

Some through lucrative business sales. Others via the more pedestrian route of diligent career cultivation and pension optimisation.

My current situation?

Very much "none of the above."

Yet I'm constantly drawn to advertising's fast-evolving intellectual challenges.

I repeatedly gambled away security and guaranteed rewards for those delicious "what if?" moments.

What I sacrificed in monetary accumulation, I gained exponentially in experiential knowledge:

  • True, I’ve occasionally accepted roles patently unsuited to my capabilities.

  • I've invested in pitches, people, and companies that any rational investor would have avoided like the plague.

  • I've co-founded three startups that provided not only exhilarating highs but eye-watering ways to both gain and lose spectacular amounts of money.

Success rate? Patchy at best.

Satisfaction rate? Unprecedented.

Learning dividend? Immeasurable.

The beautiful irony? This has taught me more about due diligence, trust, and operational oversight than any theoretical training programme possibly could.

AI and the Comfort with Discomfort

The advertising industry now faces unprecedented AI-driven transformation.

What does success actually mean in an industry where the fundamentals are being rewritten?

Tradition says: accumulated wealth, linear career progression, secure employment. This may all prove spectacularly inadequate for navigating an algorithmic future of unknown possibilities.

My zigzag career wasn't planned, but it cultivated something invaluable: comfort with discomfort.

This isn't the tired Silicon Valley mantra of "fail fast, fail often", long since debunked as expensive posturing.

This is something different: systematic comfort with uncertainty. Navigating ambiguity without panic. Rebuilding without losing curiosity.

Here's why this matters now: AI doesn't just automate tasks; it fundamentally alters how we approach problems. Those thriving with AI aren't those with the most technical knowledge. They're those comfortable with not knowing what comes next.

My expensive education in spectacular miscalculation accidentally prepared me for exactly this moment: where strategic thinking means dancing with algorithmic uncertainty rather than controlling predictable outcomes.

It’s sort of ironic. The very career choices that looked suicidal may have been the most sophisticated preparation available for our AI-transformed industry.

Despite logic and good sense saying I should have been long gone.

Skills vs Mindset in the AI Era

At 60, I still delight in approaching each new challenge with a beginner's mindset.

It's what researchers term "deprivation sensitivity": the psychological hunger for understanding, the alluring draw of the 'why?'

My career trajectory resembles a rather haphazard game of pinball: not just from design to advertising, but from B2C to B2B, and from Creative Director to Strategist to Agency Founder to Investor to Client, etc.

Each ricochet taught me something. Curiosity consistently trumps credentials.

Recent studies reveal 81% of employees acknowledge AI fundamentally alters required workplace competencies. The World Economic Forum identifies curiosity, creative thinking, and flexibility as core skills for future workplace success.

Intriguing, how we've arrived at valuing such ancient human traits in our most technologically advanced era.

Contemporary workplace innovation increasingly stems from curiosity-driven exploration rather than process adherence. Multiple studies demonstrate that deprivation sensitivity correlates more strongly with adaptive performance than technical proficiency alone.

My zigzag career wasn't planned, but it cultivated something invaluable: comfort with discomfort. Each pivot demanded simultaneous hunger for learning and disciplined competency development.

The difference now is the urgent requirement for concurrent curiosity and capability, perpetually refreshed.

The limiting factor isn't technical constraints – it's our own ambition and intellectual appetite.

Competency without curiosity will create professional dead ends.

Now it's about cultivating a beginner's mindset as a core competency. Develop systematic comfort with uncertainty.

Crucial point: Practice intellectual humility alongside technical skill acquisition.

You are NOT an AI expert. Be honest: we barely grasp the AI workplace implications of the next six months, let alone the rest of the decade. Look up Rana Adhikari at the California Institute of Technology, who recently found some AI models designing experiments that defy human expectations, sometimes bypassing controls (article in Wired, among others).

So it's clear: the most valuable professional asset isn't knowing everything.

It's maintaining the discipline to approach everything as if you know nothing.

The challenge now is how to institutionalise intellectual curiosity within traditional competency frameworks.

The Career Opportunities of a Bot

As a teenager I was enthralled by the music of the Clash. So it wasn't out of character when I found myself unconsciously humming the words to 'Career Opportunities', a blistering track from 1977. Particularly the refrain "…career opportunities, the ones that never knock…"

The occasion? Another AI-powered knock-back for a role.

The irony is turned up to 11.

The very machines I champion are being used to systematically filter out precisely the cross-pollinating professionals who can add such value to organisations dealing with the effects of AI.

Ladder, meet bridge

The traditional career ladder assumed a stable world where skills depreciated slowly and industries remained predictable. That world is gone.

The most valuable professionals aren't climbing ladders; they're building bridges between disciplines, industries and crafts.

Clearly I'm biased, having zigzagged around pretty much every corner of advertising all my career. And yes, before you ask, I'm old enough now to realise I do get bored without a strong mental challenge before me.

Skill stacking

Post-AI research shows that professionals with diverse domain experience exhibit 43% superior performance in adaptive problem-solving scenarios. The terror many feel about "leaving their lane" represents a profound misunderstanding of value creation in machine-augmented environments.

It's like discarding your Swiss Army knife in favour of a single blade, then wondering why you can't open wine bottles or extract stones from horses' hooves anymore…

My point is: cross-domain experience generates novel solution pathways that pure specialists struggle to access.

Adaptive Resilience: Multiple pivots build intellectual muscle memory for navigation of the increasing uncertainty businesses now face.

Curiosity Expertise: Understanding how to extract maximum value from AI requires precisely the kind of cognitive flexibility that portfolio careers cultivate.

Career opportunities

As AI assumes routine cognitive tasks, human value increasingly lies in synthesis, pattern recognition across domains, and strategic ambiguity navigation. Linear careers optimised for industrial efficiency; portfolio careers optimise for algorithmic collaboration.

What unexpected skill combination defines your professional edge?

I, Strategist

"I was born to perform, it's a calling, I exist to do this" — Jarvis Cocker

At 60, recently redundant, people ask why I don't just "wind down."

Find something less stressful.

But here's the thing about being a strategist — it's not what I do, it's who I am.

Emma Perret recently wrote a brilliant piece about the need for 'life in her work', full of great quotes. I loved: "fingerprints on everything I touch as proof I was here."

She talks of strategy living in the gap between what data says and what your gut knows.

And that gut feeling, never switches off.

Even between paying jobs, I find myself studying problems, sketching solutions, seeing patterns others miss.

My brain feeds off finding routes through chaos — whether it's cultural, political, or brand challenges.

In Self-Determination Theory, Edward Deci and Richard Ryan talk of '…a certain group of people as having spontaneous tendencies to be curious and interested, to seek out challenges and to exercise and develop their skills and knowledge, even in the absence of operationally separable rewards…'

Research shows that strategist identity often builds on earlier professional experiences, becoming an extension rather than abandonment of previous identities.

After decades, it becomes woven into who you are.

How we see our work matters more than the job title. I now see strategy work as essential to my identity.

Tom Fryer, professor of mental health, wrote about 'Work, identity and health', pointing out that for most people, their job is their only significant source of personal identity.

He went on to warn: '…Without a clear sense of personal identity we are vulnerable to psychological injury, at risk of anxiety and depression, and social disengagement…'

Clearly a subject of the moment, not just for me, but the nation as a whole facing up to a new world of AI infused work.

So me being a strategist is about more than getting paid (although that bit is neat). It's essential.

As Jarvis says:

I exist to do this

Some callings don't respect retirement plans.

What drives you to keep doing what you love, even when the world suggests you shouldn't?
