We are running an uncontrolled experiment in human cognitive development. An entire generation of young professionals is learning to think through AI – and early evidence suggests we may be undermining the very skills that make humans capable of competing with it.

A 2025 Microsoft and Carnegie Mellon study found that 40% of AI-assisted tasks involved no critical thinking, while researchers at MIT warn that we risk automating away the very experiences that build expertise.

Young professionals are being deprived of the foundational skills they need to grow, as entry-level positions that once served as intellectual apprenticeships disappear faster than education systems can adapt.

Goldman Sachs estimates that 44% of legal tasks could be automated, displacing work historically done by junior staff. Consulting firms are deploying internal AI platforms to create slides and synthesise information – duties once assigned to trainee analysts. The result is the elimination of cognitive apprenticeships before we’ve found alternatives.

This may be the sharpest blow yet to a generation already burdened by unfairness – in taxation, housing, climate, and opportunity. History makes one thing clear: when the young are sidelined and the future feels stolen, revolt follows – not always immediately, but inevitably.

Contents

  1. Outsourcing Expertise
  2. The Illusion of Competence
  3. The Great Displacement
  4. The Way Forward
  5. The Stakes

Outsourcing Expertise

Every transformative technology has displaced human capability. Socrates feared writing would weaken memory. The printing press threatened oral tradition. Calculators diminished mental arithmetic.

But AI is different. Unlike earlier tools that augmented narrow tasks, large language models interfere with the mechanics of thought itself – how we process, reason, and argue. And while past disruptions unfolded over decades, AI is scaling faster than our understanding can keep pace.

For centuries, competence was forged through productive struggle – a first draft that made no sense, the presentation that bombed, the late-night analysis that missed a glaring flaw. Beautiful minds were beaten into shape by failure. Through error, repetition, and discomfort, they earned the capacity for critical thought.

As a young diplomat in Beirut, I spent days crafting reports on Lebanese politics and the Syrian civil war. I agonised over every paragraph – especially the conclusion – where the Australian Government permitted a short personal reflection on the interviews I’d conducted.

My early dispatches read like the fevered imaginings of someone who had mistaken complexity for profundity, laced with phrases like “cultural zeitgeist” and “multi-layered sectarian dynamics”. Each gentle correction from my ambassador was a small baptism by fire. The pain was exquisite – and entirely necessary.

AI turns this hard-won apprenticeship into an illusion. Junior associates produce polished memos without grasping the ideas within them. First-year analysts submit reports stripped of judgment. What once required effort, failure, and growth now arrives on command – slick and unearned. This creates a vicious cycle: as skills atrophy, dependence deepens.

The Illusion of Competence

Ask ChatGPT to draft a corporate strategy and you’ll get the blended aftertaste of every Big Four PowerPoint: “leveraging synergistic opportunities to drive transformational outcomes while empowering stakeholders to thrive across the value chain.”

This kind of stultifying corporatese – a verbal black hole of guff – creates what researchers call the illusion of competence. For young professionals still learning to separate nonsense from insight, it’s especially seductive.

What Orwell once diagnosed in political writing – language assembled for effect, not meaning – has now been industrialised. “The attraction”, he wrote, “is that it is easy.” When you say nothing clearly, it’s hard to be wrong.

The Great Displacement

AI isn’t just dulling minds – it’s displacing them. Young professionals are being squeezed from both sides: losing the critical thinking skills they need to compete against AI, while watching AI replace the very roles designed to teach them those skills.

Law firms are replacing first-year associates with document review algorithms. Consultants use AI to generate the kind of analysis once assigned to new hires. Investment banks are automating the pitch books that junior analysts once spent late nights perfecting.

From any single organisation’s perspective, these moves are rational. A top-tier law graduate earns up to $115,000 a year; a large language model subscription costs $30 a month. For routine tasks, the choice is obvious.

But this creates what economists call a coordination problem. Each firm acting rationally produces a collectively irrational outcome – the erosion of the training pipeline that develops future industry leaders. A 2025 forecast by industry leaders, including Anthropic’s CEO, predicts that up to 50% of entry-level white-collar jobs could vanish within five years.

The Way Forward

This isn’t a call to abandon AI. These tools are astonishing – I’m using one right now to probe my arguments, stress-test assumptions, and scour the internet for supporting evidence. But I do so with the ballast of 20 years spent thinking and writing unaided, shaped by philosophy and literature, diplomacy and law, and five humbling years of Arabic.

That scaffolding took decades to build – one verbose essay, one tedious ministerial brief, one patient mentor at a time. Most young professionals don’t yet have that foundation. Their challenge isn’t mastering AI – it’s the slow, stubborn work of building a mind AI can’t replace.

Three principles can help:

  • Make AI your adversary, not your scribe. Don’t ask it to write – ask it to argue. As Wharton’s Ethan Mollick puts it, “AI is a bicycle for the mind – but you still have to pedal.” He’s right, but the road matters too. Don’t let the machine steer.

  • Do the hard thinking before you outsource it. Judgment isn’t a plug-in. It comes from effort – figuring things out, getting it wrong, and chasing the dopamine hit of original insight. Use AI to test your ideas, not to generate them whole.

  • Save the best of you for what matters most. Strategic decisions, creative leaps, ethical trade-offs – these should begin with human intelligence. The struggle isn’t a bug – it’s the feature that builds intellectual muscle.

The Stakes

We face a choice between two futures.

In one, we outsource too much, producing a generation fluent in prompting but untrained in critical thought. Minds that once grew through error, effort, and reflection are left underdeveloped – and unemployed.

In the other, we act. We reimagine education and work as crucibles of growth. We use AI not to bypass thought, but to deepen it. We restore the value of struggle – not out of nostalgia, but because it sharpens what machines can’t replicate: discernment, imagination, vision, hope, the long fuse of thought.

The generation entering the workforce today may be the last to build deep analytical skill before AI becomes ubiquitous. If we choose wisely, they could become the most intellectually capable cohort in human history. If we don’t, we risk producing digital natives who can’t think natively at all.

The window is narrowing – but it remains open.

We’d love to hear your thoughts – email Luke at luke@bwdstrategic.com or message him on LinkedIn if you’d like to continue the conversation.

About the Author

Luke Heilbuth is CEO of sustainability strategy consultancy BWD Strategic and a former Australian diplomat.