The One Skill You Need to Survive a World in Rapid Flux

Across the hundreds of keynotes, board talks, and seminars I do each year, one question surfaces more than any other: what should I - or my children - be learning to stay relevant? This essay argues that the answer has been sitting inside the academic definition of intelligence since 1982, and that taking it seriously means rethinking not just individual skill development but the entire architecture of education, organisations, and human-AI collaboration.

“We are approaching a critical threshold in the history of our species. Everything is about to change.” That’s Mustafa Suleyman - co-founder of DeepMind, now CEO of Microsoft AI - writing in The Coming Wave. Sam Altman, CEO of OpenAI, believes we could reach superintelligence within “a few thousand days.” Demis Hassabis, the Nobel Prize-winning head of Google DeepMind, told the Axios AI+ Summit that AGI will be “probably the most transformative moment in human history.” Sundar Pichai has called AI “more profound than fire or electricity.” Jensen Huang predicts AI will pass virtually every human benchmark within five years.

These are not fringe voices. These are the people building this technology, and they are telling us - with unusual consensus - that the next five years may reshape human civilisation more profoundly than any period in our history.

So what do we do about it?

The question I hear most

I do hundreds of keynotes, board talks, podcasts, and seminars each year. I speak to CEOs, policymakers, educators, students, and parents across every continent. And there is one question that comes up more than any other - more than questions about regulation, more than questions about consciousness, more than questions about which AI tools to adopt. The question is this:

What should I - or my children - be learning to stay relevant?

It’s the right question. But most people aren’t ready for the answer, because it starts with an uncomfortable truth.

The uncomfortable truth

Knowledge is a commodity.

That statement would have been controversial a decade ago. Today it’s self-evident. Anything you can Google, an AI can retrieve faster, synthesise better, and explain more clearly. But here’s where it gets truly uncomfortable: intelligence itself - or at least the kind we’ve traditionally valued and credentialed - is rapidly becoming a commodity too. AI systems already match or exceed PhD-level performance across law, medicine, mathematics, and coding. When OpenAI’s models can pass the bar exam, ace medical licensing tests, and write publishable research, the premium on knowing things collapses.

Whether we like it or not, a workforce revolution is underway - and it’s not a distant hypothetical. It’s happening now.

The revolution is already here

The numbers are stark. Goldman Sachs estimates that generative AI could expose the equivalent of 300 million full-time jobs globally to automation, with up to 7% entirely replaced. The World Economic Forum’s 2025 Future of Jobs Report - surveying over 1,000 companies across 55 economies - projects 92 million jobs will be displaced by 2030. McKinsey’s November 2025 analysis found that 57% of US work hours are now technically automatable with existing technology, up from roughly 30% just two years earlier. The IMF estimates 40% of global employment is exposed to AI, rising to 60% in advanced economies.

These aren’t projections from futurists with crystal balls. These are sober analyses from the world’s most rigorous economic institutions.

And the company-level evidence is just as striking. Klarna reduced its workforce from 5,000 to 3,000 employees, with its CEO crediting AI for replacing the workload of 700 customer service agents. Amazon cut 14,000 corporate roles in late 2025. Microsoft eliminated approximately 15,000 positions, with Satya Nadella noting that 30% of its code is now AI-generated. In total, major firms announcing more than 50,000 layoffs in 2025 explicitly cited AI as a contributing factor.

Perhaps most telling: a Harvard Business School study analysing 1.4 million freelance job listings found a 21% decline in automation-prone postings within just eight months of ChatGPT’s launch. Writing jobs fell over 30%. Software development dropped more than 20%. The gig economy - often held up as the future of flexible work - is being disrupted before it has even fully matured.

A good friend of mine, Calum Chace, coined the term “Economic Singularity” to describe the point when machines can do everything humans do for money - cheaper, better, and faster. He’s written extensively, eloquently, and comprehensively on the subject, and I’d highly recommend his books, particularly The Economic Singularity and Surviving AI. Calum’s key insight is that this wave of automation is fundamentally different from all previous ones: earlier revolutions replaced our muscle power, and we had our cognitive abilities to fall back on. This time, it’s our cognitive work that’s being automated. There is no obvious fallback.

A necessary dose of humility

Now, before this descends into doom and prophecy, let me offer a necessary counterweight: I believe that anyone who thinks they can predict what the world will look like beyond the next five years probably doesn’t know what they’re talking about.

The pace of change makes medium-term prediction essentially impossible. Two years ago, nobody predicted that AI would be writing 30% of Microsoft’s codebase. Five years ago, the idea of an AI winning a Nobel Prize would have seemed absurd. The honest answer to “what will the world look like in 2035?” is: we don’t know. And we should be deeply suspicious of anyone who claims otherwise.

But this uncertainty is precisely the point. It leads us directly to the one skill that matters.

The one word hiding inside the definition of intelligence

My favourite definition of intelligence comes from Sternberg and Salter, who in 1982 defined it as “goal-directed adaptive behaviour.” I’ve used this definition in keynotes for years, and every time I do, I pause on one word: adaptive.

That word is doing all the heavy lifting.

Intelligence isn’t about what you know - it’s about how effectively you can adjust your behaviour to meet new challenges in pursuit of meaningful goals. The key word isn’t “goal-directed” (though purpose matters). The key word is adaptive.

This insight echoes through the greatest minds in history. Darwin’s entire theory of natural selection rests on the principle that survival belongs not to the strongest or the most intelligent, but to the most adaptable. As Leon C. Megginson beautifully paraphrased it: “the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.” Einstein captured a related truth when he said, “Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.” Stephen Hawking put it even more directly: “Intelligence is the ability to adapt to change.”

In a world where the environment is shifting faster than at any point in human history, adaptability isn’t a nice-to-have soft skill. It is the skill. It is, quite literally, the definition of intelligence.

The anatomy of adaptability

Adaptability isn’t a single trait. It’s a constellation of characteristics that, taken together, enable a person to navigate the unknown. Let me unpack the most important ones.

Creativity is the ability to generate novel solutions to unfamiliar problems. When the playbook no longer applies - when the rules of the game are being rewritten in real time - creativity is what allows you to write a new one. The most creative organisations I work with don’t just tolerate unconventional thinking; they structurally incentivise it.

Curiosity is the engine of learning. Curious people don’t wait to be told what to study; they follow questions wherever they lead. In a world of accelerating change, the half-life of any specific skill is shrinking. What endures is the desire to learn - the insatiable pull toward understanding how things work and why.

Critical thinking is the ability to evaluate information, identify assumptions, and reason clearly under uncertainty. In an age of AI-generated content, deepfakes, and information overload, the capacity to distinguish signal from noise is more valuable than ever. It’s not enough to have access to information; you need the judgement to know what to trust and how to interrogate it.

Resilience is the capacity to absorb setbacks and recover without losing momentum. Adaptability necessarily involves failure - you cannot navigate the unknown without stumbling. What separates those who adapt from those who stall is not whether they fall, but how quickly and thoughtfully they get back up.

Courage is the willingness to act under uncertainty. Adaptability without courage is just theoretical flexibility. The courage to make decisions with incomplete information, to challenge orthodoxy, to take calculated risks when the stakes are high - this is what turns the capacity to adapt into the act of adapting.

Navigating ambiguity is perhaps the most undervalued skill in professional life. Most of the important problems in the world don’t come with clear instructions. The ability to operate effectively when the problem itself is poorly defined - to make progress when you can’t yet see the destination - is a hallmark of adaptive intelligence.

Empathy is the ability to understand and share the perspectives of others. In a world increasingly mediated by technology, the capacity for genuine human connection becomes a superpower. Empathy enables collaboration, builds trust, and ensures that the solutions we create actually serve human needs.

Considerate thinking - the practice of weighing the broader implications of your actions - keeps adaptability grounded. Moving fast without thinking about consequences isn’t adaptability; it’s recklessness. The adaptive individual considers second- and third-order effects, weighing trade-offs with care.

Taken together, these characteristics form a toolkit for navigating a world that refuses to sit still. And crucially, they are not fixed traits. They can be developed, practised, and strengthened - if we choose to prioritise them.

The real differentiator: asking better questions

In a world where knowledge is a commodity and people have access to armies of PhD-equivalent AI agents, the key differentiator is no longer having answers. It’s knowing what questions to ask.

This is a profound shift. The person who can frame the right problem, decompose it into the right sub-questions, and orchestrate AI tools to explore solution spaces will consistently outperform the person who merely has more facts at their disposal. The ability to engineer and orchestrate this incredible cognitive resource - to direct it toward problems that matter and extract tangible value from its outputs - is becoming the most commercially valuable skill in the economy.

This connects directly to the philosophical concept of the Extended Mind, proposed by Andy Clark and David Chalmers in their landmark 1998 paper. They argued that cognitive processes don’t stop at the boundary of the skull - that when we couple with external tools in a tight, two-way interaction, those tools become genuine extensions of our minds. Clark himself, writing in Nature Communications in 2025, applied this directly to AI: we are “hybrid thinking systems defined across a rich mosaic of resources, only some of which are housed in the biological brain.”

When you use AI well - when you know how to prompt, interrogate, challenge, and build on its outputs - you’re not outsourcing your thinking. You’re extending it. You’re becoming cognitively larger.

But here’s the catch: the quality of that extension depends entirely on the breadth and depth of your existing knowledge. Learning how to harness AI as a superpower is unavoidable. And using these tools to educate yourself across a breadth of subjects - from anthropology to art history, philosophy to psychology, photography to engineering - is what enables you to ask dramatically better questions and surface dramatically better solutions. The polymath, once seen as a dilettante, becomes the ideal architect of AI-augmented intelligence.

Our education systems are testing for the wrong things

Here’s the problem: our education systems are almost entirely set up to impart knowledge and test for its retention. Exams measure what you can recall under timed conditions. Grades reflect your ability to reproduce information. Curricula are structured around subject-matter expertise in narrow domains.

In a world where AI can recall and reproduce information with superhuman speed and accuracy, this model is not merely outdated - it’s actively counterproductive. We are training people to compete with machines on the machines’ terms.

The WEF’s Future of Jobs Report found that 39% of core skills required in the job market will change by 2030, and that 63% of employers cite the skills gap as their number-one barrier to transformation. UNESCO’s 2025 report on AI and education warned against “excessive reliance on automation in assessment,” arguing that “AI must recognise the incomputable nature of human learning.” The OECD’s 2025 Skills Outlook found declining performance in foundational skills across member countries, even as the demand for creative and adaptive capabilities soars.

Our education systems need to change - fundamentally. They need to test for applied knowledge, not recalled knowledge. They need to assess and develop the adaptive characteristics I’ve outlined: creativity, critical thinking, resilience, curiosity, courage, empathy. They need to help people identify these traits in themselves and provide structured pathways to strengthen them.

But can our education systems adapt quickly enough? I’m sceptical. The institutions responsible for preparing the next generation are themselves among the least adaptive structures in society. University curricula take years to update. National examination frameworks move at glacial speed. Teacher training programmes are perpetually playing catch-up with technological reality. The gap between what education delivers and what the world demands is widening, not narrowing.

AI as the great accelerator of human development

Here’s the paradox - and the opportunity. The very technology that’s creating this urgency also offers the most powerful solution.

Benjamin Bloom demonstrated in 1984 that students receiving one-on-one tutoring outperformed 98% of conventionally taught students - a two standard deviation improvement he called the “2 Sigma Problem.” The challenge was that personalised tutoring was too expensive to scale. AI is solving that problem.
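Bloom’s figure falls straight out of the normal distribution: shift the average tutored student two standard deviations above the conventional mean, and they land at roughly the 98th percentile of the conventionally taught group. A quick sanity check of that arithmetic, as a minimal Python sketch:

```python
from statistics import NormalDist

# Bloom's "2 Sigma Problem": tutored students scored ~2 standard
# deviations above the mean of conventionally taught students.
# Under a normal distribution, the share of the conventional group
# that such a student outperforms is P(Z < 2):
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.1%}")  # ~97.7%, commonly rounded to "98% of students"
```

In other words, “outperformed 98% of conventionally taught students” is simply the cumulative probability of a two-sigma shift, rounded up.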

We can now use AI to simulate complex, real-world scenarios that test applied knowledge under realistic conditions. Imagine a graduate negotiating a simulated M&A deal with an AI counterpart that adapts its strategy in real time. A medical student diagnosing a rare condition across a sequence of AI-generated patient encounters, receiving granular feedback on every decision. A policy analyst stress-testing a proposal against AI-simulated economic models with adversarial feedback loops.

This isn’t theoretical. Early evidence from AI-assisted surgical simulation shows novice learners achieving significantly higher performance scores with reduced instructor time. Khan Academy’s AI tutor Khanmigo has produced 20% greater-than-expected learning gains in early trials. The US military is deploying AI-driven synthetic training environments that adapt in real time to trainee skill levels.

What these systems share is the ability to dramatically compress the journey from novice to expert. Where it might traditionally take a decade to move from graduate to seasoned practitioner - accumulating experience through slow, sometimes random exposure to complex situations - AI can provide structured, deliberate practice with immediate feedback at a pace and scale that was previously impossible.

Creating the tools and infrastructure to enable this kind of accelerated adaptive development will be critical - not just for individuals, but for organisations that want to remain relevant. The companies that invest in building AI-powered learning environments will develop more adaptive workforces. The ones that don’t will find themselves out-evolved.

Liquid Extended Minds

In my other writings I have explored the concept of liquid organisations - fluid, adaptive structures inspired by Zygmunt Bauman’s “liquid modernity,” where rigid hierarchies dissolve in favour of dynamic, project-based teams that form, execute, and reconfigure around emerging challenges. Combined with principles from decentralised autonomous organisations, these structures represent an organisational model purpose-built for a world in constant flux.

But I want to push this idea further. When you combine the concept of liquid organisations with the Extended Mind thesis - with the idea that AI doesn’t just assist our thinking but genuinely extends it - you arrive at something I’d call Liquid Extended Minds.

A Liquid Extended Mind is what emerges when adaptive individuals, equipped with AI as a genuine cognitive extension, operate within fluid organisational structures that can reconfigure as fast as the world changes around them. It’s not just about having AI tools. It’s not just about flexible org charts. It’s the synthesis: human adaptability, amplified by AI, expressed through structures that are themselves adaptive.

In this model, the boundaries between individual capability and collective intelligence blur. A person with deep curiosity, broad knowledge, and mastery of AI orchestration can function as a one-person research institute, a creative studio, or a strategic consultancy - and then, within a liquid organisational structure, combine with others doing the same to tackle challenges that no individual or static team could address alone.

This is, I believe, where we’re heading. Not toward a world where AI replaces human intelligence, but toward one where the most adaptive humans - those who cultivate creativity, curiosity, critical thinking, resilience, and the ability to ask the right questions - extend their minds through AI and operate within structures fluid enough to channel that extended intelligence toward the problems that matter most.

The choice ahead

The workforce revolution is not coming. It’s here. Knowledge is already a commodity. Intelligence, in its traditional credentialed form, is rapidly following. The education systems designed to produce knowledgeable, credentialed professionals are not adapting fast enough.

But adaptability - real, practised, cultivated adaptability - remains irreducibly human. No AI system embodies goal-directed adaptive behaviour in the way a creative, curious, courageous human does. The question is whether we choose to develop that capacity in ourselves, in our children, and in our organisations - or whether we cling to the illusion that knowing more will be enough.

Sternberg and Salter gave us the answer four decades ago. The word is right there in the definition.

Adaptive.

It was always the one skill that mattered. The difference now is that the stakes are infinitely higher - and the window for developing it is closing fast.