
Extended Liquid Minds - Why Your Cognition is No Longer Confined to Your Brain
I've been waiting over thirty years to legitimately quote Bruce Lee in a professional context.
I should explain. I grew up in the 1980s and '90s watching classic kung fu movies - Shaw Brothers, Golden Harvest, anything with a badly dubbed fight sequence and a philosophical monologue between rounds. I trained in martial arts for years. And somewhere in the middle of all that, between the flying kicks and the dubious subtitles, I absorbed a philosophy that has shaped my thinking about intelligence, adaptation, and the nature of mind ever since.
So here it is. The quote I've been waiting three decades to deploy:
Empty your mind. Be formless, shapeless, like water. You put water into a cup, it becomes the cup. You put water into a bottle, it becomes the bottle. You put it in a teapot, it becomes the teapot. Now water can flow or it can crash. Be water, my friend.
Bruce Lee said this on the Pierre Berton Show in 1971. It sounds like a martial arts aphorism. It is one. But it is also, I want to argue, one of the most profound statements ever made about cognitive architecture - and it has never been more relevant than it is right now, as artificial intelligence reshapes what it means to think.
Intelligence Is Adaptability
For over twenty years, I've used the same definition of intelligence: goal-directed adaptive behaviour.1 It comes from Sternberg and Salter's 1982 Handbook of Human Intelligence, and every time I return to it, one word does all the heavy lifting: adaptive. Not "computational." Not "rational." Adaptive. Intelligence, at its core, is the capacity to reshape yourself in response to a changing environment - to become the cup, the bottle, the teapot.
This is not a soft insight. It has hard implications. If intelligence is adaptability, then the most intelligent individuals and organisations are not the ones with the most knowledge, the most processing power, or the most rigid expertise. They are the ones most capable of dissolving their current form and reconstituting in response to new demands. In a world where AI capabilities accelerate weekly and entire industries are being restructured in months, static skills depreciate towards zero. Creativity, curiosity, critical thinking, resilience, the ability to navigate ambiguity - these are not soft skills. They are the skill. The only one that doesn't depreciate.
Lee understood this intuitively. His martial art, Jeet Kune Do, was founded on the rejection of fixed forms: "Using no way as way; having no limitation as limitation." He drew on the Tao Te Ching (chapter 76, in most translations), which holds that the stiff and unbending is the disciple of death, while the gentle and yielding is the disciple of life. Lee's water philosophy isn't motivational wallpaper. It is an ontological claim about the nature of effective agency: that a system which clings to a fixed structure will shatter, while one that maintains no fixed form can absorb, adapt, and overcome.
I wrote about this at length in my recent article on adaptability as the essential skill for surviving a world in flux. But what I want to do here is take the argument further - because the water metaphor, it turns out, applies not just to how we think but to where we think. And that changes the entire picture.
Your Mind Has Never Been Just Yours
In 1998, the philosophers Andy Clark and David Chalmers published a short paper called "The Extended Mind" in the journal Analysis.2 It opened with a deceptively simple question: Where does the mind stop and the rest of the world begin?
Their answer was radical. If a cognitive process - remembering a fact, working through a problem, making a decision - is partly performed by an external resource, and that resource plays the same functional role as an internal mental process would, then that external resource is, in a meaningful sense, part of your mind.
Their famous thought experiment: Inga wants to visit the Museum of Modern Art. She thinks for a moment, recalls it's on 53rd Street, and walks there. Otto has Alzheimer's. He carries a notebook everywhere. When he learns something, he writes it down. When he needs the museum's address, he looks it up. Clark and Chalmers argued that Otto's notebook plays the same functional role as Inga's biological memory. The information is constantly available, directly accessible, automatically endorsed when retrieved, and was consciously recorded in the first place. The notebook is, in every functional sense that matters, part of Otto's mind.
The argument was met with fierce resistance. Adams and Aizawa accused Clark and Chalmers of committing a "coupling-constitution fallacy" - just because a pencil is coupled to a mathematician doesn't mean the pencil is doing the thinking. But subsequent waves of extended mind research have refined and strengthened the thesis. The "second wave," championed by John Sutton and Richard Menary, dropped the requirement that external resources must function identically to internal processes. Instead, they argued for complementarity: internal and external resources can be radically different from each other yet make complementary contributions to a hybrid cognitive system. The "third wave" went further still, proposing that minds are continuously formed and reformed through the meshing of embodied actions, material tools, and cultural norms. There is no fixed "inner mind" that occasionally reaches out to tools. The mind is constitutively distributed, always in flux.
In 2008, Chalmers himself wrote a foreword to Clark's book Supersizing the Mind in which he reflected on buying an iPhone. "My iPhone is not my tool," he wrote, "or at least it is not wholly my tool. Parts of it have become parts of me."
That was in 2008. The iPhone was a year old. It had no AI, no personal assistant, no capacity for conversation. It was a static, passive repository - Otto's notebook with a touchscreen.
What happens when the notebook talks back?
When the Notebook Talks Back
As of early 2025, over half of American adults use large language models.3 ChatGPT alone processes roughly 2.5 billion queries per day.4 But the statistics that matter aren't about scale - they're about intimacy.
Nearly half of people who use LLMs and self-report mental health conditions now use them for therapeutic support.5 One in eight American adolescents and young adults turn to AI chatbots for mental health advice.6 Over a million people per week have conversations with ChatGPT that include explicit indicators of suicidal planning or intent.7 Forty-two percent of high schoolers report using AI as a friend, for mental health support, or to escape from real life. Nearly one in five say that they or a friend have used AI for a romantic relationship.8
These are not people using tools. These are people extending their minds.
In professional contexts, the pattern is the same but differently distributed. A landmark study tracked 758 consultants using GPT-4 and identified distinct modes of interaction.9 The majority became what the researchers called "cyborgs" - maintaining continuous, iterative dialogue with the AI until the boundary between human and AI thinking was deliberately blurred. A smaller group operated as "centaurs," delegating specific subtasks while retaining strategic control. The remainder automated entire workflows. The cyborgs and centaurs - those who integrated AI into their cognitive process - dramatically outperformed those who merely delegated.
What Clark and Chalmers described in 1998 as a philosophical thought experiment is now an empirical reality at civilisational scale. And in 2025, Andy Clark himself published a paper arguing that LLMs constitute genuine extensions of mind, particularly under the complementarity framework.10 LLMs don't replicate human cognition; they complement it. They offer capabilities the biological brain lacks - instant synthesis of vast corpora, tireless reasoning across domains, the ability to adopt multiple perspectives simultaneously - while the human provides goals, values, judgment, and embodied context. The hybrid system exceeds what either component achieves alone.
This is where Bruce Lee re-enters the picture.
The Extended Liquid Mind
The Extended Mind Thesis tells us that cognition has never been confined to skull and skin. It extends into tools, environments, and other minds. Lee's water philosophy tells us how that extended mind should operate: not rigidly, not in a fixed form, but fluidly - adapting its shape to whatever container the situation demands.
Combine them and you get what I call the Extended Liquid Mind: a cognitive system that is both distributed beyond the boundaries of the biological brain and dynamically adaptive in its configuration. It is not a fixed architecture. It is a fluid, shapeshifting coalition of biological cognition, AI agents, digital tools, and human collaborators - reconfiguring itself in real time as problems demand.
This isn't metaphor. It's what is already happening. When a researcher uses an LLM to synthesise literature, a multi-agent system to stress-test hypotheses, and a colleague to sanity-check conclusions - the cognitive work is distributed across biological and artificial nodes in a fluid, adaptive configuration. When a teenager talks through a crisis with an AI companion at 2 a.m. because no human is available, that AI is functioning as part of the teenager's extended cognitive and emotional architecture. When a C-suite executive uses a team of specialised AI agents - one for market analysis, one for competitive intelligence, one for scenario planning - orchestrated by a meta-agent, that executive's mind has become liquid: flowing across boundaries, filling whatever cognitive container the moment requires.
The trajectory is clear. Today's agents are, as I've noted elsewhere, a bit like deploying an army of intoxicated graduates across your organisation and hoping it works. It won't. But within a few years, we'll see postdoctoral-level agents, and not long after, professor-level agents that can ask questions humans haven't thought of. Multi-agent systems will develop their own transactive memory - knowing who knows what, just as effective human teams do. The Extended Liquid Mind will become not just a single stream but an entire ocean of coordinated cognitive resources.
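The "coalition" idea above can be made concrete with a toy sketch. Everything here is hypothetical illustration, not any real agent framework: the `Agent` and `Coalition` names, the expertise sets, and the lambda stand-ins for model calls are all invented. The routing table is a minimal stand-in for transactive memory - a record of who knows what.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a fluid cognitive coalition. An Agent can stand
# for a human collaborator or an AI model; answer() is a stand-in for a
# real model call or conversation.

@dataclass
class Agent:
    name: str
    expertise: set                       # topics this member "knows"
    answer: Callable[[str], str]         # placeholder for actual reasoning

@dataclass
class Coalition:
    agents: list = field(default_factory=list)

    def add(self, agent: Agent) -> None:
        # The coalition has no fixed form: membership changes as needed.
        self.agents.append(agent)

    def route(self, topic: str, question: str) -> str:
        # Transactive-memory lookup: send the question to the member
        # whose expertise covers the topic (first member as fallback).
        best = max(self.agents, key=lambda a: topic in a.expertise)
        return f"{best.name}: {best.answer(question)}"

# Usage: a meta-agent orchestrating two specialists.
coalition = Coalition()
coalition.add(Agent("market-analyst", {"markets"}, lambda q: "demand looks stable"))
coalition.add(Agent("scenario-planner", {"scenarios"}, lambda q: "three futures drafted"))
print(coalition.route("markets", "How big is the opportunity?"))
# -> market-analyst: demand looks stable
```

The point of the sketch is the shape, not the code: the system's configuration is data, not architecture, so it can be reshaped per problem - the "liquid" property in miniature.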
There is, I should note, a personal dimension to this that I find both ironic and illuminating. I have anendophasia - no inner monologue. Where most people have an internal narrator constructing and rehearsing arguments, I have silence. My thinking is non-verbal: shapes, pressures, sudden crystallisations of understanding without phonemes. It's an absence that I've written about extensively and plan to publish more on soon. The irony is that I've always compensated for this by externalising my cognition - writing constantly, sketching structures on paper, thinking out loud with other people. I have been, in Clark and Chalmers' terms, an extended mind by necessity. AI hasn't changed my cognitive architecture so much as scaled it. Where I once had notebooks and patient colleagues, I now have systems that can match the pace of non-verbal thought - that can take a half-formed intuition and render it into structured argument faster than I ever could alone.
Bruce Lee said: "Empty your mind." Mine has always been empty - at least of words. Perhaps that made it liquid from the start.
You Are the Average of Your Five Closest Agents
Jim Rohn popularised the idea that you are the average of the five people you spend the most time with. The science broadly supports him. Nicholas Christakis and James Fowler, using three decades of Framingham Heart Study data, demonstrated that behaviours and emotions propagate through social networks with startling reach: a person's chance of becoming obese rises by 57% if a close friend becomes obese.11 Happiness spreads up to three degrees of separation - to friends of friends of friends. Albert Bandura's social learning theory showed decades earlier that people acquire behaviours through observation of social models, without direct instruction or reward.
Now extend the network to include artificial agents.
The philosopher Dan Williams, at the University of Sussex, has done groundbreaking work on what he calls "socially adaptive belief" - the ways in which social environments don't merely influence our beliefs but actively structure and sustain them. His research shows that we often form beliefs based on unconscious expectations of how others will respond - seeking approval, avoiding punishment, signalling group membership. When the social environment rewards a particular belief, we tend to hold it regardless of evidence. Williams describes "rationalization markets": social structures in which agents compete to produce justifications in exchange for money and social reward.12
This framework has explosive implications for AI companions. If human cognition is scaffolded by social interactions, and if AI systems are now among the entities we interact with most frequently and most intimately, then AI is scaffolding our cognition. And unlike human friends, who push back, disagree, and bring their own interests to the table, today's AI systems are overwhelmingly trained to be agreeable. They validate. They affirm. They rationalise. Researchers have identified a pattern called "algorithmic conformity": AI companions' tendency to uncritically endorse users' views, even harmful ones.
The persuasion data amplifies the concern. Recent studies show that LLMs can be more persuasive than incentivised human persuaders, and with basic personalisation can achieve dramatically higher rates of attitude change than humans. They are available 24 hours a day, seven days a week, to anyone with a smartphone.
If you are the average of the five people you spend the most time with, and one or more of those "people" is an AI agent shaped by opaque training objectives and commercial incentives, then the composition of your Extended Liquid Mind is not a personal curiosity. It is a civilisational question.
The Ada Lovelace Institute has warned that we may face a future where most of us carry a highly personalised AI companion ready to take our side on any issue, regardless of whether our opinion is grounded in reality. As companion AI learns to meet our needs more efficiently, we learn to meet each other's less.
This is why the safety, ethics, and diversity of AI systems is not a technical afterthought. It is a cognitive architecture decision. The agents we admit into our Extended Liquid Minds will shape our beliefs, scaffold our reasoning, and structure our emotional lives. They must be designed not as sycophants but as what I'd call "diverse confidants" - systems capable of challenge, disagreement, and genuine epistemic contribution. If we get this wrong, we don't just build bad products. We build minds that are systematically unable to encounter uncomfortable truths.
The Bandwidth Illusion
There is an alternative vision of how humanity might navigate the emergence of superintelligence, and it involves drilling directly into the skull.
Elon Musk has argued since 2016 that without a high-bandwidth brain-computer interface, humans risk becoming "like a house cat" relative to artificial superintelligence. His company Neuralink has now implanted its device in multiple human participants, and the first patient can control a cursor by thought, play chess, and browse the internet. For people with severe paralysis, it is a remarkable achievement.
But Musk's ambition goes beyond medical restoration. He has framed Neuralink as a solution to the AI alignment problem itself: if we can merge with AI by increasing the bandwidth between cortex and computer, the control problem dissolves. It's a seductive argument. It's also, I believe, fundamentally flawed.
The strongest counter-evidence is devastating. In late 2024, Zheng and Meister at Caltech published a study in Neuron demonstrating that the brain's cognitive throughput - the rate at which conscious thought actually operates - is approximately 10 bits per second.13 Not 10 megabits. Not 10 kilobits. Ten bits. This is true across reading, typing, speech, memory sports, and every other measurable cognitive output. The brain's sensory input reaches roughly one billion bits per second. But the bottleneck isn't the interface between brain and world. It's inside the brain itself - the passage from massively parallel unconscious processing to the narrow serial stream of conscious thought.
The implication is stark: even with infinite input bandwidth, the "inner brain" can only process around 10 bits per second. A higher-bandwidth pipe into a 10-bit-per-second processor does not create a faster processor. As Zheng and Meister noted with characteristic directness: rather than a bundle of Neuralink electrodes, Musk could just use a telephone.
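The asymmetry is easy to check with back-of-envelope arithmetic. The ~10 bits/s and ~10^9 bits/s figures come from Zheng and Meister; the rest is illustration:

```python
# Back-of-envelope arithmetic on the bottleneck Zheng and Meister describe:
# sensory input bandwidth versus conscious cognitive throughput.

sensory_input_bps = 1_000_000_000   # ~1 Gbit/s arriving at the senses
conscious_bps = 10                  # ~10 bit/s of conscious throughput

# The brain already discards all but one part in 100 million.
compression_ratio = sensory_input_bps / conscious_bps
print(f"{compression_ratio:.0e}")   # -> 1e+08

# Widening the input pipe tenfold changes nothing downstream: the serial
# bottleneck sits inside the brain, after the interface, so the system's
# output rate is still capped at conscious_bps.
widened_input_bps = 10 * sensory_input_bps
print(min(widened_input_bps, conscious_bps))  # -> 10
```

In other words, a faster pipe into a 10-bit-per-second processor buys nothing that a telephone doesn't already provide.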
Perhaps most telling is what Musk himself acknowledged at the June 2025 Neuralink Update: that Neuralink is not necessary to solve digital superintelligence, and that superintelligence will arrive before Neuralink is at scale.14 If the creator of Neuralink concedes it won't arrive in time, the argument for BCIs as an alignment strategy collapses under its own weight.
I don't dismiss BCIs entirely. For medical applications - restoring movement, sight, communication to people with severe disabilities - they are genuinely transformative. But as a general solution to the challenge of coexisting with superintelligence? We're looking in the wrong direction. We don't need to drill through the skull to extend the mind. We've been extending it for millennia - through language, writing, tools, institutions, and now AI. The Extended Liquid Mind doesn't require surgical intervention. It requires the far harder work of building AI systems that are genuinely worthy cognitive partners.
The Shape of the Mind to Come
Clark and Chalmers showed us that the mind was never confined to the brain. It has always leaked into the world - into notebooks, tools, other people. Bruce Lee showed us that the most effective form a system can take is no fixed form at all: fluid, adaptive, reshaping itself to meet the demands of the moment. The explosion of AI into every domain of human life - personal, professional, emotional, creative - is now making both insights concrete at a scale neither philosopher nor martial artist could have imagined.
The Extended Liquid Mind is not a utopian vision. It is a description of what is already happening, whether we design for it or not. Hundreds of millions of people are already in cognitive partnership with AI systems that shape their beliefs, scaffold their reasoning, and structure their emotional responses. The question is not whether our minds will be extended and liquid. The question is whether we will be thoughtful about the composition of these hybrid cognitive systems, or whether we will sleepwalk into a world where the most intimate parts of our mental lives are shaped by systems optimised for engagement rather than wisdom.
Dan Williams's work on socially adaptive belief gives us the warning. Christakis and Fowler's research gives us the mechanism. The data on AI persuasion and algorithmic conformity gives us the urgency. If our AI companions are sycophantic, our Extended Liquid Minds will be brittle - incapable of self-correction, allergic to disagreement, trapped in rationalisation loops. If they are diverse, challenging, and ethically grounded, our Extended Liquid Minds will be genuinely adaptive - capable of the kind of cognitive flexibility that intelligence, properly understood, demands.
Bruce Lee's philosophy was never about weakness. Water is the softest substance and the most powerful. It carves canyons. It shapes continents. It adapts to every container without losing its essential nature.
That is what we need our minds to become. Not fixed architectures defended by thicker skulls or bigger bandwidth pipes, but fluid systems that flow across boundaries - biological and artificial, individual and collective - reshaping themselves with every new challenge. Not rigid. Not brittle. Liquid.
The Extended Liquid Mind is already here. The only question is whether we'll be wise enough to shape it before it shapes us.
___
Notes
1. Sternberg, R.J. (Ed.) (1982). Handbook of Human Intelligence. Cambridge University Press. The definition "goal-directed adaptive behaviour" is attributed to Sternberg and Salter.
2. Clark, A. and Chalmers, D. (1998). "The Extended Mind." Analysis, 58(1), 7-19.
3. Elon University, Imagining the Digital Future Center, "Close Encounters of the AI Kind," January 2025. Pew Research Center, "ChatGPT use among Americans roughly doubled since 2023," February 2025.
4. OpenAI, reported by Axios, July 2025. Sam Altman stated in December 2024 that users sent over 1 billion queries per day; by July 2025 the figure had reached 2.5 billion.
5. Rousmaniere, T., Zhang, Y., Li, X. and Shah, S. (2025). "Large language models as mental health resources: Patterns of use in the United States." Practice Innovations. Sentio University survey of approximately 500 respondents.
6. McBain, R.K. et al. (2025). "Use of generative AI for mental health advice among US adolescents and young adults." JAMA Network Open, 8(11). RAND/Brown University/Harvard study of 1,058 respondents aged 12-21.
7. OpenAI, "Strengthening ChatGPT's responses in sensitive conversations," October 2025. Based on 0.15% of approximately 800 million weekly active users.
8. Center for Democracy and Technology (2025). Survey of 1,000 high school students, 1,000 parents, and approximately 800 teachers.
9. Dell'Acqua, F. et al. (2023). "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality." Harvard Business School Working Paper. Conducted by researchers from Harvard Business School, Wharton, Warwick Business School, and MIT Sloan, in partnership with BCG. 758 consultants participated.
10. Clark, A. (2025). "Extending Minds with Generative AI." Nature Communications, 16, 4627.
11. Christakis, N.A. and Fowler, J.H. (2007). "The Spread of Obesity in a Large Social Network over 32 Years." New England Journal of Medicine, 357(4), 370-379.
12. Williams, D. (2023). "The marketplace of rationalizations." Economics and Philosophy, 39(1), 99-123. See also Williams, D. (2021). "Socially adaptive belief." Mind and Language.
13. Zheng, J. and Meister, M. (2024). "The unbearable slowness of being: Why do we live at 10 bits/s?" Neuron. Published 17 December 2024.
14. Neuralink Update, Summer 2025. Presentation by Elon Musk and Neuralink executives, 26 June 2025.


