AI isn’t coming for your job. It’s coming for your mind

Perhaps in the future the people who thrive won’t be those who use AI most, but those who can still think without it.

I manage Scottish Mortgage, one of the world’s largest growth investment trusts. My job is to find the companies that will reshape the economy over the next decade and beyond. We hold stakes in many of the businesses building AI today. This paper is not about what AI will do for those companies. It is about what it will do to the people who use their products, myself included.

Two hundred years ago, only 12 per cent of the world’s adults could read. Today, that figure stands at 87 per cent. What happened in between was not just an education story. It was a biological one.

As billions of humans acquired literacy, their brains literally rewired. The connection between the hemispheres thickened. A region that had evolved for recognising faces was repurposed to recognise letters. Entirely new neural pathways activated in response to spoken language.1 No gene had mutated. No evolutionary pressure in the traditional Darwinian sense was at work. A purely cultural practice, making marks on surfaces and training people to decode them, had reached inside the skull and reorganised the organ that makes us human.

This is not an exception but the rule. Across millennia, cultural technologies from cooking to markets to kinship structures have systematically reshaped human physiology and psychology in ways that genetics alone cannot explain.2 We are now deploying a cultural technology that may be more pervasive than literacy and more transformative than markets, and it is spreading at unprecedented speed. AI is not simply a productivity tool or an economic disruptor. It is the next great rewiring, and it has already begun.

Both biological and cultural evolution depend on the same three forces: variation, transmission and selection.

The most transformative cultural forces in history didn’t simply participate in this process. They hijacked it.

The Catholic Church didn’t just offer a set of beliefs and wait for people to adopt them. It reshaped all three forces at once. It controlled variation by defining which ideas were orthodox and which were heresy, narrowing the range of acceptable thought across an entire continent. It dominated transmission through a near-monopoly on literacy, education and the pulpit, becoming the primary channel through which knowledge and values reached ordinary people for more than a thousand years. And, as Joseph Henrich argued in The Weirdest People in the World, it rewired selection by systematically dismantling the kinship structures that had governed human social life for millennia, banning cousin marriage, polygyny and arranged marriages, and replacing extended kin networks with the nuclear family and voluntary associations like parishes and guilds.

The result was not just a change in what people believed but a change in how they thought and, remarkably, a change in their biology. The shift from polygyny to monogamy alone altered male hormone profiles across entire populations. Populations exposed to centuries of Church-enforced outbreeding show psychological profiles that are measurably different from those that retained intensive kinship structures. The Church didn’t just create a new culture. It created a new kind of mind. It succeeded not because its ideas were self-evidently superior, but because it seized control of the infrastructure through which all ideas spread.

AI is now doing something analogous, but at a speed and scale the Church could never have imagined.3

It supercharges variation. A PhD scientist in traditional drug discovery might spend months characterising a single molecular compound. Scientists now use AI to analyse thousands of plant molecules simultaneously, making structural predictions in half a second that would take weeks using conventional techniques. AI doesn’t just produce more variation. It produces ideas humans would never have reached.

It reshapes transmission in a way that has no real historical precedent. When a child asks ChatGPT to explain why the sky is blue, they are learning from a single model trained on the accumulated text of human civilisation, not from any individual human. That model becomes a cultural teacher to hundreds of millions of people simultaneously, transmitting a substantially more centralised and convergent body of knowledge, values and reasoning patterns than any human institution has ever achieved. The outputs vary by prompt, language and context, but the centralisation is staggering – a handful of model providers now mediate an enormous share of the world’s question-answering.

And it rewires selection. In traditional cultural evolution, selection was a distributed, messy and largely organic process. Ideas spread because communities found them useful, because prestigious individuals adopted them or because institutions enforced them. When a medieval guild decided which techniques to preserve, or a community of scholars debated which ideas deserved attention, selection was at least loosely coupled to the practical value of the knowledge in question.

Today, recommendation algorithms have become the dominant selection mechanism for cultural content. They determine which news stories reach millions and which disappear, which musical artists find audiences and which languish in obscurity, which political arguments gain traction and which are suppressed. These algorithms do not select for truth, usefulness or cultural richness. They select for engagement. The result is a selection environment that favours certain kinds of cultural traits over others, not because those traits are adaptive in any meaningful sense, but because they happen to align with the metrics that drive advertising revenue.

Algorithmic selection has almost entirely decoupled cultural fitness from human judgement. As an investor, I struggled to understand why Elon Musk paid $44bn for Twitter. Through the lens of cultural evolution, the logic becomes clearer. He wasn’t buying a social media company. He was buying a selection mechanism – the power to shape which ideas, narratives and values get amplified to hundreds of millions of people and which get buried.

Michael Muthukrishna, Henrich’s former student and collaborator at the London School of Economics, has pushed this analysis further in A Theory of Everyone. He argues that humanity has historically learned at three speeds. Genetic evolution is the slowest, rewarding and punishing over hundreds or thousands of generations. Cultural evolution is faster, accumulating knowledge, norms and technologies over hundreds or thousands of years. Individual learning is the fastest, a single lifetime of trial and error.

Muthukrishna argues that AI represents a fourth system that parses the entire human cultural corpus to discover patterns that neither cultural evolution nor individual intelligence would find. AlphaGo’s defeat of the world’s best Go players in 2016-2017 illustrates the dynamic. The AI discovered strategies that centuries of human play had never produced. Then human players studied those strategies and incorporated them into their own play, measurably improving the quality of their decision-making. The machine became a cultural model feeding new variation back into the human collective brain. The same dynamic is emerging in drug discovery, materials science and protein folding.

The question is not whether AI changes what people do, but whether it changes what people become. The evidence suggests it already has.

Your mind is changing already

Researchers at MIT tracked participants’ brain activity over four months as they wrote essays under three conditions: unaided, with search engines or with AI assistants. The results were stark. AI users showed the weakest connectivity of all three groups, indicating they were barely engaged. By the third session, they had essentially outsourced the entire writing process. When later asked to write without AI, they remembered little of their own earlier essays and showed weaker brain activity associated with deep memory encoding. Most strikingly, 83 per cent of the AI users couldn’t provide a single correct quote from essays they had written just minutes earlier. The effort that builds durable learning had been bypassed entirely.

A complementary study in npj Artificial Intelligence offered a crucial distinction. AI gives you results, but those results only become a real understanding when you actively interpret and judge them. Participants who treated AI as a starting point for their own thinking retained and even improved their cognitive performance over time. Those who accepted AI outputs passively showed a measurable decline. The difference was not in how much AI people used, but in how they used it. Skip the effortful step of making sense of what the AI gives you, and the brain’s capacity to learn weakens. Engage actively, and it can be sustained or even enhanced.

I have noticed this in myself. I used to navigate around Edinburgh without thinking twice. Since I started plugging every journey into Google Maps to avoid traffic, I have found that my ability to find my own way has quietly deteriorated. The skill didn’t disappear overnight. It faded through disuse, one delegated decision at a time. I would be surprised if that part of my brain hasn’t physically changed.

The trade you don’t notice

Literacy repurposed neural circuits to create new capabilities at the cost of others. People who learn to read gain the ability to decode written language but show reduced face-recognition capacity in the left hemisphere. AI appears to be making a different trade. It is redirecting cognitive circuits away from deep processing, memory consolidation and effortful reasoning, and towards a new suite of cognitive skills: delegation, verification and interface management.

These are not trivial skills. A lawyer using AI to draft a contract must shift from drafting to judgment. Does this clause actually say what it needs to? Has the AI hallucinated a legal precedent? A researcher must learn to decompose complex problems into components that can be handed off to AI, then synthesise the outputs into coherent conclusions. These are forms of managing intelligence rather than performing it directly, a kind of cognitive orchestration that has no real precedent in pre-AI work.

The concern is not that these new capabilities are worthless but that they may come at the cost of the foundational skills they depend on. An engineer who has never built a model from scratch may not recognise when an AI’s output is technically flawless but structurally unsound. A scientist who has never struggled through a statistical analysis manually may not spot results that are mathematically correct but conceptually meaningless. Verification without prior mastery risks becoming a superficial check rather than a genuine evaluation. Just as literacy gave us something new by repurposing something old, AI is reshaping the brain’s allocation of cognitive resources. The trade may be worth making. But right now, most people are making it without realising there is a trade at all.

Better outputs, weaker minds

The central paradox is this: AI reliably improves immediate task performance while degrading the underlying human capabilities that produce that performance. You get better results today, but become less capable tomorrow.

In one trial, students who used ChatGPT to study scored significantly lower on surprise retention tests 45 days later than those who learned without it. They performed well in the moment but retained less of what they’d covered. A six-month longitudinal study found something even more concerning. As participants used AI more frequently, their actual performance steadily declined, even as their confidence in their own abilities grew. By the end of the study, the gap between how well people thought they were doing and how well they were actually doing had widened to nearly 35 percentage points. They were getting worse and feeling better about it.

The mechanism is simple. AI removes the productive struggle, the ‘desirable difficulties’ that drive durable learning and skill consolidation. It feels like help. It functions like a shortcut past the work that builds competence.

Recent neuroscience suggests the cost may go deeper than learning. Research on the anterior mid-cingulate cortex (aMCC) has identified a brain region strongly associated with persistence, effortful self-regulation and what might simply be called the will to live. The aMCC is smaller in people with obesity and grows when they diet. It is larger in athletes. It is especially large in people who see themselves as challenged and overcome that challenge, and in people who live exceptionally long, it maintains its size. The critical finding is that the aMCC appears to respond not to effort in general, but specifically to tasks that generate friction, frustration and the desire to quit.

Neuroscientist Andrew Huberman chose to highlight this research on his podcast in conversation with David Goggins, the former Navy SEAL and ultramarathon runner who built an entire philosophy around seeking out suffering, because Goggins is the living embodiment of the principle. He transformed himself from an obese, directionless young man into one of the most physically and mentally resilient people alive, not through talent or enjoyment but through the deliberate, repeated choice to do what he least wanted to face. The aMCC research gave his life’s work a neuroanatomical explanation. The friction was not incidental to his transformation; it was the mechanism.

Every time AI eliminates the friction from an effortful task, it may be removing precisely the stimulus that builds the neural infrastructure of persistence and self-regulation. AI’s removal of cognitive struggle is not merely a learning problem. It is potentially a problem of brain development itself.

The confidence trap

The same pattern shows up in experienced doctors, not just students. A landmark 2025 study in The Lancet Gastroenterology & Hepatology tracked 19 experienced endoscopists across more than 1,400 colonoscopies. After a period of working with AI assistance, their detection rate for a key indicator dropped by 21 per cent, even after they stopped using the AI. The cognitive shortcuts learned during AI-assisted work had carried over and compromised their unaided performance. This is the first real-world clinical evidence that AI exposure can degrade expert judgement even after the AI is switched off.

There is a deeper problem beyond skill erosion. People tend to trust AI even when they shouldn’t. Psychologists call this automation bias, the tendency to defer to a computer-generated answer over your own judgement, even when your judgement is right. When AI makes confident but incorrect predictions, human performance collapses. A systematic review of 35 studies found that in experimental conditions where AI provided incorrect predictions, radiologist accuracy fell from roughly 80 per cent to as low as 20 per cent for less experienced practitioners, a dramatic illustration of how overreliance on a confident but wrong system can override professional judgement. AI literacy was not sufficient to protect against this effect. Knowing the tool is fallible did not reliably prevent overreliance.

The pattern is not confined to medicine. In 2023, a New York lawyer submitted a legal brief containing six entirely fabricated case citations generated by ChatGPT. When the judge flagged them, the lawyer didn’t check a legal database. He asked ChatGPT whether the cases were real. It assured him they were. He was fined $5,000. Since then, more than 600 similar cases have been documented in US courts alone.

A 2026 study published in Computers in Human Behavior found that AI breaks the Dunning-Kruger effect. Psychologists have long observed that unskilled people tend to overestimate their abilities while experts tend to be more measured in their self-assessment. It is an imperfect feedback loop, but as people gain genuine expertise, their sense of what they don’t know sharpens. AI appears to break this loop entirely. The study found that AI use creates uniform overconfidence across all skill levels. Novices and experts alike become equally certain they understand more than they do. And those with the highest AI literacy were actually the worst calibrated, confusing fluency with the tool for mastery of the subject.

A thousand writers, one story

The creativity domain reveals a particularly important dynamic. A meta-analysis of 28 studies found that AI-human collaboration boosts individual creative output compared to working alone. But it simultaneously produces a large homogenisation effect on the diversity of ideas generated. People using AI come up with better individual ideas but more similar ideas to each other. Imagine a thousand writers each producing better stories than they could alone, but all producing essentially the same story.

This matters because innovation depends on diverse populations making diverse errors and recombining diverse solutions. AI is amplifying individual variation while reducing collective variation, boosting the parts while weakening the whole.

There are reasons to believe the picture need not be this bleak. After AlphaGo defeated the world’s best Go players, human players studied its strategies and developed genuinely new ways of thinking about the game. Similar dynamics are visible in scientific research, where AI-generated hypotheses are being refined by researchers in ways neither humans nor machines could have reached on their own. The emerging neuroscience evidence suggests that active co-creation with AI can sustain or even enhance cognition.

But the positive outcomes documented so far tend to involve elite performers in structured, competitive environments with clear feedback loops. The negative outcomes occur across ordinary users doing normal work in everyday settings. Passive delegation is the path of least resistance. It is what people naturally do unless institutions, training and deliberate design push them towards something better.

The critical near-term risk is not that AI makes people less intelligent in any simple sense. It is that it creates what researchers call ‘illusions of understanding’. That is the belief that you understand more than you do, that you have considered all possibilities when you haven’t, and that your judgement is objective when it is being shaped by the system you are relying on. These illusions are invisible to those experiencing them, which is what makes institutional responses so difficult.

You cannot fix a problem that the people affected by it are confident does not exist.

The psychiatrist and neuroscientist Iain McGilchrist identified precisely this pattern in patients with right hemisphere brain damage. They remain articulate and confident, but lose the ability to grasp context, detect what is missing or recognise the limits of their own understanding. The left hemisphere, McGilchrist argued, does not know what it does not know. AI appears to be producing something similar at the population level – a fluent, productive, confident mode of thinking that has quietly lost contact with the deeper processing it depends on.

Learning from a machine that learned from us

The cultural evolution framework places enormous weight on who we learn from. Humans are wired to learn preferentially from whoever appears most competent and successful, a tendency known as prestige-biased learning. We don’t copy just anyone – we copy the people who seem to know what they’re doing.

AI is disrupting this at every level. Systems that can diagnose medical conditions, write legal briefs and produce creative work are acquiring prestige status for hundreds of millions of people simultaneously. Unlike any human teacher, they are always available, never impatient and appear to know something about everything. For an increasing number of people, AI is becoming the first source they consult on questions that range from the trivial to the deeply consequential. Each of these interactions is a moment of cultural transmission, accumulating at a scale no human institution has ever operated at.

This is already breaking the apprenticeship model that has transmitted professional expertise for millennia. Junior lawyers, accountants and doctors have traditionally built competence by doing grunt work under the supervision of senior practitioners. The work was tedious, but the learning was real. You developed judgement by doing the thing, badly at first, with someone more experienced correcting you. If AI handles the grunt work instead, the learning pathway disappears.

This is not hypothetical. Shopify’s chief executive recently told his teams that before requesting additional headcount, they must first demonstrate why AI cannot do the work. From an investor’s perspective, the logic is sound – it drives efficiency, widens margins and makes the company leaner. I own Shopify shares. I understand the rationale. But every role that AI absorbs is one that a junior employee would once have learned by doing. The efficiency gain and the training loss are the same decision, viewed from different angles.

The consequences for white-collar professions are already visible. Entry-level hiring at major technology companies has fallen more than 50 per cent below pre-pandemic levels. Generative AI doesn’t eliminate entire occupations overnight. Instead, it hollows them out from within, automating 30 or 40 per cent of an employee’s workload, leaving fewer entry-level roles and compressing opportunities for career progression. The result is organisations that get more done with fewer people today, but have fewer ways to train the people they will need tomorrow. This creates a widening gap not just between companies but within them, between a shrinking cadre of AI-fluent senior professionals and a growing population of graduates who cannot get a foot on the ladder that those seniors once climbed.

The traditional white-collar career path – where you entered at the bottom, learned by doing and rose through accumulated expertise – is being dismantled from below. Some young professionals are already responding by turning away from office work entirely and towards skilled trades, where physical labour remains beyond AI’s current reach. The irony is hard to miss. The knowledge economy that was supposed to be the future may be producing fewer pathways into knowledge work than the economy it replaced.

Perhaps most consequentially, AI is not culturally neutral in what it transmits. Research published in Nature Human Behaviour found that when AI is prompted in Chinese versus English, it exhibits systematically different cultural orientations, more interdependent and holistic in Chinese, more independent and analytic in English. Since the vast majority of training data comes from English-language sources rooted in individualistic western cultures, users in collectivist societies who interact with AI in English may absorb western psychological norms without realising it. For most of human history, the question of who teaches your children what to think has been answered by the community they grow up in. AI may be replacing that answer with something more global and uniform, and not yet fully examined.

Not replacement but rewiring

Will AI replace us? It’s the wrong question. AI will not simply replace human workers. It will change what human workers are. It is already reshaping how we learn, who we learn from, how we assess our own competence and what cognitive skills we develop or allow to atrophy. The humans who emerge from this process will not be the same humans who entered it.

The partnership between humans and AI that people like to imagine, one where each complements the other’s strengths, is not a stable endpoint. It is a moving target, because one half of the partnership is being continuously reshaped by the other.

A skilled professional who learns to use AI well can be extraordinarily productive. But this is not a rising tide that lifts all boats. It is a force multiplier that amplifies existing advantages. The same dynamic that makes an AI-augmented expert vastly more valuable also makes an AI-dependent novice more disposable.

The deeper problem is that these productivity gains depend on something AI cannot produce: the foundational expertise that makes verification, judgement and effective delegation possible. If we allow AI to eliminate the apprenticeship pathways that build this expertise, the current generation of AI-augmented professionals may be the last to capture these gains. The generation that follows may lack the very skills that make human-AI collaboration valuable in the first place. The productivity boost is real, but it is borrowing from a stock of human capital that we are no longer replenishing.

AI will most likely produce three trajectories for those without pre-existing expertise. Some will build careers around orchestrating AI itself, though the evidence suggests their work will be more fragile than they realise. Others are already moving into physical trades and care work, where human presence still matters. The rest will be caught in the gap, too late to build traditional expertise, too early to benefit from whatever new institutional structures eventually emerge.

This last group is the most politically consequential because, historically, large populations of educated but underemployed young people are among the most reliable predictors of social instability.

Pilots still learn to fly manually before they learn to use autopilot, not because the autopilot isn’t good, but because the day it fails, someone needs to land the plane. The same principle applies here. An accountant who has prepared hundreds of tax returns by hand can spot the error an AI-generated filing has buried in the numbers. A doctor who has made diagnoses without AI support can override a confident but wrong prediction.

Professional bodies, universities and employers need to preserve the training pathways that build genuine expertise, even when AI makes them look slow and inefficient, and then teach AI orchestration as an advanced capability that sits on top of that foundation. Some are already doing this. After a wave of AI-fabricated legal citations reached US courts, dozens of judges issued standing orders requiring lawyers to disclose any AI use in filings, and the American Bar Association issued new ethics guidance. The investment is not in one or the other. It is in both, in the right order.

The Catholic Church took centuries to rewire European psychology. Literacy took generations to reshape the brain. AI is doing both at once, to billions of people, in years.

Brain rewiring is inevitable. It is what cultural technologies do. But the kind of rewiring matters enormously. The evidence in this paper points in a clear direction: passive use of AI degrades memory, erodes expertise, inflates confidence and narrows the diversity of human thought. None of these outcomes is necessary. All of them are the default.

I invest in the companies building these tools. I believe in their potential. And I have watched my own cognitive habits change in ways I did not choose and barely noticed. If it is happening to someone who spends his professional life thinking about these forces, it is happening to everyone.

The people who will thrive are not those who use AI the most, but those who can still think without it. The institutions that will matter are not those that adopt AI fastest, but those that preserve the human capabilities AI cannot replace.

We have navigated transformations like this before. The difference is speed. The institutions that shaped how humanity absorbed literacy and the printing press had centuries to develop. We have years. And we are not moving fast enough.


Works cited

Books

Henrich, J. (2020). The Weirdest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous. Penguin.

McGilchrist, I. (2009). The Master and His Emissary: The Divided Brain and the Making of the Western World. Yale University Press.

Muthukrishna, M. (2023). A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We’re Going. MIT Press.

Journal articles and academic papers

Brinkmann, L., et al. (2023). “Machine culture.” Nature Human Behaviour, 7, 1855–1868.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv:2506.08872. Preprint.

Rossi, S., Fraccaro, V., & Manzotti, R. (2025). “The brain side of human-AI interactions in the long-term: the ‘3R principle.’” npj Artificial Intelligence, 2, 15.

Budzyń, K., et al. (2025). “Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.” The Lancet Gastroenterology & Hepatology.

Romeo, G., & Conti, D. (2025). “Exploring automation bias in human–AI collaboration: a review and implications for explainable AI.” AI & Society. doi:10.1007/s00146-025-02422-7.

Touroutoglou, A., Andreano, J., Dickerson, B. C., & Barrett, L. F. (2020). “The tenacious brain: How the anterior mid-cingulate contributes to achieving goals.” Cortex, 123, 12–29.

Huberman, A. & Goggins, D. (2024). “David Goggins: How to Build Immense Inner Strength.” Huberman Lab Podcast, January 1, 2024. Discussion of anterior mid-cingulate cortex and willpower begins at 46:09.

Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C., & Welsch, R. (2026). “AI makes you smarter but none the wiser: The disconnect between performance and metacognition.” Computers in Human Behavior, 175, 108779.

Doshi, A. R., & Hauser, O. (2024). “Generative AI enhances individual creativity but reduces the collective diversity of novel content.” Science Advances, 10(28).

Farrell, H., Gopnik, A., Shalizi, C., & Evans, J. (2025). “Large AI models are cultural and social technologies.” Science.

Lu, J. G., Song, L. L., & Zhang, L. D. (2025). “Cultural tendencies in generative AI.” Nature Human Behaviour, 9(11), 2360–2369.

Shin, M., et al. (2023). “Superhuman artificial intelligence can improve human decision-making by increasing novelty.” Proceedings of the National Academy of Sciences.

Neuroscience of literacy

Dehaene, S. (2009). Reading in the Brain: The New Science of How We Read. Viking.

Carreiras, M., et al. (2009). “An anatomical signature for literacy.” Nature, 461, 983–986.

Castro-Caldas, A., et al. (1999). “Influence of learning to read and write on the morphology of the corpus callosum.” European Journal of Neurology, 6, 23–28.

Dehaene, S., et al. (2010). “How learning to read changes the cortical networks for vision and language.” Science, 330, 1359–1364.

McCandliss, B. D., Cohen, L., & Dehaene, S. (2003). “The visual word form area: expertise for reading in the fusiform gyrus.” Trends in Cognitive Sciences, 7, 293–299.

Dehaene-Lambertz, G., Monzalvo, K., & Dehaene, S. (2018). “The emergence of the visual word form.” PLOS Biology, 16(3), e2004103.

Industry and policy sources

Fortune. (2025). “AI requires a rethink of the apprenticeship model for knowledge professionals.”

SignalFire. (2025). Report on graduate hiring at major technology companies.

Bloomberg / Indeed. AI task automation estimates for market research analysts and sales representatives.

World Economic Forum. Future of Jobs Report 2025.

Built In. (2025). Analysis of graduate unemployment and blue-collar job growth.

Human Resources Director. (2025). “The new white-collar risk: How AI is coming for America’s office jobs” (incorporating Evercore ISI analysis).

Other sources

Henrich, J. (2025). Interview on Dwarkesh Podcast, March 2025.

Muthukrishna, M. (2023). “The Rise of AI: How the Fourth Line of Information Will Transform Human Intelligence.” lab.muthukrishna.com.

Footnotes

1. Specifically, the corpus callosum between hemispheres thickened, a region of the left fusiform gyrus was repurposed as what neuroscientists call the visual word form area (Carreiras et al., 2009; Castro-Caldas et al., 1999; McCandliss, Cohen & Dehaene, 2003; Dehaene et al., 2010), and literacy bound visual and phonological systems together in ways not observed in illiterate adults.

2. The specifics of Henrich’s causal chain are debated, particularly the relative weight of the Church’s marriage policies versus other forces like Protestantism, commerce, and state formation, but the broader thesis that cultural institutions reshape psychology at the population level is well supported and increasingly influential.

3. AI is used throughout this paper as a shorthand for a family of systems that are, in important respects, quite different. Recommendation algorithms (the kind that curate your Instagram feed or Netflix queue) primarily reshape selection: they determine which cultural content survives and spreads. Generative AI assistants (like ChatGPT) primarily reshape transmission and variation: they produce novel content and mediate how knowledge reaches people. Domain-specific decision-support tools (the kind that assist radiologists or endoscopists) reshape professional judgement in narrower, yet measurable ways. What unites them, and what justifies treating them under a single framework, is that all three are inserting machine intelligence into the variation-transmission-selection pipeline that governs cultural evolution. The framework’s value lies precisely in disaggregating their effects.
