A new MIT study titled "Your Brain on ChatGPT" is making headlines with claims that AI "hurts your brain." But as someone who's spent considerable time analyzing educational research, I want to cut through the sensationalism and examine what this study actually tells us and what it doesn't.
Researchers from MIT Media Lab and Wellesley College recruited 54 college students for a carefully controlled experiment. Over four months, participants completed 3-4 essay-writing sessions under different conditions:
LLM group: Used ChatGPT for assistance
Search group: Used Google search
Brain-only group: No external tools
The researchers measured brain activity using EEG, analyzed essay quality, and tracked participants' ability to recall and quote their own work. In a clever twist, they also included "crossover" sessions where LLM users switched to brain-only writing and vice versa.
The Key Findings
The results reveal important patterns about how we interact with AI tools:
Cognitive Engagement: EEG data showed that participants using ChatGPT had lower levels of brain activity associated with deep thinking and problem-solving.
Memory and Ownership: LLM users struggled to quote their own previous work and reported feeling less ownership over their essays. When they switched to brain-only writing in session 4, they performed notably worse than those who had been writing without assistance all along.
Performance Paradox: While LLM users produced essays more quickly, and sometimes of higher quality, they seemed to learn less from the process.
Let's be clear about what this study does and doesn't demonstrate.
It doesn't prove that AI turns your brain to mush. The sample was small (54 participants), the writing sessions were brief (20 minutes), and the timeline was limited. This isn't evidence of permanent cognitive damage or long-term intellectual decline.
It does suggest that passive AI use can impair learning. When students let ChatGPT do their thinking for them, they don't develop the same depth of understanding or memory formation as those who wrestle with ideas themselves.
The real insight is about cognitive offloading. Just as GPS navigation might make us less capable of reading maps, relying too heavily on AI for thinking tasks might let our own reasoning muscles atrophy.
This research offers three crucial lessons for educators and students:
1. Prior Knowledge Matters
Participants who had experience with brain-only writing performed better when they later used AI tools. They approached ChatGPT more strategically, using it to enhance rather than replace their thinking. This suggests that developing foundational skills first creates better conditions for productive AI collaboration.
2. Active vs. Passive Engagement
The difference isn't between using AI and not using AI. It's between active and passive engagement. Students who mindlessly accept AI outputs learn less than those who critically evaluate, revise, and build upon AI-generated content.
3. The Substitution vs. Augmentation Distinction
When AI substitutes for human thinking, learning suffers. When it augments human capabilities by helping students explore ideas they've already begun developing, the results can be more promising.
The viral interpretation of this study, "AI hurts your brain," misses the nuanced reality. The research reveals something we should have expected: when we outsource cognitive work, we don't develop the same level of understanding or retention.
This isn't unique to AI. Students who copy answers from the back of a math textbook don't learn mathematics. Those who rely entirely on spell-check might struggle with spelling. The principle is consistent: learning requires engagement, and shortcuts that bypass thinking often bypass learning too.
This research, taken together with the broader evidence discussed below, points toward a sobering reality: we're not just dealing with a technology problem but with an accelerated erosion of intellectual habits that were already under pressure. Yet this doesn't mean we should despair or retreat into wholesale rejection of AI.
Instead, we need what James O'Sullivan calls "deliberate cultivation of hugely unpopular attitudes and practices that slow cognition, foreground ambiguity, and demand active engagement." This means:
Restoring epistemic agency: Before consulting AI, students should formulate initial hypotheses, sketch argument structures, and identify their own assumptions. This repositions AI as a tool to be interrogated rather than an authority to be trusted.
Embracing sustained engagement: Long-form reading, writing, and thinking resist the logic of digital distraction. Students need practice inhabiting complex arguments and tolerating intellectual uncertainty.
Designing for cognitive involvement: Rather than automating thinking, we should use AI to create conditions where deeper human thinking becomes more likely and more rewarding.
A Decline Already in Motion
This research emerges at a critical moment, but it's important to recognize that the cognitive challenges it identifies didn't begin with AI. As O'Sullivan argues in his recent analysis, critical thinking was already in decline before generative AI arrived on the scene.
The evidence is compelling: OECD's 2022 PISA results showed the most significant decline in reading and mathematics scores since the assessment began. Social media platforms have restructured public discourse around speed and emotional charge rather than careful analysis. Educational systems have increasingly prioritized standardized metrics over open-ended exploration.
Into this already-degraded landscape came AI tools that, as O'Sullivan notes, don't just store or transmit information but "produce linguistic outputs that simulate reasoning." The MIT study we've been examining provides empirical evidence for what many educators have intuited: when students can effortlessly generate well-written but critically shallow work, the "slow and often disfluent labour of original thought" gets displaced.
A recent Microsoft study reinforces this concern, finding that employees who rely heavily on AI tools like Copilot struggle more when faced with scenarios requiring independent critical thinking. One researcher noted the key irony: "by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature."
The goal isn't to avoid AI but to use it in ways that enhance rather than replace human thinking. Students who develop strong foundational skills, learn to engage critically with AI outputs, and maintain agency in their learning process are better positioned to benefit from these powerful tools. The MIT study reminds us that developing that capacity requires intentional design and a commitment to keeping human learning at the center of the educational enterprise.
The cognitive debt metaphor in the study's title is apt: when we borrow thinking from AI without investing our own cognitive effort, we may find ourselves intellectually impoverished. But when we use AI as a thinking partner rather than a thinking replacement, we can leverage its capabilities while continuing to develop our own.
That's the challenge and the opportunity before us.
Never Stop Asking,
Nathan