
By Steve Conley | Academy of Life Planning
A recent study from MIT’s Media Lab sent ripples through the AI and education communities. It claimed that participants who used ChatGPT to write SAT-style essays showed the lowest brain engagement of any group, compared with those using Google Search or working unaided. EEG scans showed diminished neural activity, reduced creativity, and a growing reliance on copy-paste behaviour. “Soulless” was the word English teachers used when reviewing the AI-generated essays.
On the surface, it’s a concerning finding. And yes, there are risks in how generative AI is used—especially in developing minds. But dig deeper, and you’ll find the study suffers from a critical flaw in perspective:
🧠 The Goal Was Too Small
The study assessed whether people using ChatGPT could complete the same task with less effort. It framed AI as a shortcut, not a springboard. But why would we use a Ferrari just to cover the same ground as a horse and cart?
The true power of AI isn’t in efficiency. It’s in amplification.
When used properly, AI doesn’t just reduce cognitive load—it extends cognitive reach. It helps us ask better questions, challenge deeper assumptions, and reimagine what’s possible.
In my own work at the Academy of Life Planning, I’ve seen this firsthand. When clients co-design their life plans with AI assistance, their typical reaction is: “That was mind-blowing.” They see possibilities they hadn’t imagined. They reconnect with forgotten dreams. They find clarity, courage, and conviction. The outcomes are emotionally and intellectually transcendent—and we simply don’t get the same result without AI in the room.
In our justice work at Get SAFE, AI empowers citizens to gather evidence, trace fraud, and assemble persuasive dossiers in a way that was once only possible for legal professionals with institutional backing. AI isn’t doing the thinking for them. It’s doing the heavy lifting so they can think bigger.
⚠️ AI as an Assistant, Not an Autopilot
Yes, AI can make us lazy—if we use it lazily. Yes, over-reliance can weaken memory and attentiveness—if we’re not intentional in how we learn. But that’s not a problem with AI. That’s a problem with how we teach people to use it.
We must reject the binary of “AI versus effort.” The truth is far more hopeful: with AI, our effort can now go further.
So instead of training young minds to use AI for shortcuts, let’s train them to use AI for discovery, creativity, and ethical problem-solving. Instead of limiting our aspirations to producing the same old essays, let’s teach people to write new stories—about their lives, their futures, and their place in the world.
🌍 A Call to Purposeful Innovation
We stand at a pivotal moment. The tools now available to us can either shrink our imagination or expand it.
At the Academy of Life Planning, we choose the latter.
We don’t use AI to do the work for people—we use it to awaken their potential. In the right hands, AI doesn’t weaken critical thinking. It liberates it.
Let’s stop judging these tools by yesterday’s standards.
Let’s start using them to reach tomorrow’s possibilities.
💬 What’s your experience with AI—does it help or hinder your thinking? Share your thoughts in the comments or join the conversation at Academy of Life Planning.
🔍 Article Review: Key Takeaways and Analysis
Summary of the Study
- The MIT Media Lab study investigated how different tools—ChatGPT, Google Search, and unaided effort—affected the cognitive and neural engagement of participants writing SAT-style essays.
- Using EEG monitoring, researchers found that ChatGPT users showed the lowest brain engagement, produced more uniform and less original essays, and increasingly relied on copy-paste methods over time.
- In contrast, unaided writers demonstrated higher neural connectivity and greater satisfaction, indicating deeper learning and memory retention.
Strengths of the Article
- Timeliness: Addresses urgent concerns as generative AI becomes embedded in education, especially among young users.
- Neuroscientific Evidence: The use of EEG adds credibility and objectivity, reinforcing the claim that AI use correlates with reduced brain activity in areas tied to memory, creativity, and executive function.
- Practical Warnings: The researchers caution against premature AI integration in early education (e.g., “GPT kindergarten”), aligning with broader ethical debates about AI’s role in cognitive development.
Limitations
- Small Sample Size: Only 54 participants from a single geographic region, which limits generalisability.
- Preprint Status: The study has not yet undergone peer review, so its conclusions are preliminary and potentially prone to bias or oversight.
- Self-Selection Bias: Participants who opted into the study may already have certain attitudes or behaviours around tech use.
Nuanced Observations
- The Google Search group performed almost as well as the unaided group, suggesting that structured information-seeking can complement cognition rather than suppress it.
- The ability to recall and reengage with past work was significantly lower in ChatGPT users, which raises concerns about long-term retention and critical learning loops.
- Ironically, the “traps” the authors embedded in the paper to catch LLMs summarising it were easily triggered, exposing current limitations in LLM reading comprehension and hinting at the epistemological risks of relying too heavily on automation.
📌 Implications for the Academy of Life Planning & Get SAFE
1. Educational Caution
You may want to caution your planners and clients about the appropriate use of AI in learning and reflection. AI should enhance, not replace, the cognitive effort required for meaningful financial planning or life decision-making.
2. Affirmation of Your Approach
This study reinforces the Academy’s core principle: human-centric planning leads to deeper transformation. The stronger engagement seen in the brain-only group mirrors the reflective, values-based self-enquiry promoted through the GAME Plan.
3. Potential Research Angle
You could consider replicating a similar study with your own community, comparing cognitive and emotional outcomes of AI-assisted life planning against traditional journaling or peer dialogue (see the sketch after this list for how such a comparison might be analysed). This would ground your work in empirical evidence and set an example of ethical AI application.
4. Use AI for Augmentation, Not Automation
This article supports your current stance on AI as an empowering assistant, not a creative replacement. Educating users about its proper role—co-pilot, not autopilot—is essential.
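As a minimal sketch of how the comparison suggested in point 3 might be analysed, the Python snippet below compares self-reported clarity ratings from two hypothetical groups (an AI-assisted planning group and a journaling group) using Welch’s t-test and a simple effect-size estimate. The data, group labels, and rating scale are illustrative assumptions, not findings.

```python
# Illustrative sketch only: hypothetical post-session clarity ratings (1-10 scale)
# for an AI-assisted life-planning group versus a traditional journaling group.
import numpy as np
from scipy import stats

ai_assisted = np.array([8, 9, 7, 9, 8, 10, 7, 8, 9, 8])  # hypothetical scores
journaling = np.array([6, 7, 7, 5, 8, 6, 7, 6, 7, 6])    # hypothetical scores

# Welch's t-test: compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(ai_assisted, journaling, equal_var=False)

# Cohen's d: a simple standardised effect size using the pooled standard deviation.
pooled_sd = np.sqrt((ai_assisted.var(ddof=1) + journaling.var(ddof=1)) / 2)
cohens_d = (ai_assisted.mean() - journaling.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")
```

In a real replication, the outcome measures would need to be defined and validated in advance, with quantitative scores sitting alongside the qualitative reflection the GAME Plan already captures.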
✅ Conclusion
While provocative and insightful, the article presents an early-warning study, not conclusive evidence. The central message is clear and valid: the mind atrophies without active engagement. Your work, which promotes self-awareness, critical reflection, and holistic autonomy, is directly aligned with safeguarding against the risks this study raises.
🧠 Reframing the Narrative: AI Isn’t the Problem—Stagnant Thinking Is
A recent MIT Media Lab study suggests that using ChatGPT may reduce cognitive effort compared to traditional methods like Google Search or unaided thinking. It warns that students relying on AI tools may experience diminished brain activity, weaker memory retention, and less original output.
But here’s the critical flaw in that framing: the study sets the same bar for all three groups—as if the goal of using AI is to achieve the same result with less effort.
That completely misses the point.
AI—especially tools like ChatGPT—should not be used to do what we were already doing. It should be used to go beyond. Beyond what our brains alone can calculate. Beyond what Google can search. Beyond what traditional processes can deliver.
In my experience, the best results with clients don’t come from using AI to replace their effort. They come from using AI to elevate their thinking—to amplify their curiosity, expand their planning horizon, and create clarity where there was once only fog. Clients often describe their GAME Plan life design session as “mind-blowing.” I never hear that without AI in the room.
And in our work on asset recovery and financial crime, we’re achieving insights and justice outcomes that would have been unimaginable without AI support. To settle for the same bar—as this study does—is to settle for mediocrity. With AI, we’re no longer riding the horse and cart—we’re in a Ferrari.
If we train a child to use AI merely to shortcut a task, yes—it will atrophy their thinking. But if we guide them to use AI to dream bigger, explore deeper, and challenge assumptions—they’ll build rockets, not just write essays.
Let’s stop evaluating AI based on how well it replicates yesterday’s effort. Let’s start evaluating it on how far it can take us tomorrow.
Your Money or Your Life
Unmask the highway robbers – Enjoy wealth in every area of your life!

By Steve Conley. Available on Amazon. Visit www.steve.conley.co.uk to find out more.
