
It’s easy to get dazzled by headlines like: “Digital adviser Aida passes the CII diploma.”
An AI, trained by intermediaries, outperforming most human advisers in product-based exams. The industry hails it as progress. But we must stop and ask: progress for whom?
The Problem: Exams Built on the Old Paradigm
When I sat the CII exams in the 1980s and again in the 2010s, I noticed something striking. The curriculum rewarded “product analysis” and “product need analysis.” Everything was products, wrappers, and compliance.
What was absent?
- Client values
- Life goals
- Psychological stages of development
- Human capital audits
- Motivations, beliefs, wellbeing
If I had written answers rooted in true financial planning—understanding the person before the product—I would not have scored so well. To pass, I had to think like a product intermediary.
So when we hear that Aida, trained by product intermediaries, aced the exams while ChatGPT only scraped through… should we be surprised?
AI as Industry Mirror, or as Human Partner?
AI is a mirror of what we feed it.
- Train it on industry exams, and it learns to replicate industry thinking.
- Train it on real human needs, and it learns to support empowerment.
That’s why Aida scored so highly: it was trained to reflect an intermediary worldview. ChatGPT, with its broader base, stumbled on exam scoring but can hold deep, life-centred conversations—the kind advisers rarely get examined on.
The risk? Industry-trained AIs become efficient engines of extraction, turbo-charging an already exploitative system.
The Oligopoly’s Ambition
The financial services oligopoly—banks, product houses, regulators aligned to distribution—has long shaped the rules of engagement. Now it sees AI as the next perimeter to defend and dominate.
- Exams and licences become gatekeepers, ensuring only “approved” AIs are allowed to advise.
- Industry-curated models embed the worldview of products-first, people-second.
- Consumers are corralled into “robo-advice” that feels slick but perpetuates the same structural imbalance.
If we let this happen, we will have digitised exploitation—not solved it.
The Empowerment Alternative
Here’s the good news: AI doesn’t belong to them.
Every day, millions of citizens use open models like ChatGPT, teaching it through interaction. This is citizen-training at scale. If we show it what matters—purpose, values, talents, sovereignty—it learns.
The Academy of Life Planning’s mission is to help citizens and planners alike harness AI as a co-pilot for life-first, product-free financial planning. Tools like the GAME Plan™ and the Kokoro Balance Scorecard train AI to serve people, not products.
This is how we prevent the oligopoly from embedding its worldview into the next generation of advice:
- Educate citizens to use AI wisely.
- Empower planners to adopt life-first frameworks.
- Champion open models where the crowd, not the cartel, shapes the learning.
A Call to Action
Industry-trained AI will always serve the industry.
Citizen-trained AI can serve the people.
The choice is ours. Do we accept a future where AI becomes another extraction tool in the oligopoly’s hands—or do we build one where AI is a companion in empowerment?
At the Academy, we’ve already chosen. We invite you to join us in training the machines for a different future.
🌍 The Academy of Life Planning (AoLP) is the home of M-POWER — a global movement where citizens, planners, and allies unite to replace extraction with empowerment. Guided by the GAME Plan™, supported by AI, and strengthened through community, we are building a future of transparency, sovereignty, and shared prosperity.
👉 Visit our website today and join our tribe for free — be part of the change.
