
“Can I record this meeting?”
It sounds like a simple, harmless question. Yet behind it lies a hidden complexity that the financial planning profession must urgently address — one that goes far beyond compliance and cuts to the heart of client trust.
In his recent article, “The hidden AI reality behind ‘Can I record this meeting?’”, Elemi Atigolo exposes an uncomfortable truth: that many advisory firms are deploying AI tools in ways that neither advisers nor clients fully understand. While the article is not an attack on technology — and rightly so — it is a clear and necessary call for transparency, accountability, and consent in the age of AI.
At the Academy of Life Planning, we welcome this call. But we also believe the discussion must go further.
Trust Isn’t About Technology — It’s About Truth
Atigolo’s key message is clear: client meetings today are no longer just recorded. They are transcribed, summarised, indexed, and fed through a network of AI tools that may operate beyond the adviser’s visibility — and the client’s awareness.
In many cases, that data journey may include:
- Transcription engines that process recordings offshore
- AI platforms that generate suitability reports or meeting summaries
- Cloud services that index past conversations for future use
These systems can be helpful. They save time. They reduce errors. But they also raise a vital question: what exactly did the client consent to?
If a client agrees to a meeting being “recorded for accuracy,” do they know their life story is being parsed by large language models, or used to train AI tools they’ve never heard of?
We doubt it.
And that is where trust begins to fracture — not because of the AI, but because of the silence.
Consent Is More Than a Checkbox
Consent is not valid if it isn’t informed. A client agreeing to a recording doesn’t automatically agree to:
- Their voice being stored in a vector database
- Their statements being used to fine-tune commercial AI models
- Their personal narrative being reused to auto-generate future financial advice
Yet in too many firms, this is exactly what’s happening — and neither the adviser nor the client knows it.
This is what Atigolo calls the AI consent gap — and he’s right to highlight it.
But we would add: this isn’t simply a failure of training. It’s a failure of leadership. Of governance. Of ethics.
Where the Article Shines — and Where It Could Go Further
We applaud Atigolo’s clarity. He does not call for a halt to AI adoption — far from it. Instead, he urges firms to:
- Map their entire AI data flow
- Equip advisers with a working understanding of the tools they use
- Disclose clearly and explicitly what happens after the ‘record’ button is pressed
This is all excellent guidance.
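Mapping the AI data flow need not stay abstract. As a purely illustrative sketch (the tool names, regions, and retention periods below are invented for the example, not real vendors), a firm could record each hop a client recording takes and automatically flag the steps a client would not reasonably expect from "recorded for accuracy":

```python
from dataclasses import dataclass

@dataclass
class DataFlowStep:
    """One hop in the journey a client recording takes after the meeting."""
    tool: str                # vendor or system name (hypothetical examples below)
    purpose: str             # why the data is sent there
    region: str              # where processing happens
    used_for_training: bool  # does the vendor train models on this data?
    retention_days: int      # how long the data is kept

# A hypothetical pipeline for one client meeting recording.
pipeline = [
    DataFlowStep("MeetingRecorder", "audio capture", "UK", False, 30),
    DataFlowStep("TranscribeCo", "speech-to-text", "US", True, 365),
    DataFlowStep("SummaryAI", "meeting summary", "EU", False, 90),
]

# Flag any step a client would likely not expect from "recorded for accuracy":
# offshore processing, or data reused for vendor model training.
red_flags = [s for s in pipeline if s.used_for_training or s.region != "UK"]
for step in red_flags:
    print(f"Review consent wording for: {step.tool} ({step.purpose})")
```

Even a simple register like this turns "map your data flow" from a slogan into a reviewable artefact for compliance and procurement.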
Yet the article misses an opportunity to differentiate between responsible and irresponsible AI use. Not all AI systems leak data. Privacy-preserving AI models exist. Closed-loop systems can respect boundaries. Federated learning, local processing, and in-memory encryption are all options.
Let us not throw out AI’s transformative potential because some firms use it poorly.
The goal is not to turn off the technology. The goal is to turn on the lights.
The Future: AI-Literate, Consent-Led Advice
The profession now stands at a fork in the road. One path leads to efficiency at the cost of trust. The other leads to transparency, empowerment, and deeper relationships.
At the Academy of Life Planning, we believe in the second path. That’s why we support:
- AI-aware advice, where advisers can clearly explain the systems they use
- Layered consent frameworks, giving clients genuine choice without penalising them
- Open data disclosure, so clients know what happens to their voice and their story
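What a layered consent framework means in practice can be made concrete. The sketch below is our own illustration, not a production system: consent is recorded per purpose, and "no" is the default for every layer, so model training is never assumed from a simple agreement to record.

```python
from dataclasses import dataclass, field

# The processing layers a client can accept or decline independently.
LAYERS = ("recording", "transcription", "ai_summary", "model_training")

@dataclass
class LayeredConsent:
    """Per-purpose consent record: nothing is permitted unless explicitly granted."""
    granted: dict = field(default_factory=dict)

    def grant(self, layer: str) -> None:
        if layer not in LAYERS:
            raise ValueError(f"Unknown consent layer: {layer}")
        self.granted[layer] = True

    def allows(self, layer: str) -> bool:
        return self.granted.get(layer, False)

# A client happy to be recorded and summarised, but not to train anyone's models.
consent = LayeredConsent()
consent.grant("recording")
consent.grant("transcription")
consent.grant("ai_summary")

assert consent.allows("ai_summary")
assert not consent.allows("model_training")  # the default is always "no"
```

The design choice matters: opt-in per layer, rather than one blanket "yes", is what makes the client's choice genuine.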
Trust is not built on tools. It’s built on truth.
Final Thought: What Happens After You Press Stop?
As AI becomes embedded in financial planning, firms must ask themselves not just can we use this technology, but should we?
And if the answer is yes — as it often will be — then the next question must be:
“Can we explain it?”
“Can our clients consent to it meaningfully?”
“Can we still look them in the eye and say: ‘You’re in safe hands’?”
Because in the end, the question isn’t can I record this meeting?
It’s: what happens after I press stop?
And that, dear reader, is where trust is either cemented — or quietly erased.
Here’s an example of a clear, ethical, and informed consent question for recording a client meeting with AI involvement:
✅ Model Consent Question:
“To help serve you better, I’d like to record our meeting. The recording will be transcribed and may be processed using secure AI tools to generate meeting notes, identify key points, and assist in drafting your financial plan. These tools do not share your data externally or use it to train commercial models, and your information is stored in line with data protection regulations.
Do you consent to the recording and AI-assisted processing of this conversation? You can say no, and I’ll take manual notes instead.”
🧩 Why this works:
- Transparency: It clearly states what will happen to the data and why.
- Boundaries: It sets limits on how the AI will use the data.
- Alternatives: It offers a real, penalty-free opt-out.
- Empowerment: It respects the client’s autonomy and invites questions.
Here’s a critical analysis of “The hidden AI reality behind ‘Can I record this meeting?’” by Elemi Atigolo.
The FT Adviser article is insightful, timely, and well-constructed — but not without its limitations and underlying assumptions. Below is a structured critique:
1. Strengths of the Article
✅ Raises a Vital Ethical Issue
The article powerfully surfaces a growing AI consent gap in financial advice. The comparison to past mis-selling scandals (PPI, DB transfers) is especially apt — cautionary rather than alarmist. It successfully reframes AI adoption not as a tech issue, but as a trust and transparency issue.
✅ Grounds Itself in Real Industry Context
As a former financial adviser and now tech consultant, Atigolo speaks with credibility. His examples of firms unknowingly using transcription tools whose default settings allow vendor model training highlight real-world failures of due diligence.
✅ Clear Regulatory Implications
The link to GDPR and FCA Consumer Duty is compelling. If firms don’t know where client data is going, they cannot ensure informed consent — a fundamental regulatory breach. The hypothetical subject access request scenario is particularly effective.
✅ Practical Recommendations
The call for AI-fluent advisers, data flow mapping, better vendor questions, and transparent consent practices is highly actionable. The term “AI-aware advice” is a helpful frame.
2. Areas for Further Development or Challenge
⚠️ Technological Oversimplification
While the article notes terms like LLMs, RAG, and vector databases, it oversimplifies their operation. For example:
- RAG systems can be designed to use anonymised, encrypted, or client-local data.
- Not all LLM integrations pose risk — it depends on architecture (e.g., in-memory processing, federated models).
The piece could benefit from distinguishing between closed-loop, privacy-preserving AI (which many firms now use) and open-loop, cloud-reliant systems. Not all AI tools are data-leaky or poorly governed.
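To make that distinction tangible, here is a deliberately minimal illustration of client-local pre-processing: pseudonymising known names before any text is indexed or sent to an external AI service. Real systems would use proper named-entity recognition and keep a reversible mapping inside the firm's own environment; this sketch only shows the principle.

```python
import re

def pseudonymise(text: str, known_names: list[str]) -> str:
    """Replace known client names with placeholders before the text leaves
    the firm's environment. A simplified sketch: production systems would
    use NER rather than a hand-maintained name list."""
    for i, name in enumerate(known_names):
        text = re.sub(re.escape(name), f"[CLIENT_{i}]", text, flags=re.IGNORECASE)
    return text

raw = "Alice Smith mentioned her grandson Tom and her pension transfer."
clean = pseudonymise(raw, ["Alice Smith", "Tom"])
print(clean)  # "[CLIENT_0] mentioned her grandson [CLIENT_1] and her pension transfer."
```

A RAG system indexing only the pseudonymised text is a very different risk proposition from one shipping raw transcripts to a third-party cloud — and that is precisely the distinction the article leaves undrawn.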
⚠️ Consent is Nuanced, Not Binary
The article suggests clients should be able to say “no” to AI without consequence — but this oversimplifies the design trade-offs of modern advice processes:
- Many firms offer AI summaries as part of their compliance framework.
- Clients refusing AI processing may require alternative service pathways, which may be costlier and slower.
A more balanced framing would consider how to build layered consent frameworks, rather than rejecting AI use outright.
⚠️ Assumes Advisers Are Uninformed
While many advisers are indeed ill-equipped to explain AI pipelines, others — including our Academy members, a growing movement of AI-literate, holistic planners — are ahead of the curve and already model best practice.
PlannerPal isn’t just another tool — it’s a revolutionary AI assistant transforming the way planners manage client interactions and documentation.
With voice-to-text functionality, it captures every critical detail from meetings, whether in person or over Zoom. You’ll receive instant, actionable summaries, eliminating the need to replay recordings. PlannerPal also helps draft emails and complex documents in seconds, saving you valuable time. Its smart search features make recalling past conversations effortless — even personal touches like a client’s grandchild’s name. Seamlessly integrating with your CRM and built on secure platforms like Microsoft and AWS, PlannerPal keeps your practice efficient, accurate, and secure.
Read more about PlannerPal’s security and data compliance.
3. Strategic Implications for the Industry
💡 Data = Liability as well as Asset
Firms must treat recorded client data not just as an asset for automation, but as a potential litigation risk. AI pipeline transparency should become part of compliance reviews and tech procurement processes.
💡 Opportunity for Differentiation
Advisory firms that lead on AI transparency — e.g., by publishing their tech stack, data flow charts, and opt-out protocols — could gain competitive advantage. Transparency becomes a brand trust differentiator.
💡 Role for Consumer Advocacy and Education
There’s a space here for planning and education service providers to offer AI consent checklists for consumers, or to create model privacy disclosures that firms can adopt — pre-empting the next wave of consent-related grievances.
Conclusion: A Timely Wake-Up Call
Atigolo’s article is a thoughtful, well-informed critique of current industry practices around AI and client data — and it rings alarm bells without descending into technophobia. However, it would benefit from:
- Nuanced distinctions between different AI architectures,
- Recognition of firms already doing it right,
- And a more practical vision for scalable, consent-based AI integration.
Bottom line: Trust is not broken by technology — it’s broken by opacity. And rebuilding that trust will require fluency, frameworks, and above all, full disclosure.
Your Money or Your Life
Unmask the highway robbers – Enjoy wealth in every area of your life!

By Steve Conley. Available on Amazon. Visit www.steve.conley.co.uk to find out more.
