
By Steve Conley, Founder – Academy of Life Planning
“It is not machines we should fear, but those who program the system to serve profit over people.”
On June 24th, the UK Treasury Committee convened a panel of academic experts to explore the role of Artificial Intelligence (AI) in financial services. The headlines were cautious—if not outright alarmist.
We heard from Professor Neil Lawrence of Cambridge, who warned of a “slow-motion flash crash,” drawing parallels between today’s emerging AI systems and the May 2010 Flash Crash, when nearly $1 trillion in market value briefly evaporated. Oxford’s Professor Sandra Wachter voiced concern that AI systems rely on historical data to predict an inherently unpredictable future. Both called for a “change in regulatory mindset.”
They’re not wrong—but they’re missing the bigger picture.
The Risk of Over-Simplified Caution
The concerns raised are valid: AI systems, if left unchecked, can amplify volatility, systemise bias, and reinforce poor decisions based on outdated data. But the narrative remains dangerously one-sided.
What the committee didn’t hear is that AI also offers the greatest opportunity we’ve ever had to mitigate conduct risk in financial services. It can enhance transparency, enforce ethical standards, and shine light into the dark corners of a system that has long operated with impunity.
AI vs. Regulatory Capture: A Moral Upgrade
Unlike human regulators, AI systems don’t play golf with lobbyists or bury reports inconvenient to political agendas. They don’t push for deregulation to serve “growth” narratives while consumer protections get quietly watered down.
As one financial reformer once said, AI has “the morals of a calculator”—and that might be a very good thing, especially when compared to regulators with a track record of being captured by the very firms they’re meant to police.
In fact, this may be the real reason we’re hearing so much fear and doubt. The problem isn’t AI. It’s that AI threatens to expose the contradictions at the heart of our current regulatory system.
The Myth of Historical Data as a Limitation
Another myth raised at the hearing is that AI’s reliance on past data makes it unsuitable for unpredictable markets. But that assumes AI’s only function is forecasting.
What if its true strength lies in pattern detection, forensic audit, real-time compliance monitoring, and democratising knowledge currently locked away in legal jargon and policy silos?
This is not a failure of technology—it’s a failure of imagination.
What Kind of Regulation Do We Really Need?
The experts call for a “more agile regulatory mindset,” but stop short of saying what that means in practice. Here’s what it should include:
- Real-time supervision powered by explainable AI
- Open-source regulatory tools available to citizen investigators and consumer advocates
- AI systems embedded to flag misconduct as it happens—not five years later
- Greater transparency about who influences regulatory decisions, and how
Let’s be honest: agility isn’t just about speed. It’s about who sets the rules and whose interests they serve. If we don’t confront that, no amount of AI risk-aversion will protect the public.
A More Honest Conversation
At the Academy of Life Planning, we believe AI must be used ethically, deployed transparently, and shared equitably. But we also believe it holds revolutionary potential to reform the financial system from the ground up—if we dare to use it wisely.
Perhaps what we’re hearing in these parliamentary hearings isn’t public interest caution at all. Perhaps it’s the echo of lobbyist voices—those who see AI not as a threat to the consumer, but as a threat to the status quo.
Final Thought
AI is neither saviour nor saboteur. It’s a tool. But like all tools, it reflects the hand that wields it. The question isn’t whether we should regulate AI, but whether we’ll also regulate the interests that have long gone unchallenged.
Let’s stop using fear of the future to protect the sins of the past.
Core Argument: A “Change in Mindset” is Needed for AI Regulation in Financial Services
Summary of Position:
Experts from Cambridge and Oxford suggest that financial regulators must become more agile and responsive in light of AI’s growing presence in financial services. They argue that current regulatory approaches are ill-suited to technologies that evolve rapidly and pose new systemic risks.
Strengths of the Argument
1. Recognition of AI’s Systemic Risk
The analogy to the 2010 flash crash is apt. AI, especially when deployed in high-frequency trading and risk assessment, can amplify instability. The absence of a clear “kill switch” in today’s distributed AI-driven systems underscores the fragility of digital finance infrastructure.
2. The Call for Agile Regulation
AI systems operate in a feedback loop with real-time data. Static, rules-based regulation—designed for slower-moving financial environments—struggles to govern such fluid systems. The suggestion to adopt a more dynamic regulatory model is timely and aligns with the concept of regulatory sandboxes or living regulation.
3. Emphasis on Feedback Loops
Incorporating continuous monitoring and adaptive feedback into regulation reflects a sound understanding of complex systems theory. It acknowledges that regulators need real-time intelligence, not just retrospective compliance data, to govern AI safely.
4. Historical Data Cannot Predict Black Swans
Wachter rightly highlights the fallacy of relying solely on historical data in inherently unpredictable markets. AI’s predictive models are vulnerable to unprecedented events—pandemics, wars, or political shocks—that do not exist in training datasets. This is a clear limitation for risk forecasting.
Weaknesses and Gaps in the Perspective
1. Oversimplification of the “Mindset Shift”
While calling for a mindset change is rhetorically effective, it lacks specificity. What structural reforms are needed within regulatory bodies? How can agility be institutionalised? Without concrete proposals, the call for change risks being too abstract to implement.
2. Underestimation of Human Bias in Manual Regulation
Ironically, regulators themselves often act on outdated assumptions and incomplete data. Replacing flawed human decision-making with explainable AI could improve certain aspects of supervision. The experts’ testimony presents a lopsided view, focusing only on AI’s risks without acknowledging its potential to enhance regulatory insight and enforceability.
3. Limited Consideration of Decentralised Finance (DeFi) and AI Convergence
The discussion misses how AI is being embedded into decentralised financial systems (blockchain-based protocols), where regulatory levers are virtually non-existent. These innovations are moving faster than traditional finance, and the same regulatory tools won’t work.
4. Lack of Engagement with Ethics and Fairness
The testimony is concerned with market predictability and system failures but doesn’t address AI’s capacity to reinforce discriminatory patterns (e.g., biased lending models). Regulation must also address algorithmic fairness, transparency, and the right to explanation under frameworks such as the GDPR.
5. No Mention of Regulatory Capture or Industry Resistance
The discussion avoids political economy questions—such as whether regulators have the will or independence to enforce meaningful change in the face of lobbying by AI-driven fintech and incumbent banks.
Broader Implications for Financial Planning
For organisations like the Academy of Life Planning, this debate reinforces the need for:
- Transparency-first technology: AI should be a co-pilot, not an opaque decision-maker.
- Client empowerment over prediction: Planning should focus on human agency and possibility, not just algorithmic forecasting.
- Regulation from the ground up: Support for citizen-centric AI governance that places planning tools directly in the hands of users rather than intermediaries may be part of the new mindset that institutions still resist.
Conclusion
The call for a change in regulatory mindset is well-founded, especially in light of AI’s growing complexity and scale. However, for this shift to be meaningful, it must be accompanied by:
- Specific reforms
- Ethical safeguards
- Political courage
- Public participation in shaping what “safe AI” in finance should look like
A holistic response will require not just reactive regulation, but a proactive redesign of the financial system itself.
Your Money or Your Life
Unmask the highway robbers – Enjoy wealth in every area of your life!

By Steve Conley. Available on Amazon. Visit www.steve.conley.co.uk to find out more.
