
A conversation with whistleblower Artur Nadolny
One of the most revealing exchanges this week came from Artur Nadolny, whose long-running case exposes many of the quiet behaviours that lie beneath the surface of UK financial services. His observation cuts to the heart of the AI debate:
“I already see signs in my own case of insurers using AI in the wrong way.
They bend the outputs. They push it to support their narrative.
Only independent AI gives people any real leverage.”
This is not an isolated concern. It is exactly what victims, whistleblowers, and citizen investigators are now encountering. Institutions are experimenting with AI that is not designed to illuminate the truth, but to shape it. Internal prompts can be steered. Outputs can be constrained. Contradictions can be quietly suppressed behind closed guardrails that the public never gets to see.
The risk is simple:
If the industry controls the tools, they will not use them to speed up justice.
They will use them to automate obstruction.
Artur put it plainly:
“If the industry controls the tools, they’ll make life harder for consumers.
Without AI on your side, your chances as a customer are close to zero.”
And he’s right. For decades, consumers have entered disputes with a fraction of the information and none of the firepower. Insurers, banks, and pension providers have legal teams, technical departments, internal codes, historic product structures, and years of advantage.
All the customer has is a DSAR (Data Subject Access Request) they can’t decipher and a complaint process engineered to exhaust them.
Why Independent AI Is the Turning Point
This is where open, independent AI changes everything.
When people can analyse their own documents objectively — without institutional interference — they suddenly see what was previously invisible:
- hidden liabilities
- contradictory statements
- unlawful structures buried in technical jargon
- omissions in correspondence
- account features never disclosed
- misaligned narratives that collapse under scrutiny
For many victims, independent AI provides the first accurate explanation of what actually happened to them. It replaces confusion with comprehension. It restores visibility. It turns noise into evidence.
And as Artur noted:
“The positive side is that honest people now have a way to fill the gaps left by systems that claim to protect them but don’t.”
This is the crux of the structural trust debate.
AI does not create injustice — it reveals it.
Where This Leaves Us
The emergence of independent AI is not just a technological development.
It is a constitutional one.
If AI remains open, citizens can finally understand their own financial data.
If AI becomes captured, we automate the very asymmetries that caused the harm.
Artur’s experience shows both sides of this future:
- Captured AI → narrative manipulation, suppression, and systemic obstruction.
- Independent AI → clarity, leverage, and the first realistic pathway to accountability.
This is why Parliament must safeguard the public’s right to use independent AI — and why Get SAFE and the Academy of Life Planning will continue championing structurally trustworthy tools that empower consumers rather than institutions.
What the Whistleblowers Are Showing Us — A Pattern the System Never Wanted to See
Artur’s experience is not an outlier. Across the cases we’ve seen through Get SAFE, the Academy of Life Planning, the Transparency Task Force, and the APPG whistleblower network, a consistent pattern is emerging — one that could not be fully understood until independent AI made it visible.
Here are the key themes whistleblowers are now bringing forward:
1. Hidden Liabilities Were Not Accidents — They Were Architecture
Several industry insiders have now confirmed what victims long suspected:
certain products and facilities were deliberately structured to conceal risks, fees, and downstream liabilities.
Whistleblower testimonies point to:
- “reserve” and “shadow” facilities not shown on customer-facing documentation
- retrofitted interest features hidden within legacy systems
- securitisation trails that break the chain of accountability
- internal codes designed to obscure, not clarify
Consumers were never meant to decode these systems.
Independent AI is now decoding them anyway.
2. Manufactured Distress Was a Profit Centre
Multiple whistleblowers describe similar behaviours across different firms and years:
- engineered arrears through timing tactics
- “default triggers” aligned with profitability targets
- internal dashboards that treated consumer hardship as an opportunity
- recovery units incentivised by asset valuations instead of customer outcomes
These accounts align with whistleblower reports like BankConfidential and with the documents victims are now reading accurately for the first time.
AI is connecting testimony to evidence.
3. Regulators Were Alerted — But Often Did Nothing
A recurring theme is the presence of internal warnings that never translated into enforcement.
Whistleblowers report:
- risk officers removed from committees
- internal reports watered down before submission
- “informal guidance” discouraging escalation
- entire product teams aware of systemic problems that never reached the FCA
These institutional silences are now being investigated — not because regulators uncovered them, but because AI-enabled citizens did.
4. Whistleblowers Are Afraid — Not of Truth, But of Retaliation
Across testimonies, the common language is fear:
- fear of blacklisting from the industry
- fear of legal action
- fear of losing professional accreditation
- fear of character attacks by firms with deep pockets
Independent AI changes this dynamic too.
It allows whistleblowers to:
- anonymise
- validate claims
- cross-reference documents
- reconstruct what happened without relying on institutional cooperation
Open AI creates safety where the system never did.
5. The Pattern Is Now Machine-Readable
This is the breakthrough moment.
Whistleblowers have been telling parts of this story for years, but no single person had:
- all the documents
- the technical language
- the structural context
- the ability to analyse thousands of pages
- or the institutional support to be heard
Independent AI has changed that overnight.
Victims, journalists, planners, and whistleblowers are now working with:
- the same tools
- the same language
- and the same analytical power
For the first time, the public has equivalent visibility.
And that is exactly why the push for industry-controlled AI guardrails is escalating — because the system has never been more exposed.
Where This Leaves Us
The whistleblower insights reinforce one central message:
This is no longer about individual misconduct.
It is about a structural pattern now visible at scale.
And independent AI, not institutional reform, is what made it visible.
The next phase is clear:
- protect these tools
- protect those who use them
- and rebuild financial services around structural trust rather than structural concealment
The whistleblowers have spoken.
Now the public can finally see what they meant.
🔍 The Goliathon Taster – Your First Step to Justice

From Victim to Victor. Begin your journey for just £2.99.
If you’ve suffered financial harm, felt dismissed, or struggled to make sense of your documents, you don’t have to fight alone—or start blind.
The Goliathon Taster is a low-risk, high-clarity introduction to the mindset, tools, and methods used by survivors, campaigners, and citizen investigators to rebuild their cases with confidence.
This is your invitation to experience the power of structured evidence, professional systems, and independent AI—so you can finally see the bigger picture the system never explained.
Link to Goliathon Taster £2.99.
What You Get for £2.99
✅ 90-minute training video
A clear, accessible introduction to the Goliathon framework and what it means to become a justice-seeker.
✅ Guided workbook
Set your goals, reflect on your case, and define your own “Why I Fight” statement.
✅ AI co-pilot setup guide
Step-by-step instructions to start using ChatGPT as your investigative assistant.
✅ Practical exercises
Test-drive AI tools with prompts directly relevant to your own evidence and situation.
Who This Is For
This taster is designed for:
- Survivors of financial misconduct
- Community advocates and citizen investigators
- QROPS victims and supporters of financial justice
- Anyone seeking clarity, structure, and a safe starting point
If you want to understand what happened to you, build a credible case, and take your first empowered step—this is where to begin.
Why It Works
The Goliathon Taster helps you move from confusion to capability by giving you:
- A clear personal mission
- A practical “Justice & Recovery” roadmap
- A working AI partner to support your investigation
- The confidence to take your next step
As one founding investigator, Mike, put it:
“This model you’re building is absolutely amazing… it’s a win for everybody.”
📥 Instant Access
Purchase today for £2.99 and get your secure link to:
- the training video, and
- the downloadable workbook.
Link to Goliathon Taster £2.99.
If the session resonates, you can upgrade to the full Goliathon Programme for £29 and continue your journey toward clarity, justice, and recovery.
Every year, thousands across the UK lose their savings, pensions, and peace of mind to corporate financial exploitation — and are left to face the aftermath alone.
Get SAFE (Support After Financial Exploitation) exists to change that.
We’re creating a national lifeline for victims — offering free emotional recovery, life-planning, and justice support through our Fellowship, Witnessing Service, and Citizen Investigator training.
We’re now raising £20,000 to:
✅ Register Get SAFE as a Charity (CIO)
✅ Build our website, CRM, and outreach platform
✅ Fund our first year of free support and recovery programmes
Every £50 donation provides a bursary for one survivor — giving access to the tools, training, and community needed to rebuild life and pursue justice with confidence.
Your contribution doesn’t just fund a project — it fuels a movement.
Support the Crowdfunder today and help us rebuild lives and restore justice.
🌐 Join us at: http://www.aolp.info/getsafe
📧 steve.conley@aolp.co.uk | 📞 +44 (0)7850 102070


Steve Conley, I’m with you on every word. I share your article with people who contact me because they’re living the same thing. Some even have it worse. When they read my own case files, they start to realise they’re not alone. They also realise the pattern is the same across sectors.
What troubles me is how many victims still stay away from AI. They look only at what’s written in a document and not at why it was written, what it hides, or what it tries to shift. They miss the metadata, the timelines, the hidden layers, the edits.
It all looks like simple mistakes. It isn’t.
Every “mistake” matches another “mistake” months later. That’s when you see the structure.
AI won’t solve a case for you. It won’t do your thinking. It gives you the wider view, and it shows you the parts that don’t fit.
But people still need to understand their case at the same depth. When both meet, the truth becomes clear.
Your article perfectly captures this, and I highly recommend your work to all victims.
#RSAWhistleblowerFiles
Artur, your insight cuts right to the truth.
What you describe — the “mistakes,” the patterns, the duplicated behaviours months apart — is exactly why independent AI matters so much. These aren’t accidents. They’re signatures. And only when you combine lived experience with the wider lens of AI do those signatures finally come into view.
You’re right: AI doesn’t replace the investigator.
It reveals the structure the investigator is trapped inside.
Most victims assume the document in front of them tells the whole story. But as you’ve shown so powerfully in the RSA Whistleblower Files, the real story sits in the parts they were never meant to see:
the edits, the timestamps, the contradictions, the missing steps, the reconstructed logic.
That’s where the truth lives — and where the system works hardest to hide it.
Thank you for sharing your own case so generously with others. You’re helping people see that their cases aren’t isolated incidents; they’re part of a pattern that stretches across sectors.
When victims bring their understanding, and AI brings its visibility, that’s when the wall starts to crack.
Love and solidarity, Artur — we move forward together.
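For readers who want to see what Artur means in practice, here is a minimal, purely illustrative sketch of one such timeline check: comparing a document’s stated creation date against its recorded last-modification date to surface files that were edited long after the fact. The file names, dates, and the `flag_late_edits` helper are all invented for illustration; they are not drawn from any real case, and real document bundles would need metadata extracted with a proper tool first.

```python
from datetime import datetime

# Hypothetical metadata for a small document bundle -- file names and
# dates are invented for illustration, not taken from any real case.
documents = [
    {"name": "offer_letter.pdf",   "created": "2015-03-02", "modified": "2015-03-02"},
    {"name": "arrears_notice.pdf", "created": "2016-07-14", "modified": "2019-11-28"},
]

def flag_late_edits(docs, max_days=30):
    """Return (name, gap_in_days) for documents whose recorded last
    modification falls suspiciously long after their creation date."""
    flagged = []
    for doc in docs:
        created = datetime.fromisoformat(doc["created"])
        modified = datetime.fromisoformat(doc["modified"])
        gap = (modified - created).days
        if gap > max_days:
            flagged.append((doc["name"], gap))
    return flagged

print(flag_late_edits(documents))
```

A check this simple won’t prove anything on its own, but run across a whole bundle it does exactly what Artur describes: it shows you the parts that don’t fit, so a person who understands the case can ask why.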