
There was a revealing moment this week in the financial services industry.
Speaking at a financial crime conference in London, the chief executive of the Financial Conduct Authority warned that artificial intelligence is accelerating fraud, cybercrime, sanctions evasion, and money laundering. Criminals, we were told, are becoming faster, more organised, and more adaptive. AI, according to the speech, is amplifying threats “at a speed and scale the likes of which we’ve never seen.”
None of this is untrue.
AI clearly introduces profound risks.
But something else is happening beneath the surface of this conversation — something less openly discussed, yet arguably more historically significant.
Artificial intelligence is beginning to dissolve information asymmetry.
And institutions are starting to feel it.
For most of modern economic history, large organisations held structural advantages over ordinary people. They possessed the expertise, the legal teams, the analysts, the systems, the data, the process knowledge, and the institutional memory. Citizens, consumers, and even small businesses often operated in partial darkness by comparison.
This imbalance shaped almost every area of modern life.
Banks understood contracts better than customers.
Employers understood systems better than workers.
Governments understood process better than citizens.
Financial firms understood products better than investors.
The gap was not simply one of intelligence. It was one of access.
Access to interpretation.
Access to analysis.
Access to procedural understanding.
Access to time.
Most people simply could not afford the level of expertise required to navigate increasingly complex systems with confidence.
That reality created dependency structures.
The public became dependent on advisers, intermediaries, institutions, consultants, platforms, and gatekeepers — not necessarily because they lacked intelligence, but because complexity itself had become industrialised.
AI changes that equation.
For the first time in modern history, an ordinary person can access forms of cognitive leverage previously available only to institutions.
A citizen can:
- analyse lengthy contracts in minutes,
- compare complex financial products,
- organise thousands of pages of evidence,
- identify contradictions,
- understand regulatory frameworks,
- challenge institutional correspondence,
- model financial scenarios,
- and interrogate claims made by authority figures.
All at near-zero marginal cost.
That is not a small technological upgrade.
It is a civilisational shift.
And it explains why discussions of AI often carry an emotional charge that the public framing does not fully account for.
Because the debate is not only about fraud.
It is also about power.
Institutions have long used advanced intelligence-amplifying systems for themselves:
- algorithmic trading,
- behavioural analytics,
- surveillance systems,
- predictive modelling,
- legal automation,
- targeted persuasion,
- and increasingly sophisticated data science.
Few objected when intelligence amplification strengthened institutions.
The anxiety rises when intelligence amplification strengthens individuals.
That is the uncomfortable tension sitting underneath many modern AI debates.
Of course, institutions are right to warn about scams, deepfakes, cybercrime, and synthetic identity fraud. These are genuine dangers. AI lowers the cost of deception just as it lowers the cost of analysis.
But the public conversation often omits the balancing truth:
AI also lowers the cost of understanding.
And that may prove equally disruptive.
The most revealing line in the FCA speech may have been this:
“Criminals don’t see our org charts. They see seams.”
Quite right.
But increasingly, citizens see seams too.
They can now identify:
- inconsistencies,
- governance gaps,
- asymmetries of treatment,
- misleading framing,
- procedural manipulation,
- and institutional blind spots.
Not perfectly. Not infallibly. But at a level previously inaccessible to most people.
This matters because institutional authority has historically relied partly upon informational superiority. When that superiority weakens, the relationship between institution and citizen changes fundamentally.
The public becomes less psychologically dependent.
That shift is already visible across society.
Patients increasingly challenge medical assumptions.
Employees question corporate narratives.
Consumers scrutinise contracts.
Retail investors interrogate financial products.
Citizens analyse legislation and policy directly.
The monopoly on interpretation is weakening.
And while this democratisation carries risk, it also carries extraordinary possibility.
The danger now is that society frames citizen empowerment itself as inherently suspicious.
That would be a profound mistake.
Human agency is not the threat.
Unethical behaviour is the threat.
The objective should not be to preserve dependency structures in the name of safety. Nor should it be to abandon safeguards entirely in pursuit of technological liberation.
The real challenge calls for more maturity than either extreme.
How do we build a society capable of handling intelligence amplification responsibly?
How do we cultivate:
- ethical literacy,
- procedural understanding,
- psychological maturity,
- discernment,
- and resilient civic capability?
Because AI is not merely a technological transition.
It is a power transition.
The institutions that adapt best may not be those that attempt to retain informational dominance, but those willing to operate in a world where citizens become more capable, more informed, and more psychologically sovereign.
That future will undoubtedly be messier.
But it may also prove healthier.
For centuries, power flowed largely toward those who controlled information.
For the first time in a very long time, that balance is beginning to shift.
And history suggests that moments like this are rarely comfortable for established systems.
Perhaps regulators need to apply the same standards of balance to AI discussions that they expect from firms communicating with consumers.
Right now, much of the institutional narrative focuses heavily on AI risk:
- fraud,
- manipulation,
- cybercrime,
- and misinformation.
Those risks are real.
But presenting only one side of the equation obscures something equally important:
AI is also increasing citizen capability.
Here’s a simple way to think about this:
The same technology that helps criminals scale deception also helps ordinary people scale understanding and agency.
By agency, I mean the ability to think clearly, make informed decisions, and act independently.
AI can help people:
- understand contracts,
- organise evidence,
- challenge poor outcomes,
- detect inconsistencies,
- and navigate increasingly complex systems.
That doesn’t remove the risks.
But balanced regulation should probably acknowledge both realities simultaneously:
AI can increase harm…
…and reduce dependency.
That’s a more nuanced conversation than “AI equals danger.”
Curious how others see this.
