🤖 When Algorithms Deny Humanity: The AI Arms Race in Health Insurance — and What It Teaches Us About Financial Justice

Mech up for the battle

By Steve Conley, Founder, Academy of Life Planning


In the United States this winter, a quiet war is unfolding in the world of health insurance.
It’s not between doctors and patients, or even between insurers and regulators.
It’s between two algorithms.

One denies life-saving treatment.
The other fights back.

This “AI arms race” is reshaping the boundaries of justice — not just in healthcare, but across every system that governs our financial, social, and emotional well-being.

And if we’re paying attention, it offers a prophetic warning — and a path forward — for financial planners, policymakers, and ordinary citizens everywhere.


🚨 The Story: When AI Denies a Claim

A recent PBS NewsHour segment revealed that 73 million health-insurance claims under the Affordable Care Act were denied in 2023 alone.
Fewer than 1 percent of patients appealed.
Not because they agreed — but because they were exhausted, frightened, or simply didn’t understand how.

For the first time, insurers admitted what advocates had long suspected:
71 percent now use AI to decide which claims to reject.

In one lawsuit, patients even received letters stating, “Your claim was reviewed by an AI program.”
No human review. No empathy. No context. Just code.

The supposed safeguard — a “human in the loop” — turns out to be a rubber stamp.


⚙️ How the System Really Works

Insurers deploy AI to sift through millions of data points:

  • Predicting which patients won’t appeal
  • Flagging high-cost treatments as “low priority”
  • Even identifying those unlikely to survive long enough to complete the appeal process

In plain terms: AI is being trained to deny the vulnerable — not protect them.
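
To make that logic concrete, here is a deliberately crude, hypothetical sketch in Python of what a cost-minimising triage score might look like. Every field name, weight, and number is invented for illustration; no insurer's actual model is shown or implied.

```python
# Hypothetical sketch only: a toy illustration of the triage logic described
# above, not any insurer's actual model. All fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Claim:
    treatment_cost: float     # projected cost of the treatment
    appeal_likelihood: float  # model's estimate that the patient will appeal (0-1)
    survival_months: float    # estimated months the patient has to pursue an appeal

def triage_score(claim: Claim) -> float:
    """Higher score = more attractive to auto-deny under a cost-minimising objective."""
    # Expensive claims that are unlikely to be contested score highest:
    # the objective optimises for unchallenged denials, not patient outcomes.
    expected_pushback = claim.appeal_likelihood * min(claim.survival_months / 6.0, 1.0)
    return claim.treatment_cost * (1.0 - expected_pushback)

# A costly claim from a patient judged unlikely to appeal ranks first for denial.
claims = [
    Claim(treatment_cost=80_000, appeal_likelihood=0.05, survival_months=3),
    Claim(treatment_cost=80_000, appeal_likelihood=0.60, survival_months=24),
]
for c in sorted(claims, key=triage_score, reverse=True):
    print(round(triage_score(c)), c)
```

Nothing in that sketch is clever. The harm comes entirely from what the score is asked to optimise.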

It’s a chilling inversion of purpose.
Technology meant to speed care is instead weaponised to ration it.


✊ The Fightback: AI for the People

But something remarkable is happening.
Independent developers have created tools that let patients upload their denial letters, medical notes, and records.
For around $40–$50, the system drafts a professional appeal — written in the precise legal and medical language insurers respect.

These tools have proved “quite successful,” according to Indiana University law professor Jennifer Oliva, who featured in the report.
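
The products featured in the report do not publish their internals, but as a rough sketch of how such an appeal-drafting tool could be assembled today, the core is little more than careful prompt construction around a general-purpose language model. The file names, prompt wording, and use of the OpenAI Python SDK below are assumptions made purely for illustration.

```python
# Rough sketch only: how an appeal-drafting tool *could* be built with a
# general-purpose language-model API. This is not the code of any product
# named in the PBS report; file paths and prompt wording are illustrative.
from pathlib import Path
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_appeal(denial_letter: str, medical_notes: str, policy_excerpt: str) -> str:
    prompt = (
        "You are helping a patient appeal a health-insurance denial.\n"
        "Write a formal appeal letter that cites the medical-necessity evidence "
        "and the relevant policy terms, in precise clinical and legal language.\n\n"
        f"DENIAL LETTER:\n{denial_letter}\n\n"
        f"MEDICAL NOTES:\n{medical_notes}\n\n"
        f"POLICY EXCERPT:\n{policy_excerpt}\n"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    appeal = draft_appeal(
        Path("denial_letter.txt").read_text(),
        Path("medical_notes.txt").read_text(),
        Path("policy_excerpt.txt").read_text(),
    )
    print(appeal)
```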

This isn’t just a story about healthcare.
It’s a story about agency — about citizens reclaiming the power of the written word, assisted by technology once used against them.

It’s what we at the Academy of Life Planning call human-centred automation:
AI designed not to exploit but to empower.


🧩 The Deeper Pattern: When Systems Forget Their Soul

What we see in the US health-insurance industry is not unique.
It’s the same structural sickness we’ve been diagnosing in finance for decades.

When profit is tied to denial, the system’s intelligence — human or artificial — becomes adversarial by design.

  • Banks profit from overdraft fees.
  • Insurers profit from unpaid claims.
  • Asset managers profit from complexity and opacity.
  • Mortgage lenders profit from denying borrowers their rights.

Every incentive points away from empathy, transparency, and trust.

The AI merely makes visible the logic that was already there.


🌍 Structural Trustworthiness: Our Missing Standard

Regulation lags far behind innovation.
“Human oversight” clauses sound noble but are meaningless when those humans lack authority, training, or courage to question the machine.

What’s needed now — in healthcare, finance, and beyond — is structural trustworthiness:

  1. Transparency: Every algorithm must be open to independent audit.
  2. Accountability: Decisions must trace back to real humans, not faceless systems.
  3. Integrity by Design: AI must be programmed with the same ethical standards we demand of people.
  4. Empowerment: Citizens must have equal access to tools that defend their rights.
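
To show what principles 1 and 2 could look like when enforced in code rather than in policy documents, here is a minimal, hypothetical sketch of a decision record: every automated outcome carries the exact model version an auditor can re-examine and the name of the human who signed it off. All identifiers below are illustrative.

```python
# Illustrative sketch only: a minimal, auditable decision record of the kind
# principles 1 and 2 would require. All names and identifiers are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class DecisionRecord:
    case_id: str
    outcome: str        # e.g. "approved" / "denied"
    model_version: str  # exact model build, so the decision can be re-audited
    rationale: str      # plain-language reason given to the citizen
    reviewed_by: str    # a named, accountable human, never "automated system"
    reviewed_at: str    # ISO timestamp of the human sign-off

    def to_audit_log(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

record = DecisionRecord(
    case_id="CASE-2023-000123",
    outcome="denied",
    model_version="claims-triage-v4.2.1",
    rationale="Treatment classified as out of policy scope, clause 7(b).",
    reviewed_by="J. Smith, Senior Claims Officer",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
)
print(record.to_audit_log())  # append-only entry an independent auditor can inspect
```

The point is structural: if a record like this cannot be produced, the decision should not stand.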

At the Academy, we’re applying these principles to financial planning.
Our Holistic Wealth Planners use AI not to sell products but to help clients understand themselves, their goals, and their means.
We don’t extract data — we return insight.
We don’t monetise confusion — we clarify truth.


🛡️ Lessons for Planners and Policymakers

  1. Automation is not neutral.
    Every algorithm reflects its creator’s intent. Ethical review must precede deployment.
  2. Empowerment must be reciprocal.
    As institutions automate, so must individuals.
    AI should never be a tool of one-sided power.
  3. Education is protection.
    The more people understand how systems think, the harder it becomes for those systems to exploit them.
  4. Justice needs design.
    The architecture of fairness cannot rely on goodwill alone — it must be coded, audited, and enforced.

🔮 From Denial to Renewal

The same technology that denies care today could deliver compassion tomorrow — if guided by conscience.

The same AI that predicts who will die before an appeal could predict who needs help before a crisis.

And the same algorithms used to exclude could become the very ones that rebuild trust — if we, collectively, choose a different story.

That is the work of the Academy of Life Planning and our Get SAFE initiative:
to use AI as a shield for citizens, not a sword for corporations;
to train planners who serve conscience, not commission;
to build systems where the algorithms serve life.


“AI doesn’t create injustice — it amplifies whatever system it’s placed in.”

If we want a future where intelligence serves humanity,
we must first design systems worthy of being intelligent.


💬 Join the Conversation

How do you see AI reshaping the ethics of financial planning, healthcare, or justice?
Have you witnessed systems that use automation for empowerment rather than exclusion?
Share your thoughts below — and help us design a world where technology reflects the best of human nature, not the worst of it.


Every year, thousands across the UK lose their savings, pensions, claims, homes, and peace of mind to corporate financial exploitation — and are left to face the aftermath alone.

Get SAFE (Support After Financial Exploitation) exists to change that.
We’re creating a national lifeline for victims — offering free emotional recovery, life-planning, and justice support through our Fellowship, Witnessing Service, and Citizen Investigator training.

We’re now raising £20,000 to:
  • Register Get SAFE as a Charity (CIO)
  • Build our website, CRM, and outreach platform
  • Fund our first year of free support and recovery programmes

Every £50 donation provides a bursary for one survivor — giving access to the tools, training, and community needed to rebuild life and pursue justice with confidence.

Your contribution doesn’t just fund a project — it fuels a movement.
Support the Crowdfunder today and help us rebuild lives and restore justice.

Join us at: http://www.aolp.info/getsafe
steve.conley@aolp.co.uk | +44 (0)7850 102070
