Harvey vs Humanity: How Captured AI Is Rewriting Justice for Profit

When a global law firm like CMS announces it’s “staying ahead of the curve” by rolling out the Suits-inspired Harvey AI, most of the legal press applauds the efficiency gains.
But behind the headlines lies a more uncomfortable truth — one that strikes at the very heart of access to justice.


⚖️ The Rise of Captured AI

Harvey AI, now valued at $8 billion, is the darling of the legal tech elite. It promises productivity gains of 100 hours per lawyer per year across CMS’s 21 global offices. The firm reports fewer write-offs, improved profit margins, and happier lawyers.
It sounds impressive — until you ask: who actually benefits?

This is AI built for corporate efficiency, not human equity.
It’s a textbook example of Captured AI — a closed, venture-capital-funded system designed to serve institutions, not individuals. The tools of justice are being optimised for speed and profit rather than fairness and truth.


🚪 Closing the Door on Citizens

While global firms deploy AI to accelerate contract reviews and regulatory analysis, ordinary citizens still face a justice system riddled with delays, fees, and opacity.
Victims of financial crime can now be charged £250 just to have a complaint heard at the Ombudsman. Meanwhile, CMS automates its due diligence with an $8 billion algorithm.

AI, in the wrong hands, risks widening the justice divide — between those who can afford to buy automation, and those left to fight bureaucracy alone.

The real concern with Harvey AI (and similar “agentic” systems being embedded in law, finance, and government) isn’t what it can do technically, but what kind of decisions it’s being trained, instructed, and incentivised to make.

Based on both Harvey’s public use cases and Get SAFE’s experience with how the system behaves in consumer-facing disputes, here’s a structured breakdown of the kinds of decisions such AI tools might make that are not in the consumer’s best interests:


⚖️ 1. Contractual Interpretation Bias

  • Issue: Harvey is trained primarily on commercial legal corpora and institutional templates, not on consumer-rights cases.
  • Effect: It tends to interpret ambiguous clauses in favour of the drafter (usually the corporate side).
  • Example: In a complaint about hidden fees or unfair terms, the AI might classify them as “customary commercial practice” rather than “misrepresentation.”

🧩 2. Automated Exclusion of Claims

  • Issue: AI-driven due-diligence tools often filter out low-value or complex claims as “non-material.”
  • Effect: Genuine consumer losses may be deprioritised or dismissed at triage, never reaching a human reviewer.
  • Example: Victims of small-scale pension mis-selling or mortgage overcharging are algorithmically excluded because their cases “don’t move the needle” (see the sketch after this list).
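
To make that triage concern concrete, here is a minimal, purely hypothetical sketch of the kind of materiality filter such a system might apply. The claim data, field names, and thresholds are all invented for illustration; nothing here is drawn from Harvey’s actual code or configuration.

```python
# Hypothetical sketch of an automated claims-triage filter.
# All fields, thresholds, and claims are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    estimated_loss_gbp: float   # estimated consumer loss in pounds
    complexity_score: float     # 0.0 (simple) to 1.0 (highly complex)

MATERIALITY_THRESHOLD_GBP = 50_000   # the firm's "non-material" cut-off
MAX_COMPLEXITY = 0.6                 # complex cases cost more to review

def triage(claims):
    """Return only the claims a cost-driven system escalates to a human."""
    return [
        c for c in claims
        if c.estimated_loss_gbp >= MATERIALITY_THRESHOLD_GBP
        and c.complexity_score <= MAX_COMPLEXITY
    ]

claims = [
    Claim("pension mis-selling victim", 8_500, 0.8),      # small and messy: filtered out
    Claim("mortgage overcharge victim", 3_200, 0.4),       # small: filtered out
    Claim("institutional counterparty", 2_400_000, 0.3),   # large and clean: escalated
]

for c in triage(claims):
    print("Escalated to human review:", c.claimant)
# Only the institutional claim survives triage; neither consumer claim
# ever reaches a human reviewer.
```

The point of the toy example is the threshold itself: once “materiality” is defined in pounds, small consumer losses are discarded automatically, however genuine they are.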

📉 3. Risk-to-Reputation Over Risk-to-Client Decisions

  • Issue: Corporate-owned AI aligns to firm risk frameworks, not human ethics.
  • Effect: It may recommend strategies that minimise reputational or financial exposure, even if this withholds disclosure or redress.
  • Example: The AI may suggest settling quietly under an NDA rather than disclosing systemic misconduct.

🧮 4. Quantifying Justice in Economic Terms

  • Issue: Machine models optimise for efficiency (time saved, write-offs reduced) rather than fairness.
  • Effect: It may recommend cheaper settlements or dismissals simply to close cases quickly.
  • Example: “Offer a goodwill gesture” instead of “admit liability,” mirroring the Ombudsman’s cost-containment logic (see the sketch after this list).
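
That cost-containment logic can be illustrated with an equally hypothetical sketch. Every payout, probability, and cost below is invented; the only thing it shows is what happens when the optimisation target is expected cost to the firm rather than fairness to the complainant.

```python
# Hypothetical sketch: choosing a response by minimising expected cost to the firm.
# Every payout, probability, and cost below is invented for illustration.

options = {
    # option: (expected payout in GBP, probability the complainant escalates further)
    "goodwill gesture, no admission": (500, 0.30),
    "full redress with admission of liability": (12_000, 0.02),
}

ESCALATION_COST_GBP = 4_000  # assumed cost of an ombudsman referral or litigation

def expected_cost(payout, escalation_probability):
    """Expected cost to the firm; fairness to the complainant never appears."""
    return payout + escalation_probability * ESCALATION_COST_GBP

recommendation = min(options, key=lambda name: expected_cost(*options[name]))
print("Recommended response:", recommendation)
# Prints the goodwill gesture: cheapest for the firm, regardless of what
# would actually be fair to the complainant.
```

Whatever the real numbers, a system told to minimise cost will reliably recommend the quiet, cheap option.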

🚪 5. Privileging Institutional Data Over Testimonial Evidence

  • Issue: Harvey-type systems weigh structured, institutional data (contracts, precedents) over unstructured narratives (emails, human testimony).
  • Effect: Victim stories and lived experiences are down-weighted or ignored.
  • Example: Consumer evidence of verbal misrepresentation may be flagged “anecdotal” and excluded from analysis.

🧠 6. Framing Consumer Complaints as “Legal Risk”

  • Issue: AI is optimised to defend, not empathise.
  • Effect: Genuine grievances are reframed as potential liabilities.
  • Example: “Client alleging fraud” becomes “risk of reputational exposure—mitigate communication.”

🧰 7. Regulatory Capture Through Model Training

  • Issue: The datasets used (law firm case files, regulator rulings) already contain structural bias toward institutional perspectives.
  • Effect: The AI perpetuates historic injustice by learning from outcomes that were already unfair.
  • Example: If Ombudsman decisions favour banks 80% of the time, the AI will learn that pattern as “normal” (see the sketch after this list).
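
As a minimal sketch of how that capture happens, consider a naive predictor fitted to synthetic historical rulings in which the firm wins 80% of the time. Both the data and the “model” below are deliberately simplistic and invented for illustration; the point is that any system rewarded for reproducing past outcomes will also reproduce past bias.

```python
# Hypothetical sketch: a model that learns skewed historical outcomes as "normal".
# The history is synthetic and the "model" is deliberately simplistic.
import random
from collections import Counter

random.seed(0)

# Synthetic history: 80% of past rulings went in favour of the firm.
history = random.choices(["firm wins", "consumer wins"], weights=[80, 20], k=1_000)

# A baseline predictor that simply learns the majority outcome.
majority_outcome, _ = Counter(history).most_common(1)[0]

print("Learned outcome distribution:", Counter(history))
print("Prediction for the next consumer complaint:", majority_outcome)
# Any model rewarded for matching historical decisions inherits this skew;
# to the system, ruling against the consumer is simply the statistically
# "correct" answer.
```

A production model would be far more sophisticated, but the training signal is the same: yesterday’s skewed decisions become tomorrow’s “normal”.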

🔐 8. Confidentiality Used as a Shield

  • Issue: Harvey is proprietary. Its reasoning, datasets, and bias controls are not transparent.
  • Effect: Consumers can’t audit or challenge how decisions are made about them.
  • Example: A consumer’s case might be deprioritised or dismissed by an AI they cannot see or question.

⚙️ 9. Employment Displacement Without Accountability

  • Issue: AI replaces junior lawyers and paralegals — often those who acted as internal conscience checks.
  • Effect: Ethical nuance is lost; machine logic dominates.
  • Example: A human might say “this feels wrong.” The AI says “this is compliant.”

🧭 10. Normalising Automation Without Redress

  • Issue: Firms begin to equate AI output with legal judgement.
  • Effect: Consumers face a wall of algorithmic decisions with no human empathy or contextual review.
  • Example: A rejected redress claim may never be revisited because “the AI said no.”

🔎 In summary

Harvey AI doesn’t need to be malicious to be harmful — only misaligned.
Its priorities are efficiency, profitability, and risk mitigation — all serving the firm, not the citizen.
For Get SAFE, this reinforces the urgency of building sovereign AI for justice: transparent, auditable, and trained on truth-seeking rather than profit protection.


🧠 What This Means for Get SAFE

For Get SAFE, this is not a threat — it’s a call to action.
What CMS is doing with Harvey, we are doing with conscience.

Our Citizen Investigator’s Playbook and AI-assisted evidence tools are built on the opposite philosophy:

  • Open, ethical, and sovereign AI.
  • Designed for people, not profit.
  • Built to empower citizens to conduct their own investigations, draft correspondence, and build credible redress cases — without waiting for permission from captured systems.

Where Harvey reduces billable hours, Get SAFE restores human hours — giving time, power, and dignity back to victims of exploitation.


🧩 The Fork in the Road

The legal sector stands at a crossroads:

  • One path leads to automation for efficiency, owned by corporations.
  • The other leads to augmentation for empowerment, owned by humanity.

Get SAFE is choosing the latter — building a justice system powered by citizens, guided by ethics, and amplified by AI that serves the public good.


🌍 The Vision

AI doesn’t have to replace lawyers or perpetuate inequality.
Used wisely, it can unite people, protect truth, and democratise justice.

That’s why Get SAFE is training Citizen Investigators to use AI responsibly — to rebuild structural trust where institutions have failed, and to ensure that no victim stands alone in the age of algorithms.

This story matters to Get SAFE both strategically and philosophically. It marks a profound shift in the balance of power within the justice system, one that directly affects access to justice, structural trustworthiness, and the citizen’s right to fair redress. Here is a breakdown of its significance:


🔍 1. Acceleration of “Captured AI” in Legal Systems

CMS’s adoption of Harvey AI (a proprietary, closed, venture-capital-backed system valued at $8bn) represents the institutional consolidation of AI within elite legal networks. These systems are designed primarily for efficiency and profitability, not transparency or accountability.
For Get SAFE, this confirms the urgency of developing “sovereign AI for justice” — open, citizen-led tools that empower victims and investigators rather than embedding bias in corporate pipelines.


⚖️ 2. Widening the Justice Divide

While CMS boasts of saving 100+ hours per lawyer per year, the benefits are entirely internalised: reduced write-offs, higher margins, and lower costs for institutional clients.
Meanwhile, ordinary citizens — especially victims of financial exploitation — face legal aid cuts, ombudsman fees, and opaque complaint systems.
This technology therefore risks deepening inequality between corporate justice (AI-accelerated) and citizen justice (AI-excluded). Get SAFE’s mission sits precisely in this gap.


🧩 3. Proof of Concept for AI-Augmented Advocacy

The CMS–Harvey example validates Get SAFE’s own approach:

AI can already perform high-quality contract review, due diligence, and regulatory analysis.
What CMS applies to defending corporate interests, Get SAFE can apply to investigating wrongdoing and preparing redress claims, using ethical, sovereign AI tools such as Notion + GPT, guided by citizen investigators rather than law-firm hierarchies.

This gives Get SAFE a strong narrative:

“What billion-dollar law firms do with Harvey, citizens can now do with Get SAFE.”


🚨 4. Job Displacement and Structural Ethics

CMS’s expansion of AI use alongside redundancies illustrates the extractive model of AI deployment: automation without redistribution.
For Get SAFE, this underlines the importance of AI ethics rooted in social purpose — ensuring displaced professionals (e.g. paralegals, compliance officers) can be retrained as Citizen Investigator Mentors in a new justice ecosystem.


🌍 5. Strategic Positioning for Advocacy

This case offers a perfect example for Get SAFE to cite in articles and talks:

  • “Captured AI vs Sovereign AI” in the justice sector.
  • The risk of AI entrenching systemic bias.
  • The opportunity for citizen-scale AI justice platforms to restore fairness.

It supports our core message: AI should democratise justice, not monetise it.


💬 Closing Thought

Harvey AI may be named after a fictional lawyer.
But the real story will be written by real people — citizens using sovereign AI to take justice back into human hands.


About Get SAFE

Get SAFE (Support After Financial Exploitation) is a citizen-led initiative that empowers victims of financial harm to investigate, document, and pursue redress.
Through AI-enabled training, structured playbooks, and collaborative fellowship, Get SAFE transforms victims into advocates — ensuring that truth and justice are not luxuries, but rights.


In One Sentence

Goliathon turns victims of financial exploitation into confident, capable citizen investigators who can build professional-grade cases using structured training, emotional support, and independent AI.

Instant Access

Purchase today for £2.99 and get your secure link to:

  • the training video, and
  • the downloadable workbook.

Link to Goliathon Taster £2.99.

If the session resonates, you can upgrade to the full Goliathon Programme for £29 and continue your journey toward clarity, justice, and recovery.


Every year, thousands across the UK lose their savings, pensions, and peace of mind to corporate financial exploitation — and are left to face the aftermath alone.

Get SAFE (Support After Financial Exploitation) exists to change that.
We’re creating a national lifeline for victims — offering free emotional recovery, life-planning, and justice support through our Fellowship, Witnessing Service, and Citizen Investigator training.

We’re now raising £20,000 to:

  • Register Get SAFE as a Charity (CIO)
  • Build our website, CRM, and outreach platform
  • Fund our first year of free support and recovery programmes

Every £50 donation provides a bursary for one survivor — giving access to the tools, training, and community needed to rebuild life and pursue justice with confidence.

Your contribution doesn’t just fund a project — it fuels a movement.
Support the Crowdfunder today and help us rebuild lives and restore justice.

Join us at: http://www.aolp.info/getsafe
steve.conley@aolp.co.uk | +44 (0)7850 102070
