When Institutions Start Calling Agency Dangerous

Why agency-expanding technologies are often resisted precisely when they begin dissolving dependency structures

For most of modern history, institutions have held a structural advantage over individuals.

Not always because they were more intelligent.
Not always because they were malicious.
But because they possessed something ordinary people did not:

Access.

Access to information.
Access to expertise.
Access to analytical capability.
Access to systems ordinary people could neither see nor navigate alone.

This asymmetry became the foundation of modern institutional power.

Banks understood financial systems better than customers.
Law firms understood legal systems better than citizens.
Medical institutions understood health information better than patients.
Governments understood administrative systems better than the public.

And because complexity was concentrated inside institutions, individuals became dependent upon them.

For decades, this arrangement was treated not simply as normal, but as necessary.

Then artificial intelligence arrived.

Not as artificial general intelligence.
Not as science fiction.
But as something far more disruptive to existing structures:

Portable cognition.

For the first time in history, analytical capability is beginning to move from institutions into the hands of ordinary individuals at scale.

And institutions are reacting in a remarkably predictable way.


The Pattern Repeats

Whenever a technology emerges that reduces dependency on institutional gatekeepers, a familiar argument tends to appear:

“This may be dangerous in the hands of ordinary people.”

The wording changes.
The framing evolves.
But the underlying structure stays the same.

The public is told:

  • the technology is too complex,
  • too risky,
  • too easy to misunderstand,
  • too powerful without supervision,
  • too dangerous without professional oversight.

Some of these concerns are legitimate.

But historically, they also tend to emerge most forcefully at the precise moment a technology begins dissolving dependency structures.

That distinction matters.

Because there is a difference between:

  • protecting people from harm,
    and
  • protecting institutions from disintermediation.

The two are often rhetorically blended together.


Complexity Has Long Been a Moat

Many institutional systems derive authority not only from expertise, but from opacity.

Complexity itself becomes part of the commercial model.

The harder something is to understand:

  • the more valuable intermediaries become,
  • the more dependent individuals remain,
  • and the more difficult it becomes for outsiders to challenge institutional assumptions.

This is not necessarily conspiracy.
Often it is structural.

Systems evolve around professional expertise.
Language becomes specialised.
Processes become layered.
Compliance frameworks multiply.
The ordinary person gradually loses visibility into decisions affecting their own life.

Over time, dependency becomes normalised.

Financial planning provides a useful example.

For decades, high-quality long-term financial modelling was largely inaccessible to ordinary people without professional mediation. The tools were expensive. The language was technical. The expertise was concentrated inside regulated institutions and professional firms.

AI changes that equation.

Not perfectly.
Not completely.
But materially.

An individual can now stress-test scenarios, explore trade-offs, model decisions, and ask sophisticated financial questions using tools that would have been institutionally inaccessible only a few years ago.

The information asymmetry begins to narrow.

And as it narrows, institutional discomfort rises.


The New Paternalism

One of the most interesting developments in the AI era is the re-emergence of paternalistic language.

The argument is rarely:
“We want to preserve dependency.”

Instead, the argument becomes:
“We are protecting people.”

Again, this is not always insincere.

AI systems do hallucinate.
Poor tools can mislead users.
False confidence is a real risk.
Bad information can cause harm.

But there is an important question underneath all this:

Compared to what?

Because the alternative facing many individuals is not:

  • AI versus perfect professional guidance.

The alternative is often:

  • AI versus no guidance at all.

Millions of people already make major life decisions every day:

  • pensions,
  • mortgages,
  • debt,
  • contracts,
  • career changes,
  • investments,
  • legal disputes,
  • healthcare choices

— without meaningful access to expert support.

The existing system has not solved this accessibility problem.

In many sectors, it has simply priced large parts of the population out of meaningful participation.

So when institutions argue that ordinary people should not rely on imperfect AI tools, an uncomfortable question emerges:

Were these same individuals adequately served before the tools existed?

Very often, the honest answer is no.


The Democratisation of Analytical Capability

What AI is really distributing is not merely information.

The internet already distributed information.

What AI distributes is structured interpretation.

That is different.

The ability to:

  • synthesise complexity,
  • identify patterns,
  • model possibilities,
  • translate specialist language,
  • and accelerate understanding

has historically been concentrated inside professional institutions.

Now it is becoming portable.

This may prove to be one of the most socially disruptive shifts of the next twenty years.

Not because AI replaces professionals entirely.

But because it changes the balance of cognitive power between institutions and individuals.

The citizen with AI is no longer cognitively isolated in the way previous generations were.

That changes negotiations.
It changes contracts.
It changes financial planning.
It changes education.
It changes legal awareness.
It changes confidence itself.

And confidence matters.

Because dependency is not only economic.
It is psychological.

Many people have been conditioned to believe:
“I cannot understand this without an expert.”

AI weakens that belief.


This Is Not an Argument Against Expertise

None of this means expertise becomes irrelevant.

Professional judgement still matters.
Experience still matters.
Ethics still matter.
Human discernment still matters enormously.

But expertise and dependency are not the same thing.

A healthy future may involve:

  • individuals with greater agency,
  • professionals acting as guides rather than gatekeepers,
  • and institutions becoming more transparent rather than more controlling.

The question is not whether AI should replace human expertise.

The question is whether expertise will evolve into partnership — or attempt to preserve itself through opacity and dependency.

That may become one of the defining tensions of the AI age.


The Real Opportunity

The deeper opportunity here is not technological.

It is civilisational.

For generations, many systems have quietly trained people out of authorship over their own lives.

Complexity expanded.
Intermediation expanded.
Dependency expanded.

AI introduces the possibility — still imperfect, still fragile — that individuals may regain some ability to think clearly about systems that previously felt inaccessible.

Not because machines become wiser than humans.

But because ordinary people gain tools that help them navigate complexity with greater confidence and less dependency.

That does not eliminate risk.

But neither did the institutional era.

The real challenge now is whether society uses AI primarily:

  • to centralise power further,
    or
  • to distribute human capability more widely.

That may be the most important question of all.

And institutions, whether consciously or not, are already beginning to answer it through the way they respond to agency itself.
