AI Doesn’t Need Feelings to Change Who Holds Power

The AI industry is obsessed with one question.

Is AI becoming human?

It’s the wrong question.

The real question is stranger.

Why are humans already treating it like an authority?

For the past few years, every conference panel, podcast debate, and think piece about artificial intelligence has circled the same philosophical fire.

  • Consciousness.
  • Sentience.
  • Intentions.
  • Emotions.

Can machines think? Can machines feel? Will machines become alive?

These questions sound profound.

But they are mostly a distraction.

Because while everyone argues about whether machines will become human, something far more interesting is already happening.

Humans are beginning to obey them.

The Ritual of Reassurance

Watch what happens whenever the conversation about AI starts getting uncomfortable.

Someone eventually repeats the same three sentences.

Almost like a safety spell.

  • AI is not conscious.
  • AI does not think.
  • AI does not have emotions.

You hear it in interviews. You hear it in policy panels. You hear it from researchers and CEOs.

The sentences calm the room.

Humans are human. Machines are machines.

Everyone relaxes.

Maybe those statements are true.

Maybe they aren’t.

But notice what happens next.

The moment those sentences appear, the conversation stops asking the most important question.

Not what AI is.

But what AI is already doing.

Because machines don’t need feelings to start influencing decisions.

They only need humans willing to follow their recommendations.

The Anthropomorphism Trap

Humans have always projected minds where none exist.

We see faces in clouds. We talk to our pets like they understand language. We yell at GPS systems when they send us the wrong way.

Psychologists call this anthropomorphism.

When something behaves intelligently, we imagine a mind behind it.

Artificial intelligence triggers this instinct immediately.

When a model writes an essay, people say it “understands.”

When it answers a question, people say it “knows.”

When it produces advice, people say it is “thinking.”

Technology companies try very hard to suppress this language.

Not because it’s inaccurate.

Because it’s dangerous.

Once people start believing a system has intentions, the next question becomes unavoidable.

Who is responsible for what it does?

But the deeper irony is this.

The debate about anthropomorphism focuses on the wrong risk.

The danger is not that people think machines are human.

The danger is that people treat machines as authorities without realizing it.

The Quiet Moment of Obedience

You can watch this shift happen in real time.

A leadership team debates a decision.

Ideas bounce around the room. Opinions collide.

Someone opens an AI system.

They ask the same question the room has been arguing about.

Thirty seconds later the machine produces an analysis.

The room reads it.

Someone says:

That’s actually a good point.

And suddenly the conversation reorganizes itself around the machine’s answer.

Nobody voted to give the system authority.

Nobody appointed it as the advisor.

But the gravitational center of the room just moved.

The machine didn’t take power.

The humans handed it over.

The Systems People Already Obey

The AI industry keeps arguing about whether machines are conscious while quietly building systems people already obey.

Not in science fiction.

In everyday institutions.

  • A hiring algorithm decides which résumés deserve attention.
  • A logistics system predicts demand and reshapes supply chains.
  • A financial model recommends how capital should move.

And sometimes the moment is even more direct.

A doctor reads an AI risk prediction for a patient.

The model estimates a high probability of complications.

The doctor pauses.

Then quietly changes the treatment plan.

No machine demanded obedience.

But the recommendation carried weight.

Because it arrived with data, probability, and thousands of historical cases.

The machine didn’t command the decision.

But it shaped it.

The Invisible Middle Manager

Inside many organizations, AI already behaves like a strange new employee.

It analyzes information faster than anyone in the room.

It summarizes evidence.

It proposes recommendations.

It influences outcomes.

But it never signs the document.

It never answers questions when something goes wrong.

It never explains the decision to the board.

In other words, artificial intelligence has quietly become the most efficient middle manager in history.

It briefs leadership.

Leadership makes the final call.

And the organization pretends the judgment was entirely human.

As if asking the machine never happened.

Humans built a machine that gives advice.

Then pretended they didn’t ask.

The Hypocrisy of the Conversation

Listen carefully to how leaders talk about AI in public.

They say the same thing over and over.

AI is just a tool.

But watch what they do in private.

  • They ask it for strategy ideas.
  • They ask it for forecasts.
  • They ask it for recommendations before making decisions.

Leaders publicly deny AI authority while privately consulting it like it’s another executive in the room.

The language says tool.

The behavior says advisor.

The Real Question

So the debate about anthropomorphism misses the point.

AI does not need emotions.

It does not need intentions.

It does not need consciousness.

It only needs influence.

And influence is enough to reshape how decisions happen inside institutions.

The real question is not whether machines will become human.

The real question is whether humans understand what happens when they start consulting machines before deciding what to do.

Because once a system regularly influences outcomes, something fundamental has changed.

Authority is no longer entirely human.

The Uncomfortable Truth

The future of AI will not be defined by whether machines become conscious.

It will be defined by how often humans obey them.

Every time a leader asks a model for guidance before acting, something subtle happens.

Judgment becomes shared.

Responsibility becomes blurred.

And the boundary between human decision and machine influence becomes harder to see.

AI doesn’t need feelings to change who holds power.

It only needs humans willing to follow its recommendations.

Ernesto Verdugo

Explore more: http://ernestoverdugo.com/mrsi