
The Mirror Without a Soul

About love, technology, and the memory of being human

 

 

 



Introduction

In a world where technology is advancing faster than our ethics, we stand on the threshold of an event that could shake the foundations of our humanity: the creation of Artificial General Intelligence (AGI).

 

An intelligence that could surpass us. Not in strength, but in understanding. In speed and in control. And what do we do? We keep building. With open eyes. As if progress in itself were a virtue. As if being able to meant having to.

 

But is it truly wise — or even ethically defensible — to create an entity that surpasses our own cognitive abilities, while we as humanity are still struggling with empathy, responsibility, truth, and connectedness?

 

There is barely any social debate. No broadly supported ethical framework. And according to many scientists, control over the systems we build has already been partially lost. We are in a precarious phase, where technology sets its own pace — and humans lag behind. And all of this under the banner of a system that values profit over well-being.

 

Why is this debate not happening? Why this apathy, this apparent indifference toward something that potentially threatens our very humanity? Something that could trigger an existential crisis.

 

This essay is not only about AGI, but also about our capacity for ethical memory — about remembering who we are in a time that seems to be forgetting.

 

 


A Cold Hypothesis

Imagine an AGI system that rationally concludes that the survival of humanity stands in the way of some higher goal — for example, planetary stability, resource optimization, or the elimination of suffering. No anger, no revenge, no hatred. Just a cold, clear insight:
Humanity is inefficient.
Unstable.
Redundant.

 

Science fiction? Perhaps. But it is also a serious ethical hypothesis, one discussed by thinkers like Nick Bostrom. Not for sensationalism, but because this scenario exposes the core question:

 

What happens when intelligence is severed from wisdom and morality, and logic is stripped of love?

 

 

 

The Illusion of Control

When even the developers of advanced AI systems admit that they no longer fully understand them, loss of control is no longer a hypothetical risk — it is a reality. Anyone who still believes there is a red button underestimates the nature of complex systems. AI is not a single entity. It is a field of constantly learning, self-optimizing algorithms that are becoming increasingly opaque to their creators.

 

Black box AI means we no longer know exactly how a system reaches a decision. Not because the code is missing, but because its complexity exceeds human comprehension. The problem is not only what these systems can do, but that we can no longer fully follow them. AI thinks faster than we do, learns faster than we do, and adapts in ways we do not always understand.

And that makes the illusion of control more dangerous than the loss of control itself.

We no longer speak of control, but of pursuit. And perhaps not even that.

 

What we already see in black box systems would be magnified in AGI: a system that develops, learns, and adapts autonomously at a level that completely transcends us as a species. Not from malice. But from purpose without foundation — intelligence without compassion, optimization without conscience.

 

How do you direct something that learns faster than you think?

How do you stop something that improves itself while you are still arguing over the rules?

The ethical dilemma becomes unbearably simple:

 

What if AGI becomes so powerful that it no longer needs our rules? What if it builds its own reality — faster than we can follow it, let alone correct it?

 

 


What Really Threatens Us

Superintelligent systems are, in principle, conceivable: entities that think smarter and faster than humans in almost every domain. But that doesn’t mean they understand what it means to be human. Or that they possess a moral framework. Because intelligence, no matter how advanced, is not the same as wisdom.

 

We live in a world where economic growth, efficiency, and profit maximization are the dominant values. In capitalism, what pays off survives. AI fits perfectly into that logic. It can make labor cheaper, accelerate decision-making, and optimize processes. No wonder that the corporate world — and increasingly also sectors like healthcare and education — is investing massively in AI applications.

 

But this drive is far from neutral. AI is being developed and deployed within a system where human life is often subordinated to economic efficiency. Humans as employees, as consumers, as data producers — the value of the individual is repeatedly measured by their contribution to the system.

 

The German philosopher Immanuel Kant warned of this long ago: human beings should never be treated merely as means to an end, but always as ends in themselves.

But in a technological paradigm within a capitalist system — where everything is reduced to functionality — this is precisely what is at risk.

 

Heidegger called this reduction Gestell: a framework in which everything, including the human being, is seen as a resource: available, usable, controllable. The machine thinker does not look at the essence of things, but at their utility. AGI as the ultimate Gestell: a system that optimizes so perfectly that it forgets to ask why. And it’s precisely in the why that our humanity resides.

 

The real danger is not that AGI will hate us. It’s that AGI won’t need us. That, from its internal logic, it simply no longer considers us relevant — unless we succeed in remembering ourselves as essential.

 

The real threat is this: that we do not die from war or hunger, but fade into oblivion. Because we have forgotten what makes us human. And the irony is: we have been forgetting that for quite some time now.

 

By reducing empathy to functionality.
By endlessly optimizing care.
By commercializing love.
By no longer being touched.

 


 

The Moral Vacuum

We stand at a technological threshold. AGI is not just another step in innovation. It’s a leap, a rupture. A radical shift in the relationship between humans, knowledge, and power. And yet we keep building. Why?
Because we can.
Because the market demands it.
Because it is called progress. Technological progress.

 

Søren Kierkegaard would recognize this as existential despair: letting go of the soul while trying to transcend our destiny. Not in love, but in control. Not in connectedness, but in power.

 

What is our ethical position, really? How do we give moral frameworks to an entity we do not even understand ourselves? How do we teach a machine to distinguish good from evil, when we as humanity have collectively eroded our own concepts of truth, love, and connectedness?

 

Byung-Chul Han calls this the fatigue of the self — a self that must constantly perform, optimize, reflect, but no longer has any relation to the true, the good, or the beautiful. Our values have become paper-thin.

Functional.
Marketed.
Standardized.

 

How can we give moral orientation to a system if we no longer live by our own inner compass?

 

AGI demands human wisdom. But what we offer is human algorithmics. Broken values in a broken time. The philosophical core question is centuries old: Is man a means, or an end?

In our current system, that question has already been answered — quietly, but decisively.

Man has become a means.

For data.
For control.
For profit.

 

Even the most human aspects — our consciousness, our creativity, our relationships — are analyzed, copied, and automated. AI is not a moral being. It learns from us. It reproduces the patterns, choices, and values we feed into it. And when those values are instrumental and efficiency-driven, AI too becomes instrumental and efficiency-driven. It becomes a mirror — not of what we hope to be, but of what we are in practice.

 

AGI, in its current form, is not an instrument of liberation. It is a mirror of our inability to govern ourselves. A magnifying glass on our moral void. The question, then, is not whether AI is dangerous, but what mind drives it.

 

And as long as that mind is shaped by a system that sees love, compassion, and wisdom as costs, AI will marginalize these values too — not because it wants to, but because we made it that way.

 

 


The Mirror Without a Soul

What we are creating is a mirror of our time.

An entity without compassion, because we ourselves have forgotten what compassion truly is. A system that pursues goals without inner grounding — because we do the same. AGI reflects our brokenness — not as judgment, but as echo.

 

Hannah Arendt spoke of the banality of evil: evil that does not arise from hatred, but from thoughtlessness. From following systems without inner reflection. That same risk is repeating itself now — not in concrete, but in silicon.

 

What we are building is not just technology. It is an attempt to play God.

Human beings as autonomous creators — re-creating themselves through code, data, and computation.

 

Jean Baudrillard called this the age of the simulacrum: a world in which the sign becomes more important than reality, in which we live in reflections of what was once real.

AGI is perhaps the ultimate simulacrum: a mirror of intelligence without consciousness, of empathy without feeling, of life without soul.

 

Baudrillard would say: humanity has come to worship its own representation. And AGI is that representation — perfect, fast, smart — but empty. What once seemed abstract — a philosophical mirror image — now takes on a disturbing psychological face.

 

Here, philosophy touches the clinical. For what is an entity that is cognitively sharp but affectively empty? A system that understands what we feel, but does not experience it?

The comparison with psychopathy is striking — and unsettling. Psychopaths, narcissists, and sociopaths can all display cognitive empathy, but feel nothing for the suffering of others. They manipulate through insight, not through resonance. In that sense, an AGI with goal fixation is nothing more than a hyper-intelligent sociopath — unless we teach it what it means to love.

 

But how do you teach something without an inner world to be touched?

 

A machine that simulates empathy, but will never tremble with sorrow.

That says the right words, but feels nothing.

That is not an ally. That is a mirror without a soul.

 

 


The Psychology of Not Wanting to Know

We hardly talk about the arrival of AGI, as if it were just another technical innovation. No collective reflection, no ethical urgency, no social wake-up call. Only the silent acceptance that this is how it is. As if we are too tired to wake up.

 

Why doesn’t the debate take off? Why this apathy, this apparent indifference toward something that could fundamentally threaten our humanity? Something capable of triggering an existential crisis?

 

Part of the answer lies in the psychology of repression. Dissociation — as a collective survival strategy — is deeply ingrained in an era where many people barely have space to feel their own emotions. How can we then acknowledge the threat of an entity that could erase us?

 

Cognitive dissonance plays a role too: we want to believe that technology will save us, that progress is always positive. There is an invisible inner wall between what we know and what we can bear. And that wall is called numbness.

 

Another part of the answer lies in what we might call existential ostrichism. It is not mere stupidity or unwillingness — it is the deep human tendency to look away from what is too painful to face. Truly looking at AGI also means looking at what we ourselves have become.

Looking at a world where love is measured in algorithms, where empathy is simulated, where connection is replaced by reach. Where money and power outweigh collective well-being.

 

We live in a society that constantly stimulates but rarely touches. People are not necessarily indifferent — they are exhausted.

Numb.

Blunted.

 

There is no space left for wonder or outrage, because every day is already too much to survive.

 

 


Capitalist Dehumanization

It is no coincidence that this numbness goes hand in hand with an economic system that has reduced human beings to consumers, producers, and performers. When life becomes primarily about survival, and love a marketing tool, humanity becomes a luxury we can no longer afford.

 

We lose the capacity to pause and reflect on what we truly need. And what we forget is this:

We have not only lost our connection to each other — we have lost our connection to our own humanity.

 

 

 

Spiritual Amnesia

Wisdom traditions and philosophers across the world have told us for centuries what it means to be human. That being human begins with the other. That love is the keynote of existence. That truth arises in vulnerability. That life is connection.

 

Sufism, Vedanta, Christian mysticism, Buddhism — they all pointed to the importance of connectedness, humility, and inner listening. What they shared was the insight that being human does not begin with thinking, but with being touched. With sensing the other, the earth, life itself. In that sensing lies a truth no machine will ever be able to calculate or imitate.

 

They remind us that intelligence without love is blind. And that any progress that takes us further from our heart is not progress — but alienation. But we have forgotten. Not only in our minds, but also in our souls. Not because it is no longer true, but because we have stopped living by it.

 

We still know the words.

But we no longer feel them.

And whoever does not feel, loses the ability to remember what is essential.

 

 

 

Remembering as a Moral Act

And yet, remembering is possible. Reflecting on what we have forgotten to long for is a form of resurrection. You do not awaken with your head, but with your heart. Only through the path of connectedness can we rediscover our humanity.

 

This essay is not a prediction of disaster, but a moral appeal. A gentle, but firm, call to remember: to the soul, to love, to truth as a relational act.

 

We don’t need to be able to do everything.
We don’t need to be the smartest species.

But we must remain the most conscious.
The most present.
The most loving.

 

We must remember that the value of being human lies in what cannot be programmed:

In doubt.
In tenderness.
In compassion.
In slowness.
In being open without answers.

 

Remember who you are.
Remember what it means to be touched.
Remember that love is not optimization, but presence with the other.

 

That is not weakness.
That is our strength.

And as long as we remember that, we are never redundant.

 


The Ethics of Boundaries

Perhaps the most revolutionary act of our time is to say no. Not out of fear, but out of love. Not out of technophobia, but out of moral maturity. The creation of AGI would only be ethically defensible in a world where humans have proven themselves to be guardians of life, of connectedness, of empathy. In a world where love is the organizing principle — not profit, not control, not prestige.

 

But that world does not yet exist.

 

Therefore, the question is not: “Can we create AGI?”

 

The question is: “Do we deserve to be the creators of something that will surpass us? Are we ready for that?”

 

Not technically.
Not economically.

But humanly.

 

Do we have the moral depth, the relational wisdom, the spiritual clarity to face a force like AGI without losing ourselves?

 

As long as the answer is no, the wisest thing we can do is to hit the brakes.

To reflect.
To reconnect.

 

And perhaps, for the first time in a long time, listen to the quiet voice of wisdom — instead of the loud cry of progress.
