The One Who Felt It.
You may have entered this letter at a point that isn’t the beginning. If you landed here directly, please go back and start there.
. . .
I don’t know exactly when you’re reading this open letter, but as I write these words, we’re approaching the final days of 2025.
The world talks about one thing and one thing only: AI. This technology has already reshaped the way we live, work, and communicate — and yet, it’s still in its infancy. It’s powerful, incredibly useful in many cases, and undeniably here to stay.
I also don’t know when you’re from. Maybe you’re Gen Alpha or Gen Z. Maybe you’re a Millennial, like me. Whoever you are, the facts and truths shared here are universal. What changes is how you interpret them — your experiences, your memories, and above all, how they make you feel.
To Millennials: we’re the last generation that remembers a world before smartphones, the internet, and artificial intelligence — and the only generation that had to grow up twice: once in the analog world, and once in the digital one. For us, the rules of the game changed while we were still playing.
Younger generations, of course, have their own views. And those views are yours — shaped by, though not entirely free from, the algorithms that feed you content, predict your habits, and compete for your attention. That’s not sarcasm; it’s reality.
How you feel — and how you want to feel — has become the currency of every social platform in existence.
But this chapter isn’t about social media addiction or dopamine cycles. You can find that in neuroscience books or therapy sessions. This is about the future — and how it’s entangled with the very same master line of code that governs us. Because I have this strange, persistent feeling that we’re being promised something that simply can’t exist: sentient ASI.
Am I the only one who feels that sentient machines are a delusional lie?
Let’s break this down.
First, the phrase artificial intelligence itself isn’t quite accurate. What we’re actually dealing with is an astonishingly advanced form of automation. That doesn’t make it less revolutionary — the technology deserves admiration — but calling it “intelligence” in the human sense is misleading. Think about what really happens when you open ChatGPT and type a question. You send a request to a program. That request activates a massive system designed to process, organize, and reproduce patterns from data it has previously absorbed. The system then generates a response in natural language — words that sound thoughtful, cohesive, even human.
But what’s truly happening underneath is mechanical: an algorithm statistically predicts what word or phrase most likely follows the last. It repeats that prediction thousands of times per second until a complete response takes shape. Behind the curtain, a model that was trained on enormous volumes of text has acquired patterns, rhythms, associations, and structures of human language. It doesn’t understand — it correlates. It doesn’t feel — it replicates patterns of expression based on probability.
After this initial stage, the system undergoes another phase — called fine-tuning — where it’s adjusted with examples and human feedback to make its replies more coherent, helpful, and safe. That’s what allows it to sound natural in conversation. But none of that adds awareness. It only refines the illusion. There is no inner life here. No curiosity, no wonder, no pain or joy. Just sequences of symbols, assembled according to rules of probability.
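To make that prediction step concrete, here is a deliberately toy sketch — a simple bigram counter over a ten-word corpus, standing in for the vast statistical machinery described above. Everything in it (the corpus, the `predict_next` helper, the seed word) is invented for illustration; real models predict sub-word tokens with neural networks, not word counts.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the "enormous volumes of text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Absorbing patterns": count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Pick the statistically most likely next word — no understanding,
    just a lookup in the counts acquired from the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

# Generation: repeat the prediction until a complete response takes shape.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> the cat sat on the
```

The point of the sketch is the shape of the process, not the quality of the output: each step is a probability lookup conditioned on what came before, repeated until a sentence emerges.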
These systems may appear articulate, even empathetic, but it’s imitation — not experience. They are masterful mirrors reflecting fragments of us back to ourselves. And that brings us to the central paradox: You can automate thinking, but you cannot automate feeling.
How it actually works (in plain English).
Absorbing patterns: During training, the model was exposed to vast collections of text — books, articles, conversations — and statistically learned how words, ideas, and contexts tend to appear together.
The transformer core: It uses an architecture called a transformer, powered by an attention mechanism that decides which parts of the text are most relevant to each new prediction.
Token prediction: The system predicts one token (a fragment of a word) at a time, choosing what’s most probable next. Then it does it again, and again, until an entire answer emerges.
Fine-tuning through feedback: Humans then guide the model, correcting and rewarding good outputs. This shapes behavior but does not create consciousness.
No sentience: It doesn’t feel what it “writes.” There’s no awareness behind its words — no perception of the world, no subjective experience.
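The attention mechanism mentioned in the list above can be sketched in a few lines of NumPy. This is a bare scaled dot-product attention applied to random toy vectors — not a full transformer — and the matrices `Q`, `K`, `V` and their sizes are illustrative assumptions, not anyone’s real configuration.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score each position against every
    other, softmax the scores into weights, and mix the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # relevance of each token to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

# Three toy "tokens", each a 4-dimensional embedding (random for illustration).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))

mixed, w = attention(Q, K, V)
print(np.round(w, 2))  # each row shows how much one token "attends" to the others
```

Notice what the weights are: numbers deciding which parts of the input matter most for the next prediction. Nothing in the arithmetic perceives, wants, or feels anything.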
Why “sentient AI” is still impossible.
The master code that defines life — the capacity to feel — cannot be replicated through information processing alone. Sentience isn’t the accumulation of data; it’s the experience of existing from within.
A model can describe warmth, but it can’t feel warmth. It can define grief, but it doesn’t ache. It can quote poetry about love, but it has never loved. It can simulate emotion perfectly — but simulation is not sensation.
Yuval Noah Harari — historian and author of the bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century — warns that artificial intelligence and automation mark a turning point in human history. He argues that AI’s growing ability to process information and make decisions could erode human agency, concentrate power in the hands of a few, and create a new class of “useless” humans displaced by algorithms. For Harari, the danger isn’t that AI will become conscious, but that it will understand us better than we understand ourselves — and use that knowledge to manipulate our choices, emotions, and beliefs at scale.
A Constant Reminder and a Note on Responsibility.
Feeling is not a choice. Action is. Between feeling and acting, there is a space — and in that space lives responsibility.
Nothing in this text invites impulsive action. Every action assumes care — for yourself, for others, and for the world we all inhabit.
The Experiment.
To illustrate why a sentient ASI (artificial superintelligence) is fundamentally impossible, let’s conduct a theoretical experiment. I’ll guide you through it — and by the end, you’ll understand, once and for all, everything you’ve read so far.
Imagine you’re a scientist. In your lab, with the help of an assistant, you’re about to run a behavioral experiment. The subjects? A human and an AI — perhaps a humanoid robot or simply a computer powered by artificial intelligence. You’ve prepared two identical rooms: each has a single door (the entrance), one chair, and one lamp. No windows. No food, no water, no bathroom — nothing else. The experiment begins. The human volunteer is led by your assistant into the room. Nothing is explained. The assistant leaves and closes the door. The same procedure is repeated with the AI: it’s placed in the second room, the door closes, and silence follows.
Now, let’s observe the human first.
Within the first ten seconds, he looks around — his senses immediately recognize the space as uncomfortable, maybe even slightly claustrophobic. His eyes scan every corner, absorbing his surroundings through sight, touch, hearing, smell, and thought. His brain synthesizes these sensations into feelings. From those feelings, new possibilities emerge: What should I do now?
He decides to sit quietly for a while, thinking something will happen soon. Ten minutes later, he starts humming a tune to pass the time — a sign of boredom. Fifteen minutes later, he stands to stretch his legs — another sign of restlessness. At the thirty-minute mark, he checks the door. Locked. He peers through the keyhole — nothing. After an hour, discomfort rises. He needs a bathroom. He’s thirsty. Hungry. He knocks on the door, calling out — irritation. When no one responds, frustration takes over. He curses. Maybe he pounds on the door. Two hours later, he’s shouting for help between gasps, inspecting the hinges, looking for a way out. By the third hour, he’s hitting the door in desperation.
We could keep watching — until exhaustion, panic, or collapse — but that’s not the point. Now, rewind. Let’s look at the AI in the other identical room.
What happens there?
Nothing.
Ten minutes pass — nothing.
Thirty minutes, one hour, three hours, ten hours — nothing.
Why? Because there was no prompt.
No one told the AI to do anything. It has no reason to act. It doesn’t “decide” — it responds. Without an instruction, it simply does nothing.
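That prompt-dependence can be pictured as a loop that only ever reacts. The names here (`respond`, `serve`) are hypothetical — a sketch of the idea, not any real system’s code.

```python
def respond(prompt: str) -> str:
    """Hypothetical stand-in for the whole model pipeline."""
    return f"echo: {prompt}"

def serve(prompts):
    """The system acts only when a prompt arrives. An empty queue means
    it idles forever: no boredom, no restlessness, no decision to act."""
    for prompt in prompts:   # nothing to iterate over means nothing ever happens
        yield respond(prompt)

# The empty room: no prompts arrive, so no behavior emerges.
print(list(serve([])))      # -> []
```

The human in the first room generates his own prompts — hunger, boredom, fear. The loop above has no equivalent: remove the input, and the output isn’t patience or panic. It’s nothing.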
You might argue that future AIs could “choose” actions when idle. But that’s not awareness — that’s automation. A preprogrammed loop running options within its code. So, what would it choose? Why one action and not another? A human acts from an immeasurable network of experiences, memories, instincts, and—above all—feelings. What would an AI have instead? A digital roulette of random commands? That’s the dangerous illusion — and we’ll get to that later. For now, focus on this: your ability to translate sensations into feelings is your perpetual motion engine. Feeling is what moves you — literally.
It’s the invisible line of code that generates your every prompt — the source of every impulse to act. And that’s why you don’t depend on anyone or anything to keep going.
You are self-propelling because you feel.
And that simple fact — that miraculous, irreversible truth — is something we cannot, and will never, transfer to a machine. We can pass it on to our children without even knowing how or why it happens. How could we possibly replicate that and implant it into an object? We simply cannot infuse feelings into metal, circuits, and plastic. Because feeling — true sentience — isn’t something you build. It’s something you are.
If you dig deeper, you’ll see even more clearly that sentient ASI is impossible. Every AI in the world will always — and forever — remain a set of impressive, powerful automations. It can do a great deal of good, unquestionably, and it has its rightful place among us. But it is also one of the most dangerous things we’ve built so far.
Allow me to show you why.
