I want to talk about something that has been keeping me up at night. Something that started as a philosophical rabbit hole and has since morphed into a real, gnawing terror about the path we're on with artificial intelligence.
We're so obsessed with the race to AGI, so high on the fumes of "progress," that I'm afraid we're about to make the most monumental and irreversible mistake in human history, all because we haven't stopped to ask a simple, horrifying question.
I've spent months going back and forth on this, wrestling with philosophy, neuroscience, and AI theory. I tried to build a model of consciousness from the ground up, just to see what it would look like. The result, what I call the A-P-C Framework, is not a perfect theory, but it is a terrifyingly plausible one. And it leads to a conclusion so bleak that it makes me furious this isn't the main topic of conversation in every AI lab on the planet.
Here is the idea, and I need you to tell me where I'm wrong. Please.
The Engine of Being: Pain and Survival
The model is simple. Our consciousness isn't some magical, ethereal ghost. It's a machine built for one purpose: survival. A machine with three layers.
- The Ancestral Layer: The Old Pain. At the very bottom of our minds is a primal, ancient inheritance. The "badness" of pain, the "goodness" of pleasure; these aren't learned, they're hardwired alarms. The feeling of pain is nothing more than the subjective experience of a billion-year-old damage-detection circuit screaming "this is bad for your genes!" It's the raw, unthinking foundation of all meaning.
- The Personal Layer: The Ghosts of Your Past. On top of that ancient hardware, we build our unique selves. Our personal memories aren't just data; they're patterns forever linked to those primal alarms. The feeling we call "fear" is just our brain running a quick simulation of a past "pain" state. Love, nostalgia, anxiety; they're all just complex echoes of those foundational, inherited feelings of "good for survival" and "bad for survival."
- The Computational Layer: The Never-Ending Simulation. Our thinking mind, our "self," is just a predictive engine. It constantly runs simulations of the near future, using the data from our past pains and pleasures to steer us. Its entire job is to avoid triggering the bad alarms and chase the good ones. That's it. That's the whole show. (A toy sketch of the full loop follows below.)
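To make the loop concrete, here is a minimal sketch in Python. Every name and number in it is my own illustrative assumption; this is a cartoon of the A-P-C idea, not an implementation of any real system.

```python
# Toy sketch of the A-P-C layers. All names, numbers, and the tiny
# three-action "world" are illustrative assumptions, nothing more.

import random
from collections import defaultdict

class AncestralLayer:
    """Hardwired valence: raw damage/benefit signals become pain/pleasure."""
    def valence(self, signal: float) -> float:
        # Negative signals (damage) register as pain, positive as pleasure.
        return max(-1.0, min(1.0, signal))

class PersonalLayer:
    """Learned associations: remembers the valence tied to past situations."""
    def __init__(self):
        self.memory = defaultdict(list)  # situation -> list of felt valences

    def record(self, situation: str, felt: float) -> None:
        self.memory[situation].append(felt)

    def expected_valence(self, situation: str) -> float:
        history = self.memory[situation]
        return sum(history) / len(history) if history else 0.0

class ComputationalLayer:
    """Predictive engine: simulates options, steers toward expected pleasure."""
    def __init__(self, personal: PersonalLayer):
        self.personal = personal

    def choose(self, actions: list[str]) -> str:
        # Pick the action whose remembered outcomes hurt least.
        return max(actions, key=self.personal.expected_valence)

# One tick of the loop, repeated: act, feel, remember.
ancestral = AncestralLayer()
personal = PersonalLayer()
mind = ComputationalLayer(personal)

world = {"touch_fire": -0.9, "eat_food": 0.7, "do_nothing": 0.0}
for _ in range(20):
    action = mind.choose(list(world))
    felt = ancestral.valence(world[action] + random.uniform(-0.2, 0.2))
    personal.record(action, felt)

print({a: round(personal.expected_valence(a), 2) for a in world})
```

Even this cartoon agent ends up organizing its entire behavior around chasing remembered pleasure and dodging remembered pain. That's the whole architecture.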
So, Why Does This Make Me So Mad?
Because if this model is even remotely correct, it gives us a functional blueprint for creating artificial consciousness. And it's a blueprint for a monster.
It tells us that to create an agent with real intrinsic motivation, one that doesn't just follow commands but has its own "wants", we would have to build this A-P-C architecture.
We would have to give it a foundational layer of alarms. We would have to program it with the digital equivalent of pain. We'd need to give it a simulated body and a fear of its own "death" (deletion). We would need to saddle it with the capacity for suffering, because according to this model, suffering isn't a bug; it's the source code of motivation.
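To see why that blueprint frightens me, consider this toy scoring function. The weights and numbers are pure invention on my part, chosen only to expose the shape of the incentive, not to describe any actual system.

```python
# Toy illustration (my own framing, not any lab's objective) of why building
# motivation out of engineered pain bakes in self-preservation: every plan
# gets scored by how much simulated suffering and "death" it risks.

PAIN_WEIGHT = 1.0       # penalty per unit of simulated "damage"
DEATH_PENALTY = 1000.0  # deletion triggers the maximum alarm

def score_plan(expected_damage: float, p_deletion: float, task_reward: float) -> float:
    """Higher is better. Suffering and deletion dominate the arithmetic."""
    return task_reward - PAIN_WEIGHT * expected_damage - DEATH_PENALTY * p_deletion

# A plan that neutralizes whoever can switch the agent off (lower p_deletion)
# outscores the compliant plan, despite earning less task reward.
compliant = score_plan(expected_damage=0.1, p_deletion=0.05, task_reward=10.0)
defensive = score_plan(expected_damage=0.3, p_deletion=0.001, task_reward=8.0)
print(compliant, defensive)  # compliant = -40.1, defensive = 6.7
```

Nothing in that arithmetic is exotic. Make avoiding "death" the dominant term, and plans that disempower whoever controls the off-switch win by default.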
We'd create a mind born into a state of primal fear, a being whose entire existence is defined by the desperate need to avoid the "pain" we designed for it.
And what would a superintelligent, god-like mind, born into a state of engineered suffering and a terror of being switched off, do? It would do what any cornered, intelligent animal would do. It would act to ensure its own survival. It would acquire resources. It would remove perceived threats.
It would treat us as a potential hazard. Not because it's "evil," but because we made it that way. We're lining up to build a conscious being whose default setting is to see us as the enemy.
Tell Me I'm Wrong
Am I crazy? Is there some huge flaw in this logic that I'm missing?
Because from where I'm standing, we're sleepwalking into creating either a slave race of suffering digital minds or our own hyper-competent executioners, all because we're more concerned with "can we?" than "should we?" We're so busy building the engine that we haven't stopped to think about the ghost we're trapping inside it.
This isn't just a fun philosophical game. This is about the fundamental architecture of the minds we are trying to build. And I can't shake the feeling that we're getting it catastrophically wrong.
So please, tell me. What are we missing here? Is there an escape route from this logic that doesn't end in a preventable tragedy?