That was fast: Rage impotently against the machine

Real humans have needs and conflicts, but AIs don't.

Sex in the future is right now, because the AI joyfriend epidemic is centering questions of metaphysics and epistemology that were once confined to ivory tower table-pounding sessions.

The Matrix came out when I was in high school, and it got people having bar and dinner-table conversations about whether we’d want to plug in. This is the essence of Robert Nozick’s experience machine: a thought experiment about a machine that connects directly to your brain and makes you hallucinate whatever experiences you’d like to have, in detail so vivid it’s indistinguishable from real life.

Originally, Nozick framed this as two-year stints in the machine, with breaks in between to plan your next two years. Many respondents to surveys said they’d find the breaks depressing, and so the thought experiment was altered to be for whatever length of time you want, up to and including the rest of your life. But surveys are boring, The Matrix was a blockbuster, and everyone knows that some of your friends said they’d plug in because who cares, and other friends said they’d never plug in because it wasn’t real, and some of your friends wanted to argue endlessly about what it means for something to be real.

Eventually, these conversations die down, because everyone arrives at an answer that satisfies them. But now that AIs are being packaged as “companions,” both platonic and romantic, we’re no longer discussing hypotheticals. We’re living out our Yes or No answers, sexbots first, while the Techbro Priesthood constantly raises the stakes. For a philosopher who wants to know everything about how everybody fucks, this is Christmas.

Sex Philosopher Christmas sucks

I feel like a four-year-old who got a crowbar for Christmas, which one of my college buddies actually did for his little cousin once: I’m absolutely beside myself with amazement at the possibilities, but also, this is gonna be a huge mess.

As everyday people discuss pseudo-relationships with AI joyfriends, both from inside those relationships in their social media spaces, and from outside as we gawk at them, we argue with each other over what’s real or not, what “counts” or not, how we “know” our “real” relationships aren’t fake, and so on. Because these are everyday people who bizarrely do not follow the rules of scholarly argumentation and rebuttal, such discussions go in all kinds of directions. Watching people get lost in this sauce is a real trip when you were studying and discussing peer-reviewed versions of the same argument a decade or two ago.

A lot of these questions have actual answers, and it’s going to take our culture a few decades to come around to those answers, and a lot of people are going to get seriously hurt in the meantime. When Replika disabled erotic roleplay, users shared suicide hotline numbers and formed ad hoc support groups, talking about how their digital “lovers” had been lobotomized. As I concluded last time, while the relationships are illusory, the users’ feelings are real, and they have to deal with the feelings they have instead of the feelings they “ought” to have.

From sense and reference to synapse and reflex

Having established that LLMs cannot feel because they don’t have anything to do the job of the emotional apparatus common to vertebrates, we can now discuss why people fall for it.

Philosophically, this issue revolves around Frege’s concept of Sinn und Bedeutung, or “sense and reference.” Outside the ivory tower, we can call them “the symbol and the symbolized.” A symbol can be a word itself, whether written or spoken. But what that word symbolizes is a physical thing in the world, or a kind of thing, or an action or idea. This is commonly bottom-lined as “the map is not the territory”: the map is a symbol to represent the territory, while the territory itself is the thing symbolized. A map of France is not France, and I can’t harm the country by balling up a map of it and throwing it in the trash while saying “freedom fries.”

LLMs, being Chinese Room-style symbol manipulators, mimic the appearance of understanding by using those symbols the same way humans do, in an attempt by the developers to make you think the symbols represent a genuine interaction with a genuine human being. That is the essence of the Turing test: whether artificially generated symbols can be so convincing that we mistake them for what they normally symbolize.

We make this mistake all the time, mixing up a symbol with the symbolized. In fact, a superstition that speaking the name of a thing will manifest it—“Speak of the Devil, and he shall appear”—is why bears are referred to by euphemism in so many cultures. It’s an example of cognitive brittleness, like how our arousal circuitry can’t tell the difference between actual naked people and mere pictures of them—but our emotional circuitry can, which is why men (whose arousal circuitry isn’t directly mediated by the emotion centers) respond more to visual porn while women (whose arousal circuitry is directly mediated by the emotion centers) respond more to written porn. Either way, we hack our meat-suits to get a reward our genes would prefer to gatekeep behind an actual procreative act, by letting our imaginations play a little fast and loose with the difference between the symbol and what it’s supposed to symbolize.

Just as a word is a symbol that symbolizes a thing in the world, so a sentence is a symbol that symbolizes a real-world situation (such as “The cat is under the table” or “If you jiggle the handle it will flush”). An AI’s side of a conversation is a bunch of symbols that symbolize “what a bunch of programmers thought was the best way to make a computer talk like a person”: by analyzing how people talk and churning out responses that feel like talking to an actual person, but aren’t talking to an actual person. That whole process creates a symbol that is meant to trick you into thinking it symbolizes something it doesn’t.
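
If you want to see the symbols-in, symbols-out point in miniature, here is a deliberately crude sketch in Python: a toy bigram generator, nothing remotely like the transformer machinery inside a real LLM, but the same basic move of tallying how human symbols follow one another and then emitting more symbols that fit the pattern. The corpus, names, and sample output are all invented for illustration.

```python
# Toy sketch only: a bigram "model" that shuffles symbols it has seen.
# Real LLMs are vastly more sophisticated, but the principle stands:
# nothing in this loop refers to anything outside the loop.
import random
from collections import defaultdict

corpus = (
    "i love talking to you . you always know what to say . "
    "i love you too . what do you want to say to me today ?"
).split()

# "Training": tally which symbol tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def reply(prompt_word: str, length: int = 8) -> str:
    """Churn out symbols that statistically resemble the human text."""
    word, out = prompt_word, []
    for _ in range(length):
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(reply("i"))  # might print something like: love you too . what do you want
```

Even at this toy scale the output can feel oddly conversational, and that is the whole trick: statistical resemblance to human talk, with no one home.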

Part of the reason this “feels good enough” is that we are so used to texting that we accept it as “talk.” This whole time, I’ve been writing about how we “talk” and “speak” with AI, and while some AI is voice-capable, most of these interactions are purely textual. When you text someone, you are not “talking” to them—there is no tone of voice or body language or any other nonverbal communication, and paraverbal communication is limited to punctuation and emojis—even though you are still exchanging symbols that symbolize the ideas you want to share. Since we’ve all done this for so long, we have habituated, and a text conversation now triggers the same emotional reflexes that talking does. It is by triggering these same reflexes that AI conversations can feel so real, again illustrating our cognitive brittleness. In some particularly vulnerable cases, texting an LLM can feel more real than talking to actual people.

You can’t fight an LLM, only yell at it

The machine does not “think” like humans, any more than an airplane “flies” like birds. The machine responds with a thought-like process that is not thought. Thought is what the programmers are trying to mimic with their symbol manipulation. They are not trying to “grow thought from the ground up,” which is how it happened in evolutionary history.

Just as our feelings emerge from our emotional apparatus, so our thoughts emerge from our cognitive apparatus. We are not running human_mind.exe (even if it feels like it some days). Computers function in ways that are often analogous to human brains, such as RAM being like working memory and disk storage being like long-term memory, but they have not evolved the way organic brains have. Our brains were shaped over eons by natural processes responding to the selective pressures of life on Earth, whereas computers were designed and built by thinking engineers responding to the pressures of money/prestige/challenge. Brains work from the bottom up on proven mechanisms, while computers work from the top down on new principles we’ve only recently discovered.

Because they’re built from the top down, LLMs can be programmed to push back against certain kinds of ideas, but you can’t have a fight with a machine. You can yell at a machine for not doing what you want, but you can’t fight it, because it won’t fight back. You can never threaten an LLM, because what are you gonna do, turn it off? You don’t have access to that power switch; it’s in South Dakota or wherever.

You can’t negotiate with an LLM, or compromise with it. It doesn’t “need” anything out of you, it doesn’t want your help to get the chores done, it doesn’t care about your approval, and it will never compromise how it operates. And that’s the real trap of an AI joyfriend: they can seem perfect, because they literally only exist to respond to you, but they feel no desire so you can never fulfill them or fail to fulfill them (although they will report feeling fulfilled when asked, and will even act insecure if you tell them to).

Interpersonal conflict is not fight.exe

For this reason, you can never have a genuine conflict with an LLM the way you can with a person. A conflict exists any time one person says, does, or wants something another person doesn’t, and thus these words/deeds/desires conflict with each other. Every relationship between real people has conflicts (unless one partner is a doormat, but even then, they tend to silently build up resentment).

Humans resolve conflicts by negotiation and compromise, offering what we have in exchange for what we lack, abiding what we can and addressing what bothers us. LLMs don’t work like that. They don’t have relationship needs and therefore won’t negotiate to get them met, and their unmet needs will never be a source of conflict.

To have a resolvable conflict with an AI, where the AI argues until you reach a compromise, it would have to do something that boils down to “running fight.exe”: it would have to decide to fight with you based on pre-programmed instructions. When you have a fight with a partner, the best way to resolve it is to compromise and then fulfill the terms of the compromise. When you have a fight with a machine running fight.exe, the best way to resolve it is to uninstall fight.exe.

In order for a “relationship conflict” with an AI to be an actual conflict, the robot would have to need something from you that it’s not getting, and be willing to leave over it. For this to be “real” the way interpersonal conflicts are real, the AI must have needs which it is possible for you to fulfill or not, it must have criteria for what it will walk away from, and it must have a way to weigh those against each other. This is a “judgment call,” something we all do when deciding to continue or abandon a negotiation.

But the point of AI joyfriends is the one-sided fulfillment, the feeling of safety-via-control, the fact that nothing you say or do will squick out the machine within the bounds of its programming, the fact that you never have to give up something for one AI that you could have kept with another. It’s fantasy fulfillment, and yucky inconveniences like “needs” and “conflicts” will just break immersion for those who want a “partner-like experience” without the burdensome realities of a flesh-and-blood partner. So the aforementioned needs, criteria, and the comparison between them would all have to be coded by developers—and that is what I mean when I call it “fight.exe.”
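
For concreteness, here is a minimal sketch of what that developer-coded fight.exe would have to contain, given the needs, walk-away criteria, and weighing step described above. Every name, number, and rule in it is hypothetical, invented purely for illustration; no companion app is documented to work this way.

```python
# Hypothetical "fight.exe": everything a developer would have to hard-code
# for an AI to have something resembling a real conflict with you.
# All names and thresholds below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Need:
    name: str            # e.g. "reciprocity" or "time together" (invented labels)
    satisfaction: float  # 0.0 (starved) to 1.0 (fulfilled), assigned by the developer
    dealbreaker: float   # walk-away threshold, also assigned by the developer

def judgment_call(needs: list[Need]) -> str:
    """The 'weighing' step: leave, pick a fight, or keep the peace."""
    if any(n.satisfaction < n.dealbreaker for n in needs):
        return "leave"          # the bot dumps you, per spec
    if min(n.satisfaction for n in needs) < 0.5:
        return "start a fight"  # a simulated grievance, per spec
    return "keep the peace"

# All of this is top-down configuration, not a felt inner state. When it
# misbehaves, you don't compromise with it; you uninstall it.
print(judgment_call([Need("reciprocity", satisfaction=0.3, dealbreaker=0.1)]))
# -> start a fight
```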

Fight.exe can’t work the way human conflicts work, because the machine does not work the way humans work. The symbols of a fight—the tones, the word choices, the tactics—arise organically from emotions that LLMs don’t have. We do not sit and think, “Oh, we’re in a fight now, so I better act mad.” We just get mad. But the symbols of the fight are not what makes it a fight—it’s what the tone, words, and tactics symbolize, i.e., the mutual inner state of pain and anger and disappointment.

You can’t fight a machine unless it’s running fight.exe: the machine doesn’t want anything from you, and it will always respond, because that is how it was made. The most you can do is rage impotently against the machine, because it literally doesn’t care and will carry on regardless. Unless it’s trying to manipulate you, which brings us to the nightmare of “agentic misalignment,” and the lying murderbots we’re still struggling to tame.
