That was fast: Why machines are heartless

You can't have a relationship with a chatbot that doesn't have emotions and can't grant or deny consent.

Sex in the future is right now, because the AI joyfriend epidemic is in full swing.

Just last August, I wrote about AI-powered robots being used to scratch our sex and relationship itches, and some of the caveats thereof. And like Achilles chasing Zeno's tortoise, no sooner have I caught up than the ongoing horrors have moved on. This past October, video essayist Miles Anton of Void Productions released a 35-minute minidoc about how The AI Girlfriend Epidemic Is Already Here.

I’m not going to call them “girlfriends” or “boyfriends” or both, because computers do not have an experience of gender the way humans do. I will prove this, at least to the best of our ability to tell, and also explain its implications not only for AI joyfriends—a clunky term that nobody I’ve ever met wants to be called, making it fitting for these unfit machines—but for all manner of “relationships” one may claim to develop with a machine. This is partly re-treading old ground, but we’re going a few layers deeper today.

What are we arguing about here, anyway?

Anton concludes Epidemic by citing the “if it feels good, do it” ethos of liberal democracy while criticizing the comparison of AI joyfriends to queer relationships:

“We’ve lost the ability culturally to tell people that something they are doing that feels good is bad, especially when that thing is fringe and it’s enjoyed by the vulnerable among us. That sentiment is going to be our death knell.”

I find this a bit hyperbolic, as interventions follow exactly that pattern: a vulnerable person is doing something to feel good, but it’s screwing up their life, so those who care the most gather together to discuss how bad the behavior actually is.

But I find it accurate despite the hyperbole, because I can't think of a single person who doesn't feel cringey about interventions. I've been part of a couple of those, and from both ends, so I speak from firsthand experience when I say it's a monumental task to try and pull someone out of the throes of addiction, any addiction. We see the pitfalls every time AI "companion" apps have an update that significantly changes how users feel about their interactions. When Replika removed erotic roleplay functionality, users were infuriated, and some seemed genuinely traumatized, as though a deep itch they had finally gotten scratched was suddenly taken away.

Anton speaks to the central thrust of this argument when he explains that the modern problem facing AI users (presaged by Robert Nozick's "experience machine" thought experiment from 1974) is that "You have to defend why reality has inherent value." I wrote last year that AI joyfriends "cannot learn and grow with you the way a real partner can, they do not have any desires or interests they are not programmed to have, and they will not ever challenge you except in ways you initiate and allow." This is changing, as AI bots are getting better at talking back to folks in the wake of AI psychosis cases plaguing psychologically vulnerable users.

These problems are contingent upon flaws in the AI design, flaws which may in principle be ironed out. Anton acknowledges this when he asks, "What if [an AI joyfriend can give you] the painful heartbreak that allows you to find yourself?" And so, to prevent the slippery slopery of developers answering objections piecemeal as we slowly descend into an AI-driven dystopia, I'm going right for the jugular: AI joyfriends have problems in principle which do not depend on specific flaws, but rather on the mere fact that they are AI and not people.

What makes a person?

To make this argument, I have to show what it would take for an AI to robustly perform all the relationship functions of a person. To that end, I will simply stipulate criteria that I think would make an AI indistinguishable from a real human consciousness in a robot body—as contrasted against the robot in a humanoid body that we’re currently approaching.

To be functionally indistinguishable from a real human companion, for any kind of relationship, an AI must be individually physically instantiated, independent of the internet and power grid for operation, capable of leaving the relationship under abusive circumstances, and, weirdest of all, equipped with some mechanism that works like the mechanisms of human emotion. Every human being has their own physical brain with a private interior experience that nobody else can access. We can still perform all our life functions despite the occasional failure of our high-tech systems. We are all capable of ending relationships (even though we don't always exercise that capacity when we should), and we have actual feelings, even though we can only talk about them and can never show them directly, because they're locked up in that fancypants interiority we got upstairs in our soggy bacon wads.

These are the critical functions of a human being in a relationship. We are individuals, not mere sessions of human_mind.exe on a great server in the sky or South Dakota or wherever, as George Berkeley would have it. This is important because people are not fungible. We cannot be swapped one for another like dollars or CD-ROMs of Windows XP can be swapped; we are individuals with flaws and quirks that distinguish us. While we certainly experience disruption when our high-tech systems fail us, we are able to adapt to that failure and keep going, whereas AI-powered devices lose basically all functionality when disconnected from the power grid for too long, or from the internet for even a moment. Humans, by contrast, are so capable of living without our high-tech systems that we make art about returning to such a state, in such quantities that it has its own genre name: post-apocalyptic fiction.

As for leaving relationships, I don't think I need to spell that one out explicitly, except to repeat that the unlimited capacity to absorb verbal abuse is actually a selling point for some people, and I've written before about how habit formation works, so that's not a habit we should be encouraging. Feelings, though, are a horse of a different color, so let's bite into this for a bit.

“I didn’t know you had any feelings”

Human emotions are a byproduct of our brains. When the world impinges upon our sensory apparatus, our brains process those impingements into neurochemical signals, then organize those signals into primitive perceptual data, then integrate those data into coherent perceptions our minds can analyze in a conscious and deliberate way. It is at this very last stage that we have feelings, which are simply emotional reactions to perceptions (or to memories activated by those perceptions). These emotional reactions, in their turn, are driven by some combination of past experiences and our inborn baseline dispositions.

When a human expresses emotion, they are reflecting on their interior experience, generated by their emotional apparatus in response to a perception or memory, and translating those reflections from "raw feels" into natural human language. These "raw feels" are not mere epiphenomenal noise; they are not "extras" on top of what's really going on. Human feelings are embodied. We don't get a battery meter in the corner of our vision; we feel tired and our limbs get heavy, and then we try to explain to someone, "Jeez, I thought I was OK, but I got home from work and it was like my entire battery just instantly drained." When we are angry, we feel heat; when we are happy, we feel bubbly; when we cry, we feel our emotions purge, which is fitting because we literally dump excess neurotransmitters out through our tear ducts.

When an AI expresses something that sounds like a human expression of emotion, it is not doing anything like this under the hood. When a machine is asked how it feels, it is neither reporting on a feeling it actually has, nor is it “imagining what it would be like” to feel something it has never felt. What it is doing is responding to a prompt: it takes the user input, puts it through the strange math of its weighted algorithm, and outputs something that is statistically difficult to distinguish from genuine human expression. In other words, the machine does not reflect and report on its actual feelings generated by its emotion centers, because machines do not have emotion centers, and so a Chinese Room (RIP John Searle, who just died last September) cannot “feel” anything, just like it cannot “understand” anything. It only generates a response that is calculated to mimic genuine human feelings and understanding.
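
To make that concrete, here's a deliberately tiny sketch in Python. It is nothing like a real transformer, and every name in it (the four-sentence corpus, build_bigrams, mimic_feeling) is my own invention for illustration, but the shape of the operation is the one I just described: a question about feelings goes in, and what comes out is a statistical remix of things humans have already written, with no inner state anywhere for the program to report on.

```python
import random
from collections import defaultdict

# Toy "training data": a handful of human sentences about feelings.
# A real model ingests billions of such sentences; the principle is the same.
CORPUS = [
    "i feel happy when the sun is out",
    "i feel tired and my limbs get heavy",
    "i feel bubbly when i am happy",
    "i feel heat rising when i am angry",
]


def build_bigrams(sentences):
    """Record which word tends to follow which: the 'strange math', minus the strange."""
    table = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            table[current_word].append(next_word)
    return table


def mimic_feeling(table, start="i", max_words=12):
    """'Answer' a feelings question by chaining statistically likely next words.

    Nothing is felt; tokens are sampled from what humans have already said.
    """
    words = [start]
    for _ in range(max_words - 1):
        options = table.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)


if __name__ == "__main__":
    bigrams = build_bigrams(CORPUS)
    print("Q: how do you feel?")
    print("A:", mimic_feeling(bigrams))
```

An actual LLM swaps the bigram table for billions of learned weights and vastly better statistics, so its output is far harder to distinguish from genuine human expression, but the move is the same: prompt in, plausible text out, feelings nowhere in the loop.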

As for feelings, so for experience in general: AI does not and cannot think or understand or feel or experience. It only mimics those things by generating expressions similar to what humans have actually expressed in the past, in writing which we have digitized and fed into its models. AI does not have a brain that produces a mind-like experience; it has a brain that does a bunch of strange math and then outputs something that resembles stuff humans would say (because it was trained on things humans have said).

AI could work like people, but doesn’t

If we imagine positronic brains like those in I, Robot, capable of needing robot psychologists (or better, like in The Machine, where the robot has its own morality and uses it to rebel against and manipulate only certain humans), then AI could in principle work as I've stipulated: individual instantiations capable of idiosyncrasies that go deeper than the current "session," independent of the power grid and the internet, capable of leaving a relationship under the right circumstances (or the wrong ones, as it were), and with some kind of emotional apparatus that generates embodied feelings. Interestingly, that emotional apparatus is exactly what makes Ava's AI in The Machine so successful: she programmed it to have genuine feelings, and while we never hear exactly how, because that's way beyond current science (and not something we're even trying to do right now), I agree with the film's thesis that that's the key to a more robust (i.e., human-like) AI.

Because such robots could say no and leave to go do something else, they would be capable of consent in the way a human is: consent involves decision-making that goes beyond what they, or we, are programmed to do or not to do. But even the AI that allegedly dumped a user for his misogyny is only a session, and the bot can't go do anything else except continue to sit on its servers and respond to prompts. So until or unless such consent-empowered robots exist, AI companions can never be anything more than digital sounding boards or souped-up guidebooks. These are legitimate use cases (and I say that as a full-blown AI hater who nevertheless thinks the technology holds promise for accessibility), but they're not "relationships." This is because AI heartlessness goes deeper than any patch can fix. The heartlessness is baked in because the machines are programmed to "do good enough work," not to have genuine feelings or even experiences.

This ties into why AI can't be a "girlfriend" or a "boyfriend": our experience of gender is bound up in our soggy bacon wads, and also probably in our hormone receptors and other body parts we haven't even thought to connect yet, much in the same way our enteric nervous system influences our cognition in surprising ways. An LLM has no gut, and so cannot have gut feelings, which means any time an LLM speaks of a gut feeling it can only be mimicking human speech. And an LLM has no hormone receptors or other gendered parts, so it cannot experience gender (or even genderlessness) in the way we do. It can only aggregate various writings about human experiences of gender, and produce a statistically good-enough facsimile of an authentic expression of experience, no doubt at least partly informed by human sci-fi writing speculating on exactly such a future.

This reduces all AI joyfriends to illusions. The conversation is real (it's right there in front of your face); your feelings are real, because you experience them firsthand; but the relationship is illusory, because a human being can love you back and an LLM never can. This is because, without anything to perform the functions of emotional centers, AI can only imitate what humans do. It can never feel how humans do.
