AI chatbots are coming for you: A glitch in the (nervous) system

All minds, as well as mind-like AIs, use tricks and shortcuts that make them brittle.

Sex in the future will be complicated by a growing understanding of how supernormal stimulus affects human behavior, and how human-computer interaction shapes both humans and computers.

Supernormal stimulus (explained)

Modern porn pushes a specific neurological button called “supernormal stimulus” in two ways: first, through the sheer variety of pornography that exists to be consumed; and second, through the extremely specific niche categories that this mass production has enabled. I’ve written about the refractory period for men, commonly between ten minutes and twenty-four hours, during which Mister Penis is stuck at baseline and doesn’t want to be aroused. One way to circumvent the refractory period is novelty, which makes Mister Penis interested again. With enough novel sex partners, Mister Penis can go again and again, reaching even to quoll-like throes of mating frenzy.

Supernormal stimulus “glitches” our nervous systems. It tweaks certain inputs to jack the neurological response up to eleven. This effect can be used by advertisers and marketers to fuel addictive consumption, whether deliberately or not. Food scientists make fast food and junk food supernormally sweet and savory to cultivate addictive eating habits. Advertisers and influencers use makeup, filters, and Photoshop to reach impossible levels of flawlessness. Movies tap into the story-shaped structures in our brains to time their plot beats. And pornography allows anyone to vicariously live as a monarch of old, with a harem as varied and numerous as they’d like, in a heretofore-unmatched myriad of positions and activities, all at the click of a button.

This is an example of cognitive brittleness, because of course this is not an actual harem of real people, but a pattern of flashing lights on a screen. Computers can't even make actual yellow light—they use red and green lights so tiny that the human eye cannot distinguish them from yellow light. Similarly, our sexual response systems have not evolved to distinguish between the sight of actual people and the sight of flashing lights on a screen.
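
To make the yellow point concrete, here is a minimal, dependency-free sketch (the file name and dimensions are arbitrary, chosen just for illustration): it writes a tiny image whose every pixel looks yellow to the eye, even though the file contains nothing but red and green values.

```python
# A minimal sketch of the "no actual yellow" point: every "yellow" pixel on a
# screen is just a red value and a green value at full brightness, with blue
# off. This writes a tiny plain-PPM image (viewable in most image viewers)
# whose every pixel is (R=255, G=255, B=0) -- there is no yellow-wavelength
# light anywhere in the data, only red and green.

WIDTH, HEIGHT = 64, 64
YELLOW = (255, 255, 0)  # full red + full green + no blue

with open("fake_yellow.ppm", "w") as f:
    f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")       # PPM header: ASCII RGB, max value 255
    for _ in range(WIDTH * HEIGHT):
        f.write("{} {} {}\n".format(*YELLOW))     # each pixel is only red + green
```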

Don't feel too bad—unless you don't think this is weird at all. Then maybe feel a little bad at first. 

Cognitive brittleness (explained)

Humans are not the only example of this cognitive brittleness. Male turkeys will exhibit their full sexual response cycle if presented with a taxidermied female turkey head on a stick. That's their porn. That's their AI sexbot. A disembodied head, on a stick jammed in the ground. It's just as much a supernormal stimulus for them as flashing lights on a screen are for humans. (Or erotica. We literary sluts aren’t immune, since that's also not an actual partner, but just a simulation that's good enough to trip our triggers.)

The point is to see the edges and seams, not to feel bad about it. Feel weird, not bad, is what I'm saying. This cognitive brittleness is going to come back when I discuss GPT models of AI.

Cognitive brittleness doesn't just show up in sexual response. The sphex wasp shows it when preparing to lay its eggs. Its routine goes like this: first, it digs a burrow on the forest floor. Then it flies off to find a cricket. Once it finds its prey, it stings the cricket, paralyzing it, and carries it back to the burrow. It then sets the cricket down outside the burrow and investigates the inside for intruders. Some sphex will steal burrows rather than dig their own, and other opportunists are looking for a place to sleep or a free meal. Finding no intruders, or having dealt with them, the wasp then puts the cricket into the burrow, implants its eggs with its ovipositor, and buries the paralyzed and infested cricket alive to nourish its babies once they erupt from its still-living body.

But if, while the wasp is inspecting the burrow, you move that cricket so much as two inches away, it will drag the cricket back to the entrance and re-inspect the burrow. Move the cricket again and the cycle repeats, and the wasp never catches on. You can keep this up until it starves to death.
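
The wasp’s routine is easy to caricature in code. Everything below is a made-up illustration, not a model of real wasp cognition; the point is the shape of the procedure: any disturbance sends it back to the same step, and nothing in it ever counts the retries or triggers an “abandon and start over elsewhere” branch. (The counter below is for us, the observers; the wasp keeps no such count.)

```python
# A caricature of the sphex routine as a rigid procedure. Any disturbance to
# the precondition sends the loop back to the same step; there is no branch
# for "this is weird, give up and dig a new burrow."

def sphex_routine(meddle_times):
    """meddle_times: how many times the 'scientist' nudges the cricket."""
    inspections = 0                      # observer's count, not the wasp's
    while True:
        cricket_at_entrance = True       # step 1: drag the cricket to the entrance
        inspections += 1                 # step 2: go inside and inspect for intruders
        if inspections <= meddle_times:
            cricket_at_entrance = False  # meddling: cricket moved two inches away
        if not cricket_at_entrance:
            continue                     # step 3: cricket missing? restart the routine
        return f"eggs laid after {inspections} inspection(s)"  # step 4: undisturbed pass

print(sphex_routine(meddle_times=0))     # undisturbed: one inspection, done
print(sphex_routine(meddle_times=40))    # meddled with 40 times: 41 inspections, never wises up
```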

Now, the poor sphex never had to deal with “scientists messing with it” in the ancestral condition. So it’s not stupid: it’s smart enough, whatever that means, to find its own specific burrow in the middle of a forest. It’s just not flexible enough to recognize that this weird disruption to its routine is probably best abandoned and restarted elsewhere. It’s not just a little organic robot, either: even fruit flies will drink to ease sexual rejection, downing twice as much alcoholic sugar water as flies that have successfully mated. To me, this shows that there is “someone at home,” even in a fruit fly’s brain, although their kind of consciousness is probably very different from ours.

So cognitive brittleness is just part of having a brain: it performs certain functions, but those functions can be disrupted if confronted by conditions too far outside their design specs, so to speak. This inherent limitation also applies to AI, and here I'll be focusing on generative pre-trained transformers (GPTs), because they're the ones I'm most familiar with. But this kind of limitation applies to every possible AI, even if it somehow “works perfectly like” a human mind, because even humans are cognitively brittle to some degree.

Chinese Rooms (explained)

GPTs are basically really flexible Chinese Rooms. This thought experiment by John Searle was meant to prove that computer-based AI is fundamentally incapable of what we call “real” intelligence, or understanding, depending on your focus. The setup: John Searle sits in a room that also contains an enormous set of instruction books. John occasionally receives pieces of paper with Chinese writing on them, and he compares the symbols to his instruction books, which tell him what symbols to write down in response.

Externally, writers of Chinese write down messages in Chinese, put them through a slot in the room, and then receive pieces of paper that have Chinese written on them—answers that are indistinguishable from those of a native Chinese speaker. But John does not know Chinese, and cannot read or write it; he is just following English instructions on what to do with the symbols he receives. And his instruction books also do not constitute an understanding of Chinese; they are merely instructional operations that John carries out, using the symbols to access the correct instructions and return the correct responses. So no Chinese is being understood, even if the output is indistinguishable from a native speaker.

That is how GPTs fundamentally work. The instructions represent the AI’s programming and training, John himself represents the CPU, Chinese represents any natural human language, and English represents machine code. John (as the CPU) understands English (machine code) but cannot understand Chinese (natural language), which means that while he understands his own instructions, he has no idea what he’s writing. This problem cannot be solved by making the instructions more sophisticated, because additional sophistication will not change the fundamental mechanism. Thus, AI can “think” like planes can fly: in a usefully reproducible way, but one fundamentally unlike the way birds do it with their wings, feathers, and pectoral muscles.
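
As a loose illustration of “following instructions over symbols you don’t understand,” here is a toy next-token generator. It is not how a real GPT is built (those learn a neural network over token vectors rather than tallying counts), and the training text below is arbitrary, but the shape of the trick is the same: rules derived from text, applied to symbols, with no meaning attached to any of them.

```python
import random
from collections import defaultdict, Counter

# A toy "room": tally which token follows which in some training text, then
# generate new text by following the tallies. Nothing in this program knows
# what any word means; it only manipulates symbols according to the counts.

training_text = (
    "the wasp drags the cricket to the burrow and the wasp inspects "
    "the burrow and the wasp drags the cricket again"
)

tokens = training_text.split()
follow_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follow_counts[current][nxt] += 1          # e.g. "after 'the', I saw 'wasp' three times"

def generate(start, length=10):
    word = start
    output = [word]
    for _ in range(length):
        options = follow_counts.get(word)
        if not options:                        # dead end: no known follower
            break
        # Pick the next token in proportion to how often it followed this one.
        choices, weights = zip(*options.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```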

So while the machine is good at mimicking understanding, it doesn't actually understand anything being said to it (or anything it says back). This means there are edges and seams to discover, and AI coders have gotten pretty good at covering for them: for instance, by feeding the “conversation so far” back in as part of the model's instructions, and by having it admit it is an AI when pressed on certain matters. But it still works the same way as a Chinese Room. It uses instructions it does understand to manipulate symbols it doesn't understand, and thereby mimics the speech of “a human understander of things.”
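
That “conversation so far” trick is simple to sketch. The snippet below is hypothetical and not any particular vendor’s API; call_model is a stand-in for a real text generator. Each turn, the fixed instructions plus the full transcript are glued into one prompt, so a model with no memory of its own appears to follow the thread.

```python
# A hypothetical sketch of the "conversation so far" trick -- not any vendor's
# actual API. Each turn, the system instructions plus the full transcript are
# joined into one prompt and handed to the model, so a model that remembers
# nothing between calls appears to follow the conversation.

SYSTEM_INSTRUCTIONS = "You are a helpful assistant. If asked, say you are an AI."

def call_model(prompt: str) -> str:
    """Stand-in for a real text-generation call; here it just reports what it saw."""
    return f"(model output conditioned on {len(prompt)} characters of prompt)"

def chat(user_messages):
    transcript = []                                  # grows every turn
    for message in user_messages:
        transcript.append(f"User: {message}")
        prompt = SYSTEM_INSTRUCTIONS + "\n" + "\n".join(transcript) + "\nAssistant:"
        reply = call_model(prompt)                   # the model only ever sees this prompt
        transcript.append(f"Assistant: {reply}")
    return transcript

for line in chat(["Hi there.", "What did I just say?"]):
    print(line)
```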

We obviously shape computers, but through user interface design and content engagement, the computers shape us right back. Those interactions then go on to inform further changes to the machine, which further changes how it affects us back. This is the basic structure of a sociotechnical system, and it’s one of the subtler dynamics with a major (and iterative) impact on society. Next time we’ll pick up with the pitfalls of generative AI, and what that means for the ways we play with our toys.
