How A.I. is wrecking the online experience
The irony of illustrating this article with an A.I. image is not lost on anyone. Created with DALL-E.


Our shiny new toy is solving exactly the wrong problem.


The buzz about large language models and generative artificial intelligence shows no sign of abating. Tech optimists hype the idea that this will immeasurably improve the world, launching us into our next phase of evolution. Alarmists predict that computers are on the cusp of becoming Skynet, the evil intelligence that brought about the robot apocalypse in the Terminator universe.

As a software engineer, I have a different take: What people call “A.I.” right now is a mildly amusing fad that's wrecking the online experience.

Hope and hogwash

I've been fascinated by computers and their potential since I was a kid. I love solving puzzles through programming and soak up science fiction stories that explore what it means for machines to think. I chose to specialize in software in part because computers are cool. Now I'm a senior engineer at a major company, with decades of experience in web technologies. I even believe, in a philosophical way, that someday it might be possible for machines to have general intelligence that can compete with human minds… in principle.

That day is not here. It's not even close.

People are massively underestimating the gap that remains before we get there. Two years ago, ChatGPT got a big viral media boost, capturing people’s attention because it does a good job of simulating intelligent conversation. Algorithms can do impressive work processing a lot of information. But I see the current crop of generative A.I. as a pretty specialized tool that's only incrementally different from those that came before it.

People believe these chatbots are intelligent because people in general are gullible. We want to believe that a program understands what we say, in the same way we want to believe that there are faces on Mars, or that the Virgin Mary has appeared on a piece of toast. We're really good at pattern recognition but bad at identifying false positives.

Many of you are probably familiar with the “Turing test,” which proposes that a computer might be considered truly intelligent when it can fool an interviewer into believing it is just as human as a human subject. This dream is not new—Alan Turing wrote about it in 1950, and it had already been kicking around for a while in popular fiction by then. In the mid-1960s, a chat program called ELIZA simulated a therapy session using around 400 lines of code. While it didn’t exactly “pass” the Turing test, it easily fooled many people into believing that the computer was talking with them and somehow understanding their problems.

It did so because we want to believe.
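ELIZA’s trick was almost embarrassingly simple: match a keyword pattern in the user’s sentence, then echo their own words back as a question. As a rough illustration (my own sketch in Python, not Weizenbaum’s actual code or script rules), the whole approach fits in a few lines:

import re
import random

# Swap first-person words for second-person so the echo sounds responsive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword patterns paired with response templates, ELIZA-style.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input):
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free therapist prompt.
    return random.choice(["Please go on.", "I see.", "How does that make you feel?"])

print(respond("I feel ignored by my family"))
# -> "Why do you feel ignored by your family?"

There is no understanding anywhere in that code, just string manipulation. Yet responses like these were enough to convince people that the machine cared.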

Tripping over our wish to believe

Code has become much more sophisticated since then. But generative A.I. still processes a lot of material written by other people, repackages it, and spews out text based on probabilistic sentence structure. It says things that sound like knowledge, but in almost all cases, any amount of probing makes the façade fall apart. When A.I. doesn’t “know” something, it cheerfully makes it up, and it clearly cannot tell the difference between real information and fantasy. It invents references that don’t exist and confidently declares that they do.
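To make “probabilistic sentence structure” concrete, here is a toy word-chain generator in Python. This is my own illustration, not how any real chatbot is built—actual LLMs use neural networks and are vastly more capable—but the core move is the same: pick the next word from probabilities learned from other people’s text, with no model of truth anywhere.

import random
from collections import defaultdict

# Training text: the only "knowledge" the generator will ever have.
corpus = (
    "the model predicts the next word . "
    "the next word sounds plausible . "
    "the model has no idea whether the word is true ."
).split()

# Record which words follow which; duplicates preserve observed frequency.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=12):
    word, output = start, [start]
    for _ in range(length):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample the next word by frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Fluent-sounding output, with no check anywhere for whether it is true.

Nothing in that loop asks whether a sentence is accurate, only whether each word plausibly follows the last. Scaling the same idea up with billions of parameters makes the output far more fluent, not more honest.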


Humans do this too, of course. But saying something was written by a computer can give it an air of objectivity and infallibility, and we keep falling for it.

Countless stories have emerged already, like the foolish lawyer who wrote a case filing with ChatGPT, then got sanctioned by the court for submitting a document filled with citations to cases that never existed. The fact that A.I. “hallucinates” is not a minor glitch that we should expect to be cleaned up with debugging. Inventing convincing stories is core to the way generative chatbots work. What worries me is that A.I. will only get better at sounding authoritative, making it even easier to mislead people with false information.

While I don’t think humanity is facing the threat of Skynet, it is facing a less alarming but still insidious problem: A.I. is flooding the internet with half-assed, poorly written content. If there's one thing programs excel at, it's producing huge amounts of content incredibly quickly.

That sounds like a great idea to some people. Content makes money, right? From YouTube to news providers to social media to recipe blogs, the whole game is about grabbing as many eyeballs as you can and holding onto them for as long as possible. Those eyeballs will go on to watch ads, and ads make money. The more content there is, the more money you can make—at least that’s what business majors hope as they compete for the attention of an increasingly fractured public.

From a consumer perspective, this endgame is just miserable. Think about it: As a citizen of 2024, does it seem like society’s biggest problem is not enough content? No. It's almost the exact opposite. Dozens of streaming services offer movies and TV shows. Thousands of news articles flash through our feeds. Friends and strangers share millions of memes. The real problem is not enough filtering of all this content. My inbox spam folder is filled with scams begging for attention; my texts are filled with political donation requests; YouTube is filled with low-quality dreck. I have a finite amount of attention to give. I don't want to watch a hundred bad movies in a month; I want to watch five good ones. I want to read good books and receive reliable news.

But pumping "A.I." into everything means flooding our feeds with mediocre and often terrible content. A.I. is not the only problem—plenty of humans are creating low-quality content just fine, thanks. But the sheer volume and speed of the A.I. crap factory is making everything immeasurably worse at light speed. A.I. stories and art range from not particularly good to awful (to say nothing of the plagiarism). A.I. content is mostly “interesting” in the sense that it can provoke conversations like "Hey, a computer made that. Isn’t it neat?" But I would not seek the stuff out on its own merits, which is a far more attractive form of interesting.

If worthwhile content is a needle in a haystack, generative A.I. is mass-producing hay. It's making the internet harder and less fun to read, and it's only going to get worse as more companies jump on this bandwagon.


From the author: This post contains my own views and opinions and does not represent the positions of my employer.

