AI ushers in the post-truth era of politics

Don't believe anything you see or hear.

Last month, Zohran Mamdani, a self-proclaimed democratic socialist running for mayor of New York City, released a Spanish-language campaign ad. In it, he pokes fun at mispronunciations of his name (similar, he says, to his own mistakes in speaking Spanish), and describes his plans to lower the cost of living and make the city more affordable.

However, after the ad debuted, a Republican member of the state assembly fired off a novel critique. He accused Mamdani's campaign of creating it with AI, saying, in essence, that Mamdani's Spanish pronunciation was too good and his speech too smooth to be real. (Under New York state law, AI is allowed in campaign ads as long as its use is disclosed.)

"This audio in question is likely not natural human speech due to its lacking formant structure and limited pitch variation. This becomes particularly evident when compared to genuine recordings of Assemblyman Mamdani's voice."

As the Post article covering the dispute notes, Mamdani's campaign responded by releasing a blooper reel showing all the times he flubbed his lines while filming the ad. This evidence would seem to lay the controversy to rest... that is, unless you believe Mamdani's campaign, in an even more nefarious plan, anticipated this line of attack and used AI to concoct the bloopers too. Just in case.

This double-bluff 4D-chess excuse seems too unlikely to credit—for now. But whatever you believe, one thing is clear: AI is ushering in a brave new world of post-truth politics. We're entering an era where fake images, audio and video can be crafted—or real images, audio and video can be altered—so seamlessly and so perfectly, there's no good way for ordinary people to tell whether anything in the media is real.

The emerging problem of deepfakes

Experts have long feared that an onslaught of AI deepfakes would make it impossible for people to trust anything they see or hear. While it hasn't happened to that extent, it's an emerging problem. There have been notable instances of autocratic states like Russia using deepfakes to meddle in the politics of their democratic neighbors:

In Moldova, an Eastern European country bordering Ukraine, pro-Western President Maia Sandu has been a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a Russian-friendly party and announcing plans to resign.

In Slovakia, another country overshadowed by Russian influence, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

In poorer countries, where media literacy lags, even low-quality AI fakes can be effective.

Such was the case last year in Bangladesh, where opposition lawmaker Rumeen Farhana — a vocal critic of the ruling party — was falsely depicted wearing a bikini. The viral video sparked outrage in the conservative, majority-Muslim nation.

America has also experienced AI election disinformation, albeit on a smaller scale. In January 2024, a deepfake of Joe Biden's voice was used in robocalls telling New Hampshire voters to skip the state's primary. (The deepfake was created by a Democratic political consultant who claimed he did it to demonstrate the dangers of AI.)

In the runup to the 2024 presidential election, Elon Musk shared an AI deepfake of Kamala Harris on Twitter, flouting his own platform's rule against deceptive use of manipulated media. He later claimed it was a parody, but there's no telling how many people saw the original video and never saw the excuse.

The liar's dividend

There's another, even bigger problem to think about. We shouldn't just worry that bad actors will trick voters with AI deepfakes. We should also worry that unethical politicians will dismiss genuine incriminating evidence by claiming their enemies deepfaked it!

This tactic has already reared its head in the legal arena. In a 2023 lawsuit against Tesla, plaintiffs argued that a failure of the car's autonomous technology caused a fatal crash, and that the crash wouldn't have happened if Elon Musk hadn't exaggerated Tesla's self-driving capabilities in public statements. In response, Musk's lawyers argued that those comments may have been deepfaked.

In a column for the Brookings Institution, Daniel S. Schiff and Kaylyn Jackson Schiff describe this as the liar's dividend, building on a term coined by law professors Bobby Chesney and Danielle Citron.

In a set of studies, they presented volunteers with news stories about real political scandals, then measured their responses to either a generic denial or a statement specifically claiming the offensive comments were deepfaked. According to the authors, the latter strategy works:

We find consistent evidence across years and studies for the existence of a liar's dividend. That is, people are indeed more willing to support politicians who cry wolf over fake news, and these false claims produce greater dividends for politicians than remaining silent or apologizing after a scandal.

This fits with what we know about human psychology. People with an ideological commitment excel at coming up with reasons to reject evidence that challenges their preconceptions. Young-earth creationists say that dinosaur bones were planted by Satan to test believers' faith. Conspiracy theorists say that the omnipotent conspiracy plants false flags to lead the public astray. Even scientists, when defending a cherished hypothesis, can argue that contrary evidence was misinterpreted or won't replicate.

This is an extension of that trend into politics. The political arena has always been a domain of lies and exaggerations, but we may soon see untruth proliferating like never before. AI gives voters from across the political spectrum a ready-made excuse to wave away anything that casts doubt on their candidate.

Obviously, fabricated evidence isn't a new phenomenon. Underhanded politicians have been churning out disinformation for well over a century, either to cast their opponents in a worse light or to rewrite history to match their ideological commitments. The Protocols of the Elders of Zion is an infamous forgery that's fueled more than a century of antisemitic hate. During the Stalinist era, Soviet propagandists made people disappear from photographs after they were purged as traitors.

In that sense, AI hasn't created a new problem. What's new is the speed and ease of forgery that it enables. In a digital era where everything is just a collection of bits, it's never been easier to conceal the truth in a fog of altered and fabricated evidence.

AI companies are aware of the problem, but their attempts to prevent it have been less than successful. OpenAI, creator of ChatGPT and DALL-E, tried to build in safeguards to prevent people from creating malicious fake images of public figures. But as a CBC News report showed, these guardrails are easy to get around with a little bit of prompt engineering.

Deepfakes are a problem that defies easy answers. There will never be a purely technological solution, because the technology to make better fakes will advance in step with the technology to detect them.
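
Why can't detection simply outrun generation? Because any reliable detector is itself a training signal for forgers. The toy sketch below, a deliberately minimal illustration using PyTorch on made-up one-dimensional data (none of it drawn from any real deepfake system), shows the textbook adversarial loop: the generator's entire objective is to fool the current detector, so every improvement on the detection side is immediately converted into better fakes.

```python
# Toy sketch of the detection arms race, as a GAN-style adversarial loop.
# All names, shapes and data here are illustrative assumptions: "media
# samples" are random 64-dimensional vectors, not real images or audio.
import torch
import torch.nn as nn

DIM = 64  # dimensionality of a toy "media sample" (assumption)

generator = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, DIM))
detector = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, DIM) + 2.0  # stand-in for genuine recordings

for step in range(1000):
    # 1. Train the detector to separate real samples from generated ones.
    fakes = generator(torch.randn(256, 16)).detach()
    d_loss = (loss_fn(detector(real_data), torch.ones(256, 1))
              + loss_fn(detector(fakes), torch.zeros(256, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the *current* detector: any gain in
    #    detection accuracy becomes a gradient signal for better fakes.
    fakes = generator(torch.randn(256, 16))
    g_loss = loss_fn(detector(fakes), torch.ones(256, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Real deepfake generators and detectors are vastly more sophisticated than this toy, but they're locked in the same feedback loop, which is why a permanent technological fix is so unlikely.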

Instead, we may have to go back to good old-fashioned trust. In the future, eyewitness accounts may come to be treated as more reliable than anonymous digital evidence. Society may no longer believe video or audio recordings unless there's sworn testimony from reliable observers to support them.

All this recording technology was supposed to eliminate the human factor: to overcome flawed observations, slanted perceptions and fallible memories. Instead, in a grand irony, it may well end up making the human factor more vital than ever.
