AI for the defense?

Can technology recreate the dead to speak on their own behalf?

On December 12, 1890, the London Browning Society gathered to hear their idol, the poet Robert Browning, recite one of his works. This was considered a rare privilege, as Browning had notoriously shied away from performance. What made the occasion all the more remarkable, however, was that it fell on the one-year anniversary of his death.

Chipping away at mortality

Since the nineteenth century, certain advances in technology have chipped away at our notions of mortality. Robert Browning was “resurrected” with the help of a phonograph. The very nature of cinema ensures that an actor’s performance can be enjoyed for years, perhaps centuries, after they’ve died. Film technology has advanced to the point that actors can be digitally brought back to reprise famous roles. This has provoked decidedly mixed reactions, but such reactions are nothing new: while some lauded Browning’s “séance” as a triumph, others, his own sister among them, were disturbed by the implications. With the advent of AI, however, those reactions have only grown more pronounced.

In 2021, Christopher Pelkey was killed in a road rage shooting in Arizona. Three and a half years later, he spoke at his killer’s sentencing, resurrected in both voice and image by artificial intelligence and the love of his sister, Stacey. Faced with having to write a victim impact statement, she decided instead to let her brother have the last word. Working from his funeral photo and recordings of his voice, she had an AI version of Christopher created.

Following a script written by Stacey, AI Christopher forgave his killer, lamenting that the two of them could have been friends under different circumstances. This raises an obvious problem: would the real Christopher have said these things? Stacey believed he would have, basing the script on what she knew of her brother. Notably, she herself did not feel that way; she could not bring herself to say “I forgive you,” but genuinely felt Christopher could. Perhaps that is important to take into account. Stacey had every opportunity to vent her grief and anger, but decided that wouldn’t be true to the man she was honoring.

"Poor Robert’s dead voice to be made interesting amusement!"

Intent matters. AI, like all technology, is only as good or bad as the human behind it. Voice cloning and deepfakes have become a very real concern in recent years, and for good reason: more often than not, the technology has been used for scams and hoaxes. At best, its use can be viewed as a publicity stunt; at worst, a dangerous tool for deception. The same held true for the phonograph, incidentally. Robert Browning’s sister wrote, “Poor Robert’s dead voice to be made interesting amusement! God forgive them all. I find it difficult.” Indeed, some might call the case of Christopher Pelkey nothing but a publicity stunt, something that will never be repeated in a court of law.

Can we be sure of that, though? The idea of giving a deceased family member or friend one last chance to speak is a tempting one, even a cathartic one. In a world where AI can be used to cheat people out of their money or manipulate their emotions, this seems far from the worst thing one could do with such a powerful technology.

And yet.

This was just a victim impact statement, yet it swayed the judge: Pelkey’s killer received the maximum sentence. A judge is meant to be impartial, to make decisions without letting emotion guide them. When a judge allows themselves to be influenced by what is ultimately a scripted AI performance, what does that mean for the future of the justice system?

Could an AI provide testimony? Could it file a lawsuit, or even represent a client? That last one was only recently shot down in a New York court, but it raises an important question: would such an AI be scripted by a human, or would it generate its own responses? Neither option inspires much trust, and a human-written script would likely be thrown out immediately. Imagine if the AI version of Christopher Pelkey had been presented to a jury. If a judge could be affected by it, what chance would a jury have? Jurors lack a judge’s training and experience, and could well be even more susceptible.

AI addiction is a very real problem in today’s world. There is an increasing tendency to rely on it for information, whether that information is accurate or not, and even an inclination to humanize it. In extreme cases, this has led to tragedy. In 2024, a fourteen-year-old boy took his own life after allegedly being encouraged to “come home” by a Game of Thrones chatbot on Character.AI. The idea of AI as a stand-in for the deceased is fraught with danger, particularly when the situation calls for cool heads and rational decision-making.

Are we looking at a future where AI takes the witness stand? At this juncture, it’s difficult to say. A world in which AI is increasingly viewed as a necessity, even a friend, could very well lead to this. Boundaries need to be set, and perhaps we need a reminder that AI—all AI—is ultimately a human creation.
