'Shut it all down': The ethics and risks of AI

A philosopher and an AI expert explore the potential pitfalls and benefits of artificial intelligence. An ongoing series.

🤖
"My guess is in between five and 20 years from now, there’s a probability of half that we’ll have to confront the problem of AI trying to take over."

Geoffrey Hinton, Nobel Prize winner and the “godfather of AI,” BBC Newsnight

In 3000 years of moral philosophy, humans have come so far, yet we still disagree so fundamentally. It’s bad enough trying to work out what we humans should do on an individual level, let alone on a societal and governmental level (let’s face it, politics is a subset of morality, writ large across a state or the whole world). 

Some humans have suggested that we should confuse matters by involving gods. We advise against such antics.

Lately, we have started to confuse moral matters by involving something far more tangible: artificial intelligence (AI).

Artificial intelligence is poorly understood by the vast majority of the population. It is all the more worrying, then, that major proponents of and experts in artificial intelligence have voiced their concerns so publicly and explicitly.

Morally speaking, we have spent thousands of years trying to work out how we should govern ourselves, then each other, and often animals; yet now we must concern ourselves with governing computers, algorithms, and virtual entities. Mobile phones, smart cars, robots, battlefield drones… Science fiction so often becomes science fact. We are already past the point of merely debating how to program smart cars with morality so that they can decide which human dies in a potential crash.

This is not altogether new. Alan Turing mused in 1951 that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control…”

If you weren’t worried enough by Turing, then Eliezer Yudkowsky, an American decision theorist who leads research at the Machine Intelligence Research Institute and is a founding figure in the field of AI alignment, laid on suitable doom and gloom just last year:

⚠️
Shut it all down.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

Shut it down.

Defining terms

Artificial intelligence is a broad term for intelligence exhibited by a computer (or computer-controlled system) performing tasks that we usually associate with intelligent beings. These are the intellectual processes traditionally associated with humans: seeing, listening, and understanding; analyzing data; reasoning, generalizing, discovering meaning, and learning from past experience.

Of course one definition often leads to another, and here we might seek to understand what “intelligence” means. In the world of AI, aspects of intelligence that attract research include learning, reasoning, problem-solving, perception, and using language.

Artificial intelligence has long been able to equal or surpass human intelligence in specific areas. This is certainly not new—just ask Garry Kasparov.

What is new is the speed at which artificial intelligence has improved, thanks largely to Generative Pre-trained Transformer neural networks—the GPT in ChatGPT—which now account for 78% of AI models. Large Language Models (LLMs) use neural networks in a machine learning (ML) process called “deep learning” to teach computers how to process data, in a way loosely analogous to neurons in the brain making connections. At heart, this works on the principle of next-word prediction. Popular LLMs such as GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro all use transformers (like GPT), and the natural language processing abilities of these models have given them increased versatility and “general intellect.”
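To make next-word prediction concrete, here is a minimal, illustrative Python sketch using the small, open GPT-2 model via Hugging Face’s transformers library. (The model, prompt, and library here are our choices for illustration; the commercial LLMs named above work on the same principle but are not accessible this way.)

```python
# A minimal sketch of next-token prediction, the principle behind LLMs.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In 3000 years of moral philosophy, humans have"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# The logits at the final position score every token in the vocabulary as a
# candidate for the *next* word; softmax turns those scores into probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p = {p.item():.3f}")
```

Generating text is then just a loop: pick one of these candidates, append it to the prompt, and predict again.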

Increasing the amount of training data, together with advances in computational hardware, has reliably increased the performance of Large Language Models. This has caused a clamor for good-quality data on which to train them. In some cases corners have been cut, with AI labs training their models on pirated ebooks, news articles, and audiobooks. So much of the internet has been fed into this hungry machine that studies suggest we will run out of suitable datasets as early as 2026. The entire transcribed content of YouTube may well be on the menu, and some of it has already found its way into the “training data.”
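How reliably does more data help? Reliably enough that it can be written down. As one illustration (our example, drawn from Hoffmann et al.’s 2022 “Chinchilla” paper rather than from any particular lab’s current models), an LLM’s loss can be modeled as a simple function of its parameter count N and its training tokens D:

```python
# Illustrative sketch of the "Chinchilla" scaling law (Hoffmann et al., 2022):
# predicted loss falls smoothly as parameters (N) and training tokens (D) grow.
# The constants are the paper's fitted values; treat them as approximate.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E = 1.69                 # irreducible loss of natural text
    A, alpha = 406.4, 0.34   # model-size term
    B, beta = 410.7, 0.28    # data-size term
    return E + A / n_params**alpha + B / n_tokens**beta

# A 70B-parameter model trained on 1.4 trillion tokens (roughly Chinchilla
# itself), then the same model with twice the data: more tokens, lower loss.
print(chinchilla_loss(70e9, 1.4e12))
print(chinchilla_loss(70e9, 2.8e12))
```

The catch is the D in that formula: the curve only keeps falling if there are trillions more good tokens to feed it.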

Our first point of political involvement through regulation, therefore, might concern copyright. Will the work of authors and other content creators find its way inexorably into the mind of AI?

Becoming equal

The next big milestone is AGI, or Artificial General Intelligence, where an AI model is as generally intelligent as a human.

AGI is a larger, more generalized form of AI. Where AI, as a term, can be used in a narrower sense (beating a human at chess or generating images), AGI could run a business: marketing, purchasing, finance, and strategy. Different AI labs place their models at various stages along this path.

OpenAI (one of the three big AI labs) claims that its latest model, o1, displays human-level reasoning, which is believed to satisfy Level 2 of the five levels on its own scale of progress toward AGI.

It is AGI’s use of reasoning, language comprehension, recognition of emotions, self-teaching skills, and in-context learning that prompts us to worry about where it all leads. Many are concerned that we are readily employing a technology that we barely understand because, right now, it seems to be helping.

AGI is our equal. It can pass exams and even pass itself off as one of us. The Turing Test is now but a speck in the rearview mirror of human development. It is no longer about being as good as a single human in some given area: AGI’s next goal is to outsmart every human in every context.

But do all of these prophets of catastrophe suffer from the “Luddite fallacy”? It is not a given that technology will have a negative effect on work and the workforce. Likewise, it is not a given that AI will be so unutterably catastrophic. Dario Amodei, CEO of the AI lab Anthropic, thinks we underestimate the potential benefits to humanity.

He predicts that in the next 5–10 years, harnessing AI will help cure diseases such as Alzheimer’s and cancer, extending the longevity and quality of human life. It could benefit the developing world, eradicating diseases and generating unprecedented economic growth. Efficacious treatments for mental health conditions such as depression, schizophrenia, and addiction might become available. We could see improvements in our day-to-day cognition. Healthy minds make for healthy democracies, and democracies that leverage AI for governance could strengthen a democratic world order, countering authoritarian propaganda. AI could, in principle, herald a peaceful, fair, and just world. For example, it could be used to help in judicial wrangles, teasing out compromise and middle ground in a polarized world, or to fight automated discrimination.

For us mere humans, there will be inevitable change—many jobs will be replaced by AI. This need not be a bad thing, given the right economic model to ensure everyone benefits from the revolution. Humans could be freed to find meaning and purpose, spending more of their time doing what actually brings them joy, argues Amodei.

Who is right? Are we being sold a dream that only benefits the few? Is fear of human extinction being used by nefarious actors to discourage progress? 

We may know the path we want to walk—but can we walk it?

The challenges of AI are not dissimilar to those of climate change or global tax evasion: they are problems that ignore borders. Governments, if they seek to regulate AI in a virtual world of global connectivity, will need to work together and in unison. Otherwise, certain countries will gain benefits and competitive advantages over others, or ruin things for everyone. We all need to be singing from the same song sheet.

But money and power will always do what they always do.

OpenAI has partnered with Microsoft, Anthropic has Amazon as one of its biggest investors, and Google runs its own Google DeepMind. Meta (Facebook et al.) has an AI lab too, having developed its Llama LLMs, which it releases as open source. Musk’s xAI has its Grok (which differs from Groq, another company developing AI accelerator hardware). To power such intense computation, Microsoft has recently agreed to buy the entire output of a nuclear power station: the infamous plant at Three Mile Island will soon be powering machine learning.

The astute reader will notice that we could be heading into a well-trodden sci-fi trope: the greedy corporation, able to print unlimited amounts of money and generate heretofore unseen amounts of power, pitted against the little guy. Against benign human morality.

President Dwight D. Eisenhower gave his famous 1953 “Atoms for Peace” speech to the UN General Assembly, which brought about the International Atomic Energy Agency. The IAEA sought to use a carrot rather than a stick to keep countries on the right side of nuclear technologies, aiming to keep the world safe by concentrating such power only among those it saw as responsible entities. This would also serve to empower democracies.

We can clearly imagine a parallel between this and the challenges of AI. We would much rather democracies held the keys to AI than autocracies and dictatorships. The politics of AI is a rather pressing issue.

Take Taiwan, for example. It is one of the only nations in the world that produces the advanced chips required for high-level AI computation. China is menacingly surrounding Taiwan as we write this.

There are reasons that Taiwan is seen as being at the heart of global security and geopolitical strategy.

The name of the game, then, is preparation. The problem might well be that, as Yudkowsky apocalyptically panics, we’re not ready and don’t really understand what we are even planning for:

💀
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

We are already living in a world where bad-faith actors—states, political parties, and any number of entities seeking to cause chaos—are able to confuse us in this epistemic crisis (epistemology being the study of knowledge and truth). We are no longer certain of what is real and true, whether it be an image or a post on social media; bots and trolls are generating “fake” or manipulated content faster than we can verify it. And that’s if we even have the desire to check.

To be prepared, we need to understand AI and its many applications and implications. This series seeks to take us on such a journey.

We must be wary of Descartes’s Evil Demon as applied to AI. In philosophy, in the domain of epistemology, Descartes wondered what we could indubitably know. He surmised that we could know only that we exist (cogito ergo sum), because in order to doubt, we have to exist. He theorized that you don’t know, for example, that you are not dreaming, or that your thoughts are not being manipulated by an evil demon. You just don’t know what is actually “true” or “real.”

And in that context, we really did write this article. Honest. It wasn’t AI.

Or was it?

All of the arguments that we apply to whether we are indeed in The Matrix can also be applied to AI. An AI could have been programmed to make some typos, to emulate our writing style.

But really, we did write this.

Didn’t we?

💡
This series will look at the opportunities and threats that AI presents, and how it can be and is being used to push human development to new heights. Or depths.
