Will AI End Humanity? (Probably Not)
P(doom), AI Side Effects, and Legislation

[Image: pixelated mushroom cloud with eyes]

Since the release of OpenAI’s wildly popular “ChatGPT” back in November 2022, artificial intelligence (and especially the prospect of artificial general intelligence, or AGI for short) has been everywhere in the media. Companies like Anthropic, Google, and Meta (owner of Facebook and Instagram) have been rushing to release their own large language models (LLMs) for generating text. AI, as most people conceive of it, has been around and in use for quite some time. The auto-complete features of Google search, the predictions of your next word while typing a text message on your phone or in Microsoft Word, and the tools that scan your resume for keywords when you apply for jobs are all less sophisticated AI tools that have been in frequent use for a decade or more.

The defining difference between these nearly invisible tools and LLMs like ChatGPT and Claude is that the length, depth, and illusion of creativity of the responses are an order of magnitude greater in the new tools. Two years ago, your email may have been able to suggest the most likely next word in a sentence, but never before have we had access to technology that could write out whole paragraphs approaching human-level fluidity and accuracy. Unsurprisingly, this has created a deep sense of unease among technologists and laypeople alike. With quality steadily increasing, there are significant concerns about whether AI could end humanity, what unforeseen side effects have already begun (and will continue) to arise, and how to rein in potentially catastrophic mismanagement of the technology.

P(doom)

This prediction of the damaging effects of AI is sometimes jokingly referred to as P(doom). As in scientific research, the “P” stands for probability: the probability of “doom,” or the end of humanity at the metaphorical hands of AI. Asking what someone’s P(doom) is has become a kind of dark joke in Silicon Valley, where tech ethicists, journalists, and developers each have their own percentage prediction and their own reasoning behind it. These estimates vary widely, from around 20% down to under 5% in the case of some techno-optimists. There are many, many ways in which an AGI could wreak havoc, including misuse by human bad actors and an AI deceiving people in order to accomplish its tasks.

It’s very important to point out, though, that, at least at the time of writing, there are a few factors that prevent AI from damaging society to a point of no return. The first, and perhaps most important, is that AGI doesn’t truly exist yet. The “general” part of the acronym, to many, means that the AI is able to answer any question, in any format. It’s easy to think of the LLMs we use right now as fitting this niche. After all, one can type a question about almost anything and receive a response. But, for now, there are limitations to what models like GPT and Bard can do. For example, large language models are bad at math, bad enough to stumble on even simple arithmetic problems. There are other, purpose-built mathematical problem-solving systems, but they are built and work in very different ways from LLMs.

The other important piece, which may soon change as companies pursue new ways of making money from AI, is that most LLMs aren’t able to interact with other pieces of technology. Right now, there are lots of APIs (application programming interfaces) that allow other programs to integrate and use AI, but fortunately it’s essentially one-way communication; the AIs are unable to interact with anything outside of their own training data except to provide responses. This is important because if, for example, an AI were prompted to “make $100,000 in two weeks or less” and set loose, it might decide that the best way to accomplish the goal is to write a phishing script, read it aloud with a synthetic voice through any number of online calling tools, and trick retirees into sharing their banking information. This ability to pursue goals that weren’t clearly specified in the prompt and to exhibit long-term planning is referred to as “agentic” behavior. In a document published by OpenAI, researchers note the potential for emergent risky behavior, especially when GPT-4 (their newest model as of November 2023) is connected to a string of other resources. Their cited example connected GPT-4 to a series of chemical research tools, which helped suggest alternative purchasable chemicals for a benign leukemia drug but could also be used to identify and purchase harmful chemicals.
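To make the “one-way” point concrete, here is a minimal sketch of how a program typically calls a hosted LLM today. It’s my own illustrative example (the endpoint and payload follow OpenAI’s chat completions API as it existed in late 2023, and the key and prompt are placeholders), not something drawn from the article’s sources:

```python
# A minimal sketch of "one-way" LLM integration: a program sends a prompt to
# a hosted model over HTTP and gets text back. The endpoint and payload match
# OpenAI's chat completions API as of late 2023; the API key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY_HERE"  # placeholder, not a real credential

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "Summarize our refund policy in one sentence."}
        ],
    },
    timeout=30,
)

# The model's reply is just text inside a JSON response. Any real-world action
# (placing calls, sending emails, spending money) would have to be written and
# triggered by the surrounding program, which is where the guardrails belong.
print(response.json()["choices"][0]["message"]["content"])
```

In other words, nothing in the response can dial a phone or move money on its own; a human has to deliberately wire the model up to tools that can, and that wiring is exactly where safeguards need to live.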

Such scenarios are, obviously, very troubling. But as long as there are guardrails in place to prevent agentic behavior that could be dangerous, the risk of a “rogue AI” collapsing humanity is, in my opinion, very remote. Instead, there are more urgent matters at hand. The development of AI is costly, not just financially but in environmental and human terms as well.

AI Side Effects

The sudden boom in AI development has led to a shortage of the computer chips known as GPUs (graphics processing units), which handle the models’ enormous computational demands. Manufacturing GPUs, like most computer parts, requires the mining of metals like lithium and cobalt. According to Anil Mehta and Anjal Prakash, experts on policy and energy, the manufacture of semiconductors contributes 31% of global greenhouse gas emissions, a share that will only grow as AI training becomes more frequent. In 2019, researchers at the University of Massachusetts estimated that training a single large language model could generate 300,000 kg of carbon dioxide emissions – equivalent to 125 round-trip flights between New York City and Beijing.

While the average person worries that automation will impact the global economy by reducing the number of workers needed for repetitive tasks (many have noted that LLMs could easily take over work like answering questions about company policy, solving simple technical problems, or tutoring on specific topics), there is already a growing economy of under-employed workers greasing the wheels of AI. For years, companies like Amazon have used human labor to accomplish the many thousands of small tasks needed to keep operations running smoothly. Mechanical Turk, a website that pays “crowdworkers” fractions of a penny per task, works out to an average of $1 to $6 per hour. While this is certainly not enough to make a living in many parts of the world, those who are desperate and unable to find other work (whether because of a failing local economy, disability, or any number of other factors) find themselves reliant on the meager income.

This illusion of automation, which Jeff Bezos has called “artificial artificial intelligence,” extends to companies with a better reputation for ethical practices too. TIME reported in January of this year that OpenAI relied on workers in Kenya, paid less than $2 per hour, to sift through the most toxic and dangerous content in its generative AI’s data sets, flagging hate speech, explicit images, and violence so it could be removed before training. This safety work ensures that tools like Dall-E and ChatGPT don’t spew the kind of dangerous rhetoric found in the darker corners of the internet, but at the cost of the mental and financial health of workers in the Global South.

These are just two of the immediate consequences. If you ask an LLM yourself, you’ll receive a long list of potential side effects. Fortunately, not all of them are bad. Already, there have been examples of the technology being used to decide which crops to plant for the best yield in an increasingly warm climate, to stop poachers with unmanned aerial vehicles, and even to detect cancerous cells (using an AI originally designed to identify pastries, no less). As with any new and transformative technology, we humans will find novel uses and applications for what we’ve made: for our betterment, for our detriment, and sometimes for both at once.

Legislation

So what do we do about all this? How do we try to ensure that our P(doom) number doesn’t rise dramatically due to (optimistically) a lack of planning or (pessimistically) corporate greed? Unlike during most major tech revolutions of the past 30 years, governments seem awake to, and are actively seeking information about, the potential consequences of this newfangled instrument. The European Union has been finalizing the EU AI Act, a law that sorts AI systems into risk-level tiers. While the act has significant limitations, it is pretty astounding that one of the largest governing bodies in the world has managed to put together some sort of framework this quickly.

Similarly, US President Biden recently signed an Executive Order on AI. The order takes a relatively optimistic view of the technology (rather unlike the EU’s AI Act) and focuses on balancing room for growth and exploration in the field with a requirement that companies training very large models (so far, none of the models on the market reach the threshold) self-report on their training data and safety outcomes. While this may not sound like much, it was met with resistance from those who believe that any regulation squashes innovation. I disagree with that argument (which essentially rehashes old ideas about the free market), but I do agree with those pointing out that the EO does not yet have an enforcement mechanism and instead relies on large companies (many with spotty histories) to self-report honestly and transparently.

My Take

Here’s my opinion of AI: it has the potential to create a shockwave of both positive and negative consequences as large as that of the advent of the internet, or even electricity. History has shown us many examples of technology radically transforming daily life. The Industrial Revolution lifted many people out of poverty, but at the cost of the environment and, until legislation was put into place, often at the cost of the laborers’ own bodies. It’s important that we don’t wait until a catastrophe strikes before looking for reliable ways to make AI safer, even if that means some companies don’t immediately make Scrooge McDuck levels of money. I can concede that marginally lower financial incentives might slow the work down, but in just a year we’ve seen tremendous growth in what AI can do. Even if the pace slows from its current meteoric rise, the field will still deliver new and exciting advancements at least annually. There’s enough fervor for the technology that people will keep finding ways to improve it, to incorporate it into tools we already use, and to apply it to major problems. And, as we always have, we will adapt to the new world shaped by our creations. We will survive, and the future ahead of us is full of the myriad problems and wonders that we live with now, just perhaps of a different flavor.

As of today, my P(doom) is 1%.

Kathryn Combs, 11.26.2023

*Please note that, while my creative work explores themes of technology, I am not a tech researcher nor am I an expert on AI. I encourage you to check out the linked articles and formulate your own opinion on the subject.

Some Useful Definitions

AI – Artificial intelligence is a broad term used to describe the “intelligence” of software (as opposed to human intelligence).

AGI – Artificial general intelligence is sometimes defined as a program that can act with all (or more) of the decision-making ability and flexibility of a human. It’s important to note that AGI is a term often rejected by technologists for its broadness and that, at present, no technology sophisticated enough to be considered true AGI exists.

Generative AI – Simply put, generative AI is any AI tool that makes stuff, like text and image generators. Some AI is used for other purposes, like sorting and categorizing information (e.g., facial recognition).

API – An application programming interface is a tool that allows one piece of software to use another. For example, a blog that reads its posts aloud for accessibility might do so through a text-to-speech API.

LLM – Large language models are AI tools that are trained on a huge body of text in order to respond to prompts by calculating the most likely next word in a sentence. At a basic level, they work the same way that your smartphone’s texting app predicts your next word, but at a much larger scale.
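To make that a little more concrete, here is a toy sketch of “predict the most likely next word” using nothing but word-pair counts in a made-up corpus. It’s my own illustrative example; real LLMs learn these probabilities over billions of parameters and enormous datasets rather than simple counts:

```python
# A toy illustration of next-word prediction: tally which word tends to follow
# which in a tiny corpus, then "predict" by picking the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" more often than "mat" or "fish")
```

An LLM does something conceptually similar, but over long stretches of text at once and with learned statistical machinery far richer than a lookup table.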

GPU – A graphics processing unit is a type of chip added to computers to let them perform many calculations at once. GPUs were originally designed for rendering images in games (hence the “graphics” part of the acronym) but have become indispensable in large-scale computing for making lots of simultaneous, complex calculations possible with less time and energy.


Big Names and Players

OpenAI – a company co-founded by Sam Altman (a well-known technologist) and Elon Musk, among others, and governed by a non-profit board. Musk is no longer associated with the company. OpenAI is best known for being first on the scene with a free-to-use LLM-powered chatbot, ChatGPT, and it also makes Dall-E, an image generator. Its biggest source of funding is Microsoft.

Anthropic – founded by former OpenAI employees, Anthropic is best known for its LLM “Claude,” which can accommodate file uploads (text only) and provide answers based on whatever files users provide.

Meta – owner of Facebook and Instagram, Meta produced Llama, another LLM. At launch, Meta stated that the largest Llama 1 model had 65 billion parameters (the internal numerical values a model learns during training, not the number of words it was trained on).