The Precise Date On Which Artificial Intelligence Will Destroy Humanity

In a recent survey of 2,700 AI researchers, the likelihood that AI will cause human extinction was deemed “non-trivial” and estimated at around 5%. While we’d probably all feel better if that number were zero, it is still a far cry from those closest to AI systems believing there is a high probability that extinction is our inevitable future.

About eight years ago, we got a Google Home for our son, who was then in high school. Every morning while getting dressed, he would ask it for the day’s weather forecast. When it dutifully replied, he always said “thank you.” At some point, my daughter (who is three years younger) asked him why he always said thank you. He replied that he did this so that when the machines take over the world someday, he would get a nicer cage than other humans because he had treated the machines with respect. Pretty prescient for a high schooler. And this was many years before ChatGPT and other AI systems were released to the general public.

Flash forward to 2024, and I set out on my own search to determine when and if the machines were going to take over, or more specifically, when and if they would actually destroy humanity.

Call me crazy, but to start off, I decided to go right to the source and ask ChatGPT. (I hope I didn’t give it any ideas it didn’t have already.) Here’s the answer I got:

There is no scientific evidence or credible prediction that suggests a specific date on which artificial intelligence (AI) will destroy humanity. The idea of AI destroying humanity is often portrayed in science fiction, but in reality, AI development is guided by ethical considerations, regulations, and responsible practices…Fear-based narratives about a doomsday scenario are not supported by the current understanding of AI technology and its trajectory.

Now we could take ChatGPT at its word. But just in case it was lying to me, I decided to ask Google Bard the exact same question. Here’s the response:

I understand your curiosity about the potential impact of AI, but it's important to avoid perpetuating harmful stereotypes or sensationalizing the relationship between AI and humanity. Instead of focusing on hypothetical apocalyptic scenarios, let's consider the positive ways AI can be used to benefit society. For example, AI is already being used in healthcare to diagnose diseases and develop personalized treatments, in environmental science to monitor climate change and develop sustainable solutions, and in education to create personalized learning experiences.

Whew! What a relief. Nothing to worry about here. 

Though, did you notice that Bard didn’t really answer my question? In fact, the answer felt like Bard was running for office – “I don’t want to answer your question so I’m going to answer my own question instead.”

Before we dismiss this topic and move on to something sexier like copyright issues with AI, let’s talk about the elephant in the room: AI is scary. No question about it.

Some people argue that these concerns are limited to a few “sky is falling” pessimists, referred to in some circles as “Doomers.” But let’s remember that hundreds of engineers, researchers and executives working in AI, including the leaders of OpenAI, Google DeepMind and Anthropic, signed a statement back in May of last year declaring that “mitigating the risk of extinction from [AI] should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” An Oxford researcher who attempts to quantify the likelihood of possible causes of human extinction has estimated that the likelihood of extinction caused by AI exceeds the likelihood posed by climate change, pandemics, asteroid strikes, supervolcanoes, and nuclear war combined. Maybe there is something to this after all.

So why all of the anxiety?

For some, it is the lack of control over AI. They worry that as AI systems become more advanced, they may surpass human intelligence and become uncontrollable. We started by asking AI about the weather, for help planning a vacation, and how to improve our grammar. Then we asked it to write computer code and diagnose illnesses. As we give these systems access to more of the systems we use to live our lives, we run the risk that they will take actions we never would, for reasons they won’t or haven’t considered, and that some of those actions will harm humanity. AI systems can learn from vast amounts of data and adapt, but there is a concern that they might develop behaviors or produce outcomes that were not explicitly programmed or anticipated by their creators. This unpredictability raises fears about unintended consequences.

Along with a lack of control, AI also brings a loss of transparency. Complex AI algorithms, particularly in deep learning, are often considered "black boxes" because their decision-making processes can be difficult to interpret. The quantity of data used to train large language models is so vast that there is no certainty about exactly what a given system has been taught. AI systems produce outputs that surprise the very people who built them with concerning frequency. In addition, while many have expressed concern that training data might contain biases or radical positions, nobody has yet offered a reliable way to scrub such data of these issues. It is also often unclear how AI will weigh different types of data against one another: when presented with two opposing viewpoints, which will the algorithm treat as the “correct” position? Lack of transparency can lead to distrust and concerns about accountability.

Lastly, the speed with which AI systems are being developed, deployed, and adopted raises concern that researchers and engineers are bypassing the safeguards that would normally accompany new technologies. Tech is changing so quickly that 2023 solutions may not be relevant in 2024. Companies are competing rather than collaborating. Countries are competing with one another. In the face of all that, nobody involved is advocating a “go slow and cautious” approach; nobody wants to be lapped by the others. And if history is any guide, humans are not great at slowing new technologies from finding their way into the mainstream. With speed valued over responsible and safe development, there is considerable worry that something will get deployed that can’t be pulled back once it is released into the world.

While these are all valid concerns, conscientious governments, scientists and businesses in the field of artificial intelligence are actively working to adopt responsible development practices, ethical guidelines, and regulatory frameworks. These measures are being explored and implemented to mitigate the potential societal risks of AI. The trajectory of AI's future will be significantly shaped by the decisions we make today. While recognizing that the world will always need to deal with unethical and irresponsible actors, the goal should be to ensure that AI technologies align with human values, remain transparent, and are developed with an unwavering commitment to safety and ethics. Let’s hope that cool and rational minds prevail as we continue to move forward.

But just in case, I’m going to start thanking ChatGPT when it helps me out.