The Student Publication of Keystone

The Keynote


Will AI Destroy Humanity?

Artificial intelligence (AI) has been in the spotlight for its rapid development and its capability to completely change the way we live. From OpenAI’s ChatGPT to AI-operated fast-food drive-thrus, the immense power of AI is apparent throughout today’s society. This rapid development has raised the question of how the power of AI should be managed, and whether we should even be allowed to keep developing it.

In May of this year, notable players in the AI industry, including Sam Altman, the chief executive of OpenAI, and Demis Hassabis, the chief executive of Google DeepMind, signed a letter warning that AI could one day destroy humanity. The Center for AI Safety, which released the letter, stated that “[mitigating] the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This letter, along with other warning signs, plays into the frightening question that some say could one day become a reality: Will AI destroy humanity?

This fear is rooted in AI’s increasing connection to the industries that make up our global infrastructure. AI’s growing autonomy, its ties to vital infrastructure, and its rapid improvements all contribute to this fear: AI could theoretically use its vast knowledge and infrastructural connections to bypass human intervention. The White House has even released a “Blueprint for an AI Bill of Rights” that suggests limits on how companies can deploy AI surveillance and guidance on how they can develop AI responsibly.

So, what are the signs that AI could actually grow into this ultra-powerful form? As of right now, there are few explicit signs that it could. AI chatbots like ChatGPT are already known for giving out inaccurate information, and these programs still rely on user text input. There are newer systems that can write programs on their own, like AutoGPT and Codex; they can retrieve information and even improve their own output. However, they still have many problems and errors, and they are nowhere near powerful enough to “destroy humanity.” Even as these programs continue to improve, many experts doubt that AI could actually cause mass destruction.

The significant and rapid development of AI has sparked fear of global domination or even human extinction. While this scenario is technically possible, there are currently no warning signs that AI could actually achieve it, given its present shortcomings and the governmental regulations on AI safety now in the works.

About the Contributor
Cody Zhu, Junior Editor In Chief
Cody is a junior and is actively involved in Keynote, Model UN, Debate, Foreign Language Club, and Yearbook. He is also co-editor of Yearbook. In 2020, Cody wrote a letter that was named finalist in a Pulitzer writing contest. The letter was published on the Pulitzer Center website for encouraging global change. Passionate about learning and sharing information, Cody is excited to continue writing articles in this year’s Keynote.
