AI Could Wipe Out Humanity If Left Unchecked: Experts Warn of ‘Risk of Extinction’ From AI

Written by Senoria Khursheed · 2 min read

On Tuesday, top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, joined experts and professors in raising the ‘risk of extinction from AI,’ which they urged policymakers to treat as comparable to the dangers posed by pandemics and nuclear war.

The statement, published on the Center for AI Safety webpage on Tuesday, reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Sam Altman, CEO of ChatGPT-maker OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei of Anthropic all signed the statement.
Similar cautions were also made in an open letter published in March that urged a halt to AI development due to its grave dangers to human civilization. Elon Musk, the CEO of Tesla and SpaceX, was among the many signatories.

Geoffrey Hinton, often called the godfather of AI, also supported the statement. He had been voicing concerns for some time and quit Google so he could speak freely about the risks associated with the technology.


The statement was also signed by Yoshua Bengio, a professor of computer science at the University of Montreal. It highlights wide-ranging concerns about the dangers of AI if it remains unchecked.

According to AI experts, “Society is still a long way from developing the kind of artificial general intelligence (AGI) that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they have been fed and don’t think for themselves.”
The need to restrain AI has grown as the race among tech giants to deploy it has heated up, with those companies investing billions of dollars in the effort.

The statement came after the remarkable success of OpenAI’s ChatGPT, a chatbot that can generate human-like responses.
Many experts, lawmakers, and advocacy groups have since expressed concerns about false information, sophisticated forgeries, and job displacement as the AI arms race has accelerated.

Geoffrey Hinton, a pioneer of AI neural network systems, had earlier told CNN that he decided to quit Google and “blow the whistle on the technology after suddenly realizing that these things are getting smarter than us.”

Dan Hendrycks, director of the Center for AI Safety, wrote in a tweet that “the statement first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.”


In addition, Hendrycks compared these warnings to those issued by nuclear scientists about the technology they had developed.
“Societies can manage multiple risks at once; it’s not either/or, but ‘yes/and,’” Hendrycks tweeted.

“From a risk management perspective, just as it would be reckless to prioritize present harms exclusively, it would also be reckless to ignore them.”
Recent advancements in AI have produced tools that proponents claim can be used in fields ranging from writing legal briefs to medical diagnoses.

However, this has sparked worries that technology could result in privacy violations, fuel misinformation campaigns, and cause problems with “smart machines” thinking for themselves.

Since ChatGPT took the world by storm, Altman has emerged as the face of AI. Ursula von der Leyen, the head of the European Commission, will meet Altman on Thursday, and Thierry Breton, the EU industry commissioner, will meet him in San Francisco this month.
