Artificial intelligence could lead to extinction, experts warn

The heads of OpenAI and Google Deepmind have warned that artificial intelligence may lead to humanity’s extinction.

A statement published on the website of the Centre for AI Safety has been endorsed by dozens of people.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it states.

Others, however, argue that the fears are exaggerated.

The statement has been endorsed by Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic.

The Centre for AI Safety has also received support from Dr Geoffrey Hinton, who previously warned about the risks of super-intelligent AI.

Among the signatories was Yoshua Bengio, professor of computer science at the University of Montreal.

The Turing Award, which recognizes outstanding contributions in computer science, was jointly won by Dr Hinton, Prof Bengio, and NYU Professor Yann LeCun in 2018 for their groundbreaking work in AI.

Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, and that the typical reaction of AI researchers to them is a facepalm.

Other experts agree that fears of AI wiping out humanity are unrealistic, and a distraction from issues, such as bias, in systems that are already in use and already causing harm.

“Current AI is nowhere near capable enough for these risks to materialise,” Arvind Narayanan, a computer scientist at Princeton University, has previously said, arguing that such fears divert attention from the near-term harms artificial intelligence is already causing.

Elizabeth Renieris, senior research associate at Oxford’s Institute for Ethics in AI, said she was more concerned about risks closer to home.

Advances in AI, she said, will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair, while also being inscrutable and incontestable. In addition to fracturing reality and eroding public trust, such systems would further drive inequality, particularly for those who remain on the wrong side of the digital divide.

Many AI tools essentially “freeride” on the “whole of humanity’s experience to date”, according to Ms Renieris. They are trained on human-created content, text, art, and music – and their creators “have effectively transferred wealth and power from the public sphere to a few private companies”.

According to Dan Hendrycks, director of the Centre for AI Safety, future risks and present concerns shouldn’t be viewed as antagonistic.

“Addressing some of the issues today can help us address many of the risks tomorrow,” he said.