The latest joint call from industry experts: urgently regulate AI, treating its risks like pandemics and nuclear war

Author: Southern Metropolis Daily

More than 350 industry experts recently sent a signed open letter to the U.S. Congress calling for the urgent regulation of artificial intelligence (AI) as a "global priority alongside pandemics and nuclear war," in order to avoid the "risk of extinction" posed by AI.

The latest letter, reportedly only 22 words long, was initiated by Sam Altman, CEO of OpenAI, the company behind ChatGPT.

The letter, published by the San Francisco-based nonprofit Center for AI Safety (CAIS), calls for the urgent regulation of artificial intelligence (AI) as "a global priority on a par with pandemics and nuclear war."

Demis Hassabis of Google DeepMind.

Altman's call was joined by other prominent AI leaders, including Google DeepMind's Demis Hassabis, Anthropic's Dario Amodei, and executives from Microsoft and Google.

Geoffrey Hinton, one of the three "godfathers of AI."

Geoffrey Hinton and Yoshua Bengio, two of the so-called "godfathers of AI" who joined the call, won the 2018 Turing Award for their work on deep learning.

The third "godfather of AI," Yann LeCun, who works at Meta, did not sign the letter.

Elon Musk and a group of AI experts and industry executives had already warned of the potential societal risks of AI in April.

At that time, Musk and more than 1,000 industry experts called for a moratorium on the "dangerous race" to develop ever more advanced AI, saying more risk assessment is needed before humanity loses control of the technology and it becomes a sentient, "human-hating" species.

Some experts believe that artificial intelligence will reach a "singularity" by 2045, the point at which it surpasses human intelligence and develops independent thinking.

At that point, they warn, artificial intelligence would no longer need or listen to human input; it could steal nuclear codes, engineer pandemics, and start a world war.

In its simplest form, AI is a field that combines computer science and powerful datasets to enable problem solving.

The technology allows machines to learn from experience, adapt to new inputs and perform human-like tasks.
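To make that description a little more concrete, the sketch below is a deliberately simplified, purely illustrative example (not part of the original article) of the basic pattern described above: a program "learns" from past example data and then applies that pattern to a new, unseen input. The data, labels, and scenario are hypothetical.

```python
# Minimal illustration of "learning from experience and adapting to new inputs":
# classify a new case by comparing it to previously seen examples.

from math import dist

# "Experience": past examples, each a (feature vector, label) pair.
# Features here are hypothetical [hours of study, hours of sleep].
examples = [
    ([1.0, 4.0], "fail"),
    ([2.0, 5.0], "fail"),
    ([6.0, 7.0], "pass"),
    ([8.0, 6.0], "pass"),
]

def predict(new_input):
    """Label a new input with the label of its nearest past example."""
    nearest = min(examples, key=lambda ex: dist(ex[0], new_input))
    return nearest[1]

# "Adapting to new inputs": cases the program has never seen before.
print(predict([7.0, 8.0]))  # -> "pass"
print(predict([1.5, 3.0]))  # -> "fail"
```

Real AI systems such as ChatGPT learn far more complex patterns from vastly larger datasets, but the underlying idea, generalizing from examples to new inputs, is the same.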

Recent developments in AI have created tools that proponents say can be used in applications ranging from medical diagnosis to writing legal briefs, but they have also raised concerns that the technology could lead to privacy violations, powerful misinformation campaigns, and "smart machines" that think for themselves.

Altman was questioned by the U.S. Congress for five hours this month about how ChatGPT and other models could "reshape human history," an impact he likened to that of the printing press or the atomic bomb.

In an exchange about the future that AI could create, Altman looked flushed as he admitted that his "biggest concern" was that his technology could be used to cause "significant harm" to the world.

Text/Nandu reporter Chen Lin