"AI extinction theory" is controversial? Professor Ng replied!

Author: Awakening Metaverse AI

Source: Xi'an Awakening Metaverse AI (a professional education platform focused on artificial intelligence teaching, AI painting, and AI thesis guidance)

Leading figures in the field have once again clashed over the controversy surrounding the "AI extinction theory".

Joining the fray this time is Andrew Ng, a well-known artificial intelligence scholar and professor of computer science at Stanford University.

Before that, a debate among the deep learning "triumvirate" of Geoffrey Hinton, Yoshua Bengio, and Yann LeCun had already begun. Just a few days earlier, Hinton and Bengio had co-signed an open letter, "Managing AI Risks in an Era of Rapid Progress", urging researchers to adopt urgent governance measures and to prioritize safety and ethical practices, and calling on governments to act to manage the risks posed by AI.

LeCun, however, disagrees: he is optimistic about the development of AI and believes it is far from posing a threat to humanity.

Just a day earlier, LeCun had publicly called out Hinton, Bengio, and others on X, arguing: "If [Hinton's and Bengio's] fear campaign succeeds, it will inevitably have a catastrophic outcome: a handful of companies will take control of AI. The vast majority of academic peers strongly support open AI R&D. Few believe in the apocalyptic scenario you preach. You, Yoshua, and Geoff are the only exceptions."

In LeCun's view, the claim that AI threatens humanity will inevitably lead to monopolies: only a few companies would control AI development, deepening people's dependence on their digital products. LeCun said it is these problems that keep him up at night.

With the debate still unresolved, Ng joined in: "My biggest concern about the future of AI is that exaggerated risks, such as human extinction, could lead to regulations that suppress open source and stifle innovation."

Ng cited a recent article of his in which he notes that "lobbyists for some large companies are trying to convince policymakers that AI is very dangerous, and some of them would rather not compete with open source."

Link to Cited Article: https://www.deeplearning.ai/the-batch/issue-220/

Some large tech companies stand to profit if AI regulations slow open-source research or create innovation bottlenecks for small startups.

Hinton was quick to push back under the tweet: "If AI is not heavily regulated, what is the probability that it will cause human extinction within the next 30 years? If you are a true Bayesian, you should be able to give a number. My current estimate is 0.1, and I would guess Yann LeCun's is less than 0.01."
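As a purely illustrative aside, the gap between the two figures Hinton quotes is easy to quantify: 0.1 versus 0.01 is a tenfold difference in probability, and an even larger gap once expressed as odds, the form Bayesian reasoning typically works in. A minimal sketch (the numbers are just the ones from the tweet, not endorsed estimates):

```python
# Compare the two probability estimates quoted in the exchange above.
# p = assumed probability of AI-caused human extinction within 30 years.

def odds(p: float) -> float:
    """Convert a probability into odds in favor: p / (1 - p)."""
    return p / (1 - p)

hinton_estimate = 0.1   # Hinton's stated estimate
lecun_estimate = 0.01   # Hinton's guess at LeCun's estimate (an upper bound)

ratio = hinton_estimate / lecun_estimate
print(f"probability ratio: {ratio:.0f}x")             # prints "probability ratio: 10x"
print(f"Hinton's odds: {odds(hinton_estimate):.3f}")  # prints "Hinton's odds: 0.111"
print(f"LeCun's odds:  {odds(lecun_estimate):.3f}")   # prints "LeCun's odds:  0.010"
```

The odds form makes the asymmetry of the disagreement explicit: a 10x probability ratio corresponds to roughly an 11x odds ratio at these small values.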

LeCun promptly answered, "I think this probability is much smaller than that of most other potential causes of human extinction," and asked Hinton in return, "What do you estimate the probability to be that AI might actually save humanity from extinction?"

Hinton then tweeted: "My departure from Google is the best rebuttal to the 'conspiracy theory'."

Apparently, Hinton thinks Ng sees the "AI extinction theory" as a "conspiracy" of tech giants.

Ng quickly hit back head-on: "I didn't say it was a conspiracy. But I think the excessive fear that AI will lead to the extinction of humanity is doing real harm."

Ng cited some of the negative impacts of the current "AI extinction theory", including:

- Young students are reluctant to enter the field of artificial intelligence because they do not want to contribute to human extinction.

- Hype about the dangers of AI is being used to promote bad regulation around the world, undermining open source and stifling innovation.

Ng believes that the idea of "AI extinction" will do more harm than good.

LeCun was also quick to comment: "By pushing for bans on open research, open-source code, and open-access models, Hinton and Bengio have inadvertently helped those who want to monopolize AI research, development, and business. In the long run, this will inevitably lead to bad outcomes."

Whatever the big names argue, Biden yesterday signed the first executive order targeting generative AI, the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (hereinafter the "Order"), which sets guardrails for the development and use of artificial intelligence and is the most comprehensive set of AI regulatory principles in the United States to date. With a considerable number of organizations having contributed input, the Order reads more like a patchwork of different, even opposing, group positions.

In short, how to approach AI is a complex question that depends on many factors; over time, the answer will emerge.