
Few words, big news! OpenAI's founder and more than 350 other AI heavyweights sign a one-sentence joint statement


Author | Liu Yan, Nuclear Coke

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

AI heavyweights around the world have signed another open letter

On Tuesday, the Center for AI Safety (CAIS) released a brief statement signed by OpenAI and DeepMind executives, Turing Award winners, and other AI researchers, warning that the technology they have spent their careers building could one day destroy all of humanity.

CAIS said the statement is meant to open a discussion of "the broad and urgent risks posed by AI."

As the saying goes, the fewer the words, the bigger the news; the statement contains only one sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signatories include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, as well as professors from UC Berkeley, Stanford University, and MIT. In all, more than 350 executives, researchers, and engineers working on AI are reported to have signed the statement.

The statement comes at a moment when its most prominent signatory, OpenAI CEO Sam Altman, is touring the world to discuss AI and its potential risks with heads of state. In early May, Altman also testified at a U.S. Senate hearing on regulating the AI industry.

The vague statement about AI risk quickly drew criticism from skeptics.

The statement does not define AI precisely, nor does it say how the risk of extinction should be mitigated; it simply places that work on the same level as other global societal problems.

In a separate press release, however, CAIS emphasized its desire to "put guardrails and institutions in place so that AI risks don't catch humanity off guard."

Two months ago, Musk and others called for a pause on AI development

Two months earlier, an open letter co-signed by a host of AI figures, with high-profile participation from technology billionaire Elon Musk, had already shocked the world.

On March 22 this year, the Future of Life Institute published an open letter, "Pause Giant AI Experiments," calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.

Musk, Turing Award winner Yoshua Bengio (often described as one of the "godfathers of AI"), Apple co-founder Steve Wozniak, Stability AI founder and CEO Emad Mostaque, DeepMind senior research scientist Zachary Kenton, and field pioneer Stuart Russell were among the thousands of tech leaders and AI experts who signed that letter.

The letter argues that, as extensive research has shown and top AI labs themselves acknowledge, AI systems with human-competitive intelligence can pose profound risks to society and humanity. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, that level of planning and management is not happening. In recent months, AI labs have been locked in an out-of-control race to develop and deploy ever more powerful "digital minds" that no one, not even their creators, can understand, predict, or reliably control.

The letter argues that such a pause should be public and verifiable and cover all key players; if it cannot be enacted quickly, governments should step in and impose a moratorium. AI labs and independent experts should use the pause to jointly develop and implement shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. AI research and development should refocus on making today's most powerful systems more accurate, safer, more interpretable, more transparent, more robust, more aligned, more trustworthy, and more loyal. In parallel, AI developers must work with lawmakers to dramatically accelerate the development of robust AI governance regimes.

What the open letter makes clear is that the call to halt development of more advanced AI systems stems from a fear that, in the absence of effective oversight, the rapid development of AI will bring a series of hidden dangers to human society, and that AI may become so powerful that, past a certain point, humans can no longer control it.

AI ethics experts have little patience for open letters warning about AI risk

Experts who have long worked on AI ethics, however, have little interest in such open letters.

Dr. Sasha Luccioni, a machine learning research scientist at Hugging Face, dismissed CAIS's statement: "First, it conflates the hypothetical risks of AI with very real threats such as pandemics and climate change, which only muddies the public's judgment. It is also misleading: as long as the public's attention is fixed on future risks, it will overlook the more tangible risks of the present, such as AI bias, legal disputes, and consent."

Author and futurist Daniel Jeffries likewise tweeted: "AI risk and danger has become a game of striking poses, and everyone wants to play the good guy in this wave... The question is, does all the noise actually help? It looks good, but nobody really pays a price for it; it is a complete waste of time."

CAIS is a San Francisco-based nonprofit whose stated goal is to "reduce societal-scale risks from AI" through technical research and advocacy. One of its co-founders, Dan Hendrycks, holds a Ph.D. in computer science from the University of California, Berkeley, and previously interned at DeepMind. The other co-founder, Oliver Zhang, frequently posts about AI safety on the LessWrong forum.

In the machine learning field, some AI safety researchers have long worried that a superintelligent AI smarter than humans will soon emerge, slip out of our control, and either dominate human civilization or wipe it out entirely. As the company that set off the current AI wave, OpenAI has built much of its safety work around this "AGI" (artificial general intelligence) anxiety. In other words, the AI apocalypse narrative already sells well in the tech industry.

But many feel there is little point in signing such a vague open letter, which does little more than ease practitioners' moral burden. Luccioni put it bluntly: "The people who created this AI technology are joining the statement simply to make themselves look good."

To be clear, Luccioni and her colleagues are not arguing that AI is harmless. Their point is that fixating on hypothetical future risks distracts from AI's negative effects today, effects that are already creating thorny ethical problems while the tech giants ignore the threats and keep selling their products.

Margaret Mitchell, chief ethics scientist at Hugging Face, notes that "certain vulnerable groups are already being harmed: AI-based surveillance systems are being used to force Iranian women to comply with dress codes, and even to place certain groups under surveillance and house arrest."

While some form of advanced AI may indeed threaten all of humanity one day, critics argue that discussing the issue in 2023 is premature and unlikely to help constructively. How do you research a problem that does not yet exist?

Jeffries reiterated the point on Twitter: "Long-term AI risk is an unrealistic fantasy; we cannot solve a problem that does not exist. It is a complete waste of time. We should focus on solving today's problems and leave the future to the future."

AI "godfather" Yoshua Bengio said: I am also "confused" in the face of the results of my life's work

Yoshua Bengio, one of the AI scientists who signed this latest statement, recently admitted in an interview that he has begun to feel "lost" about his life's work.


As one of the three "godfathers" of AI, Bengio has made many pioneering contributions to the field. But the direction and astonishing pace of AI's development now worry him. Professor Bengio says the work once gave him a sense of identity, but now he feels lost.

"Emotionally, people inside the AI space are definitely hit." Confusion is real, but we still have to move on, we have to participate in it, join the discussion, and encourage others to think with ourselves. ”

The Canadian scholar has recently signed two statements urging caution about the future risks of AI. Some academics and industry experts warn that AI is moving so fast that the technology could be misused by malicious actors, and that even setting that aside, AI itself could cause serious harm.

Professor Bengio has also joined the calls for AI regulation. He personally believes AI capabilities should not be handed to the military, and that every company building powerful AI products should be required to register and report its activities.

"The government needs to track the activities of these companies, audit their work, and regulate the AI industry at least as much as things like things like airplanes, automobiles, or pharmaceuticals."

"We also need to promote the qualification of AI-related personnel... Includes ethics training. You may not know that computer scientists rarely have access to this knowledge. ”

Geoffrey Hinton: regrets over his life's work

Another AI "godfather," Dr. Geoffrey Hinton, signed the same statement as Professor Bengio.

Earlier this month, media reported that Geoffrey Hinton had quit his job at Google, warning that continued development in the field could bring great risks.

As one of the "godfathers of AI," Hinton shared the 2018 Turing Award with two collaborators in recognition of foundational work behind the current AI boom. But he now says he regrets the research to which he has devoted his life.

In an interview with The New York Times, Hinton said that leaving Google means he can finally speak openly about the risks of AI. Hinton, who worked at Google for more than a decade, said: "I've always consoled myself with the excuse that if I didn't do it, somebody else would. But right now I honestly don't know how to stop bad actors from using AI to do bad things."

The spread of disinformation is only one of the risks Hinton wants to highlight right now. In the longer term, he worries that AI will wipe out rote work, and that as it gradually learns to write and run its own code, it may eventually displace humans altogether.

Hinton pointed out in the interview: "Many people believed AI could actually become smarter than humans, but most thought that was still a long way off. I thought so too: another 30 or 50 years, or even longer. Obviously, I can no longer think that way."

In an interview with the BBC, he went so far as to call AI chatbots a "quite scary" threat. "Right now, as far as I can tell, they're not smarter than us, but I believe they soon will be."

Different voices: Yann LeCun is optimistic about AI development

However, there are also different voices in the field of AI.

Yann LeCun, the third "godfather," who shared the Turing Award with Bengio and Hinton for his own pioneering work, remains optimistic and says warnings that AI will destroy humanity are overblown.

Others feel that real and pressing problems should be addressed before debating doomsday scenarios.

Sasha Luccioni of Hugging Face, for her part, believes society should be concerned about AI bias, predictive policing, and chatbots spreading misinformation, which she considers "very specific, real-world harms."

"We should focus on these issues instead of getting bogged down in the hypothetical quagmire that AI could destroy humanity."

Beyond the risks, AI is also bringing real benefits to society. Just last week, an AI tool helped identify a new antibiotic, and an AI-assisted microchip implant allowed a paralyzed man to walk again by the power of thought.

But no amount of good news can offset far-reaching concerns about AI's impact on the economy. Companies have already begun replacing human employees with AI tools, and Hollywood screenwriters are striking in part over the issue.

On the current state of AI, Professor Bengio said: "It's not too late. It's like climate change. We've put a lot of carbon into the atmosphere, and while we can't stop overnight, we should at least think about what we can do now."

Reference Links:

https://www.bbc.com/news/technology-65760449?at_medium=RSS&at_campaign=KARANGA

https://arstechnica.com/information-technology/2023/05/openai-execs-warn-of-risk-of-extinction-from-artificial-intelligence-in-new-open-letter/

https://www.infoq.cn/article/Y9rIogQk8Sjt33bLDMHk

This article is reproduced from:

https://www.infoq.cn/article/ARJEOOh2M5oAmwRCpzfk
