
The three giants jointly signed! Another open letter "Beware of AI and defend humanity" was issued

Author: Taste Play

As generative AI sweeps through almost every field at breakneck speed, concerns about AI challenging humanity have become increasingly real.

Last time, it was Musk who issued an open letter to leaders of the AI research community and industry, calling on all laboratories worldwide to suspend the training of more powerful AI models for at least six months and to strengthen regulation of AI technology. It later emerged, however, that he had bought 10,000 GPUs for Twitter to advance a new AI project, most likely to develop his own large language model.

This time, another open letter calling attention to the threat posed by AI has been issued. Even more striking than last time, the three giants currently at the top of the generative AI field have all joined in: OpenAI, DeepMind (part of Google), and Anthropic.

A 22-word statement, signed by 350 people

The statement, released by the Center for AI Safety, a San Francisco-based nonprofit, delivers a fresh warning about what its signatories see as an existential threat that AI poses to humanity. The entire statement is only 22 words long. Yes, you read that right: just 22 words, reproduced in full below:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The idea that AI poses a threat is not new, but this is the first time it has been so bluntly placed alongside crises that affect all of humanity, such as nuclear war and pandemics.

The list of signatories attached to the statement is far longer than the statement itself.

In addition to OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei, more than 350 top AI researchers, engineers, and entrepreneurs have signed, including Geoffrey Hinton and Yoshua Bengio, two of the Turing Award-winning "Big Three of AI." The third, Yann LeCun, now chief AI scientist at Facebook's parent company Meta, has yet to sign.

In addition, Chinese scholars appeared on the list, including Zeng Yi, director of the Research Center for Artificial Intelligence Ethics and Governance at the Institute of Automation of the Chinese Academy of Sciences, and Zhan Xianyuan, associate professor at Tsinghua University.

The full list of signatures can be viewed here: https://www.safe.ai/statement-on-ai-risk#open-letter

Dan Hendrycks, executive director of the Center for AI Safety, said the statement was kept concise and deliberately offered no specific ways to mitigate the AI threat, precisely to avoid disagreement. "We didn't want to push for a huge portfolio of 30 potential interventions," Hendrycks said. "When that happens, it dilutes the message."

An enhanced version of Musk's open letter

This open letter can be seen as an enhanced, "cleaned-up" version of the open letter Musk signed earlier this year.

Previously, Musk and more than a thousand industry and academic figures published a joint letter on the Future of Life Institute website. That letter conveyed two main messages: first, a warning about the potential threat of artificial intelligence to human society, coupled with a demand to immediately suspend the training of any AI system more powerful than GPT-4 for at least six months; and second, a call for the entire AI field and policymakers to jointly design a comprehensive AI governance system to supervise and review the development of AI technology.

That letter was criticized on multiple levels at the time: not only because Musk was seen as acting in bad faith, publicly calling for a pause on AI research while quietly advancing a new AI project and poaching technical talent from Google and OpenAI, but also because the proposal to "suspend development" was neither feasible nor a real solution to the problem.

For example, Yann LeCun, one of the "Big Three" of artificial intelligence who shared the Turing Award with Yoshua Bengio, made it clear at the time that he did not agree with the letter's views and did not sign it.


LeCun has not signed this new, more loosely worded open letter either.

Andrew Ng, a well-known scholar in the field of artificial intelligence and founder of Landing AI, also posted on LinkedIn at the time that a blanket six-month moratorium on AI training was a bad and unrealistic idea.

He said the only way to actually get the entire industry to pause AI training would be for governments to step in, but having governments suspend emerging technologies they do not understand would be anti-competitive and clearly not a good solution. He acknowledged that responsible AI matters and that AI does carry risks, but argued that a one-size-fits-all approach is not the answer. What matters more now, he said, is for all parties to invest more in AI safety while developing the technology, and to cooperate on regulations around transparency and auditing.


When questioned by the US Congress, Sam Altman said outright that the framing of Musk's letter was wrong and that pausing until a set date is meaningless. "We pause for six months, and then what? We pause for another six months?" he said.

Unlike Ng, however, Sam Altman has been among the most vocal advocates for tighter government regulation of AI.

He even made regulatory recommendations to the U.S. government at the hearing, asking it to form a new agency responsible for licensing large AI models, with the power to revoke a company's license if its models fail to meet government standards.

Last week, he joined several other OpenAI executives in calling for the creation of an international body, similar to the International Atomic Energy Agency, to regulate AI, and for leading international AI developers to cooperate.

Voices of opposition

Like Musk's letter, this latest one rests on the assumption that AI systems will rapidly improve their capabilities, while humans will not retain full control over their safe operation.

Many experts point to the rapid improvement of systems such as large language models and argue that once AI reaches a certain level of sophistication, humans may no longer be able to control its behavior. Toby Ord, a scholar at the University of Oxford, said AI leaders are now doing what people once hoped Big Tobacco would have done: acknowledge the serious harms their products could cause sooner rather than later, and start discussing how to limit them.


But many people doubt these predictions. They point out that AI systems still cannot handle even relatively mundane tasks, such as driving a car: despite years of effort and tens of billions of dollars invested in the area, fully autonomous vehicles remain far from reality. If AI cannot even meet that challenge, skeptics ask, what chance does the technology have of posing an existential threat in the coming years?

Yann LeCun voiced his disagreement with this concern on Twitter, saying that superhuman AI is nowhere near the top of the list of human extinction risks, chiefly because it does not exist yet. "Until we can design even dog-level AI, let alone human-level AI, it is completely premature to discuss how to make it safer."


Ng is more optimistic about AI. In his view, AI will be a key part of the solution to most of the risks that threaten humanity's survival, including epidemics, climate change, and asteroid impacts. If humanity is to survive and thrive over the next 1,000 years, he argued, AI needs to advance faster, not slower.

What do you think? Let us know in the comments.

Note: The cover image is from Pexels and its copyright belongs to the original author. If you object to its use, please contact us as soon as possible and we will remove it immediately.