
Musk and thousands of other tech people issued an open letter: suspend training of AI systems more powerful than GPT-4

Author: China Youth Network

"I'm not worried about the 'AGI risk' (the risk of super-intelligent machines that we can't control' right now, I'm worried about the 'MAI risk' in the short term—Mediocre AI that is unreliable (like Bing and GPT-4) but widely deployed."

In recent days, the risks of AI have become a deep concern for several tech leaders.

The first was Geoffrey Hinton, known as the "Godfather of AI," who said in an interview with CBS last week that it is "not inconceivable" that AI could evolve to the point where it poses a threat to humanity.

Subsequently, AI "bull" Gary Marcus tweeted in response to Hinton on March 27, and published an article entitled "Artificial Intelligence Risk≠ General Artificial Intelligence Risk" on the 28th, saying that superintelligence may or may not be imminent, but in the short term, you need to worry about "MAI (mediocre artificial intelligence) risk".

On the 27th, Twitter CEO Elon Musk weighed in as well, expressing his agreement with Hinton and Marcus.


Musk interacting with Geoffrey Hinton and Gary Marcus on social media.

On the 29th, the Future of Life Institute released an open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. Hinton, Marcus, and Musk have all signed the letter.

"Accelerating the development of robust AI governance systems"

The open letter from the Future of Life Institute, entitled "Pause Giant AI Experiments: An Open Letter," was published on the 29th.


The open letter from the Future of Life Institute has been signed by 1,079 people.

"Extensive research suggests that AI systems with intelligence that compete with humans can pose far-reaching risks to society and humans, a view acknowledged by top AI labs," the letter reads. As stated in the widely recognized "Asilomar AI Principles" (an extended version of Asimov's Three Laws of Robotics, signed by nearly a thousand AI and roboticists in 2017), advanced AI may represent a profound change in the history of life on Earth and should be planned and managed with corresponding care and resources. Unfortunately, this level of planning and management doesn't happen, although AI labs have been locked in a runaway race in recent months to develop and deploy more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

"Contemporary AI systems are now becoming competitive with humans for general tasks, and we must ask ourselves: Should we allow machines to flood our information channels with propaganda and lies? Should we automate all work, including satisfactory work? Should we develop non-human minds that may eventually be more and smarter than we are, eliminating and replacing us? Should we risk losing control of our civilization? "Robust AI systems should only be developed if we are confident that their effects are positive and the risks are manageable." This confidence must be justified and increase with the scale of the system's potential impact. OpenAI's recent statement on general artificial intelligence states that 'at some point, it may be important to conduct independent reviews before starting training future systems, and for state-of-the-art work, it should be agreed to limit the amount of computation used to create new models to grow.'" We agree. That time is now. ”

The letter calls for an immediate pause, lasting at least six months, on the training of AI systems more powerful than GPT-4. The pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and impose a moratorium. AI labs and independent experts should use the pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a halt to AI development in general, merely a stepping back from the dangerous race toward ever larger, unpredictable black-box models.

At the same time, the letter states that AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. At a minimum, these should include: a capable new regulatory authority dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks; a robust auditing and certification ecosystem; clear liability for harm caused by AI; strong public funding for technical AI safety research; and well-resourced institutions for coping with the enormous economic and political disruption (especially to democracy) that AI will cause.

The letter concludes, "Humanity can enjoy a prosperous future enabled by artificial intelligence. Having successfully created powerful AI systems, we can now enjoy the 'AI summer', reaping the rewards of designing these systems for the benefit of all, and providing opportunities for society to adapt." Society has suspended other technologies that could have a catastrophic impact on society. We can do the same in this area. Let's enjoy a long AI summer instead of falling into autumn unprepared. ”

Founded in 2014, the Future of Life Institute is a nonprofit organization funded by a range of individuals and organizations, with a mission to steer transformative technologies away from extreme, large-scale risks and toward benefiting life.

As of press time, the letter had been signed by 1,079 tech leaders and researchers, including Turing Award winner Yoshua Bengio, "Artificial Intelligence: A Modern Approach" author Stuart Russell, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque.

"May lead to nuclear war and a more serious outbreak"

On the 28th, Marcus wrote on the publishing platform Substack that a colleague had written to him asking: "Won't this (open) letter create unwarranted fears of imminent AGI (artificial general intelligence), superintelligence, and so on?"

Marcus explains, "I still don't think large language models have much to do with superintelligence or general AI; I still believe that, like Yann LeCun, large language models are an 'exit' on the road to general AI." My vision of doom may be different from Hinton or Musk; Their (as far as I know) seem to revolve mostly around what happens if computers improve themselves quickly and thoroughly, which I don't think is a direct possibility. ”


AI risk vs. general AI risk.

But while much of the literature equates AI risk with the risk of superintelligence or general AI, superintelligence is not necessary to cause serious problems. "I'm not worried right now about 'AGI risk' (the risk of superintelligent machines we can't control); in the short term I'm worried about 'MAI risk': Mediocre AI that is unreliable (like Bing and GPT-4) but widely deployed, both in terms of how many people use it and in terms of the software it has access to. A company called Adept.AI has just raised $350 million to do exactly that, to let large language models access almost everything (aiming to 'augment your capabilities on any software tool or API in the world' with large language models, despite their obvious tendencies toward hallucination and unreliability)."

Marcus argues that throughout history, plenty of ordinary people, perhaps of above-average intelligence but by no means geniuses, have created all sorts of problems. In many ways, what plays the key role is not intelligence but power. An idiot with the nuclear codes could destroy the world, requiring only modest intelligence and access he should never have had. Now, AI tools are attracting the interest of criminals and are increasingly being given access to the human world, where they could do even more damage.

Europol released a report on the 27th discussing the possibility of using tools like ChatGPT to commit crimes, and its findings are alarming. "The ability of large language models to detect and reproduce language patterns not only facilitates phishing and online fraud, but can also generally be used to imitate the speaking style of specific individuals or groups. This capability could be abused at scale to mislead potential victims into placing their trust in criminals." The report adds: "In addition to the criminal activities described above, ChatGPT's capabilities lend themselves to a number of potential abuse cases in the areas of terrorism, propaganda, and disinformation. The model can be used to gather more information that may facilitate terrorist activities, such as terrorism financing or anonymous file sharing."

Marcus believes that, combined with mass AI-generated propaganda, terrorism augmented by large language models could lead to nuclear war, or to the deliberate release of pathogens worse than the coronavirus. Many people could die, and civilization could be utterly destroyed. Perhaps humans would not literally "disappear from the face of the earth," but things could indeed get very bad.

Jeffrey Hinton, the "godfather of AI," said in an interview with CBS that AI could evolve to the point where it poses a threat to humanity is "not inconceivable."

In an interview with CBS last week, Hinton was asked about the possibility of AI "wiping out humanity." He replied: "I don't think it's inconceivable. That's all I'll say."

Marcus pointed out that rather than worrying, as Hinton and Musk do, about Skynet and robots taking over the world, we should think more about the immediate risks: what criminals, including terrorists, might do with large language models, and how to stop them.

Source: The Paper
