
The father of ChatGPT warned that AI could wipe out humans, and 350 AI authorities signed a joint open letter

Author: Love Fan'er

Today, an important open letter emerged from the AI field.

350 AI authorities, including Sam Altman, the "father of ChatGPT," signed a joint open letter expressing concern that AI technologies now under development may pose an existential threat to humanity.

The letter contains just one statement: mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.

The open letter can be signed at:

https://www.safe.ai/statement-on-ai-risk

The signatories include executives from three leading AI companies:

  • Sam Altman, CEO of OpenAI;
  • Demis Hassabis, CEO of Google DeepMind;
  • Dario Amodei, CEO of Anthropic.

What's more, the list also includes Geoffrey Hinton and Yoshua Bengio, two of the so-called godfathers of artificial intelligence.


More than 350 AI executives, researchers, and engineers signed the letter, issued by the nonprofit Center for AI Safety, arguing that AI carries a risk of human extinction and should be treated as a societal-scale risk on par with pandemics and nuclear war.

Concern is growing about the potential threat that the development of AI models such as ChatGPT poses to society and jobs, and many are calling for stronger regulation of the AI industry, warning that without it the technology will cause irreparable damage to society.

AI capabilities continue to soar, but regulatory and auditing measures have not kept pace, which means that no one can guarantee the safety of AI tools or of the processes in which they are used.

Last week, Sam Altman and two other OpenAI executives proposed establishing an international body, along the lines of the International Atomic Energy Agency, to regulate AI development safely, calling for cooperation among the world's leading AI makers and asking governments to strengthen regulation of those at the cutting edge.

In fact, as early as March, an open letter calling for a six-month pause in AI development went viral across the Internet.

That letter called for an immediate moratorium of at least six months on training any AI model more advanced than GPT-4, in order to nip these frightening possibilities in the bud.



That joint letter was signed by 2018 Turing Award winner Yoshua Bengio, Elon Musk, Steve Wozniak, a Skype co-founder, a Pinterest co-founder, the CEO of Stability AI, and many other well-known figures, and the number of signatories had reached 1,125 before the deadline.

The original text of the open letter is as follows:

AI systems with intelligence that rivals humans' can pose profound risks to society and humanity, as confirmed by extensive research [1] and acknowledged by top AI labs [2]. As the widely endorsed Asilomar AI Principles state, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.

Unfortunately, that planning and management is not happening. Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks [3], and we must ask ourselves:

  • Should we let machines flood our information channels with propaganda and lies?
  • Should we automate away all jobs, including the fulfilling ones?
  • Should we develop non-human minds that may eventually outnumber, outsmart, and replace us?
  • Should we risk losing control of our civilization?

These decisions should not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. That confidence must be well justified and must grow with the magnitude of a system's potential impact. OpenAI's recent statement on artificial general intelligence notes: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.

These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt [4]. This does not mean a pause on AI development in general, merely a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

At the same time, AI developers must work with policymakers to significantly accelerate the development of AI governance systems. These should include, at a minimum:

  • New and capable regulatory authorities dedicated to AI;
  • Oversight and tracking of highly capable AI systems and large pools of computing power;
  • Provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks;
  • A robust auditing and certification ecosystem, with liability for harms caused by AI;
  • Robust public funding for technical AI safety research;
  • Well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause, especially to democracy.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.

Society has hit pause on other technologies with potentially catastrophic effects on society [5]. We can do the same here. Let's enjoy a long AI summer, not rush unprepared into fall.
