
"Danger! Stop all big AI research now! Musk led the call, more than 1,000 Silicon Valley entrepreneurs and scientists

Author: Wall Street News

As society cheers the evolution of AI, new and unexpected risks may be brewing.

On March 22, the Future of Life Institute published an open letter to society at large, "Pause Giant AI Experiments," calling on all AI laboratories to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The institute's stated mission is to steer transformative technology towards benefiting life and away from extreme large-scale risks.

"Danger! Stop all big AI research now! Musk led the call, more than 1,000 Silicon Valley entrepreneurs and scientists

In the letter, the institute writes:

Such decisions should not be delegated to unelected tech leaders; we should not risk losing control of our civilization. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.

So far, Musk, Apple co-founder Steve Wozniak, Stability AI founder Emad Mostaque, and more than a thousand other technology leaders and AI experts have signed the open letter.

"Danger! Stop all big AI research now! Musk led the call, more than 1,000 Silicon Valley entrepreneurs and scientists

It is worth noting that OpenAI CEO Sam Altman, in a recent conversation with MIT research scientist Lex Fridman, said that AI has shown emergent reasoning abilities that its creators cannot fully explain, while acknowledging that there is some possibility of AI killing humans.

So how does ChatGPT itself view the open letter's concerns about AI's impact on human society? Wall Street News put the question to ChatGPT (GPT-3.5), which replied that as artificial intelligence technology continues to develop and spread, its impact on society and humanity may keep growing, and that the appeals and recommendations in the open letter should be seen as a starting point and reference for a broader and deeper discussion, not an end point or a solution.

"Danger! Stop all big AI research now! Musk led the call, more than 1,000 Silicon Valley entrepreneurs and scientists

The following is the original text of the open letter:

As extensive research shows and top AI labs acknowledge, AI systems with human-competitive intelligence can pose profound risks to society and humanity. As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, despite the AI frenzy of recent months, in which labs have raced to develop and deploy ever more powerful digital minds, no one, not even their creators, can understand, predict, or reliably control these systems, and that level of planning and management is not happening.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable, and that confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement on artificial general intelligence notes that it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We agree. That point is now.

Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever larger and more unpredictable models.

AI research and development should be refocused on making today's most advanced and powerful systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

At the same time, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real content from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects, and we can do the same here. Let's enjoy a long AI summer, not rush unprepared into a fall.

This article is from Wall Street News.