Earlier this month, Altman, Hassabis, and Amodei, the CEOs of three of the world's top AI companies, met with US President Biden for a special discussion on the future regulation of artificial intelligence. Subsequent US media reports said that Altman and the others warned Biden that AI systems already pose risks serious enough to require government intervention, and urged him to "act" in time. The substance of the meeting can be summarized as follows: American industry giants and the government can use their power to rein in the headlong development of AI. They believe artificial intelligence is producing an "extremely powerful counterforce" which, if allowed to grow unchecked, could crowd out human living space and risk social unrest.
The report also mentioned an international initiative in the form of an open letter, calling on all artificial intelligence laboratories to immediately pause work for six months and to stop training AI systems more powerful than GPT-4. That open letter has been signed by more than 1,000 people in the technology industry, including Musk. In general, the experts are asking AI to "slow down" because it learns so quickly that it may slip beyond human control, with unpredictable consequences. The rapid development of artificial intelligence may also deepen occupational and social inequality by crowding out a large number of jobs, a problem for which there is currently no effective solution.
On May 30, the scientific and technological community sounded the alarm once again. The Center for AI Safety, an international nonprofit in the field, warned that countries must approach the AI industry with caution. Its statement points to "the risk of extinction from AI" and calls for mitigating that risk to be made "a global priority alongside other societal-scale risks such as pandemics and nuclear war." The warning has drawn support from many quarters: more than 350 signatories in total, including business executives working on AI as well as professors and scholars in fields such as AI, climate, and infectious disease. Signatories also include industry figures such as Altman, the "father of ChatGPT" and OpenAI founder, and Yann LeCun, Meta's chief AI scientist.
Public-opinion analysis offered by some experts holds that the current consensus is this: the rapid development of artificial intelligence will lead to the emergence of "strong AI", and this new intelligence may violate or ignore the goals and rules set by humans. For example, humans may impose a "rule" on an AI system, but once its intelligence develops to the point where it can break that rule, it is likely to slip out of control. In addition, strong AI may hold values or morals different from humans'; if it is exploited by people with ulterior motives or develops in some other direction, the products of such malicious purposes would be extremely dangerous. This is another reason to "control AI".
It is also worth mentioning that many people around the world still oppose restricting AI development, seeing it as a form of self-limitation. They believe that as long as humans actively participate in and shape the design and governance of AI to ensure it serves human interests and values, AI represents genuine progress for the whole world, and development should not be blindly rejected. One commentator put it this way: "The field of AI is a double-edged sword. If all parties act in good faith, it is reasonable to set rules such as guardrails. But no one should act out of selfishness in this matter; for example, if some countries seize the opportunity to suppress AI development in other countries and invent excuses to restrict global AI progress, the rules become selfish rules, and that is a problem."