
Founder of the Future of Life Institute: China is in a unique position in global AI governance

Author: The Paper

· "Now is the time for East and West to work together. The development of artificial intelligence will change relations between great powers, and China is in a unique position to contribute to the safe and wise governance of AI. Comparing China, Europe and the United States, China has done the most to date to regulate AI, with Europe second and the United States third."


On June 9, Max Tegmark, professor at the MIT Center for Artificial Intelligence and Fundamental Interactions and founder of the Future of Life Institute, delivered a keynote speech at the KLCII 2023 conference on why concerns about AI have become more necessary and urgent, and how to keep AI under control.

Tegmark said: "AI will change the relationship between great powers. In the past, when people focused only on the power of AI, everyone competed for it. But once people begin to realize that it could end every civilization on Earth, thinking changes: rather than an arms race, it is seen as a suicide race. Whoever acquires uncontrollable superintelligence first, everyone perishes. So I think it's time for East and West to work together, to build partnership rather than competition, to make sure we can control the development of AI."

Comparing China, Europe and the United States, Tegmark believes that China has done the most to date to regulate artificial intelligence, with Europe second and the United States third. He revealed that the Future of Life Institute has worked extensively with policymakers in Europe, including on the EU's Artificial Intelligence Act, which is now moving forward.

The Future of Life Institute was founded in 2014 to study how to minimize the potential harms of developing AI technologies. In March this year, the institute's website published a widely discussed open letter calling for a pause on AI development: all AI labs should immediately suspend, for at least six months, the training of AI systems more powerful than GPT-4. More than 1,100 technology figures, including Tesla CEO Elon Musk, signed the letter.

As one of the initiators of that letter, it is not surprising that Tegmark talks about how to control AI. Interestingly, though, Tegmark has repeatedly stated that he is not a "doomer" (someone who thinks the world is headed for ruin), as Yann LeCun, one of the "Big Three" of deep learning, has characterized him; instead, he believes that "it is not impossible to have an agent much smarter than us, provided we can prove that it is safe."

Artificial intelligence is close to being able to fool humans

How can AI be controlled so that it serves rather than opposes humans? Tegmark sees two main issues that need to be addressed.

The first is alignment: ensuring that an AI acts according to the wishes of its controller. The second is aligning organizations, companies, and individuals across the globe, to ensure that their motivation is to use AI for good and not evil. "If we solve only the first problem and very powerful AI is widely deployed, then we may face a situation where terrorists, or others who want to control the world and carry out actions we do not want to see, can take advantage of this technology," Tegmark said.

In fact, human concern about whether we can control artificial intelligence has existed for a long time; as early as Alan Turing's time, some people voiced it. Over the past nine years, many people besides Tegmark, including physicist Stephen Hawking, Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, and 2004 Nobel laureate in physics Frank Wilczek, have expressed concern about the problem.

But Tegmark stressed that the concern is now more urgent, because AI is close to being able to fool humans, that is, to pass the Turing test. In other words, AI can master language, which many researchers once considered a mark that AI was very close to being able to do everything humans can do.

As for humanity's future ability to understand AI systems, Tegmark said he is actually more optimistic than LeCun. "I think if we go full speed ahead and hand over more and more control from humans to machines we don't understand, the human world will end up in a very bad way. But we don't have to do that. If we work on mechanistic interpretability (studying how knowledge is stored in the complex connections of neural networks, and ultimately explaining why large language models produce intelligence) and many other technical topics, we can actually ensure that these more powerful intelligences work for us, and use them to create a future more inspiring than anything science fiction writers used to dream of."
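The spirit of mechanistic interpretability is easiest to see on a toy model: train a model, then open it up and read the learned rule directly out of its weights instead of treating it as a black box. A minimal sketch in pure Python (a single-weight regressor standing in for the large networks Tegmark has in mind; the data, learning rate, and step count are illustrative choices, not anything from his talk):

```python
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [2.0 * x for x in xs]  # the hidden "rule" the model must learn: y = 2x

w = 0.0   # the model's single learnable weight
lr = 0.1  # learning rate
for _ in range(200):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# Interpretation step: the trained weight *is* the stored knowledge.
# Reading it out shows the model recovered the rule y = 2x.
print(round(w, 2))  # ≈ 2.0
```

A real interpretability effort does the analogous thing at vastly greater scale: probing which circuits inside a large language model implement which behaviors, so that the system's competence is explained rather than merely observed.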

The development of artificial intelligence will change the relationship between major powers

Regarding the earlier "pause AI" open letter, Tegmark clarified that its purpose is not to suspend AI research, but only to suspend the development of systems more powerful than GPT-4, and only so that AI can undergo a safety review, as biotechnology does. In biotechnology, a company that invents a new drug cannot have the product appear in pharmacies the next day; it must first convince government experts that the drug is safe and that its benefits outweigh its harms, and only after review can it be sold to the public.

So does this undermine innovation? Tegmark asked rhetorically: "Of course not. In fact, it is precisely weak regulation of dangerous technologies that often undermines innovation. Take civilian nuclear power: investment in the field has largely collapsed, at least in the West after Fukushima."

Tegmark believes that the development of AI will change relations between major powers, and China is in a unique position to contribute to the safe and wise governance of AI.

"China is the world's leading science and technology power, so it can help lead research, not just on how to make AI powerful, but on how to make it more robust and trustworthy. As China's international influence grows, so does its ability to inspire and shape the global AI agenda; China's voice really matters. Just last week, I was very pleased to see that China's Global Security Initiative explicitly talks about preventing AI risks."

According to Tegmark, the Future of Life Institute has cooperated extensively with European policymakers, for example on the European Union's Artificial Intelligence Act. "If we can help Europe come up with really sensible regulation, the U.S. is likely to follow suit. We saw this with the EU's GDPR (General Data Protection Regulation). Americans didn't want to do anything similar on privacy, but after these laws passed in the European Union, Americans began to receive less spam, and the provisions of the GDPR began to catch on in the United States," he said.

Tegmark's advice to researchers in the field of AI is to focus on fundamentals, because the economy and the job market are changing ever faster: "We're moving away from a world where you study for 10 or 20 years and then do the same thing for the rest of your life." More important, on top of a solid foundation, is creative and open-minded thinking. And of course, pay attention to what is happening across the whole field of AI, not just your own specialty, "because in the job market, the first thing that will happen is not that people are replaced by machines, but that people who don't use AI are replaced by people who do."
