
Turing Award Winner Bengio: Which judgments about the future of AI are reliable?

There are often exciting or frightening predictions about the future development of artificial intelligence.

For example, Mo Gawdat, a former chief business officer of Google's innovation division, said: "We will not experience 100 years of AI progress in the next century; instead, we will experience 20,000 years of progress."

Futurist Ian Pearson said at the World Government Summit in Dubai that artificial intelligence may become billions of times smarter than humans, and that humans will have to merge with computers if they want to survive in the future.

Musk has also said that humans will need to merge with machines, becoming a kind of "cyborg," to avoid being left behind in the era of artificial intelligence.

How should we distinguish among so many dizzying, hard-to-verify predictions?

"The more precise such a prediction is, the more wary of it we should be," 2018 Turing Award winner Yoshua Bengio recently wrote on his blog.


2018 Turing Award winner Yoshua Bengio

Yoshua Bengio, a professor at the University of Montreal, is best known for his pioneering work in deep learning, for which he won the 2018 Turing Award alongside Geoffrey Hinton and Yann LeCun. In 2019 he received the Killam Prize, one of Canada's highest honors in science, and in 2021 he became the world's second-most-cited computer scientist.

The reality, Bengio argues, is that no serious AI researcher can make such predictions, except as science fiction. Not only do AI researchers disagree about the future pace of AI development; there is no scientific basis for making such predictions at all. Scientific research can stagnate on a particular problem for a long time (such as the unification of all forces in physics) or make rapid progress after a breakthrough (such as deep learning).

So, what are the things we can judge with confidence right now?

Bengio lists the following:

· AI that is as smart as a human can be built. Our brains are complex machines whose workings we understand better and better. We ourselves are living evidence that this degree of intelligence is possible.

· It is possible to build AI that is smarter than humans. Humans are hampered by cognitive biases that hinder our reasoning, biases our ancestors may have needed in order to evolve into Homo sapiens. We can reasonably assume that we will be able to build AI free of these flaws (such as the need for social status, ego, or belonging to a group, or the tendency to accept group beliefs unquestioningly). In addition, AI can access far more data and memory. We can therefore say with confidence that building artificial intelligence smarter than humans is possible.

Still, it is far from certain that we will be able to build AI vastly smarter than ourselves, as some articles claim. Various computational problems run into exponential walls of difficulty (the notorious NP-hardness), and we have yet to discover the limits of intelligence.

· The more the science of human and artificial intelligence advances, the greater the benefits and the dangers it can bring to society. Applications of AI will multiply and could greatly advance science and technology overall, but the power of tools is a double-edged sword: laws, regulations, and social norms must be put in place to prevent, or at least reduce, their misuse.

To prevent people blinded by the desire for power, money, or hatred from using these tools to harm others, we will undoubtedly need to change laws, build compassion into machines, and strengthen the compassion inherent in humans.

· Since we really do not know how quickly AI or other fields such as biotechnology will advance, it is best to start better regulating these powerful tools now. In fact, AI already has harmful uses, whether military, such as killer drones that can recognize a person's face and shoot at them, or civilian, such as AI systems that make biased decisions and discriminate against women or racial minorities. In general, computing is poorly regulated, and this must change. We must regulate these new technologies, just as we do aviation or chemistry, to protect people and society.

· In addition, AI applications that are clearly beneficial to society should be encouraged, whether in health, combating climate change, fighting injustice, or widening access to knowledge and education. In all of these areas, governments can play a key role in directing AI research and entrepreneurship toward socially beneficial applications, where the profit motive alone is not always enough to spur the needed investment.
