
Tech Cloud Report: Unable to Live Up to the Vision, Will Artificial Intelligence Face Another "Cold Winter"?

Author: Tech Cloud Report

Original content by Tech Cloud Report.

Ever since Alan Turing first asked "Can machines think?" in his seminal 1950 paper "Computing Machinery and Intelligence", the development of artificial intelligence has not been smooth sailing, and it has yet to achieve the goal of "artificial general intelligence".


However, the field has still made incredible advances: IBM's Deep Blue defeated the world's best chess player, self-driving cars were born, and Google DeepMind's AlphaGo beat the world's best Go players... These achievements represent the best research and development results of more than 65 years.

It is worth noting that there were well-documented "AI winters" during this time, which all but overturned the early optimism about artificial intelligence.

One of the factors leading to the AI winter is the gap between hype and actual fundamental progress.

In the past few years, there has been speculation that another AI winter may be coming. What factors might trigger such an ice age?

Cyclical fluctuations in artificial intelligence

"AI Winter" refers to a period when public interest in AI diminishes as investment in these technologies in business and academia fades.

Artificial intelligence developed rapidly in the 1950s and 1960s, but despite many advances, most of them remained academic.

In the early 1970s, enthusiasm for artificial intelligence began to fade, and this dark period lasted until around 1980.

During this first AI winter, efforts to give machines human-like intelligence began to lose funding.


In the summer of 1956, a group of mathematicians and computer scientists occupied the top floor of the building housing Dartmouth College's mathematics department.

For eight weeks, they imagined a whole new field of study.

John McCarthy, then a young professor at Dartmouth, coined the term "artificial intelligence" when drafting the proposal for the symposium.

He argued that the symposium should explore the hypothesis that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

At that meeting, the researchers sketched out artificial intelligence as we know it today.

It gave birth to the first camp of artificial intelligence scientists. "Symbolism" is an approach to simulating intelligence based on logical reasoning, also known as logicism, or the psychological or computer school. Its principles are mainly the physical symbol system hypothesis and the principle of bounded rationality, and it long held a dominant position in artificial intelligence research.

The expert systems built in this tradition reached their peak in the 1980s.

In the years following the conference, "connectionism" attributed human intelligence to the high-level activity of the human brain, emphasizing that intelligence emerges when a large number of simple units are richly interconnected and run in parallel.

Starting from the neuron, it studies neural network models and brain models, opening up another path for the development of artificial intelligence.
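To make the connectionist idea concrete, here is a minimal Python/NumPy sketch (not from the original article; the layer sizes, the random inputs, and the `layer` helper are arbitrary choices for illustration) in which many simple units, each just a weighted sum followed by a nonlinearity, are wired together and computed in parallel.

```python
import numpy as np

# A minimal "connectionist" sketch: many simple units (weighted sums plus a
# nonlinearity) interconnected in layers. Sizes and inputs are arbitrary.
rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of simple units: each unit takes a weighted sum of its
    inputs plus a bias, then applies a nonlinearity (here, tanh)."""
    return np.tanh(x @ weights + bias)

# 4 input features -> 8 hidden units -> 1 output unit
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(3, 4))      # a small batch of 3 example inputs
hidden = layer(x, w1, b1)        # all 8 hidden units evaluated in parallel
output = layer(hidden, w2, b2)   # the network's output for each example
print(output.shape)              # (3, 1)
```

In the connectionist view, intelligent behavior comes not from any single unit but from training the weights that connect many such units acting together.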

The two approaches have long been considered mutually exclusive, with both sides believing they are on their way to artificial general intelligence.

Looking back at the decades since that conference, we can see that AI researchers' hopes have often been dashed, but these setbacks have not stopped them from advancing AI.

Today, while AI is revolutionizing industries and has the potential to disrupt the global labor market, many experts still wonder whether today's AI applications have reached their limits.

As Charles Choi describes in "7 Revealing Ways AIs Fail", the weaknesses of today's deep learning systems are becoming increasingly apparent.

However, researchers are not pessimistic about the future of AI, even though another AI winter may arrive in the near future.

But this may be the moment when inspired AI engineers finally lead us into the eternal summer of machine thinking.

Computer vision and artificial intelligence expert Filip Piekniewski's article "AI Winter is Coming" has caused heated discussions online.

The article mainly criticizes the hype of deep learning, arguing that the technology is far from revolutionary and is facing development bottlenecks.

Major companies' interest in artificial intelligence is in fact cooling, and another AI winter may be coming.

Will the AI winter come?

Since 1993, there have been increasingly impressive advances in the field of artificial intelligence.

In 1997, IBM's Deep Blue system became the first computer to defeat a reigning world chess champion, Garry Kasparov.

In 2005, a Stanford driverless vehicle won the DARPA Grand Challenge by autonomously driving 131 miles along a desert route without human intervention.

In early 2016, Google DeepMind's AlphaGo defeated one of the world's best Go players.


Image credit: DARPA Grand Challenge 2005

Everything has changed in the last two decades.

In particular, the rapid growth of the Internet has given the artificial intelligence industry enough images, audio, video, and other data to train neural networks and deploy them widely.

But deep learning's expanding success has relied on ever-deeper neural networks and ever more GPU time to train them.

An analysis by AI research firm OpenAI shows that, before 2012, the computing power used to train the largest AI systems doubled roughly every two years, in line with Moore's Law; since then, it has doubled roughly every three to four months.
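To make the gap between these two growth regimes concrete, here is a small, illustrative Python calculation (not part of OpenAI's analysis itself; the six-year horizon and the `growth_factor` helper are assumptions chosen for illustration) comparing total compute growth under the two doubling times.

```python
# Illustrative arithmetic only: compare total growth in training compute
# under a 2-year doubling time versus a 3.4-month doubling time.

def growth_factor(doubling_time_months: float, horizon_months: float) -> float:
    """Total multiplicative growth after `horizon_months` of steady doubling."""
    return 2 ** (horizon_months / doubling_time_months)

horizon = 6 * 12  # six years, expressed in months (an arbitrary horizon)

moore_era = growth_factor(doubling_time_months=24, horizon_months=horizon)
modern_era = growth_factor(doubling_time_months=3.4, horizon_months=horizon)

print(f"2-year doubling over 6 years:    ~{moore_era:,.0f}x")    # ~8x
print(f"3.4-month doubling over 6 years: ~{modern_era:,.0f}x")   # ~2,400,000x
```

Under the older regime, six years buys roughly an 8-fold increase in compute; under the post-2012 regime, the same period implies growth on the order of millions of times, which is why the trajectory is widely seen as unsustainable.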

As Neil C. Thompson and his colleagues write in "Deep Learning's Diminishing Returns", many researchers worry that the computational demands of AI are on an unsustainable trajectory.

A common problem facing early AI research was a severe lack of computing power; progress was limited by hardware rather than by human intelligence or ability.

Over the past 25 years, as computing power has increased dramatically, so have the advances we've made in artificial intelligence.

However, with massive data and increasingly complex algorithms flooding in, the world now adds about 20 zettabytes of data every year, and the demand for AI computing power grows roughly tenfold annually, far outpacing the Moore's Law pace of doubling performance roughly every two years.

We are approaching the theoretical physical limit of the number of transistors that can be installed on a chip.

Intel, for example, has slowed the rollout of new chip manufacturing technologies because it is difficult to keep shrinking transistors while keeping costs down. In short, the end of Moore's Law is approaching.


Image credit: Ray Kurzweil, DFJ

There are short-term solutions that will ensure the continued growth of computing power, thereby facilitating the advancement of artificial intelligence.

For example, in mid-2017, Google announced that it had developed a specialized AI chip called "Cloud TPU" that optimizes the training and execution of deep neural networks.

Amazon is developing its own chips for Alexa, its AI personal assistant. Meanwhile, many startups are adapting chip designs for specialized AI applications.

However, these are only short-term solutions.

What happens when we run out of ways to optimize traditional chip designs? Will we see another AI winter? The answer is likely yes, unless quantum computing can surpass classical computing and provide a more solid foundation.

So far, however, quantum computers that achieve practical "quantum supremacy" and outperform classical computers do not yet exist.

If we reach the limits of traditional computing power before true "quantum supremacy" arrives, another AI winter may well follow.

AI researchers are grappling with increasingly complex problems and driving us to realize Alan Turing's vision of artificial general intelligence. However, much remains to be done.

At the same time, without the help of quantum computing, it will be difficult to realize the full potential of artificial intelligence.

No one can say for sure whether AI winter is coming.

However, it is important to be aware of the potential risks and keep an eye on the signs so that we can be prepared if it does happen.

【About Tech Cloud Report】

Tech Cloud Report focuses on original enterprise-level content. Founded in 2015, it is one of the top ten media outlets covering cutting-edge enterprise IT. It is recognized by the Ministry of Industry and Information Technology and Trusted Cloud, and is one of the officially designated communication media of the Global Cloud Computing Conference. It provides in-depth original coverage of cloud computing, big data, artificial intelligence, blockchain, and other fields.
