
70 years of the evolutionary history of artificial intelligence (Part I): Before the inflection point

Author: Tago FRM

Since the beginning of this year, artificial intelligence has been on fire.

Midjourney, Stable Diffusion, ChatGPT... general-purpose AI products, with Office Copilot and New Bing among them, keep emerging and are beginning to change the way people work.

NVIDIA, the largest supplier of AI chips, has also staged a spectacular comeback: its stock price soared 189% during the year, hitting an all-time high of $439.90, and its market capitalization crossed the trillion-dollar mark.

This is not the first wave of enthusiasm for artificial intelligence.

IBM's "Deep Blue" chess program defeated Kasparov in 1997 and AlphaGo defeated Ke Jie in 2017, which were social hot spots at that time.

But this time is different.

This time, for the first time, AI has stepped out of games and into the workplace.

Artificial intelligence has gone from a source of entertainment to a productivity tool, or a productivity-disrupting tool, depending on which side of AI you stand on.

The footsteps of AI are getting closer and closer.

Back in 1968, just 12 years after artificial intelligence was formally established as a discipline, the United States released a landmark work in the history of science fiction film: 2001: A Space Odyssey.

The film's influence is so great that not only do filmmakers revere it, but many scientists were deeply shaped by it as well.

For example, Yann LeCun, one of the three giants of deep learning, has said many times that this film inspired him.

There is a scene in the film in which two astronauts talk in secret behind closed doors while the camera lingers on close-ups of their lips.

As the camera pulls back, the audience realizes that the ship's computer, HAL, has been reading their lips; later in the film, HAL turns on the crew and kills an astronaut.


The cold red light of HAL's camera eye in that moment left a deep impression on audiences.

This is the first vivid demonstration of the threat of artificial intelligence in the history of science fiction films.

This kind of theme is already very familiar to us today.

Looking back from there, you can see how quickly artificial intelligence has moved from concept, decoration and assistance into concrete everyday life, with a tangible impact on people.

Yet the warning remains just a warning, confined to appeals and science fiction films.

Compared with our enthusiasm for playing creator and iterating on AI at full speed, our response to AI's threats has been rather slow.

Below, let's take a look at the development of artificial intelligence from the beginning.

Birth

In the 1940s, information science exploded, driven by von Bertalanffy's systems theory, Wiener's cybernetics and Shannon's information theory.

Its achievements in computing and communications made people in every field feel that an era of machines taking over mental labor was coming.

At that point, however, the kind of machine intelligence that could speak, move and think like a human was still a scattered academic pursuit, not yet a discipline.

In the summer of 1956, McCarthy and Minsky organized a workshop on artificial intelligence at Dartmouth College, marking the birth of artificial intelligence as a discipline.

From then on, artificial intelligence entered a period of explosive development.

Artificial intelligence is an interdisciplinary field that spans philosophy, mathematics, physics, information science, computer science, mechanics and engineering, and even psychology, neuroscience, brain science and the life sciences.

Researchers from different backgrounds approach it from different angles and with different research ideas.

The two most notable lines are the symbolist school and the connectionist school.

The symbolist school advocates representing the real world abstractly with "symbols" and replacing the human brain's thinking and cognitive processes with logical reasoning and search.

In this view, as long as the researcher feeds the machine some initial facts, the machine can reason from predetermined logic and report its conclusions, and that deduction is what realizes artificial intelligence.
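To make that recipe concrete, here is a minimal sketch of "represent the world as symbols, then search" in the symbolist spirit. It is my own toy illustration, not anything from the history recounted here: a breadth-first search that solves the classic water-jug measuring puzzle from a handful of symbolic rules.

```python
from collections import deque

# Symbolist sketch: the "world" is a symbolic state (litres in a 4L and a 3L jug),
# and intelligence is exhaustive search over the legal rule applications.
def water_jug_bfs(goal=2, cap_a=4, cap_b=3):
    start = (0, 0)
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (a, b), path = queue.popleft()
        if a == goal or b == goal:
            return path                      # sequence of states reaching the goal
        successors = [
            (cap_a, b), (a, cap_b),          # fill either jug
            (0, b), (a, 0),                  # empty either jug
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),  # pour A into B
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),  # pour B into A
        ]
        for state in successors:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [state]))
    return None

print(water_jug_bfs())  # a shortest sequence of jug states ending with 2 litres in one jug
```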

Connectionism originated in a 1943 paper by McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity."

In this paper, the pair proposed a brain model that for the first time linked the biological brain to mechanical computation, suggesting that the brain's thought processes could be reproduced by a machine's logical operations.

You may have noticed the word "nervous" in the title.

That's right: this is where today's popular neural network algorithms originated.
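As an illustration (my own, not code from the 1943 paper), a McCulloch-Pitts unit simply sums weighted binary inputs and fires when the sum reaches a threshold; with suitable weights a single unit computes elementary logic gates.

```python
# Illustrative McCulloch-Pitts threshold unit: binary inputs, fixed weights,
# output 1 if the weighted sum reaches the threshold, else 0.
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold, single units implement basic logic gates.
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
OR  = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
NOT = lambda x:      mp_neuron([x],      [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```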

There are many branches within these schools, and researchers have made great achievements under the guidance of the two schools of thought.

But by comparison, the most outstanding, and the most legendary, is the connectionists' neural network.

Its story is full of ups and downs.

Perceptron

In artificial intelligence's earliest years, results came first, and in large numbers, from symbolism.

So much so that one expert, Herbert Simon, was optimistic enough to predict that "in twenty years, machines will be able to do everything a human can do."

But reality fell far short of the ideal.

As research progressed, it became clear that symbolism could succeed on closed problems by leaning on computing power and memory, but was often powerless against open-ended problems.

People began to abandon the illusion of deriving the whole world from a small set of axioms, and connectionism began to emerge.

In 1949, the neuropsychologist Donald Hebb distilled a set of rules for biological learning from his research on chimpanzees' emotions and learning, published in the book The Organization of Behavior.

In it, Hebb summarized how neurons learn in what is now called Hebb's rule, often paraphrased as "cells that fire together wire together."
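In modern notation the rule says a connection weight grows in proportion to the joint activity of the two neurons it links. A toy sketch, purely my own illustration:

```python
import numpy as np

# Toy Hebbian update: the weight on input x grows in proportion to the joint
# activity of input and output, dw = eta * y * x.
def hebbian_step(w, x, eta=0.1):
    y = w @ x                 # linear "post-synaptic" activity
    return w + eta * y * x    # strengthen connections that fired together

w = np.array([0.5, 0.5, 0.5])          # start with uniform weights
pattern = np.array([1.0, 0.0, 1.0])    # repeatedly presented input pattern
for _ in range(5):
    w = hebbian_step(w, pattern)
print(w)  # weights on the active inputs (1st and 3rd) keep growing
```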

As soon as the rule was proposed, it attracted attention in the artificial intelligence community.

One of those who took notice was Frank Rosenblatt (1928-1971), now known as the father of the perceptron.

Rosenblatt had studied psychology, but he was fascinated by how the brain works, and after earning his Ph.D. he went to work at the Cornell Aeronautical Laboratory.

In 1958, he designed a machine called a perceptron to mimic the workings of neural networks.


American scientist Frank M. Rosenblatt with the perceptron

The model looks like nothing more than a single flat layer of neurons, yet it could perform simple machine-vision and pattern-recognition tasks by learning from examples alone, without any hand-written program.

A whole new path to machine intelligence had emerged.
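As an illustration of the idea (a modern re-creation, not Rosenblatt's original program), the perceptron learning rule nudges the weights whenever a prediction is wrong, and it converges on any linearly separable task:

```python
import numpy as np

# Minimal perceptron: predict with a thresholded weighted sum, and on each
# mistake nudge the weights toward the correct answer (Rosenblatt's rule).
def train_perceptron(X, y, epochs=20, lr=1.0):
    w = np.zeros(X.shape[1] + 1)          # weights plus a bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[1:] @ xi + w[0] >= 0 else 0
            update = lr * (target - pred)
            w[1:] += update * xi
            w[0]  += update
    return w

# Learn logical OR, a linearly separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
print([(1 if w[1:] @ xi + w[0] >= 0 else 0) for xi in X])  # -> [0, 1, 1, 1]
```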

Two years later, Rosenblatt went further and built a working machine around his neural network ideas.

On June 23, 1960, the Mark-1 was introduced.

The perceptron-based neural network computer demonstrated to the American public how it could recognize English letters.

The Mark-1 caused a sensation in all walks of life, and Rosenblatt was in the limelight.

In 1962, Rosenblatt published Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, which collected his major research on perceptrons and neural networks and was for a time regarded by the connectionist school as its "Bible".

Rosenblatt's work caught the attention of the scientific community and won funding from the U.S. Air Force, which wanted to identify targets in aerial photographs, and from the Postal Service, which wanted help reading addresses on envelopes.

Unfortunately, the fame went a little to Rosenblatt's head.

He had once been a restrained man, but under the weight of so much attention he began to get carried away.

Soon his showmanship, combined with the way his projects crowded out other people's research funding, got him into trouble.

The first of these criticisms came from Minsky.


Marvin Minsky: The father of artificial intelligence, the first Turing Award winner in the field of artificial intelligence.

As mentioned earlier, Minsky is one of the organizers of the Dartmouth Conference, one of the founders of the discipline of artificial intelligence, and has a pivotal position in the field of artificial intelligence.

To take Rosenblatt down, Minsky wrote an entire book, Perceptrons: An Introduction to Computational Geometry (co-authored with Seymour Papert).

Despite the title, the book's main purpose is to expose the limitations of the perceptron.

In the book, Minsky proves with full academic rigor that Rosenblatt's perceptron has an inherent flaw: it cannot compute the XOR function.

We now know why the flaw is there: Rosenblatt's model is a single-layer neural network, and once more layers are added the problem can be solved.
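A single layer fails because XOR is not linearly separable, while two layers suffice. A hand-wired sketch (weights chosen by hand for illustration, not learned):

```python
# XOR is not linearly separable, so no single-layer perceptron computes it.
# Two layers suffice: XOR(x1, x2) = OR(x1, x2) AND NOT AND(x1, x2).
step = lambda s: 1 if s >= 0 else 0

def xor_two_layer(x1, x2):
    h_or  = step(x1 + x2 - 0.5)          # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)          # hidden unit 2: AND
    return step(h_or - h_and - 0.5)      # output: OR AND (NOT AND)

print([xor_two_layer(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```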

Minsky, with his keen mathematical instincts, sensed this as well, but he did not bother to push the derivation further.

A neural network's demand for computing power grows steeply with the number of layers, which made multilayer networks an impossible undertaking at the time.

What's more, Rosenblatt's training algorithm simply does not carry over to multilayer networks.

So Minsky delivered his verdict on the multilayer perceptron in the book: studying perceptrons of two or more layers is worthless.

The multilayer perceptron was thus sentenced to death by Minsky before it had even been explored in depth.

Minsky's book was rigorously argued and settled the battle at its root.

Rosenblatt's career went downhill.

On July 11, 1971, his 43rd birthday, Rosenblatt died when he "accidentally" fell into the water while boating on Chesapeake Bay.

No one knows exactly what happened; all that is known is that Rosenblatt had been depressed over the state of his career.

The first winter of artificial intelligence had arrived.

Expert system

Because of Minsky's enormous influence, not only was Rosenblatt abandoned by academia, but the neural network behind the perceptron also lost its standing and was even branded a pseudoscience.

Artificial intelligence languished for a decade.

After the perceptron was struck down, most researchers concentrated on symbolism.

And so in the 1980s, with the emergence of expert systems, artificial intelligence revived.

An expert system is an artificial intelligence system built on a knowledge base and heuristic search: it encodes human expertise so that the machine can solve problems in a specific domain. Its representative example is the MYCIN system.
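To give a flavor of the approach, here is a toy sketch of the if-then-rule style such systems use. The rules below are invented purely for illustration and are not MYCIN's actual knowledge base, which was far larger and attached certainty factors to its rules.

```python
# Toy expert-system style forward chaining over invented, illustrative rules.
RULES = [
    ({"fever", "elevated_white_cells"}, "suspected_infection"),
    ({"suspected_infection", "positive_blood_culture"}, "bloodstream_infection"),
    ({"bloodstream_infection", "gram_negative_stain"}, "suggest_gram_negative_coverage"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until nothing new is derived
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"fever", "elevated_white_cells", "positive_blood_culture", "gram_negative_stain"}
print(forward_chain(observed))
# derives suspected_infection, bloodstream_infection, suggest_gram_negative_coverage
```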

MYCIN was developed by a team of researchers at Stanford University to help doctors diagnose patients with bloodstream infections.

In a 1979 evaluation on 10 actual cases, MYCIN's diagnoses of blood infections were on par with human experts and better than the average general practitioner.

It was the first time an AI system had demonstrated expert-level or better performance on a meaningful task.

Besides MYCIN there were other expert systems, such as DENDRAL, also from Stanford University and led by Ed Feigenbaum.

DENDRAL was developed to help chemists determine the composition and structure of compounds based on information provided by mass spectrometers.

DENDRAL also saw real adoption: by the mid-1980s, hundreds of people were using it every day.

Even with these achievements, however, enthusiasm for artificial intelligence only partly recovered and never returned to the peak of the previous wave.

There were even practitioners embarrassed to mention their connection to artificial intelligence.

For example, when the British government launched an artificial intelligence initiative called the Alvey Programme in 1983, participants avoided the phrase "artificial intelligence" and spoke only of "knowledge-based intelligent systems".

Then an artificial intelligence project called Cyc came along and made things worse.

The Cyc project was founded by an ambitious researcher, Doug Lenat.

Lenat was so well regarded at the time that in 1977 he received the Computers and Thought Award, the most prestigious prize given to a young AI scientist.

Lenat believed that a vast, all-encompassing knowledge base could be the key to general artificial intelligence.

The project launched in 1984, when Cyc received funding from the Microelectronics and Computer Technology Corporation in Austin, Texas.

But it soon ran into the same problem symbolism had hit before: describing an ever-changing world with a fixed set of rules is not science, not even science fiction, but fantasy.

Cyc unsurprisingly failed.

With this blow, artificial intelligence entered its winter for the second time.

The renaissance of neural networks

Minsky's attack had nearly destroyed neural networks, and major universities all but cancelled the research direction.

Even in that environment, though, a very few people stuck to the path.

One of them was Geoffrey Hinton.


Hinton on the left, Yann LeCun on the right. LeCun was also a postdoctoral fellow under Hinton.

Hinton was born in Wimbledon, England, just after World War II, and his family had a very strong academic atmosphere.

Hinton's great-great-grandfather, George Boole, was the mathematician and philosopher who founded Boolean algebra. His great-grandfather was the mathematician and fantasy writer Charles Howard Hinton, whose concept of the "fourth dimension", including the "tesseract" he named, ran through the next 130 years of popular science fiction and reached its pop-culture peak in the Marvel superhero movies of the early 21st century. His father was the entomologist and Fellow of the Royal Society Howard Everest Hinton. A cousin, the nuclear physicist Joan Hinton, was one of the few women on the Manhattan Project...

So you might assume it was only natural for Geoffrey Hinton to devote himself to academia too...

You're wrong.

It's not that he didn't throw himself into academia; it's that he threw himself in with entirely the wrong posture.

Hinton is an outlier.

As an undergraduate at King's College, Cambridge, Hinton cycled through any number of subjects: physiology, chemistry, physics, architecture, philosophy, psychology and more.

Apart from psychology, the last of them, he finished none of these subjects, abandoning each one halfway.

Even more baffling, after graduating from Cambridge Hinton wanted nothing to do with academia and went to London to work as a carpenter.

Carpenter......

Hinton's father was exasperated: this child seemed to carry none of the family's genes at all.

What his father often told him was, "If you work hard enough, maybe when you're twice as old as I am now, you'll be half as good."

Of course, today it is Geoffrey Hinton's name that everyone knows, while Howard Everest Hinton is usually introduced simply as Geoffrey Hinton's father.

Hinton wasn't drifting; he was looking for answers.

As a child, Hinton overheard a conversation about how the brain stores information, and from then on he dreamed of studying the brain (the childhood hobbies of future greats really are that plain and simple).

He kept changing majors in college in search of answers about how the brain works.

Even when he was a carpenter in London, Hinton would go to the Essex Road Library in Islington every Saturday morning to teach himself how the brain works.

Here is the point: by shutting out distractions and studying on his own, Hinton astonishingly worked his way back into academia, publishing papers and attending academic conferences.

In 1972, Hinton gave up carpentry to pursue a doctorate at the University of Edinburgh, where he studied under the eminent chemist-turned-cognitive-scientist Christopher Longuet-Higgins (1923-2004) and chose neural networks as his topic.

But his trials were not over.

After the blow dealt by Minsky's Perceptrons, neural networks were widely regarded as a dead end, and even Hinton's supervisor, Longuet-Higgins, had given up on them.

As his supervisor, Longuet-Higgins naturally wanted Hinton to change research direction, but Hinton was so insistent that the two argued fiercely.

Fortunately, Longuet-Higgins was a true academic: although he had no faith in neural networks, he did not make things hard for Hinton, and in 1975, even though Hinton had yet to produce notable results, he agreed to award him his degree in artificial intelligence.

It can be said that Hinton chose a very lonely path.

Fortunately, he was used to loneliness; an ordinary person could hardly have held on.

It wasn't until 1986 that the turning point finally appeared.

Hinton and the psychologist David Rumelhart (1942-2011) published a paper in Nature entitled "Learning Representations by Back-propagating Errors."

In the pages of a top journal like Nature, Hinton reversed the damage done by Minsky's Perceptrons with a practical, workable method for training multilayer neural networks.
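As a minimal sketch of the idea (my own illustration, not the paper's code), backpropagation pushes the error backward through the layers, using the chain rule to obtain each weight's gradient. The toy network below learns XOR, the very function that sank the single-layer perceptron:

```python
import numpy as np

# A tiny 2-4-1 network trained by backpropagation (gradient descent on squared
# error with sigmoid units). Purely illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)            # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)            # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error through the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```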

Academia finally began to recognize the promise of multilayer neural networks.

Unfortunately, just as neural networks took this first step toward revival, they ran into artificial intelligence's second winter.

Deep Blue

Although Hinton's backpropagation algorithm vindicated neural networks, it only went as far as making people stop saying things like "neural networks are a dead end".

Because the computers of the day lacked the power to handle large training sets, and because convincing applications were scarce, neural networks were still regarded as unpromising.

So, after 1986, symbolism remained the main direction of development.

IBM's Deep Blue represents this phase.

In May 1997, IBM's Deep Blue chess program defeated then-chess world champion Kasparov, becoming the first computer to beat the chess world champion.

This was not the first time the two had faced each other.

In their first match, in February 1996, Kasparov beat Deep Blue 4:2.

Just a year later, however, Kasparov lost 2.5:3.5 to the upgraded Deep Blue.


The picture shows chess fans watching the second chess man-machine battle on television in New York on May 11, 1997.

Kasparov is recognized as the strongest chess player of all time.

After becoming world chess champion in 1985, he went unchallenged for 12 straight years, with a rating above 2,800 points, a height no one else had reached.

Kasparov had said that computers would not be able to beat a world champion until at least 2010.

The implication was that if artificial intelligence wanted to defeat humanity, it would have to wait until after he had left the chess scene, yet Deep Blue moved that date forward by 13 years.

After the game, IBM announced the retirement of Deep Blue.

This "human-machine war" marks the recognition of the ability of computer programs in complex intellectual games, and also makes people realize for the first time that the challenge of artificial intelligence to the field of human governance has arrived.

Deep Blue's success came mainly from two factors: heuristic search, and the enormous growth in computing power.
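Heuristic search here means, roughly, minimax game-tree search with pruning guided by a hand-tuned evaluation function. A generic sketch of minimax with alpha-beta pruning (purely illustrative; Deep Blue's real search ran in custom chess hardware and was far more elaborate):

```python
import math

# Generic minimax with alpha-beta pruning. `children(state)` enumerates legal
# moves and `evaluate(state)` is the heuristic score; both are placeholders
# standing in for a real game engine's move generator and evaluation function.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # the minimizer would never allow this line: prune
                break
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Tiny demo on an abstract game tree: leaves carry their own scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, -math.inf, math.inf, True,
                 children=lambda s: tree.get(s, []),
                 evaluate=lambda s: scores.get(s, 0))
print(best)  # -> 3 (branch "b" is cut off after b1, since it can do no better than 2)
```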

Onlookers therefore consoled themselves that Deep Blue's victory was mere brute-force calculation by a computer, nothing remarkable.

In fact they were wrong: computing power was certainly a factor in Deep Blue's victory, but so were its algorithms.

Over the following decade, games such as Japanese shogi and Chinese chess were also conquered by artificial intelligence; no supercomputer was needed, as the computing power of an ordinary smartphone was enough to defeat top masters of these games.

Go remained the only perfect-information game in which artificial intelligence could not defeat humans.

For machines, Go is hard in two main ways: first, it is extremely difficult to write an evaluation function that scores each move as winning or losing; second, Go is a game of intuition.

Compared with chess, Go is more like an art, and it embodies human intelligence more fully.

AI's challenge to humanity would continue, but this time the task fell to connectionism; symbolism could go no further for the time being.

Deep learning

In 2006, Hinton published two more papers, "Reducing the Dimensionality of Data with Neural Networks" and "A Fast Learning Algorithm for Deep Belief Nets."

These two papers demonstrated two advantages of deep learning.

The first is its representational power; the second, and more critical, is that deep learning makes it feasible to actually train deep neural networks, so machine learning with them becomes practical.
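The first paper shows, for instance, that a deep autoencoder can squeeze images through a narrow "bottleneck" layer and still reconstruct them, yielding a compact learned representation. A rough PyTorch sketch of that shape (layer sizes are illustrative, loosely following the paper's MNIST setup; this is not the authors' code, and their layer-wise pretraining is omitted):

```python
import torch
import torch.nn as nn

# Rough sketch of a deep autoencoder in the spirit of the 2006 Science paper:
# an encoder compresses a 784-pixel image down to a 30-dimensional code, and a
# mirror-image decoder reconstructs the image from that code. Layer sizes are
# illustrative, and the original layer-wise pretraining is omitted.
class DeepAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 1000), nn.Sigmoid(),
            nn.Linear(1000, 500), nn.Sigmoid(),
            nn.Linear(500, 250), nn.Sigmoid(),
            nn.Linear(250, 30),                 # the low-dimensional code
        )
        self.decoder = nn.Sequential(
            nn.Linear(30, 250), nn.Sigmoid(),
            nn.Linear(250, 500), nn.Sigmoid(),
            nn.Linear(500, 1000), nn.Sigmoid(),
            nn.Linear(1000, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = DeepAutoencoder()
x = torch.rand(16, 784)                          # a dummy batch of "images"
reconstruction, code = model(x)
loss = nn.functional.mse_loss(reconstruction, x) # train by minimizing reconstruction error
print(code.shape, loss.item())                   # torch.Size([16, 30]) and a scalar loss
```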

As everyone knew, the main problem with expert systems is that no matter how large the knowledge base grows, it cannot keep up with a changing reality, so in the face of new problems it shows none of the flexibility of a human mind.

Now, with the help of deep learning, connectionism made genuine machine learning possible, and neural networks gave artificial intelligence a way forward.

Deep learning's inaugural year had arrived, and neural networks entered a stage of rapid development.

In 2012, after 6 years of preparation, Hinton was ready to strike again.

In the field of artificial intelligence there is a competition called the ImageNet Large Scale Visual Recognition Challenge.

ImageNet is a large-scale image-recognition project created jointly by Fei-Fei Li, a Chinese-American scientist at Stanford University, and Kai Li, a member of the U.S. National Academy of Engineering and a Princeton professor sometimes called "the richest Chinese professor".

Since 2010, ImageNet has held the "ImageNet Large-Scale Visual Recognition Challenge" every year to see which program can classify and detect objects and scenes with the highest accuracy.

This confrontational event quickly became an arena for teams to show off their strength.

In October 2012, Hinton entered the ImageNet competition with AlexNet, a deep-learning neural network designed by his two students, Ilya Sutskever and Alex Krizhevsky.


Ilya (left), Alex (center) and Hinton (right).

The academic and business worlds suddenly boiled over.

In past editions the gap between champion and runner-up had been only one or two percentage points, but AlexNet's accuracy was more than ten percentage points higher than that of the second-placed University of Tokyo team!

Deep learning shone in this battle.
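For a sense of what such a network looks like, here is a compact PyTorch sketch of an AlexNet-style convolutional network (a simplified single-device version; the original split computation across two GPUs and included details such as local response normalization, omitted here):

```python
import torch
import torch.nn as nn

# Simplified AlexNet-style network: stacked convolution + ReLU + pooling stages
# followed by fully connected layers. The layer sizes follow the commonly cited
# AlexNet configuration, but this is an illustrative sketch, not the original
# two-GPU implementation.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                      # 1000 ImageNet classes
)

x = torch.rand(1, 3, 224, 224)                   # one dummy 224x224 RGB image
print(alexnet_like(x).shape)                     # -> torch.Size([1, 1000])
```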

There is an anecdote from this competition: the Google Brain team secretly entered as well, led directly by Google's technical guru Jeff Dean, and it too was beaten by Hinton's team.

The result stunned Dean and laid the groundwork for Google to recruit Hinton soon afterwards, at any cost.

(To be continued)

Click on "Albums" and welcome to follow Tago FRM.
