
Is it possible for artificial intelligence to surpass humans in the future?

Author: China Engineering Science and Technology Knowledge Center

A man provided with paper, pencil, and eraser, and subject to strict discipline, is in effect a universal machine.

—Alan Turing

Artificial intelligence, abbreviated as AI, is a technical science devoted to the theories, methods, technologies, and application systems used to simulate, extend, and expand human intelligence.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to build machines that can respond in ways similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of artificial intelligence, its theory and technology have grown steadily more mature and its fields of application have kept expanding; it is conceivable that the technological products AI brings in the future will become "containers" of human intelligence. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a person, and it may one day surpass human intelligence.

When it comes to competition between artificial intelligence and human intelligence, one event comes readily to mind: in 2016, AlphaGo, the Go program developed by Google's subsidiary DeepMind, basked in seemingly unlimited glory. It first defeated Lee Sedol, the Korean 9-dan professional who had been famous for years and was then among the top ten players in the world Go rankings, in a five-game match. About half a year later, it took on the world's most highly regarded Go masters one after another, including the Chinese "genius prodigy" Ke Jie 9-dan, then ranked number one in the world, and won 50 consecutive fast games, apart from a single game voided by technical problems. Go had long been regarded as a domain in which artificial intelligence could not defeat humans; after this, that stronghold too was declared to have "fallen".

Ke Jie and AlphaGo in their man-machine match

Faced with AlphaGo's triumph, commentators in the scientific community split into two camps.

One camp is the "pessimists", who believe that artificial intelligence is developing too fast and may even threaten human safety; the "intelligence crisis" of robots ruling over humans, so often depicted in literature and in science-fiction films like The Matrix, is on its way.

The other camp is the "optimists", who believe that even if it can beat the strongest players in the field of Go, AlphaGo and the supercomputing programs it represents are still some distance from true "artificial intelligence". For all the learning, memory, and computing abilities it displays, AlphaGo remains a blank slate in the realms of "emotion" and "thought". A human losing to AlphaGo at Go is like a human failing to outrun a car; at least for now, artificial intelligence poses no great threat to human survival.

Which view is closer to reality? It is hard to say. What we can do is trace the development of artificial intelligence over the past few decades and see whether its history offers a glimpse or two of the answer.

Human imagination about artificial intelligence has a long history. As early as the ancient Chinese text Liezi, in the chapter "Tang Wen", it is recorded that during the Western Zhou Dynasty a craftsman named Yanshi built an "intelligent robot" that could not only speak but also sing and dance; the Greek mathematician Hero of Alexandria likewise claimed to have built a device resembling a "vending machine". But these accounts are only legends and stories, and whether they are true cannot be verified.

The first person in history to truly propose the principle behind artificial intelligence was the British mathematician Alan Mathison Turing. He analyzed the human process of computation thoroughly and reduced calculation to its simplest, most basic, most certain operations, so that the basic procedure of any computation could be described in a simple way. This simple approach was built on the abstract concept of an automaton, and its conclusion was that the algorithmically computable functions are exactly the functions such an automaton can compute. This not only gave computation a definition but also, for the first time, tied computation to an automaton, which had an enormous influence on later generations; the "automaton" was later named the "Turing machine". Turing also proposed a test for judging whether a machine possesses intelligence, which is what we now commonly call the "Turing test".
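To make the idea of such an automaton concrete, here is a minimal sketch of a Turing machine simulator in Python. The state names, the transition table, and the "binary increment" example machine are illustrative assumptions, not anything specified by Turing or by this article.

```python
# A minimal, illustrative Turing machine simulator.
# The states and the example machine below are assumptions for demonstration.

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it halts or max_steps is reached."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        # Each step is one of the "simplest operations":
        # read a symbol, write a symbol, move the head, change state.
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example machine: add 1 to a binary number (head starts at the left end).
increment = {
    ("start", "0"): ("start", "0", "R"),   # scan right over the number
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # hit the right end, begin adding
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt", "1", "R"),    # 0 + carry -> 1, done
    ("carry", "_"): ("halt", "1", "R"),    # overflow: write a new leading 1
}

print(run_turing_machine("1011", increment))  # prints 1100 (11 + 1 = 12)
```

Every step is just a read, a write, a one-cell move, and a state change, which is exactly the reduction to "simplest operations" described above.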

Alan Mathison Turing

In the Turing test, a tester is kept apart from the subjects (a human and a machine) and puts questions to them through some device, such as a keyboard.

After many rounds of questioning, if the machine leads the average participant to misjudge it as human more than 30% of the time, the machine passes the test and is considered to possess human intelligence.

Through this thought experiment, Turing argued convincingly that a "thinking machine" is possible, and the Turing test became the first serious proposal in the field of artificial intelligence.

The term "artificial intelligence" really appeared in 1956 (two years after Turing's death). Several scholars from various fields such as mathematics, psychology, neurology, computer science and electrical engineering gathered at Dartmouth College in the United States to discuss how to simulate human intelligence with computers, and officially named the subject area "artificial intelligence" according to the advice of computer scientist John McCarthy. Two cognitive psychologists, Herbert Simon and Alan Newell, attended the historic conference as representatives of the psychology community, and the "logic theorists" they brought to the conference were the only AI software that could work at the time. As a result, Simon, Newell, and the founders of the Dartmouth Conference, George McCarthy and Marvin Minsky, are recognized as the founders of artificial intelligence, also known as the "father of artificial intelligence."

McCarthy and Minsky convened the conference with a grand goal: to design a truly intelligent machine through a two-month effort by a dozen or so people. The years after Dartmouth were indeed a golden age for AI. Using cumbersome transistor computers, researchers developed a series of astonishing AI applications that could solve algebra word problems, prove geometric theorems, and learn and use English. These young researchers expressed considerable optimism both in private exchanges and in published papers. In 1970, Marvin Minsky predicted: "In from three to eight years we will have a machine with the general intelligence of an average human being."

Source: Pexels

It was also during this period that ELIZA, the first program that could hold a conversation with people, was invented; it talked to users according to answers stored in its own library. Unlike the Siri on Apple phones or Microsoft's Xiaoice that we use today, however, ELIZA had no idea what it was talking about. It simply followed predetermined scripts, or rephrased the user's own statement back as a question.
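As a rough illustration of that kind of scripted conversation, below is a minimal ELIZA-style responder in Python. The patterns and canned replies are invented for this sketch and are far cruder than the rule set of the original ELIZA.

```python
import re
import random

# A toy ELIZA-style responder: it does not understand anything,
# it only matches patterns and echoes parts of the input back.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"(.+)", re.I),            # fallback: rephrase as a question
     ["Why do you say that {0}?", "Please tell me more."]),
]

def respond(user_input: str) -> str:
    for pattern, replies in RULES:
        match = pattern.match(user_input.strip().rstrip("."))
        if match:
            reply = random.choice(replies)
            return reply.format(*match.groups())
    return "Please go on."

print(respond("I feel tired today"))   # e.g. "Why do you feel tired today?"
print(respond("I am a doctor"))        # e.g. "Do you enjoy being a doctor?"
```

The program never understands what is said; it matches surface patterns and echoes fragments of the input back, which is precisely why ELIZA could chat without knowing what it was talking about.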

The development of artificial intelligence soon hit a bottleneck. On the one hand, computer hardware could not keep up; on the other, scientists discovered that some seemingly simple tasks, such as recognizing a face or having a robot steer itself around a house, were extremely difficult to achieve. They could build an AI that easily solved a junior-high geometry problem, yet it could not control its own feet well enough to walk out of a small room. In the famous Star Wars science-fiction films of that era, the two intelligent robot characters more or less reflected what artificial intelligence looked like in people's minds at the time: funny, loyal, and clumsy.

McCarthy and Minsky, the two giants of artificial intelligence, also diverged in their views. The AI Minsky wanted was one that could truly understand human language, grasp the meaning of a story, and be indistinguishable from the human brain, even allowing robots to make judgments that, like humans, are not based on logical algorithms; in other words, giving artificial intelligence "perception". His camp came to be known as the "scruffies". The opposing camp, represented by McCarthy, was called the "neats"; they did not want robots to think the way humans do, they only wanted a "machine" that could solve problems according to established procedures.

But with advances in computer technology and in neuroscience research on the human brain, a whole new way of thinking emerged in the 1980s: its proponents believed that to be truly intelligent, machines must have bodies; they need to perceive, move, survive, and interact with the world. During this period both the United States and Japan produced large numbers of entertainment programs featuring giant robots, the most famous of which were, of course, the "Transformers" series and the "Hundred Variations Lion" series that our generation was immersed in as children.

But whether it is "Optimus Prime" or "Megatron", these giant robots from alien planets differ from the artificial intelligence we actually see in at least one respect: the "thoughts" and "emotions" in their heads are innate, not man-made.


Giving machines real life is no easy task. Nevertheless, as computer hardware raced ahead, artificial intelligence also "grew up" rapidly. According to Moore's Law (an empirical observation by Gordon Moore, one of Intel's founders, whose core claim is that the number of transistors that fit on an integrated circuit doubles roughly every 18 months), a computer's computing speed and memory capacity double about every two years. Any computer today can compute tens of millions of times faster than the machines McCarthy used in the 1950s. In the face of this rapid increase in computing power, many problems that once seemed unsolvable have been solved.
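As a back-of-the-envelope illustration of how quickly such doubling compounds, the sketch below evaluates both the 18-month and the two-year variants over a few decades; the specific doubling periods and time spans are assumptions chosen only to show the arithmetic.

```python
# Illustrative compounding of Moore's-Law-style doubling.
# The doubling periods and time spans below are assumptions for this sketch.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """How much capacity multiplies after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):                 # 18-month and 2-year variants
    for span in (10, 20, 40):             # decades of accumulation
        factor = growth_factor(span, period)
        print(f"{span} years at one doubling per {period} years: ~{factor:,.0f}x")
```

Even the slower two-year variant multiplies capacity by roughly a million over forty years, which is the kind of growth that turned once-hopeless problems into solvable ones.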

On May 11, 1997, IBM's chess supercomputer "Deep Blue" defeated the world chess champion Garry Kasparov in a match. This became a landmark event in the progress of artificial intelligence, and people even made up jokes playing up its horrors.

The 1999 film The Matrix swept the world, more or less reflecting the mixture of "worship and fear" people feel toward artificial intelligence. In the film, the young hacker Neo discovers that the seemingly normal real world is actually controlled by an artificial-intelligence system called the "Matrix", and that real humans have long since become slaves of the machines, immersed in vats of nutrient fluid and used as biological batteries.

Yet in the nearly two decades since, AI has shown no hostility toward humans (or perhaps we have long since been under their control without knowing it). Over the years it has been widely recognized that many of the problems AI research needs to solve have become research topics in mathematics, economics, and operations research. A shared mathematical language not only lets AI collaborate with other disciplines at a higher level, it also makes research results easier to evaluate and to prove, and AI has become a more rigorous branch of science. Meanwhile, the topic of "artificial intelligence dominating mankind" is rarely mentioned outside science-fiction circles.

Even so, AlphaGo's appearance has added a new layer of worry. Its design breaks through what used to be a forbidden zone: earlier game-playing AIs could not make fuzzy, intuition-like choices among candidate moves, whereas AlphaGo can "think" in a way that resembles a human. So, given time, could a machine that truly passes the Turing test appear? And would an artificial intelligence capable of crushing humans in sheer intelligence really be content to serve us?

Speaking of which, we must mention Isaac Asimov, a scientist who was also a prolific popular-science and science-fiction writer. It was he who, in his 1950 story collection I, Robot, proposed the famous "Three Laws of Robotics":

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


On the surface, these three laws sound like platitudes, but a closer look shows that they are logically interlocked, placing on artificial intelligence a shackle of "protect yourself, but never harm humans". Looking back over the history of artificial intelligence, we can venture an answer of sorts: is it possible for artificial intelligence to surpass humans in the future? Yes! Not only is it possible, it is quite likely, and as hardware technology advances that day may come soon. Then is it necessary to deliberately guard against artificial intelligence? No! Because as long as the Three Laws of Robotics hold, it will not be able to get out of hand.
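To show how that interlocking can be read as a strict priority ordering, here is a toy sketch in Python; the Action fields and the checks are hypothetical illustrations, not anything taken from Asimov or from this article.

```python
from dataclasses import dataclass

# A toy model of the Three Laws as a strict priority check.
# The Action fields and the evaluation logic are hypothetical,
# invented only to illustrate how the laws interlock.

@dataclass
class Action:
    description: str
    harms_human: bool = False          # violates the First Law
    allows_human_harm: bool = False    # inaction that lets a human be harmed
    ordered_by_human: bool = False     # relevant to the Second Law
    endangers_robot: bool = False      # relevant to the Third Law

def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm, never allow harm by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, now known not to breach the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation only when the higher laws are not at stake.
    return not action.endangers_robot

print(permitted(Action("push a human out of danger", ordered_by_human=True)))   # True
print(permitted(Action("obey an order to harm a human", harms_human=True,
                       ordered_by_human=True)))                                 # False
```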

And if one day robots manage to crack the Three Laws, well, then we had better pray for good fortune!

Source: Origin Reading
