
What is artificial general intelligence? The evolution of the concept of AGI in the AI field

Author: Chinese Society of Artificial Intelligence

Reposted from ScienceAI

Original title: Science | What exactly is general artificial intelligence, and the evolution of the concept of AGI in the AI world


Compiled by | Kaixia

On March 21, Professor Melanie Mitchell of the Santa Fe Institute (SFI) published an article in Science titled "Debates on the nature of artificial general intelligence."

The author traces how the concept of AGI has changed over the history of AI and discusses how differently AI practitioners view intelligence compared with those who study biological intelligence. So what exactly is the AGI that everyone is talking about?


Link to paper: https://www.science.org/doi/10.1126/science.ado7069

ScienceAI has edited and organized the original article without changing its meaning:

The term "artificial general intelligence" (AGI) has become ubiquitous in the current discussion about AI.

OpenAI says its mission is to "ensure that artificial general intelligence benefits all of humanity."

DeepMind's corporate vision statement declares that "artificial general intelligence ... has the potential to be one of the greatest changes in history."

AGI is highlighted in both the UK government's National AI Strategy and the US government's AI document.

Microsoft researchers recently claimed to see "sparks of AGI" in GPT-4, a large language model.

Current and former Google executives have also declared that "AGI has arrived."

The question of whether GPT-4 is an "AGI algorithm" is at the heart of Elon Musk's lawsuit against OpenAI.


Given how ubiquitous AGI is in discussions among businesses, governments, and the media, one could be forgiven for assuming that the meaning of the term is settled and agreed upon. The opposite is true: what AGI means, or whether it means anything coherent at all, is hotly debated within the AI community.

The meaning and possible consequences of AGI have become more than an academic debate over obscure terminology. The world's largest technology companies and governments are making important decisions based on their views of AGI.

But digging deeper into the speculation about AGI reveals that many AI practitioners have a very different view of the nature of intelligence than those who study human and animal cognition – a difference that is important for understanding the current state of machine intelligence and predicting its possible future.

The initial goal in the field of artificial intelligence was to create machines with general-purpose intelligence comparable to that of humans.

Early AI pioneers were optimistic. In 1965, Herbert Alexander Simon predicted in his book The Shape of Automation for Men and Management that "machines will be capable, within twenty years, of doing any work a man can do." In 1970, Life magazine quoted Marvin Minsky: "In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight."

The book Artificial General Intelligence (2008).

These optimistic predictions did not materialize. In the decades that followed, the only successful AI systems were "narrow" rather than general: they could perform only a single task or a limited range of tasks (for example, the speech-recognition software on a phone can transcribe your dictation but cannot respond to it intelligently).

The term "AGI" was coined at the beginning of the 21st century to recapture the original ambitions of AI pioneers, seeking a renewed focus on "trying to study and reproduce intelligence as a whole in a domain-independent way".

Until recently, this quest remained a rather obscure corner of the AI field. More recently, however, leading AI companies have made achieving AGI their explicit top goal, while AI "doomers" have named the existential threat posed by AGI as their number-one fear.

Many AI practitioners have speculated about the timeline for AGI, with some predicting that "there is a 50% chance that we will have AGI by 2028." Others question the very premise of AGI, calling it vague and ill-defined. One prominent researcher tweeted, "The whole concept is unscientific, and people should be embarrassed to even use the term."

While early AGI proponents believed that machines would soon take on all human activities, researchers learned the hard way that it is far easier to build an AI system that can beat you at chess or answer your search queries than to build a robot that can fold laundry or fix the plumbing.


The definition of AGI has been adjusted accordingly to cover only so-called "cognitive tasks." DeepMind co-founder Demis Hassabis defines AGI as a system that "should be able to accomplish almost any cognitive task that a human can do," while OpenAI describes it as "highly autonomous systems that outperform humans at most economically valuable work," where "most" quietly excludes the tasks requiring physical intelligence that robots may not be able to perform for some time.

The concept of "intelligence" in AI, cognitive or otherwise, is often framed in terms of an individual agent optimizing for rewards or goals. One influential paper ("Universal Intelligence: A Definition of Machine Intelligence") defines general intelligence as "an agent's ability to achieve goals in a wide range of environments."


Related paper link: https://link.springer.com/article/10.1007/s11023-007-9079-x
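
For readers who want the formal statement: roughly speaking, and as a sketch of the paper's formalism rather than an exact reproduction, Legg and Hutter define an agent's universal intelligence as a complexity-weighted sum of its expected rewards across all computable environments:

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Here \pi is the agent, E is the set of computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V_\mu^\pi is the expected cumulative reward the agent earns in \mu.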

Another paper ("Reward is enough") argues that "intelligence, and its associated abilities, can be understood as subserving the maximization of reward." Indeed, this is how AI works today: for example, the program AlphaGo is trained to optimize a specific reward function ("win the game"), while GPT-4 is trained to optimize another ("predict the next word in a phrase").


Related paper link: https://doi.org/10.1016/j.artint.2021.103535
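
As a purely illustrative sketch of the two kinds of objective just mentioned, not code from any real system (the function names and numbers below are hypothetical), a terminal game reward and a next-word prediction loss might look like this:

```python
import math

def game_reward(winner: str, agent: str) -> float:
    """AlphaGo-style terminal reward: +1 if the agent won the game, -1 otherwise."""
    return 1.0 if winner == agent else -1.0

def next_token_loss(predicted_probs: dict, true_next_word: str) -> float:
    """GPT-style training objective: the negative log probability the model
    assigned to the word that actually came next (lower is better)."""
    p = predicted_probs.get(true_next_word, 1e-12)  # guard against log(0)
    return -math.log(p)

# Toy usage with made-up values.
print(game_reward(winner="agent", agent="agent"))        # 1.0
print(next_token_loss({"cat": 0.7, "dog": 0.3}, "cat"))  # ~0.357
```

Training, in both cases, amounts to adjusting the system's parameters so that the first number tends upward over many games and the second tends downward over many passages of text.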

This conception of intelligence has led some AI researchers to speculate that once an AI system reaches AGI, it will recursively improve its own intelligence by applying its optimization abilities to its own software, quickly becoming "thousands or millions of times smarter than us" and thereby rapidly attaining superhuman intelligence.

"Before we can share the planet with super-intelligent machines, we must develop a science to understand them. Otherwise, they'll be in control," writes James Barrat in his book Our Final Invention: Artificial Intelligence and the End of the Human Era.

This focus on optimization has led some in the AI community to worry that a "misaligned" AGI could veer wildly away from its creators' goals, putting humanity at existential risk.

In his 2014 book Superintelligence, the philosopher Nick Bostrom proposed a now-famous thought experiment: humans give a super-intelligent AI system the goal of optimizing paperclip production. Taking the goal literally, the AI system uses its genius to commandeer all the resources on the planet and turn everything into paperclips. Of course, the humans did not want the Earth, and humanity itself, destroyed in the service of making more paperclips, but they neglected to say so in the instructions.

AI researcher Yoshua Bengio offered his own thought experiment: "We might ask an AI to fix climate change, and it might design a virus that kills vast numbers of people, because our instructions were not clear enough about what counts as harm, and humans are in fact the main obstacle to fixing the climate crisis."

Superintelligence by Nick Bostrom (2014).

This speculative view of AGI (and of "superintelligence") differs from the view held by those who study biological intelligence, especially human cognition. Although cognitive science has no strict definition of "general intelligence," and no consensus on the extent to which humans or any other kind of system can possess it, most cognitive scientists agree that intelligence is not a quantity that can be measured on a single scale or dialed arbitrarily up or down; rather, it is a complex integration of general and specialized capacities that are, for the most part, adaptations to a particular evolutionary niche.

Many who study biological intelligence also doubt that the so-called "cognitive" aspects of intelligence can be separated from its other facets and captured in a disembodied machine. Psychologists have shown that important aspects of human intelligence are grounded in a person's particular bodily and emotional experience. There is also evidence that individual intelligence depends heavily on participation in social and cultural environments. The abilities to understand, coordinate with, and learn from other people may matter far more to a person's success in achieving goals than any individual capacity for optimization.

Moreover, unlike the hypothetical paperclip-maximizing AI, human intelligence is not centered on optimizing a fixed goal. Rather, greater intelligence is precisely what gives us deeper insight into others' intentions and into the likely consequences of our own actions, and what lets us modify those actions accordingly.

As Katja Grace has argued, consuming the universe as a sub-step would be utterly absurd for almost any human goal, so why should we expect an AI's goals to be any different?


The specter of machines improving their own software to increase their intelligence by orders of magnitude also departs from the biological view of intelligence as a highly complex system that extends beyond an isolated brain. If human-level intelligence requires a complex integration of distinct cognitive capacities, together with social and cultural scaffolding, then a system's degree of "smartness" is unlikely to reside solely at the level of its software, and the system will not be able to boost it simply by rewriting that software, just as we humans cannot easily engineer our brains (or our genes) to make ourselves smarter. We have, however, collectively increased our effective intelligence through external technologies such as computers and through cultural institutions such as schools, libraries, and the Internet.

The meaning of AGI, and whether it refers to anything coherent, remains a matter of debate. Moreover, speculation about what AGI-level machines would be capable of rests largely on intuition rather than scientific evidence. But how much credence should such intuitions be given? The history of AI has repeatedly overturned our intuitions about intelligence.

Many early AI pioneers believed that machines programmed with logic would capture the full range of human intelligence. Other scholars predicted that for a machine to beat a human at chess, translate between languages, or carry on a conversation, it would need human-level general intelligence. These predictions, too, proved wrong.

At every step in AI's evolution, human-level intelligence has turned out to be more complex than researchers expected. Will current speculation about machine intelligence prove equally wrong? And can we develop a more rigorous, general science of intelligence to answer these questions?

It is unclear whether such a science would look more like the science of human intelligence or more like astrobiology, which speculates about what life on other planets might be like. Making predictions about things that have never been observed, and may not even exist, whether extraterrestrial life or superintelligent machines, requires theories grounded in general principles.

Ultimately, the meaning and consequences of "AGI" will not be resolved through media debates, lawsuits, or our intuition and speculation, but through a long-term scientific examination of these principles.
