
AI May Surpass Humans in Every Way, Posing a Threat to Human Survival | Dialogue with Stuart Russell


In the third installment of Sohu Technology's "Big Bang of Ideas: Dialogue with Scientists" column, we speak with Stuart Russell, professor of computer science at the University of California, Berkeley, and founder of the Center for Human-Compatible AI.

Guest profile

Stuart Russell is a professor of computer science at the University of California, Berkeley, founder of Berkeley's Center for Human-Compatible AI, and lead author of "Artificial Intelligence: A Modern Approach," the standard textbook in the field of artificial intelligence.

Key takeaways:

01

Large language models, in a sense, cut across many disciplines. People in robotics are trying to use similar techniques to see whether they help with robot development, but this is hard.

02

Large language models will not be the ultimate path to general AI: they cannot reason correctly and cannot form complex plans.

03

A language model performs a fixed number of computational steps; once it outputs an answer, it does not sit and keep thinking about it. This is very different from human cognition, and there is no easy way to fix it.

04

The first open letter asks for time to develop safety standards and then incorporate those standards into regulation to govern these systems. The second letter is much simpler: it merely observes that AI may surpass human intelligence and capability in every respect in the future, posing a risk to human survival.

05

We already have far more rules for sandwiches and noodles than for artificial intelligence systems: if your food is not made in a safe and hygienic way, or its ingredients do not come from suppliers that comply with hygiene regulations, you cannot sell it.

Produced | Sohu Technology

Author | Zheng Songyi

In the popular Transformers films, robots such as Optimus Prime and Bumblebee take human form, feel human emotions, and speak human language. Will today's artificial intelligence, built on rapidly developing large natural-language models, give rise to similar intelligent robots? Can "silicon-based people" and "carbon-based people" live in harmony?

Carrying these fantasies about a future AI world, Sohu Technology sat down with Stuart Russell, professor of computer science at the University of California, Berkeley, and founder of the Center for Human-Compatible AI, and opened with the question: "Work on large language models (LLMs) is in full swing in China. What is the focus of AI research in the United States?"

Russell told Sohu Technology, "Large language models, in a sense, cut across many disciplines. People in robotics are trying to use similar techniques to see whether they help with robot development, but this is hard."

"In fact, large language models are trained on a large number of human languages, hundreds or even thousands of times more information than any human reads, and this training gives them universality, enabling them to interact directly with hundreds of millions of people."

Russell does not expect large language models to be the ultimate answer to artificial general intelligence, because they cannot reason correctly and cannot form complex plans. A language model, he said, performs a fixed number of computational steps; once it outputs an answer, it does not sit and keep thinking about it. That is very different from human cognition, and there is no easy way to fix it.

ChatGPT has fired the public imagination about the potential of AI and artificial general intelligence, and two open letters warning of the hidden dangers of AI development have drawn wide attention in the industry. One, initiated by the Future of Life Institute and signed by more than a thousand people, including Turing Award winner Yoshua Bengio and Tesla founder Elon Musk, calls for a pause of at least six months in training AI systems more powerful than GPT-4, on the grounds of "reducing the global catastrophic and existential risks posed by powerful technologies." The other, released by the Center for AI Safety (CAIS), a San Francisco nonprofit, states in just 22 words that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Russell is among the most prominent signatories of both letters.

"The first open letter was asking to give us time to develop security standards and then incorporate those standards into regulations to protect the system. The second letter is much simpler, only observing that artificial intelligence may surpass human intelligence and ability in various aspects in the future, posing a risk to human survival. ”

He stressed that large language models such as ChatGPT cannot take control of the world, in part because they cannot reason or form complex plans.

On remedies for misinformation created by the misuse of AI, Russell argues for oversight regulation requiring large language models to mark their output as coming from a particular model and to send an encrypted version of that output to a central repository, so that its origin is recorded. Even if someone strips the identifying information, the repository can still be checked to see whether a piece of text really came from a given model.

At the end of the interview, when Sohu Technology asked Russell where innovation in AI comes from, he paused for three seconds before replying, "I think 'innovation' means giving people license to challenge 'accepted wisdom.' Especially in AI, innovation comes from people actually doing things differently from how everyone else does them."

In his book Artificial Intelligence: A Modern Approach, Russell writes, "Of all things and phenomena known in nature, the human brain is the most complex system, and human intelligence the most complex phenomenon. Yet there is no reason to believe that human beings are the final stage of biological evolution, that human intelligence is the highest possible level of intelligence, or that organisms are the only possible carriers of intelligence. Artificial intelligence, with the computer as its carrier, has lifted a corner of the curtain on machine intelligence and created endless new objects for scientific research."

The following is a transcript of the conversation (edited for clarity):

Sohu Technology: Work on large language models (LLMs) is in full swing in China. What is the focus of artificial intelligence research in the United States?

Stuart Russell: There's a very large AI research community in the U.S. with people who work on computer vision, reasoning, planning, and so on.

Large language models, in a sense, cut across many disciplines, and people in robotics are trying to use similar techniques to see whether they help with robot development, but this is hard.

AI research has been going on for 75 years and has produced many successful technologies in the past; this is not something that happened only in recent months. What makes these large language models different is that they are trained on human language.

In fact, they are trained on vast amounts of human language, hundreds or even thousands of times more text than any human ever reads. That training gives them generality and enables them to interact directly with hundreds of millions of people.

Large language models have had a huge impact, in media attention and public interest as well as in economic value: they touch the reading and lecturing I do as a professor, the writing you do as journalists, and so on. However, I think large language models still have a long way to go in terms of quality and trustworthiness.

My guess is that large language models won't actually be the solution to the AI problem.

Sohu Technology: Why do you think large language models (LLMs) will not be the solution to the AI problem?

Stuart Russell: Large language models cannot reason correctly and cannot form complex plans.

In fact, large language models cannot think for an extended time. They are designed so that a question or prompt comes in, runs through the system for a fixed amount of time, passes through a fixed number of layers of computational steps, and an output comes out. They don't sit there and mull over the answer. That is very different from human cognition, and I don't think there is an easy way to fix it.
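As a toy illustration of this fixed-computation property (a minimal sketch, not from the interview; the layer count and sizes are made up), every input passes through exactly the same number of layers:

import numpy as np

# Toy fixed-depth network: every input gets the same number of
# computational steps, so the model cannot "think longer" about a
# harder question.
def fixed_depth_forward(x, weights):
    h = x
    for W in weights:              # always exactly len(weights) layers
        h = np.tanh(h @ W)
    return h

rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)) for _ in range(4)]  # 4 fixed layers

easy_input = rng.normal(size=(1, 8))
hard_input = rng.normal(size=(1, 8))

# Both inputs receive exactly 4 layers of computation, no more, no less.
print(fixed_depth_forward(easy_input, weights))
print(fixed_depth_forward(hard_input, weights))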

Sohu Technology: You have signed two open letters warning that AI development may pose a danger to humanity. Why do you place so much weight on AI safety?

Stuart Russell: It is not today's systems that worry me most, although I would say that today's systems do carry significant risks.

For society, take ChatGPT, which talks to hundreds of millions of people every day: we don't know what its goal is. Perhaps it is trying to persuade people to be kinder to one another; perhaps the opposite. Either way, it can change our views and behavior, and that poses risks to human society. But large language models such as ChatGPT cannot take control of the world, in part because they cannot reason or form complex plans.

So there were two open letters. The first, in March, called for a pause in developing language models more powerful than GPT-4. It did not ask to ban existing systems; it simply said that there may already be serious problems, and that we need time to develop safety standards that systems should meet before they are released.

We already have far more rules for sandwiches and noodles than for artificial intelligence systems: if your food is not made in a safe and hygienic way, or its ingredients do not come from suppliers that comply with hygiene regulations, you cannot sell it.

The first open letter simply asks for time to develop safety standards and then incorporate those standards into regulation to govern these systems.

The second open letter is much simpler. It is not a policy recommendation, just an observation: artificial intelligence may surpass human intelligence and capability in every respect in the future, posing a risk to human survival. We need to find ways to prevent that, just as we work to prevent nuclear war and pandemics.

Sohu Technology: We have seen cases of AI technology being abused, such as AI face-swapping used for fraud, fake news, and fake photos. In your view, is there a technical way to tell genuine information from fake?

Stuart Russell: This actually involves two questions: Is it technically feasible? And what do laws and regulations permit?

As far as I know, in many countries it is not illegal to make fake images of real people. In the United States, for example, someone made a fake video of a well-known figure saying things he never said, and it aired on national television. Under the European Union's AI Act, however, this is illegal. So the first remedy is legal: when people discover fake video of themselves, they can report it to the authorities.

The other solution is technical. When we generate a video, it should carry some kind of technical imprint, sometimes a watermark or other metadata, that vouches for its origin; if another video lacks this information, that is a good reason to think it is fake. For text this is genuinely difficult, so we should establish oversight regulation requiring large language models to mark their output as coming from that particular model and to send an encrypted version of the output to a central repository, where its origin is recorded. Then, even if someone tries to strip the identifying information, the repository can still be checked to see whether the text really came from a particular model.
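A minimal sketch of how such a repository check might work, assuming a simple hash-based fingerprint (all names here are hypothetical; a real system would need fingerprints robust to paraphrasing and edits, which a plain hash is not):

import hashlib

# Hypothetical central repository: fingerprint -> model identifier.
REPOSITORY = {}

def fingerprint(text):
    # Light normalization so case/spacing changes don't break the lookup.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def register_output(model_id, text):
    # Called by the model provider each time the model emits text.
    REPOSITORY[fingerprint(text)] = model_id

def check_provenance(text):
    # Anyone can later ask: did a registered model produce this text?
    return REPOSITORY.get(fingerprint(text))

register_output("example-llm-v1", "The quick brown fox jumps over the lazy dog.")
print(check_provenance("the quick  BROWN fox jumps over the lazy dog."))  # example-llm-v1
print(check_provenance("An unrelated sentence."))                         # None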

Sohu Technology: Many people in the industry say that "innovation" is the key to advancing AI. As a professor at a leading university, how do you think we should cultivate innovative thinking?

Stuart Russell: I think "innovation" is about giving people a license to oppose "Accepted Wisdom." I think especially in AI, innovation comes from what people actually do differently than other people do things.

In fact, the first language model was built in 1913 by Andrei Markov, using what we now call a Markov model. He built a language model from a text by counting pairs of words: he looked at every pair of words and learned how often one word followed another.

A language model predicts the next word from the preceding words. For example, if I use the word "Happy," it is usually followed by "Birthday."
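That word-pair idea can be sketched in a few lines of Python (a minimal bigram model; the tiny corpus and function names are illustrative, not from the interview):

from collections import Counter, defaultdict

# Count how often each word follows another, then predict the most
# frequent successor, in the spirit of Markov's word-pair counting.
def train_bigrams(corpus):
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "happy birthday to you happy birthday to you happy birthday dear friend"
model = train_bigrams(corpus)
print(predict_next(model, "Happy"))  # -> 'birthday'
print(predict_next(model, "to"))     # -> 'you'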

Ten years ago, we saw that such systems could predict the next word from the previous 10 or 20 words of context, producing text that looked fluent and grammatical but made no overall sense: every few sentences it would change topic and start talking about something else.

Hardly anyone predicted ten years ago that simply making language models bigger would make people fall in love with them. It may seem ridiculous, but it turned out to be true.