
Controversy: How do the AI views of OpenAI's Sam Altman, Geoffrey Hinton, and Yann LeCun differ?

Author: DeepTech

On June 9, the Beijing Academy of Artificial Intelligence (BAAI) Conference 2023 opened in Beijing. An annual high-level international exchange event for artificial intelligence, the conference has now been held for five consecutive years.

The two-day conference centered on the opportunities and challenges facing the development of artificial intelligence. More than 200 leading AI experts attended, including OpenAI CEO Sam Altman and Turing Award winners Geoffrey Hinton and Yann LeCun, each of whom shared their views on the development and challenges of AI.


"Will artificial neural networks soon be smarter than real neural networks?" In his speech, Hinton mainly discussed such an issue.


In traditional computing, the defining property is that computers follow instructions exactly. Because of this, the same program, or the same neural-network weights, can be run on different hardware: the knowledge encoded in the program or the weights does not depend on any particular piece of hardware.
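To make this separation concrete, here is a minimal PyTorch sketch (our illustration, not from the talk): the same saved weights are loaded and run on whatever hardware is available, CPU or GPU.

```python
# Minimal sketch of hardware/software separation: the learned knowledge
# lives entirely in the weights file, independent of any particular device.
# (Illustrative only; the model and file name are hypothetical.)
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
torch.save(model.state_dict(), "weights.pt")

# A different machine, with different hardware, can load and run them.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.load_state_dict(torch.load("weights.pt", map_location=device))
model.to(device)

x = torch.randn(1, 784, device=device)
print(model(x).shape)  # torch.Size([1, 10]), regardless of the hardware
```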

"The reason they follow instructions is because they're designed to let us first look at the problem, determine the steps needed to solve it, and then tell the computer to perform those steps." Hinton said.

To train large language models at lower cost, he proposed "mortal computation": abandoning the basic principle of hardware-software separation in traditional computing and instead performing computation efficiently on analog hardware.

However, there are two main problems with this approach.

First, "the learning process must take advantage of the specific analog characteristics of the parts of the hardware it runs on, and we don't know exactly what those characteristics are." Hinton said.

Second, the method is mortal in a literal sense. Hinton explained: "When a particular hardware device dies, all the knowledge it learned is lost, because the knowledge is closely tied to the details of that hardware."

To address this problem, he and his collaborators tried a number of methods, of which "distillation" proved particularly effective.
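The article does not spell out the method, but Hinton's classic distillation recipe (Hinton et al., 2015) is easy to sketch: a student model is trained to match a teacher's temperature-softened output distribution, so the knowledge can outlive any one piece of hardware. The snippet below is a generic illustration of that loss, not code from the talk.

```python
# Generic sketch of knowledge distillation: the student mimics the
# teacher's softened output distribution rather than hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T**2

# Toy usage with random logits standing in for real model outputs.
teacher_logits = torch.randn(8, 10)
student_logits = torch.randn(8, 10, requires_grad=True)
distillation_loss(student_logits, teacher_logits).backward()
```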

He also noted that how a community of agents shares knowledge greatly affects computation. Today's large language models appear to learn enormous amounts of knowledge, but because they acquire it mainly from documents rather than directly from the real world, their way of learning is very inefficient.

"If they can learn in an unsupervised way like modeling videos, that would be very efficient." "Once these digital agents start doing this, they'll be able to learn more than humans and learn fairly quickly," Hinton said. ”

If development continues along this path, agents could soon become smarter than humans. That would bring many problems, such as competition for control between agents and humans, and ethical and safety issues.

"Imagine that in the next decade, Artificial General Intelligence (AGI) will surpass the level of expertise that humans had in the early 90s of the 20th century." Altman said.

In his speech and in a Q&A session with BAAI chairman Hongjiang Zhang, he discussed the importance of, and strategies for, advancing AGI safety.


"We must take responsibility for the problems that may arise from reckless development and deployment." Altman said it pointed to two directions, namely the establishment of inclusive international norms and standards, and the promotion of AGI's safety system through international cooperation.

For now, then, the first problem to solve is how to train large language models so that they truly become safe and beneficial assistants for humans.

In this regard, Altman proposed several approaches.

First, invest in scalable oversight, for example by training models that can help humans supervise other AI systems (a toy sketch follows below).

Second, continue upgrading machine-learning techniques to further improve the interpretability of models.

"Ultimately, our goal is to train AI systems to better optimize themselves." "As future models become smarter and more powerful, we will find better learning techniques that reduce risk while taking full advantage of AI's extraordinary benefits," Altman said. ”

On whether today's AI is risky and how to address that risk, Yann LeCun shares Altman's view. "These risks exist," he said, "but they can be mitigated or suppressed through careful engineering."

However, as a long-standing critic of GPT-style large models, LeCun laid out the strengths and weaknesses of AI systems built on self-supervised learning in his talk, "Towards Machines That Can Learn, Reason, and Plan."


Although self-supervised learning has proven extremely powerful in natural language processing and generation, such systems cannot reason and plan the way humans and animals do, which inevitably leads to factual errors, logical errors, toxic output, and other problems.

Based on this, LeCun believes AI faces three major challenges in the coming years: learning world models for representation and prediction; learning to reason; and learning to decompose complex tasks into simpler ones and carry them out hierarchically.

He also argued that the world model is central to the road to AGI.

In his view, a world model is a system that can imagine what will happen next and make its own predictions at minimal cost.

"The system works by processing the current state of the world through previous thoughts about the world that it may have stored in memory. Then you use the world model to predict how the world will work next. Yang Likun said.

So how is such a world model implemented? "We have a hierarchical system that extracts increasingly abstract representations of the state of the world through a series of encoders, with predictors operating at different levels of the world model," he said.

Simply put, a complex task is decomposed and planned hierarchically, down to millisecond-scale actions. Note, too, that the architecture includes a module for cost control and optimization against given criteria.
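As a rough illustration of the hierarchy LeCun describes (our sketch with invented module names, not his published architecture): each level encodes the state of the world into a more abstract representation and has its own predictor that imagines the next state at that level.

```python
# Schematic sketch of a hierarchical world model: encoders abstract the
# state of the world level by level; each level predicts its next state.
import torch
import torch.nn as nn

class WorldModelLevel(nn.Module):
    def __init__(self, in_dim: int, state_dim: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, state_dim)       # abstract the input
        self.predictor = nn.Linear(state_dim, state_dim)  # imagine next state

    def forward(self, x):
        state = torch.tanh(self.encoder(x))
        next_state = self.predictor(state)
        return state, next_state

# Two levels: raw observation -> abstract state -> more abstract state.
level1 = WorldModelLevel(in_dim=64, state_dim=32)
level2 = WorldModelLevel(in_dim=32, state_dim=16)

obs = torch.randn(1, 64)    # current state of the world
s1, pred1 = level1(obs)     # fine-grained representation and prediction
s2, pred2 = level2(s1)      # coarser, more abstract representation/prediction
```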

In summary, although the three experts hold different views on AI, all agree that its continued development is inevitable. As Altman put it, we cannot stop the development of AI.

Building on that, finding better ways to develop AI, containing the risks and harms it may bring, and ultimately reaching AGI will be the main directions people need to focus on in the next stage.
