LeCun fires off again: ChatGPT? Not even as smart as a dog! Language models are just fed text

Author: New Zhiyuan

Editor: Layan

LeCun disparaged ChatGPT again in a debate yesterday: the model's intelligence doesn't even match a dog's.

LeCun, one of the three Turing Award winners known as the "Turing Big Three," dropped another quotable line yesterday.

"In terms of intelligence, ChatGPT may not even be as good as a dog."

The quote comes from a debate between LeCun and Jacques Attali at the Vivatech conference on Thursday.

CNBC even put the line straight into its headline, and LeCun soon retweeted it.

"ChatGPT and Dogs: Not a Little Compare"

LeCun said that current AI systems, ChatGPT included, are nowhere near human-level intelligence, or even that of a dog.

With AI booming today and countless people impressed by ChatGPT's performance, LeCun's remark comes across as startling.

LeCun's consistent view, however, is that there is no need to be nervous: today's AI is nowhere near intelligent enough to be worth worrying about.

Other tech heavyweights hold views diametrically opposed to LeCun's.

Think of Hinton and Bengio, his fellow members of the Turing Big Three, the open letter signed by Sam Altman, Musk's warnings of an AI crisis, and so on.

Against this backdrop, LeCun has stuck to his guns, firmly insisting that there is really nothing to worry about yet.

LeCun said that today's generative AI models are built on large language models (LLMs) trained only on text, and such language-only models are not very smart.

"The performance of these models is very limited, and they don't have any understanding of the real world. Because they are trained purely on a lot of text."

And because most human knowledge has nothing to do with language, that part of experience cannot be captured by these AIs.

LeCun used an analogy: AI can now pass the bar exam because the exam lives entirely in text. But AI cannot learn to load a dishwasher, something a 10-year-old child can pick up in 10 minutes.

That's why LeCun emphasized that Meta is working on training AI with video. Video carries far more than language, which also makes training on it harder to pull off.

LeCun gave another example to illustrate the gap in intelligence.

A five-month-old baby who sees an object floating doesn't give it much thought. But a nine-month-old baby who sees an object floating is very surprised.

Because by nine months, the baby has learned that objects are not supposed to float.

LeCun says we don't yet know how to give AI this kind of cognitive ability. Until we can, AI simply cannot reach human intelligence, or even that of cats and dogs.

Attali: I would also sign the open letter

In the discussion, French economic and social theorist Jacques Attali said that whether AI is good or bad depends on how people use it.

Yet he is pessimistic about the future. Like the AI heavyweights who signed the open letter, he believes humanity will face many dangers over the next three or four decades.

He pointed to climate disasters and war as his top concerns, while also fearing that AI robots could be turned against us.

Attali believes boundaries need to be set for the development of AI technology, but who will set them, and where they will lie, remains unknown.

Which echoes the two open letters signed a while back.

LeCun, of course, wanted no part of the open letters, and tweeted that he had not signed.

LeCun has been blasting ChatGPT nonstop

Even before this, LeCun had made similar remarks about ChatGPT more than once.

On January 27 this year, at a small Zoom gathering of media and executives, LeCun gave a surprising assessment of ChatGPT:

"In terms of the underlying technology, ChatGPT is not very innovative. Although in the eyes of the public, it is revolutionary, but we know that it is a well-assembled product and nothing more."

"In addition to Google and Meta, there are six startups, basically all with very similar technologies."

He also noted that the Transformer architecture ChatGPT uses was proposed by Google, and that its self-supervised learning approach is exactly what he had long advocated, back before OpenAI even existed.

This caused an even bigger commotion, and Sam Altman went after LeCun directly on Twitter.

On January 28, LeCun struck again, continuing his bombardment of ChatGPT.

"Large language models don't have physical intuition, they are text-based," he says. If they can retrieve answers to similar questions from vast associative memories, they may answer physical intuition questions correctly. But their answer may also be completely wrong."

LeCun's view of LLMs has never wavered. As yesterday's debate shows, he believes that training on language alone produces no real intelligence at all.

On February 4 this year, LeCun stated bluntly that "on the road to human-level AI, large language models are a wrong turn."

"LLMs that rely on autoregression and responses to predict the next word are a crooked path because they can neither plan nor reason."

Of course, LeCun has his reasons for believing this.

The large language model behind ChatGPT is "autoregressive": the AI is trained on a corpus of up to 1.4 trillion words to predict, for a given sequence of text, the next word that must appear.
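
To make "predict the next word" concrete, here is a minimal sketch of autoregressive generation. It is purely illustrative: the tiny bigram count table and greedy decoding loop are my stand-ins, not how ChatGPT actually works (a real LLM uses a Transformer trained on trillions of tokens), but the feed-the-prediction-back-in loop is the same idea.

```python
# Minimal sketch of autoregressive next-word prediction (illustrative only).
# A tiny bigram count table stands in for the learned p(next_word | context).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count how often each word follows each previous word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` (greedy decoding)."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Autoregressive generation: each predicted word is fed back in as context.
text = ["the"]
for _ in range(5):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # e.g. "the cat sat on the cat"
```

Scaling this idea up, with a vastly larger corpus and a far more capable model in place of the bigram table, is essentially what the paragraph below describes.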

Claude Shannon's research back in the 1950s was built on the same principle.

The principle has not changed; what has changed is the size of the corpus and the computing power behind the models.

LeCun said, "Right now, we can't rely on these models to generate long, coherent text, and these systems are not controllable. For example, we can't just ask ChatGPT to generate a text aimed at 13-year-olds.

"Secondly, the text ChatGPT generates is not 100% reliable as a source of information. GPT works more like an assistive tool, much like today's driver-assistance systems: even with autopilot engaged, you still have to keep your hands on the steering wheel.

"Moreover, the autoregressive language models we know today have a very short shelf life. Five years is one cycle, and after five years no one will be using the old models anymore.

"The focus of our research should be on finding ways to make these models controllable. In other words, the AI we want to build is one that can reason and plan toward a given goal, and that meets consistent standards of safety and reliability. Such an AI could even have emotions."

After all, a large part of human emotion is tied to achieving goals, that is, to some form of expectation.

With such controllable models, we could generate long, coherent text.

LeCun's idea is to design augmented models that, in the future, draw on data from different tools such as calculators or search engines.
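
As a rough illustration of what such tool augmentation might look like, here is a toy sketch. The routing logic and the stand-in tools (`calculator`, `search`) are hypothetical assumptions of mine, not any lab's actual design; the point is only that the system hands off to a tool when one clearly applies instead of free-generating.

```python
# Toy sketch of a tool-augmented model: route a query to an external tool
# (calculator, search) when appropriate, otherwise fall back to generation.
def calculator(expression: str) -> str:
    # Hypothetical arithmetic tool; a real system would use a safe parser.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "unsupported expression"
    return str(eval(expression))

def search(query: str) -> str:
    # Stand-in for a search-engine lookup.
    knowledge = {"capital of france": "Paris"}
    return knowledge.get(query.lower(), "no result")

def answer(query: str) -> str:
    """Dispatch: use a tool when one clearly applies, else 'generate'."""
    if any(op in query for op in "+-*/"):
        return calculator(query)
    if query.lower().startswith("capital of"):
        return search(query)
    return "(model generates a free-form answer here)"

print(answer("12 * (3 + 4)"))       # -> 84
print(answer("capital of France"))  # -> Paris
```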

Models like ChatGPT are trained only on text, so their knowledge of the real world is incomplete. To build beyond that, a model needs to learn something about sensory perception and the structure of the world itself.

Ironically, Meta's own model, Galactica (galactica.ai), was torn apart by netizens and lasted only three days online.

The reason: it spouted nonsense.

Hilarious.

Resources:

https://www.cnbc.com/2023/06/15/ai-is-not-even-at-dog-level-intelligence-yet-meta-ai-chief.html
