Xiao Xiao, reporting from Aofei Temple
QbitAI | Official account QbitAI
"Today's large neural networks may be slightly conscious."
OpenAI chief scientist Ilya Sutskever casually dropped this line on Twitter.

Unsurprisingly, it blew up the AI community, and replies from experts in artificial intelligence and neuroscience came rolling in.
The fiercest pushback came from Turing Award winner Yann LeCun, chief AI scientist at Meta.
LeCun hits back on the spot, and the big names weigh in one after another
LeCun's first response to this was this:
Disagree! It isn't true even by the loosest definition of "slightly conscious," nor by the most generous definition of "large neural networks."
He also said that if conscious AI must be defined at this stage, then:
I think it would require a specific kind of macro-architecture that current networks do not have.
Seeing his employee and former co-founding partner getting dunked on, OpenAI co-founder Sam Altman personally weighed in:
OpenAI's chief scientist expressed curiosity and openness toward a mysterious idea, prefaced with the word "may."
Meta's chief AI scientist, by contrast, flatly said "no."
His next sentence carried a touch of sarcasm, as if to imply that OpenAI's open attitude is why it has achieved more results:
This may explain a lot of what has happened over the past 5 years.
At the end of the tweet, Sam Altman didn't forget to try poaching his competitor's staff:
Dear Meta AI Researcher: My email address is [email protected]. We are hiring!
LeCun then dragged in Elon Musk, another OpenAI co-founder, with an analogy contrasting aviation and spaceflight:
One can build faster airplanes and break altitude records.
But if one's goal is to enter orbit in space, one must study cryogenic tanks, turbopumps, and so on.
Don't be fooled by appearances. Just go ask Musk.
In addition to LeCun, other AI people have also criticized OpenAI's remarks.
Toby Walsh, a well-known AI expert and professor at the University of New South Wales, chimed in:
Every time such speculative remarks get an airing, it takes months of effort to bring the conversation back down to earth.
A senior research scientist at DeepMind scoffed:
By that logic, today's large wheat fields may be slightly pasta.
Although criticism dominated on Twitter, some voices sided with OpenAI.
Tamay Besiroglu, a researcher from MIT CSAIL, said:
It's disappointing to see so many prominent machine-learning researchers ridicule the idea.
It makes me less hopeful about the field's ability to tackle some of the most important problems of the coming decades.
He believes labs like OpenAI, rather than Meta, are more likely to solve the deep, strange, and important problems the field will soon face.
The spat also produced an unexpected bonus: Sam Altman let slip the latest news about GPT-4.
Judging from his wording, GPT-4 is likely to be a continuation of the GPT-3 line.
Last year, Cerebras, the company that supplies OpenAI's super-large WSE-2 AI chip, revealed that GPT-4 would have about 100 trillion parameters and was still years away.
Returning to the "core" of this storm of controversy, what exactly made Ilya Sutskever feel this way?
Perhaps his career offers a glimpse.
From the original AlexNet, to AlphaGo, to witnessing the birth of models like GPT-3 and Codex, he has had a hand in nearly every breakout technology in AI.
"Going farther and farther" on the road to AGI
Ilya Sutskever holds a PhD from the University of Toronto, where he studied under Geoffrey Hinton.
In fact, Sutskever is one of the authors of AlexNet.
In 2012, under Hinton's guidance, he and Alex Krizhevsky co-designed the AlexNet architecture, winning that year's ImageNet challenge with an error rate 10.8 percentage points lower than the runner-up's.
After Hinton's DNN Research was acquired by Google, Sutskever joined Google Brain as a research scientist.
He took part in developing the famous AlphaGo, which defeated Lee Sedol 4-1 in a 2016 Go match, and was one of the paper's many authors.
While at Google, he also teamed up with two other Google Brain scientists to propose seq2seq, now one of the classic frameworks in NLP.
In late 2015, Sutskever left Google to co-found OpenAI with Musk, Sam Altman, and others.
In 2018, after more than two years as a research director, he became OpenAI's chief scientist.
It is fair to say that from the development of GPT-2 and GPT-3, to OpenAI Five beating the Dota 2 world-champion team, to Codex writing code for games, he has been a hands-on participant and witness throughout.
Under his leadership, OpenAI is steadily moving down the path toward AGI (artificial general intelligence).
In early 2021, the multimodal models DALL·E and CLIP appeared, opening a bridge between text and images.
OpenAI's research at the time suggested CLIP responds to concepts in a way strikingly similar to human thinking, prompting netizens to declare that AGI was arriving far faster than imagined.
Also last year, GitHub and OpenAI jointly launched GitHub Copilot, a code-autocompletion tool, and AI began to pick up some of a programmer's skills.
Early this year, OpenAI's math AI went a step further: paired with a neural theorem prover, the Lean proof assistant successfully solved two International Mathematical Olympiad problems.
Now OpenAI is said to be working on the 100-trillion-parameter GPT-4; perhaps that, too, lies behind Sutskever's exclamation.
"A 20-billion-parameter AI can't even add and subtract"
But in fact, the road to AGI does not look so clear.
Just recently, Brendan Dolan-Gavitt, an assistant professor at New York University, found that GPT-NeoX, a large model with 20 billion parameters, could not correctly solve even the most basic integer arithmetic problems.
GPT-NeoX is not an official OpenAI model; it is an open-source model built by the machine-learning collective EleutherAI (since GPT-3 is not open source).
△ Partial results are displayed
Of Brendan's 100 integer addition, subtraction, and multiplication questions, the AI got only 10 right. The other answers were often "close" to correct, but wrong is wrong; evidently the AI does not genuinely understand arithmetic.
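Brendan's published gist contains his raw prompts and outputs; the kind of check he ran can be sketched in a few lines. This is a minimal illustration, not his actual script: the helper names `make_problems` and `score` are hypothetical, and the `complete(prompt) -> str` callable that would wrap the real model is stubbed here with an exact solver.

```python
import random

def make_problems(n=100, lo=0, hi=99, seed=0):
    """Generate n random integer +, -, * problems with ground-truth answers."""
    rng = random.Random(seed)
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    problems = []
    for _ in range(n):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        op = rng.choice(sorted(ops))
        problems.append((f"{a} {op} {b} = ", ops[op](a, b)))
    return problems

def score(complete, problems):
    """Fraction of prompts the completion function answers exactly right."""
    correct = 0
    for prompt, answer in problems:
        reply = complete(prompt).strip()
        try:
            correct += int(reply) == answer
        except ValueError:
            pass  # a non-numeric reply counts as wrong
    return correct / len(problems)

def exact_solver(prompt):
    """Stand-in 'model' that actually computes the answer, for a sanity check."""
    a, op, b = prompt.rstrip("= ").split()
    return str(eval(f"{a}{op}{b}"))

if __name__ == "__main__":
    print(score(exact_solver, make_problems(100)))  # 1.0 for the exact solver
```

With a real model, `complete` would send the prompt to the network and return the first line of its completion; a score near 0.1 would match the 10-out-of-100 result reported above.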
As for whether neural networks are conscious, netizens weighed in too.
Some believe the big names are not so much debating whether AI is conscious as debating what "conscious" even means:
Others joke that deep neural networks have waded into the "metaphysical" deep end:
One netizen dropped a goofy robot meme in the comments of the thread: "the AI frowned, realizing things are not so simple":
What do you think of the idea that neural networks are beginning to become conscious?
Reference Links:
[1]https://futurism.com/conscious-ai-backlash
[2]https://weibo.com/5124470749/LfhTasFiP
[3]https://www.linkedin.com/in/ilya-sutskever/
[4]https://twitter.com/ilyasut/status/1491554478243258368
[5]https://twitter.com/sama/status/1492645047585570816
[6]https://en.wikipedia.org/wiki/Ilya_Sutskever
[7]https://openai.com/blog/authors/ilya/
[8]https://gist.github.com/moyix/267d122f9d92268e3432b05f08976816