
Reporting by XinZhiyuan
Editors: Yuan Xie, Hao Kun
As a star research lab in the AI industry, OpenAI has always been one to stir things up. Has it really built artificial general intelligence (AGI) this time?
"Will AI become a terminator", this is a clichéd question that laymen often ask and insiders only have stalls.
But when the same thing is said by a heavyweight in the industry, the effect is immediately different.
Recently, a single sentence from OpenAI's chief scientist set the entire machine learning community ablaze.
On February 10, Ilya Sutskever tweeted: "it may be that today's large neural networks are slightly conscious."
The argument raged on for days.
On February 13, LeCun stepped in personally, unusually blunt and direct:
"Not true."
He then explained, politely but pointedly:
"Even the adjectives of the apparent micronutrials in 'weak consciousness' and the adjectives of 'large neural networks' for large sizes are wrong. I think it needs to be underpinned by a large architecture that all neural networks don't have today."
Ilya left it at that one cryptic sentence, and to this day nobody knows exactly what he was getting at.
What's more, the tweet was dashed off so casually that it doesn't even begin with a capital letter, which only invites more reading between the lines.
Is he hinting that the new GPT model under development is working wonders? Or claiming that the existing models have somehow been struck by revelation and acquired an intelligence no less than a human's? Or is he simply repeating the creed that artificial general intelligence (AGI) is good and mighty?
If you really want to fantasize about super-AI, watching Person of Interest is a lot more fun than watching a dry documentary.
The big names pile on
Back to the point: once the tweet went out, the replies poured in.
Unsurprisingly, some expressed deep unease about AI "self-awareness," but far more people dismissed the claim as pure nonsense.
One netizen replied under Ilya's original tweet: "The self-consciousness humans know of necessarily involves a considerable degree of dynamic randomness, even chaos. Without that, it is merely algorithmic behavior; the algorithm may simply be complex enough to fool an observer into thinking it is conscious."
Toby Walsh, an AI researcher at the University of New South Wales in Sydney, said: "Every time such speculative comments get an airing, it takes months of effort to get the conversation back to the more realistic opportunities and threats posed by AI."
Valentino Zocca, a vice president at Citibank and a deep learning expert, took a similar view: "AI is not conscious, but apparently hype matters more than any fact."
Jürgen Geuter, an independent tech sociologist, followed up: "It may well be that this claim has no factual basis at all; it's just a sales pitch for a startup that runs simple statistics at enormous scale and brags about the magical power of its technology."
Software testing expert Michael Bolton backed this view with a joke: "AI may not be even slightly conscious, but Ilya is quite possibly slightly, and maybe more than slightly, full of hot air."
Leon Derczynski, an associate professor at the IT University of Copenhagen, also sneered: "The claim that somewhere between Earth and Mars there is a teapot orbiting the sun is far more plausible than Ilya's fantasy, because a rigorous framework for orbital mechanics actually exists, and so does a precise definition of a teapot."
Roman Werpachowski, a data scientist at Microsoft Norway and a former DeepMind researcher, responded to the news with the famous "I am your father" meme.
After taking fire from all sides, however, Ilya replied with lofty detachment: "Arrogance is the ultimate enemy."
That understated comeback drew equally understated ridicule from netizens.
Leonard Mosescu, an engineer who has worked at Microsoft and Google and now works on deep learning projects at Nvidia, parodied Ilya's golden line: "it may be that today's large egos are slightly conscious."
Jigar Doshi, another engineer who has passed through Intel, IBM, and Georgia Tech, asked: "If there were no ego, what would drive human consciousness?"
Facing the chorus of criticism, Ilya was finally willing to say a few more words in his own defense: "Unconsciously choosing to believe obvious falsehoods in order to feel good in the short term is not an optimal life strategy."
"Cognition = action, advanced deep learning agents may also have shortcomings similar to this kind of human inferiority." 」
Amid all the finger-pointing, OpenAI CEO Sam Altman backed his colleague while taking a swipe at LeCun:
OpenAI's Chief Scientist: expresses curiosity and openness toward a mysterious idea, prefacing his sentences with "it may be";
Meta's Chief AI Scientist: opens with a flat "Not true."
Maybe that explains the results of the past five years.
Anyway, dear Meta AI researchers: here is my email address. We are hiring.
LeCun, a social-media veteran who has never been one to back down from a fight online, shot back immediately:
Sure, a certain company can build ever-faster planes and break altitude records every day, and doing so keeps its name in the headlines; those breakthroughs may even have a little real use.
But if the goal is to truly leave the atmosphere and reach orbit, you have to grind away at unglamorous technologies like cryogenic tanks and turbopumps.
If you don't believe me, go ask Musk.
Devon Hjelm, a deep learning researcher at Microsoft, summed it all up bluntly:
Researchers and students of the AI community, in case you're lost, here is the synopsis of this episode:
Apart from the usual AI companies sniping at each other, there is nothing of substance in it.
Better to ask good questions, stay curious, and stay skeptical.
Is AI conscious?
Besides being co-founder and chief scientist of OpenAI, Ilya is one of the many co-authors of the AlphaGo paper and one of the AI industry's most fervent AGI enthusiasts.
OpenAI itself declares, in its statement of the company's mission, that it aims to develop "AGI that benefits all of humanity."
In an interview for iHuman, a 2019 documentary about artificial intelligence, Ilya declared that AGI would solve every problem humanity faces today, while warning that it may also create the potential for infinitely stable dictatorships.
Presumably out of politeness, the interviewer did not turn Ilya's own spear against his own shield by asking: if AGI can solve every problem, can it solve the problem of dictatorship facing human society?
Jacy Reese Anthis, a social scientist and co-founder of the nonprofit Sentience Institute, dug into the question in a Twitter thread.
After a plug for the institute's own work, Anthis laid out the argument:
First, the meaning of consciousness is highly controversial among philosophers and scientists.
It can be broken down into at least three more precise terms:
Thought: usually a semantic stream of words
Perception: sensing through the five senses or the imagination
Emotion: positive and negative affective states
However, even these are controversial.
Are these behaviors? Processes? Functions? Dualist substances? Something ineffable? And so on.
Jacy argues for an eliminativist view: because these terms are so vague, consciousness does not objectively exist as a single well-defined thing.
So, what specific characteristics would indicate perception or consciousness in AI?
Researchers in the "sentience camp" hold that an AI worth calling conscious might exhibit one or more of the following features: detection of noxious stimuli and the degree of attention paid to them, responses to triggers and memory associations, state changes resembling emotional reactions to triggers, reinforcement learning, goal-directed behavior, and semantic representation of emotional states.
http://sentienceinstitute.org/blog/assessing-sentience-in-artificial-entities
At present, Transformer models such as GPT-3 exhibit almost none of these features. Still, Jacy doesn't think the question has a completely objective answer.
Definitions of consciousness, such as "what it is like to be this thing" or an "inner listener," are simply too vague for the question to be answerable.
Today, given all the disagreement, ambiguity, and uncertainty, the most precise answer available is probably: "Neural networks may have a very small amount of consciousness, but less than an insect's."
Resources:
https://futurism.com/the-byte/openai-already-sentient
https://wordpress.futurism.com/conscious-ai-backlash/amp
Ilya Sutskever: https://twitter.com/ilyasut/status/1491554478243258368
LeCun: https://twitter.com/ylecun/status/1492890267468320778
Sam Altman: https://twitter.com/sama/status/1492644786133626882