
Turing Award winner Yann LeCun: ChatGPT is nowhere near a real person

ChatGPT can answer fluently and pull off all kinds of tricks, but how far is it from a real person? Yann LeCun, one of the three giants of deep learning, and his co-authors have written an in-depth discussion of this question.

At the end of 2022, OpenAI launched ChatGPT, and its popularity continues to this day. The model is a traffic magnet, setting off a frenzy of discussion wherever it goes.

Major technology companies, institutions, and even individuals have stepped up development of ChatGPT-style products. Almost back to back, Microsoft plugged ChatGPT into Bing and Google released Bard to power its search engine. NVIDIA CEO Jensen Huang praised ChatGPT highly, calling it the iPhone moment of artificial intelligence and one of the greatest technologies in the history of computing.

Many people are convinced that the era of conversational AI has arrived. But are these models really flawless? Not necessarily: they still have their uncanny moments, such as blurting out unhinged statements at will or chattering about plans to take over the world.

To understand these absurd moments of conversational AI, Yann LeCun, one of the three giants of deep learning, and Jacob Browning, a postdoctoral fellow in the Department of Computer Science at New York University, co-authored the article "AI Chatbots Don't Care About Your Social Norms," which examines conversational AI from three angles: chatbots, social norms, and human expectations.


The article argues that humans are very good at avoiding slips of the tongue, keeping themselves from saying things that are mistaken or disrespectful. Chatbots, by contrast, make such mistakes all the time. Understanding why humans are good at avoiding them helps explain why today's chatbots cannot be trusted.

Chatbots incorporate human feedback to keep the model from saying the wrong thing

For GPT-3, mistakes arise in part from the statistical nature of the model. GPT-3 leans heavily on user cues: its grasp of context and situation extends only to what can be gleaned from the user's prompt. The same goes for ChatGPT, though with a novel and interesting twist. In addition to the statistics, ChatGPT's responses are shaped by human evaluators, who reinforce the system's outputs so that it produces good responses (the technique known as reinforcement learning from human feedback). The end result is a system that not only says plausible things but (ideally) says things humans would judge appropriate; even when it gets something wrong, at least it avoids giving offense.
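To make that mechanism concrete, here is a minimal sketch in Python of the idea just described: human raters compare pairs of responses, a simple reward model is fit to those preferences, and the learned reward then ranks candidate replies. Every feature, number, and name below is a hypothetical illustration; real systems like ChatGPT use large neural reward models and policy-gradient fine-tuning, not this toy code.

import math
import random

random.seed(0)  # reproducible toy run

# Hypothetical hand-crafted features of a response: (politeness, factuality).
# In a real system these would be learned representations, not two numbers.
preference_pairs = [
    # (features of the response raters preferred, features of the one they rejected)
    ((0.9, 0.8), (0.2, 0.9)),  # raters prefer the polite, accurate reply
    ((0.8, 0.7), (0.9, 0.1)),  # ...and penalize confident nonsense
    ((0.7, 0.9), (0.1, 0.3)),
]

weights = [0.0, 0.0]  # reward-model parameters, trained below

def reward(features):
    # Scalar reward: a linear score over the response features.
    return sum(w * f for w, f in zip(weights, features))

# Fit the reward model with a Bradley-Terry-style objective, as in RLHF:
# maximize log sigmoid(reward(preferred) - reward(rejected)).
lr = 0.5
for _ in range(200):
    chosen, rejected = random.choice(preference_pairs)
    margin = reward(chosen) - reward(rejected)
    grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # 1 - sigmoid(margin)
    for i in range(len(weights)):
        weights[i] += lr * grad_scale * (chosen[i] - rejected[i])

# At inference time, the learned reward can rank candidate responses.
candidates = {"measured answer": (0.8, 0.85), "offensive rant": (0.05, 0.4)}
best = max(candidates, key=lambda name: reward(candidates[name]))
print("reward model prefers:", best)

The point of the sketch is the shape of the pipeline: the model's raw statistics propose responses, and a separately trained preference signal decides which of them count as good.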

But this approach feels too mechanical. There are countless ways to say the wrong thing in human conversation: we can say something inappropriate, dishonest, confusing, or just plain stupid. We can even be blamed for saying the right thing in the wrong tone. Dealing with others means crossing countless "conversational minefields." Keeping yourself from saying the wrong thing is not just an important part of conversation; it is often more important than the conversation itself. Sometimes, keeping your mouth shut is the only correct course of action.

This raises two questions: how do humans navigate these minefields without saying the wrong thing, and why can't chatbots effectively keep themselves from saying it?

How should conversation work?

Human conversation can touch on any topic, often as if following a script: ordering at a restaurant, making small talk, apologizing for being late, and so on. But these are not literal scripts; they are full of improvisation in the middle. Human dialogue follows more general patterns, and the rules are not so strict.

These scripts for speech and action are not bound to words. Even without knowing the language, the same script can work; a gesture can be enough to convey what the other person wants. Social norms govern these scripts and help us navigate our lives. They dictate how each person behaves in certain situations, assign roles, and give broad guidance on how to act. Complying is useful: standardizing and routinizing our interactions makes it easier for everyone to predict everyone else's intentions.

Humans have developed routines and norms to govern every aspect of social life, from which fork to use to how long to wait before honking. This is essential for surviving in a world of billions of people, where most of the people we meet are complete strangers whose beliefs may not align with ours. Shared norms make conversation not only possible but productive, supplying a list of what we should talk about, and of all the things we shouldn't.

The other side of norms

Humans tend to sanction those who violate norms, sometimes openly and sometimes covertly. Social norms also make it very simple to evaluate a stranger. On a first date, for example, both parties evaluate each other's behavior through conversation and questions; if the other person violates one of these norms (behaving rudely or inappropriately, say), we usually judge them and refuse a second date.

For humans, these judgments are based not only on dispassionate analysis but also on our emotional responses to the world. Part of childhood education is emotional training, ensuring that we feel the right emotion at the right moment in a conversation: anger when someone violates etiquette, disgust when someone says something offensive, shame when we lie. Our moral conscience lets us react quickly to anything inappropriate in a conversation and anticipate how others will react to our words.

And it goes further: when someone violates even a simple norm, their entire character is called into question. If they lied about one thing, would they lie about another? Calling out a violation publicly shames the violator and, in the process, pressures them to apologize for their actions (or at least justify them); the norm itself is thereby reinforced.

In short, humans are expected to abide strictly by social norms; otherwise, speaking carries high stakes. We are held responsible for anything we say, so we choose our words carefully and expect those around us to do the same.

Unconstrained chatbots

The high stakes of human conversation reveal what makes chatbots so disturbing. By merely predicting how a conversation will continue, they end up loosely adhering to human norms without being bound by them. When we chat casually with chatbots or test their ability to solve language puzzles, they usually give plausible answers and behave much like humans. Some might even mistake a chatbot for a human.

However, change the prompt slightly or use a different script, and the chatbot will suddenly spit out conspiracy theories, racist tirades, or nonsense. This may be because they are trained on what conspiracy theorists, trolls, and the like write on Reddit and other platforms.

Any of us could talk like a troll, but we shouldn't, because trolls' words are full of nonsense, offensive remarks, cruelty, and dishonesty. Most of us don't say these things because we don't believe them. Norms of decency have pushed offensive behavior to the margins of society, so most of us don't dare speak that way.

In contrast, chatbots don't realize that there are things they shouldn't say, no matter how statistically likely those words are. They are unaware of the social norms that draw the line between what should and should not be said, or of the deep social pressures that shape our use of language. Even when chatbots admit to messing up and apologize, they don't understand why. Point out that they are wrong and a chatbot will apologize even for an answer that was right.

This sheds light on a deeper problem: we expect human speakers to stand behind what they say, and we hold them accountable for it. We don't need to examine their brains or know any psychology to do this; we just need to know that they have been consistently reliable, norm-abiding, and respectful toward others, and we will trust them. The problem with chatbots is not that they are "black boxes" or that the technology is unfamiliar, but that they have long been unreliable and offensive, with no effort to improve or even any awareness that there is a problem.

Developers are certainly aware of these issues. They, and the companies that want their AI technology widely used, worry about their chatbots' reputations and spend a lot of time restructuring their systems to avoid difficult conversations or eliminate inappropriate answers. While this helps make chatbots safer, developers have to strain to stay ahead of the people trying to break them. As a result, the developers' approach is reactive and always lagging behind: there are too many ways to go wrong to predict them all.
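To illustrate why this reactive approach lags, here is a deliberately naive sketch in Python of a hypothetical blocklist-style output filter; every name and pattern in it is invented for illustration. Production systems use learned moderation models rather than regex lists, but the failure mode is the same: the filter only knows about the mistakes that have already been discovered.

import re

BLOCKED_PATTERNS = [
    r"\btake over the world\b",  # added after one embarrassing transcript
    r"\bconspiracy\b",           # added after another
    # ...every newly discovered jailbreak appends another entry here
]

def filter_response(response: str) -> str:
    # Return the response, or a canned refusal if it matches a known bad pattern.
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return "I'd rather not discuss that."
    return response  # anything not yet on the list slips through

print(filter_response("Here is my plan to take over the world."))
print(filter_response("Here is my plan to seize global control."))  # evades the list

The second call slips through because it expresses the same idea in words the list has never seen, which is exactly the cat-and-mouse dynamic described above.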

Smart but not human

None of this should make us complacent about how smart humans are and how dumb chatbots are. Their ability to talk about anything demonstrates an understanding, whether deep or shallow, of human social life and the world at large. Chatbots are smart enough at least to score well on tests and to provide useful reference information. The panic they have caused among educators is proof enough of how impressive their command of book knowledge is.

But the problem is that chatbots don't care. They have no intrinsic goals to pursue through dialogue, and they are not motivated by what others think or how others react. They don't feel bad about lying, and being honest earns them no reward. They are shameless in a way that even Trump is not; he at least cares deeply about his reputation, or at least claims to be honest.

Conversation with a chatbot is therefore pointless in this sense. For humans, dialogue is a way to get what we want: making connections, getting help with a project, passing the time, or learning about something. Dialogue requires that we take an interest in the person we are talking to, and ideally that we care about them.

Even when we don't care about the person we're talking to, we at least care what they think of us. We recognize, deeply, that success in life (close relationships, good work, and so on) depends on having a good reputation. If our social standing falls, we may lose everything. Conversation shapes how others see us, and many people shape their sense of themselves through inner monologue.

But chatbots have no story of their own to tell and no reputation to defend, and they don't feel the pull of responsible action the way we do. Chatbots can handle, and do handle, many highly scripted situations, from playing dungeon master to composing plausible quests to helping authors explore ideas. But they lack the knowledge of themselves and of others needed to be trustworthy social agents: the kind of interlocutors we want to talk to most of the time.

A system that knows nothing of the norms of honesty and decency, and cares nothing for its reputation, is of limited usefulness, and relying on it poses real dangers.

Weird dialogue

As a result, chatbots don't talk the way humans do, and merely producing statistically plausible talk will never get them there. Without a genuine understanding of the social world, these AI systems are just empty chatter, no matter how witty or eloquent.

This helps explain why these AI systems are simply very interesting tools, and why humans shouldn't anthropomorphize them. Humans are not merely dispassionate thinkers or speakers; we are essentially norm-abiding creatures that connect with each other emotionally through shared, binding expectations. Human thinking and speech grow out of our sociality.

Dialogue divorced from broader participation in the world has little in common with the human kind. Chatbots don't use language the way we do, even if they sometimes say exactly the same things we would. At the end of the day, they plainly don't understand why we talk the way we do.

Compiled by the Machine Heart editorial team.
