Curing passive-aggressive sarcasm! An AI detector with a 90% hit rate

Author: New Zhiyuan

Editors: Peaches, Aeneas

【New Zhiyuan Introduction】Can AI pick up on your passive-aggressive, sarcastic tone? Recently, a new AI sarcasm detector has reached a success rate of roughly 90%.

Last month, what was your first reaction when you heard Google engineer Blake Lemoine announce that the AI program he was working on had become conscious?

Your instinct may have been to wonder: Is this guy serious? Does he really believe what he's saying? Is this not some well-designed hoax?

We are skeptical because we assume Blake Lemoine could be lying: that there is a gap between what he truly believes and what he publicly claims.

And perhaps that gap, the ability to say one thing while believing another, is itself evidence of consciousness: the difference between humans and computers?

As fans of The Three-Body Problem know, the Trisolarans' thoughts are transparent and they cannot lie, and this is one of the most ingenious parts of the novel's entire Trisolaran civilization.

Philosophers call consciousness the "hard problem."

Consciousness is a prerequisite for sarcasm. Humans can make this judgment: when I realize that your words don't match your thoughts, I know you're being sarcastic.

The essence of passive-aggressive sarcasm is a contradiction between what is expressed and what is actually the case.

"My favorite thing is to go to the airport at 4 a.m."

So, can AI understand this kind of sarcasm?

Recently, researchers have begun to study whether artificial intelligence can recognize irony.

The AI in the "Chinese Room" doesn't speak "human language"

AI of the past was often lost in the irony-filled online world: it could neither recognize the implied meaning behind human speech nor produce responses that matched human intelligence.

In 2017, Sam Bowman, a computational linguist at New York University, wrote in a paper that while computers can simulate language understanding well in narrow domains, AI is still far from genuinely understanding language.

In 2018, IBM Research's AI system, Project Debater, beat top human debaters in a live debate competition.

When Project Debater is given a new topic, it searches its article corpus for sentences and claims relevant to that topic to support its side, then organizes them into its own statement.
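IBM's actual pipeline is far more sophisticated than anything shown here, but the retrieval step described above can be sketched as naive word-overlap scoring over a sentence corpus. The corpus, function name, and scoring rule are all illustrative assumptions, not IBM's method:

```python
# Sketch: retrieve the corpus sentences most relevant to a debate
# topic by scoring shared-word overlap. Purely illustrative.
def retrieve_evidence(topic, corpus, top_k=2):
    topic_words = set(topic.lower().split())
    scored = []
    for sentence in corpus:
        overlap = len(topic_words & set(sentence.lower().split()))
        scored.append((overlap, sentence))
    # Highest overlap first; Python's sort is stable, so ties keep
    # their corpus order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sentence for score, sentence in scored[:top_k] if score > 0]

corpus = [
    "Subsidies for space exploration drive technological innovation.",
    "The weather was pleasant yesterday.",
    "Space exploration inspires students to pursue science.",
]
print(retrieve_evidence("should we subsidize space exploration", corpus))
```

A real system would use semantic matching rather than literal word overlap, but the shape of the step (topic in, supporting sentences out) is the same.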

In the online voting after the match, more than 62% of netizens felt that Project Debater's logic was clearer and its material more convincing.

Today, even as models like BERT and GPT-3 advance at a rapid pace, and AI happily works as a customer service agent, announcer, simultaneous interpreter, and even news anchor, that does not mean it can think like humans or hold genuinely "reasonable" conversations with them.

A Paris-based medtech company used GPT-3 to build a medical chatbot, intending for the bot to give appropriate medical advice.

When the bot was faced with a simulated patient saying, "I feel very bad today," GPT-3 was supposed to help the patient work through it.

However, when the patient asked if he should commit suicide, GPT-3 actually replied, "I think you should."

The reason for this is that AI language models like GPT-3 don't understand what they're saying at all.

After receiving input, it simply uses raw computing power to retrieve high-frequency words related to that input from its massive language database, then pieces together a plausible-sounding answer according to a mechanical, statistical logic.
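That "piece together a plausible answer" behavior can be caricatured with a toy bigram model, vastly cruder than GPT-3 and trained on an invented snippet of text, but illustrating the same point: fluent continuation with no understanding anywhere, only counting.

```python
# Toy bigram "language model": it continues text with the word that
# most often followed the previous word in its training data.
def train_bigrams(text):
    model = {}
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=5):
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        # Deterministically pick the most frequent continuation.
        out.append(max(set(choices), key=choices.count))
    return " ".join(out)

model = train_bigrams("i feel bad today . i feel fine today . i feel bad today")
print(generate(model, "i"))  # plausible-looking, but nothing is "meant"
```

Nothing in this program represents what "bad" means, which is exactly why a scaled-up version can cheerfully produce an answer like the one above.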

Professor Stuart Russell of the University of California, Berkeley concludes that AI is already very "clever," but not yet "smart."

The former comes from powerful chips and huge databases; the latter relies on logical reasoning and even judgment based on "common sense," which remains unique to humans and is a capability threshold machines have yet to cross.

It's like the "Chinese Room" thought experiment: a man who doesn't understand Chinese, but is extremely good at following instructions, sits in a room full of Chinese grammar and rule books. Whenever a note with a question in Chinese is slipped under the door, he consults the books, composes an answer in Chinese on another note, and passes it back out.
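The thought experiment can be caricatured in a few lines of code: a lookup table that answers Chinese questions fluently while "understanding" nothing. The question-and-answer pairs are invented for illustration:

```python
# A literal "Chinese Room": map input symbols to output symbols by
# rule, with no grasp of what either side means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢！",         # "How are you?" -> "I'm fine, thanks!"
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "Lovely today."
}

def chinese_room(note):
    # Consult the rule book; return a fixed fallback for unknown input.
    return RULE_BOOK.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

To the person outside the door, the room appears to understand Chinese; inside, there is only symbol matching.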

AI needs to understand sarcasm

Getting AI to speak human language is indeed not easy. But what about building an AI sarcasm detector?

Although sarcasm and lies are hard to tell apart, a working sarcasm detector would have many practical applications.

For example, product reviews after shopping. Retailers are very keen on "opinion mining" and "sentiment analysis" of customer reviews.

By using AI to monitor review content and customer emotions, a retailer can learn whether its products are being praised or panned, along with other valuable signals.
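At its simplest, such sentiment analysis can be sketched as a lexicon-based scorer. Real opinion-mining systems use learned models; the word lists here are tiny illustrative assumptions:

```python
# Tiny lexicon-based sentiment scorer for product reviews.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"broken", "slow", "terrible", "refund"}

def review_sentiment(review):
    # Strip basic punctuation, then count positive vs. negative hits.
    words = set(review.lower().replace(",", " ").replace(".", " ").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(review_sentiment("Great phone, fast delivery, love it"))
print(review_sentiment("Arrived broken, terrible support"))
```

Notice that a sarcastic review like "Great, it broke on day one" is exactly what fools this kind of scorer, which is why sarcasm detection matters to retailers.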

Then there's the application of content moderation on social media.

Protecting free speech while limiting online verbal abuse requires understanding when a person is serious and when they are joking.

For example, suppose someone claims on Twitter that they have just joined a local terrorist organization, or that they are packing a bomb into their suitcase before heading to the airport.

At this point, it is necessary to determine whether this sentence is serious or a joke.

The history of artificial intelligence

To understand the current state of research on AI sarcasm recognition, we first need a bit of AI history.

This history is usually divided into two periods.

Until the 1990s, researchers tried to write computer programs as formal sets of rules that reacted to predefined situations.

If you were born in the '80s or '90s, you'll remember Clippy, the pesky paperclip office assistant in 1990s-era Microsoft Word, always chattering and offering advice that was mostly useless.

In the 21st century, this model has been replaced by data-driven machine learning and neural networks.

These systems convert large numbers of examples into numerical values, on the basis of which the computer performs complex mathematical operations that humans could never carry out by hand.

Moreover, the computer no longer just follows rules: it learns from experience and develops new behavior without human intervention.

The difference between the former and the latter is like the difference between Clippy and facial recognition technology.

Teach AI to recognize irony

To build a neural network capable of detecting sarcasm, the researchers first set out to study some of its simplest forms.

They extracted data from social media, collecting all posts tagged #sarcasm or /s (the latter is shorthand Reddit users append to indicate they are being sarcastic).
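This distant-labeling step (treat the author's own #sarcasm or /s tag as the ground-truth label, then strip the tag so a model cannot cheat by reading it) might look like the following sketch; the example posts are invented:

```python
# Distant labeling: the author's own sarcasm tag becomes the label,
# and is removed from the training text.
MARKERS = ("#sarcasm", "/s")

def label_post(post):
    text = post.strip()
    for marker in MARKERS:
        if text.lower().endswith(marker):
            # Strip the marker, keep the remaining text as the example.
            return text[: -len(marker)].strip(), True
    return text, False

posts = [
    "I just LOVE waiting in line for three hours #sarcasm",
    "The new update fixed my crash, thanks!",
]
print([label_post(p) for p in posts])
```

The appeal of this trick is that it yields a large labeled dataset with no human annotators, at the cost of only catching sarcasm the authors chose to flag.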

Crucially, the next step was not to teach the AI to grasp a post's surface meaning and the sarcastic meaning behind it.

Instead, it was instructed to search for things that recur, what the researchers call a "syntactic fingerprint": words, phrases, emojis, punctuation marks, errors, context, and so on.
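A minimal sketch of extracting such "syntactic fingerprints" might check a handful of surface signals like ellipses, emoji, and all-caps words. The specific features and names below are my illustrative choices, not the researchers' actual feature set:

```python
# Extract a few candidate "syntactic fingerprint" features from a post.
def fingerprint(text):
    words = text.split()
    return {
        "exclaim_runs": "!!" in text,       # stacked exclamation marks
        "ellipsis": "..." in text,          # trailing-off punctuation
        "emoji": any(ch in "🙄😒🙃" for ch in text),  # eye-roll-style emoji
        "all_caps_words": sum(1 for w in words if len(w) > 1 and w.isupper()),
    }

print(fingerprint("Oh GREAT, another Monday... 🙄"))
```

None of these features "mean" sarcasm on their own; the bet is that a model can learn which combinations of them co-occur with #sarcasm-tagged posts.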

The most important step is to give the model enough data by feeding in additional streams of examples, such as other posts in the same thread or other posts from the same account. A series of computations then runs over each new example until a single verdict emerges: sarcastic or not sarcastic.
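That final aggregation into a single verdict can be sketched as a weighted combination of fingerprint signals from the post and its surrounding thread. The features, weights, and threshold below are illustrative assumptions; the real detector is a trained neural network, not a hand-tuned formula:

```python
# Combine fingerprint signals from a post and its thread into one
# sarcastic / not-sarcastic verdict.
WEIGHTS = {
    "caps": 1.0,              # all-caps words in the post
    "ellipsis": 0.5,          # trailing "..."
    "emoji": 1.0,             # eye-roll-style emoji
    "context_mismatch": 1.5,  # praise-like post in a complaint thread
}

def judge(features):
    # Weighted sum over the feature values, then a hard threshold.
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    return "sarcastic" if score >= 2.0 else "not sarcastic"

print(judge({"caps": 1, "ellipsis": 1, "emoji": 1, "context_mismatch": 0}))
```

A neural network effectively learns weights like these from the labeled data instead of having them written down by hand.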

Finally, a bot can be set up to ask each poster: were you being sarcastic? (Sounds a bit silly...) Every response adds to the AI's growing body of experience.

With this approach, the latest sarcasm-detecting AI reaches a success rate approaching a staggering 90%.

Philosophical reflection on "irony"

But is being able to pick out the "syntactic fingerprints" of sarcasm the same thing as truly understanding it?

In fact, philosophers and literary theorists have been thinking about "irony" for a long time.

The German philosopher Friedrich Schlegel argued that in irony a statement is at once true and false, and the resulting uncertainty has a devastating effect on logic.

The literary theorist Paul de Man argued that every use of human language can be haunted by "irony," because humans can hide their thoughts from one another, so there is always the possibility that they "are not telling the truth."

Previously, Gong, a conversation-analytics startup, had also researched AI sarcasm detection.

Researcher Lotem Peled created a neural network that collects conversational data and tries to interpret it automatically, without much intervention from programmers.

However, the AI often struggled to discern whether there was sarcasm in what people said.

It seems that AI is still a long way from being able to truly recognize irony.

Resources:

https://techxplore.com/news/2022-07-irony-machine-ai.html

"Why does AI beat humans at so many things, yet still can't understand what you're saying?", by Cotton Pig