
Why can't the internet platforms' "little black room" shut down online violence?

Author | Zhao Ruoci

Editor | Liu Yang

Three years ago, 25-year-old Bilibili uploader "Kafka Muffin Jun" died of lung cancer. Many netizens left condolence messages under her Weibo account, but one malicious, heavily upvoted comment stood out glaringly, just five characters: "open champagne to celebrate".

Muffin Jun had weathered several online storms before her death. Her last Weibo post read: "Many things just haven't happened to you yet, which is why you can stand by the tiger's side, waving the flag and cheering it on."

Huang Han had a similar experience. After she quit her job last year without another one lined up, the first video she posted on Bilibili drew more than 50,000 views. With the views, however, came questioning and attacks in the comment section and the bullet chat. Huang Han told "Leopard Change" that one comment that made her particularly uncomfortable was liked by more than 200 people.

In November 2022, the Cyberspace Administration of China issued the Notice on Effectively Strengthening the Governance of Online Violence, requiring website platforms to establish governance mechanisms such as early warning, protection, and non-proliferation of online violence. On March 6 this year, platforms including Douyin, Kuaishou, Weibo, Xiaohongshu, Bilibili, Douban, and Zhihu released guidelines for preventing online violence.

Although the platforms have kept improving these mechanisms and cracking down on online violence, judging by the results, incidents keep emerging one after another. Liu Xuezhou, who was blocked by his biological mother; Zheng Linghua, the "pink-haired girl" attacked over a photo with her grandfather; and the travel blogger who drove a tractor to Tibet all died after large-scale online attacks.

According to the China Internet Network Information Center, as of December 2022 the number of internet users in mainland China had reached 1.067 billion, a penetration rate of 75.6%.

In an era when everyone can speak, what steps have the social platforms at the center of the online public-opinion field taken to prevent online violence? After round upon round of rectification, why does online violence keep recurring? In the face of online violence, why can't the platforms "keep you safe"?

1. "More than 800 people have been blocked on Weibo"

"The early introduction of the product is all users with high compatibility with the platform, so the atmosphere will be purer, and as the number of users grows, it will inevitably face problems caused by diversification." An employee of Byte's commercialization department told Leopard Change.

The so-called "problems brought by diversification" refers to the fact that as people from different circles converge, comments become harder and harder to control.

Douban's official operations account regularly publishes announcements on the governance of online violence. The announcement released on March 13 shows that more than 10,000 posts were deleted, most of them online-violence content, and that some accounts received temporary or permanent bans.

Xiaoman is a heavy Weibo user who has accumulated a following through celebrity fandom. She told "Leopard Change": "When I first started on Weibo, everyone communicated together and the atmosphere was friendly. In the past two years I have clearly felt there are a lot more 'unqualified' people, and I have blocked more than 800 of them."

Beyond manual blocking by users, how far do the platforms themselves go to protect users at this stage?

The first line of defense is the report-and-complaint function. "Leopard Change" tested mainstream community apps including Weibo, Bilibili, Douyin, Xiaohongshu, Kuaishou, Douban, and Zhihu: in every comment section, content can be "reported" or "complained about", and users are required to select the type of violation.

The platforms also provide a "one-click anti-online-violence" function: users can block private messages and comments from strangers for a set period, shielding themselves from online violence.

The available duration differs by platform: Weibo, Zhihu, and Xiaohongshu allow up to 7 days, Douban up to 14 days, and Douyin up to 30 days, while Kuaishou sets no time limit.

However, the entrance to "one-click anti-online-violence" is relatively well hidden, making it hard for users to actually protect themselves with "one click". On Douyin, for example, a user must open the "Me" page, tap the three-bar icon in the upper right corner, select "More Functions", find "Douyin Xiao'an", and only inside that interface can "one-click protection" be turned on.

So do the platforms have any chance to stop online-violence content before users publish it?

In 2019, Instagram launched an anti-bullying feature that uses AI to screen what users are about to post: if the content is judged offensive, the user is warned to think twice before it goes out.
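In concrete terms, such a pre-publish check amounts to scoring a draft and interrupting the posting flow when the score crosses a threshold. Below is a minimal Python sketch of that flow; the toy keyword scorer and the 0.2 threshold are illustrative stand-ins, not Instagram's actual classifier or parameters.

```python
# Toy stand-in scorer: Instagram's real system uses a trained ML classifier.
OFFENSIVE_TERMS = {"idiot", "trash", "die"}  # illustrative word list only

def toxicity_score(text: str) -> float:
    """Fraction of words that hit the (toy) offensive list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(",.!?") in OFFENSIVE_TERMS for w in words) / len(words)

def pre_publish_check(draft: str, threshold: float = 0.2) -> str:
    """Decide whether to publish immediately or show a 'think twice' prompt."""
    if toxicity_score(draft) >= threshold:
        return "warn"      # UI shows: "Are you sure you want to post this?"
    return "publish"       # draft goes out with no interruption

print(pre_publish_check("you are trash, just die"))  # -> warn
```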

However, these seemingly well-designed anti-violence mechanisms have not made much difference. A 2020 global survey on online violence found that the rate of harassment on Instagram still stood at 23%.

"Pink Haired Girl" Zheng Linghua has not stopped defending her rights before her death after experiencing online violence, photo and video theft by marketing accounts, and she also recorded her rights protection process on social platforms. In the first rights protection record, she said that she reported the complaint to the official Douyin manual telegram + email, but "Douyin can only prohibit others from downloading original works if it reaches hundreds of millions of fans."

In a later log she also mentioned that her reports and complaints to Douyin and Baijiahao "have never once succeeded."

As far as "Leopard Change" understands, most platforms currently rely on two review methods: system review and manual review. Su Zhen, who works in a Douyin review position, told "Leopard Change": "The system handles roughly 80% of verification. If certain keywords are triggered, for instance, the machine can identify the content on its own; whatever the system cannot determine is handed over to manual review."

Li Min, who used to be responsible for reviewing high-quality content at Toutiao, said that most article review is done by the system, but content that touches certain keywords, or that comes from a relatively large account, enters the manual review process.
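Read together, the two accounts describe a two-tier pipeline: the machine blocks clear keyword hits, passes what it is confident about, and escalates the rest (roughly 20%, plus large accounts) to a human queue. The Python sketch below models that routing; the keyword list, follower threshold, and confidence cutoff are illustrative assumptions, not any platform's real configuration.

```python
BANNED_KEYWORDS = {"keyword_a", "keyword_b"}   # placeholder trigger words
LARGE_ACCOUNT_FOLLOWERS = 1_000_000            # assumed escalation threshold
CONFIDENCE_CUTOFF = 0.8                        # assumed model-confidence bar

manual_review_queue = []                       # humans work through this list

def machine_review(post: dict) -> str:
    """First tier: return 'block', 'pass', or 'manual' for a post."""
    if any(k in post["text"].lower() for k in BANNED_KEYWORDS):
        return "block"                         # clear keyword hit: auto-handled
    if post["followers"] >= LARGE_ACCOUNT_FOLLOWERS:
        return "manual"                        # large accounts get human eyes
    if post["model_confidence"] < CONFIDENCE_CUTOFF:
        return "manual"                        # machine unsure: escalate
    return "pass"

def review(post: dict) -> str:
    verdict = machine_review(post)
    if verdict == "manual":                    # the ~20% the system can't decide
        manual_review_queue.append(post)
    return verdict

review({"text": "hello", "followers": 500, "model_confidence": 0.95})  # 'pass'
```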

"In fact, the review itself is a mechanical work, and many things are slowly handed over to algorithms, AI, and will gradually optimize to replace human labor." Li Min told "Leopard Change" that around the beginning of 2022, her position will be replaced by AI.

However, at its current level of development, artificial intelligence is still not accurate enough at judging what counts as "online-violence content".

Su Zhen said: "Some onomatopoeia, substitute words for newborn insulting remarks, abbreviations and other content cannot be recognized by artificial intelligence. Then there is the speech of 'yin and yang strangeness', which the system does not understand at all. ”

A-Zhe, who used to work at NetEase CC Live, said that even interns in review positions train for several days before taking up the post. "Dedicated staff walk you through slide decks, using real cases to explain which prohibitions apply and how to grade the penalty according to the severity of the violation, along with training on values and ethics."

Su Zhen still remembers being asked in her interview "what do you think of a certain news event". Before joining, she took a psychological evaluation, a personality test, and a stress test, and only after passing the exam at the end of onboarding training did she start the job.

As A-Zhe put it: "In the end, it comes down to the reviewer's own judgment."

2. The people who run the "little black room" are shut inside one themselves

"If there is no violation, the report is accurate, but the violation does not matter." "I commented that not a single dirty word was reported, others swear words greeted the whole family, I reported none of them successfully, what are the reviewers doing?" Below the official post of a platform, you can often see similar user comments.

From the users' perspective, the community atmosphere seems to hinge on whether the "reviewers" are doing their jobs. But is that really the case?

In the ByteDance building in the Yinshan Road science and technology park in Shanghai's Minhang District, unlike the open, polished work areas of other departments, the review department's floor is always dimly lit, its employees staring expressionlessly at their screens.

Reviewers may appear to wield the power to "block with one click", but the people who run the little black room are themselves shut inside one.

According to an article in the Harvard Business Review, content reviewers play the role of the "sin-eater" of folklore: in that historical ritual, a poor person would take on the sins of the dead, typically by eating bread placed on the corpse, in exchange for money.

According to Su Zhen, reviewers at big tech companies fall into three main types: regular employees, outsourced employees, and part-timers. Su Zhen is a regular ByteDance employee; the review post pays around 10,000 yuan a month, but after four or five thousand in rent, what remains is just enough to live on.

Outsourced employees are generally based in third- and fourth-tier cities; as far as she knows, they earn around four or five thousand yuan, and part-timers earn even less.

"When I first joined, everything was very good and the working hours were normal. But after about two weeks of formal work, shifts will be scheduled. We work three shifts, the day shift from 9 to 5, the evening shift from 3 pm to 2 am, and the big night shift from 12 pm to 7 am. Everyone had KPIs every day, and I was reviewing 200 videos a day, plus a few thousand to ten thousand comments. ”

Long hours of sitting and staying up late wrecked Su Zhen's health. She told "Leopard Change": "Within a few months of starting I had gained more than ten pounds, my face broke out badly, I felt depressed every day, and I often woke up crying in the middle of the night."

More of that pain comes from the mental toll of the review work itself. Su Zhen said: "After about three days on the job, I could already feel intense physical and psychological discomfort. Every day it is nothing but extreme negativity: pornographic, violent, gory videos to watch, and comments abusing and attacking people for no reason to censor."

"What pains me the most is that for the past twenty years, we have lived in a relatively safe and kind real environment, and after working as an auditor, I found that there is so much malice on the Internet. Now I will feel that the bad guy is hiding around, many people may be kind in life, he will take care of children, donations, but do not delay him to abuse others on the Internet. ”

Su Zhen said that online violence accounts for roughly ten percent of the videos she reviews, and far more of the comments, around 60 to 70 percent.

She said: "After working as an auditor, you can fully understand the mentality of those victims who commit suicide because of online violence, because these abuses are absolutely beyond the limit that a person can bear, and even netizens see a small part of the review released." The unreleased obscenities may be dozens of times more likely to be seen, and this is an ongoing attack. ”

On top of the work itself, while absorbing all that negativity every day, Su Zhen also had to file weekly reports. "Your psychological state is terrible, but in the weekly report you can't complain; you can only write about how much you love the job. The work is identical every day, yet you still have to write a work plan."

Su Zhen admitted that in such a state one's energy really does run short, and that a degree of slacking is sometimes unavoidable, which is part of the reason users complain that "the reviewers aren't doing anything".

Beyond the mental and psychological pressure, the low pay and scant room for promotion mean the average tenure in a review position is only three to six months. "It's a lot like being a female assembly-line worker in a small workshop, with no future at all," Su Zhen sighed.

Yet as the number of netizens grows and every platform chases active users, more and more reviewers are needed. Su Zhen said review work is increasingly outsourced because it saves companies a great deal of money.

This seems to form a vicious circle: the flood of negative content wears down reviewers' psychological and physical health, producing slacking and heavy turnover; at the same time, the influx of users makes the online environment more complex and the demand for review greater, even as the pay and conditions of outsourced reviewers keep falling.

Against this backdrop, "lock the abusers in the little black room and online violence will stop" looks like a false proposition.

Su Zhen believes there is only so much that tens of thousands of reviewers can do against the content posted by hundreds of millions of netizens. On the platforms' side, it is not just a matter of stiffening punishments, raising the bar for commenting, and improving protection features; the AI needs to keep learning, and the situation of the reviewers themselves also needs to be seen.

3. Why can't the platforms stop online violence?

Luo Xiang, a professor at China University of Political Science and Law, once described in an article the phenomenon of "group polarization", a deep-rooted cause of online violence in the internet era:

"In the online world, our emotions are easily provoked by extreme opinions, and we are increasingly inclined to judge others in black and white; And this emotion and judgment will be like a sharp blade to the individual in the vortex of public opinion. ”

Douban user Wang Ran told "Leopard Change" that she once wrote a film review on Douban touching on gender issues and, to her surprise, drew a crowd of users cursing in the comment section. Two users with opposing views traded insults more than 50 times in the comments. "Some users post hundreds of words in the comments to oppose your views, and some insult you outright without even reading the piece."

In August 2021, the journal Science Advances published a Yale University study in which researchers tracked Twitter users over time and found that social media's incentive structures encourage the expression of outrage.

This is what is known as "traffic-pulling": attracting traffic by whatever means work, the most direct being to publish content that sparks argument. The platform's algorithm then "recommends" that content further on the strength of data such as views, likes, and comments.
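That amplification loop can be made concrete with a toy ranking function: if comments weigh heavily in the score, a post that starts a fight outranks a calm one with identical reach. The Python sketch below is a minimal illustration under assumed weights, not any platform's actual recommendation formula.

```python
def engagement_score(views: int, likes: int, comments: int) -> float:
    # Comments weigh most: arguments generate replies, replies generate reach.
    # The 0.1 / 1.0 / 5.0 weights are illustrative assumptions.
    return 0.1 * views + 1.0 * likes + 5.0 * comments

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order candidate posts by engagement; controversy floats to the top."""
    return sorted(
        posts,
        key=lambda p: engagement_score(p["views"], p["likes"], p["comments"]),
        reverse=True,
    )

feed = rank_feed([
    {"id": "calm_essay",    "views": 9000, "likes": 120, "comments": 15},
    {"id": "heated_thread", "views": 9000, "likes": 120, "comments": 600},
])
print([p["id"] for p in feed])  # -> ['heated_thread', 'calm_essay']
```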

It was exactly as Wang Ran described: although the comments and reposts under her piece were full of unfriendly remarks, it is also the most popular of all her articles, with more than 600 comments, nearly 2,000 likes, and more than 300 favorites. The heat lasted for weeks, and even a year later messages were still trickling in.

In 2020, director Bi Zhifei likewise posted a screenshot on Weibo showing that "a large number of messages with wrong-headed views trying to derail the topic and divert attention" had brought his Weibo 210 million views, and with them advertising revenue of about 30,000 yuan.

In fact, today's social, video, and information platforms all rely on advertising as their main business model, and daily and monthly active users are the foundation of each platform's further commercialization. For a platform, traffic and users are thus instruments of profit, which makes most of the content on it traffic-oriented.

In other words, the emotion and controversy on social platforms come down, in the end, to traffic and profit.

Given the proportions of online-violence content cited by the reviewers above, if every account involved in online violence were banned outright, one can imagine what would happen to a platform's user numbers.

On the other hand, now that netizens are used to an online environment of unconstrained expression, excessive restrictions on speech would inevitably breed resentment.

Douban is the most typical example. Wang Ran recently discovered that the unfriendly remarks under her controversial film review have disappeared: many of the commenting accounts have been deactivated, and some of the comments are no longer visible.

Because reviews now take so long, another Douban user, Xiaomu, has recently taken to recording the review time every time she posts; it ranges from two hours to a day or two. Some users say their Douban accounts have been folded for violations.

That said, most platforms will not permanently lock an account in the little black room for "making a mistake once".

"Leopard Change" found that each platform only determined that "violations will be punished", but how to punish did not elaborate, and the usual expressions used were "the following dispositions will be taken against the illegal content or account as appropriate", and "if the convention is violated, the platform has the right to ban, ban or suspend some or all of the account's services".

Some platforms, such as Weibo and Zhihu, also use credit scores to manage users.

According to the detailed rules of the Weibo Community Convention, a user's initial credit score is 120 points, and points can be earned back by answering questions online. As for deductions, an instance of online violence generally costs 3 to 5 points, and an account is banned once its credit score drops below 0. By that arithmetic, a Weibo account with a full credit score would be banned only after roughly 40 instances of online abuse.
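That "roughly 40" is easy to verify. A minimal sketch, assuming the rules exactly as described (start at 120, ban below 0) and the minimum deduction of 3 points per incident:

```python
INITIAL_SCORE = 120   # starting credit score per the Weibo convention
DEDUCTION = 3         # minimum deduction per online-violence incident

score, incidents = INITIAL_SCORE, 0
while score >= 0:     # the account survives as long as the score is not below 0
    score -= DEDUCTION
    incidents += 1

# The score only falls below 0 on the 41st deduction, i.e. the account
# withstands about 40 abusive posts before a ban.
print(incidents)  # -> 41
```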

Su Zhen also told "Leopard Change": "A Douyin account that publishes violating information gets two chances; the third time, the account is generally banned for 7 to 15 days. If that happens more than twice, that is, by the third temporary ban, a danger alert is triggered, and at that point the account will most likely be banned outright." Does that mean a Douyin account may get as many as 9 "chances" to violate the rules?
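Modeling Su Zhen's description directly bears that reading out: two warnings per cycle, a temporary ban on the third strike, and a permanent ban once the third temporary ban is reached, which adds up to nine violations. The sketch below is an assumed reconstruction of that escalation logic, not Douyin's actual enforcement code.

```python
def handle_violation(strikes: int, temp_bans: int) -> tuple[str, int, int]:
    """Apply one violation; return (action, new_strikes, new_temp_bans)."""
    strikes += 1
    if strikes < 3:
        return "warning", strikes, temp_bans         # first two chances
    temp_bans += 1                                   # third strike in the cycle
    if temp_bans >= 3:
        return "permanent_ban", 0, temp_bans         # danger alert triggered
    return "temp_ban_7_to_15_days", 0, temp_bans     # cycle resets after a ban

strikes, bans = 0, 0
for n in range(1, 10):
    action, strikes, bans = handle_violation(strikes, bans)
    print(n, action)
# Violations 3 and 6 draw temporary bans; only the 9th ends in a permanent ban.
```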

From the platforms' perspective, balancing business interests against user experience has always been a hard problem, but keeping users safe is the most basic obligation. Perhaps, in the face of online violence, the somewhat "powerless" platforms still need to shoulder more of the responsibility.

(At the request of the interviewees, the people in this article are referred to by pseudonyms.)
