
Algorithm = Values! The platform cannot hide in the "safe harbor" forever

Author丨Hu Deli, Wang Wenyi

Editor丨Haiwai


Is a platform responsible when pirated, violent, pornographic, or other undesirable content appears on it?

That controversy gradually subsided thanks to the "safe harbor" principle: a platform cannot guarantee that hundreds of millions of pieces of content are all problem-free, but as long as it acts promptly upon notification, it bears no liability.

But what if there is a problem with the content that the platform's algorithm actively recommends to users?

In February 2023, the U.S. Supreme Court heard arguments in Gonzalez v. Google and Taamneh v. Twitter, in which victims' families argued that the internet giants Google and Twitter bore responsibility for terrorist attacks because their platforms had pushed terrorist content.

The controversial Section 230 of the Communications Decency Act has been criticized as a "protective shield" that grants internet platforms "immunity": content posted by users is not treated as the platform's own conduct.

In the mainland, the promulgation and refinement of laws and regulations such as the Provisions on the Governance of the Online Information Content Ecosystem and the Provisions on the Administration of Internet Information Service Algorithm Recommendations mean that the algorithm industry has bid farewell to its era of barbaric growth, and a rule-of-law system for the comprehensive governance of network algorithms is actively being built.

Drawing on the governance environment for algorithm recommendation technology in the mainland, Venture Bang and Shanghai AllBright (Tianjin) Law Firm jointly examine the legal problems platforms face in the age of algorithms.

Does algorithm recommendation technology have a value orientation?

Algorithm recommendation technology is a method of information presentation that pushes personalized information to users based on user portraits, user feedback, and other user data. Article 2 of the Provisions on the Administration of Internet Information Service Algorithm Recommendations defines the application of algorithm recommendation technology as the use of algorithm technologies such as generation and synthesis (e.g., games, virtual scenes, news releases), personalized push (e.g., advertisements, short videos, and "big data" price-discrimination scenarios), sorting and selection (e.g., service and information popularity rankings, e-commerce store ranking algorithms), search and filtering (e.g., public search engines and the internal search engines of e-commerce platforms), and scheduling and decision-making (e.g., the order-matching algorithms of ride-hailing and food-delivery platforms) to provide information to users.
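To make the "personalized push" class named in Article 2 concrete, here is a minimal sketch of how such a system might rank content against a user portrait. All names, weights, and data are hypothetical illustrations, not any real platform's algorithm.

```python
# Minimal personalized-push sketch: score each candidate item by summing the
# user's interest weights for that item's tags, then return the top-ranked IDs.

def recommend(user_interests: dict[str, float], items: list[dict], top_k: int = 2) -> list[str]:
    """Rank items by overlap between the user's interest profile and item tags."""
    def score(item: dict) -> float:
        return sum(user_interests.get(tag, 0.0) for tag in item["tags"])
    ranked = sorted(items, key=score, reverse=True)
    return [item["id"] for item in ranked[:top_k]]

user = {"tech": 0.9, "law": 0.6}          # hypothetical user portrait
catalog = [
    {"id": "a", "tags": ["tech", "ai"]},
    {"id": "b", "tags": ["sports"]},
    {"id": "c", "tags": ["law", "tech"]},
]
print(recommend(user, catalog))  # item "c" matches both interests, so it ranks first
```

Real systems replace the tag sum with learned models, but the structural point is the same: the ranking function encodes choices about what users should see.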

Does algorithm technology have a value orientation? Can it trap users in a harmful "information cocoon"?

According to "Public Attitudes Toward Computer Algorithms," a report by the well-known think tank Pew Research Center, 58 percent of Americans believe that algorithms and other computer programs will always contain some degree of human bias.

In April 2018, Toutiao CEO Zhang Yiming issued a public apology letter, saying: "For a long time, we have overemphasized the role of technology, without realizing that technology must be guided by socialist core values, spread positive energy, meet the requirements of the times, and respect public order and good customs." In the same month, Kuaishou CEO Su Hua issued an apology letter, saying: "The algorithms used in community operation have values, because behind the algorithms are people; the values of the algorithm are the values of people, and the defects of the algorithm are defects in values."

Indeed, algorithm technology carries a value orientation. Its application has transformed the way information is distributed, delivered a different feed to each of a thousand users, and driven corporate profit growth, but it has also produced problems such as neglected corporate social responsibility and the alienating "information cocoon." For users, algorithm technology automatically filters out information they do not understand or agree with and presents only fragmented, customized content; users then reinforce their existing biases and preferences through constant repetition and self-confirmation, find it ever harder to accept heterogeneous information, and slip into the "information cocoon" without realizing it.
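The "information cocoon" dynamic described above is a feedback loop, which a toy simulation can make visible: each round the system shows the user's strongest interest, exposure strengthens that interest, and the profile narrows. The numbers and topic names are purely illustrative assumptions.

```python
# Toy feedback-loop simulation of an "information cocoon": recommending the
# dominant topic boosts it, the profile is renormalized, and the dominant
# topic's share of attention climbs toward 100% over repeated rounds.

def reinforce(profile: dict[str, float], rounds: int, boost: float = 0.2) -> dict[str, float]:
    profile = dict(profile)
    for _ in range(rounds):
        top = max(profile, key=profile.get)   # recommend the strongest interest
        profile[top] += boost                 # exposure reinforces the preference
        total = sum(profile.values())
        profile = {k: v / total for k, v in profile.items()}  # renormalize shares
    return profile

start = {"news": 0.4, "gaming": 0.35, "science": 0.25}
end = reinforce(start, rounds=20)
print(max(end, key=end.get), round(end["news"], 2))  # "news" crowds out the rest
```

The initially strongest topic ends up with nearly all of the user's attention share, which is the mechanism behind the homogenized feeds the article describes.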

Algorithmic governance environment on the mainland

(1) Legislative level

On January 9, 2019, the China Network Audiovisual Program Service Association formulated the "Detailed Rules for the Review of Online Short Video Content";

On November 18, 2019, the Cyberspace Administration of China, the Ministry of Culture and Tourism, and the State Administration of Radio and Television issued the Provisions on the Administration of Online Audio and Video Information Services;

On December 15, 2019, the Cyberspace Administration of China issued the Provisions on the Governance of the Online Information Content Ecosystem;

On September 17, 2021, in order to strengthen the comprehensive governance of Internet information service algorithms and promote the healthy, orderly and prosperous development of the industry, nine ministries and commissions, including the Cyberspace Administration of China and the Central Propaganda Department, formulated the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms;

On December 31, 2021, the Provisions on the Administration of Internet Information Service Algorithm Recommendations, deliberated and approved by the Cyberspace Administration of China and agreed to by the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, were issued;

On March 20, 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Ethical Governance of Science and Technology;

On September 9, 2022, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the State Administration for Market Regulation issued the Provisions on the Administration of Internet Pop-up Information Push Services;

On November 3, 2022, the Provisions on the Administration of Deep Synthesis of Internet Information Services, deliberated and approved by the Cyberspace Administration of China and jointly issued with the agreement of the Ministry of Industry and Information Technology and the Ministry of Public Security, were released.

(2) Law enforcement and supervision level

The Provisions on the Administration of Internet Information Service Algorithm Recommendations impose regulatory requirements such as tiered and categorized management, filing, security assessment, and supervision and inspection of algorithm recommendation technology, and law enforcement in the field of algorithms is becoming increasingly well-developed.

From March 1, 2022, the internet information service algorithm filing system was officially put into operation. Filing entities submit and check their algorithm filing information through the system's official website, and ordinary users can query filing information there as well.

On April 8, 2022, the Cyberspace Administration of China issued the Notice on Carrying out the "Qinglang 2022 Comprehensive Algorithm Governance" Special Action. The special action ran from the date of the notice until early December 2022 and comprised five main tasks: organizing self-inspection and self-correction, carrying out on-site inspections, supervising algorithm filing, consolidating entity responsibility, and rectifying problems within set time limits.

On January 19, 2023, the Cyberspace Administration of China published "National Network Law Enforcement Work Continues to Make Efforts and Increase Efficiency in 2022": in 2022, cyberspace authorities nationwide summoned 8,608 website platforms for talks in accordance with the law, warned 6,767, fined or otherwise penalized 512, ordered 621 to suspend functions or updates, removed 420 mobile applications, and, working with telecommunications authorities, revoked the licenses or filings of, and shut down, 25,233 illegal websites, transferring 11,229 related case leads. The effect of law enforcement is impressive.

The boundaries of the online platform's duty of care

(1) "Know" and "Should Know" of the Red Flag Rule

The red flag rule originated in Section 512 of the Digital Millennium Copyright Act (DMCA): a network service provider that is aware of infringement so obvious it stands out like a red flag, yet takes no action, loses its eligibility for the liability exemption. Under the red flag rule, a network service provider can be subjectively at fault for the dissemination of infringing content only when specific, plainly obvious infringing content falls within its awareness. Article 1197 of the Civil Code of the People's Republic of China likewise makes "knows or should know" a prerequisite for a network service provider to bear tort liability.

What counts as "knowing" and "should have known"?

Articles 9 and 10 of the Provisions of the Supreme People's Court on Several Issues Concerning the Application of Law in the Trial of Civil Dispute Cases Involving Infringement of the Right of Information Network Dissemination set out the criteria for determining "should know": courts comprehensively consider the nature and manner of the services the network service provider offers; the type of work, its popularity, and how obvious the infringing information is; and whether the network service provider actively selected, edited, modified, or recommended the work.

In addition, Articles 7, 8, and 9 of the Provisions on the Administration of Internet Information Service Algorithm Recommendations and Article 9 of the Provisions on the Governance of the Online Information Content Ecosystem require algorithm recommendation service providers to fulfill the primary responsibility for algorithm security: to establish and improve management systems and technical measures such as review of algorithm mechanisms, science and technology ethics review, information release review, and real-time inspection, and to regularly review, evaluate, and verify algorithm mechanisms, models, data, and application results. To a certain extent, this spells out the online platform's obligation to review and inspect the information users publish.

(2) "Notice plus necessary measures" under the safe harbor rule

The "notice plus necessary measures" rule is the heart of the safe harbor rule. The safe harbor rule first emerged from U.S. judicial practice on copyright liability, and the Digital Millennium Copyright Act (DMCA), enacted on October 28, 1998, formally wrote it into legislation.

Articles 14 and 15 of the mainland's Regulations on the Protection of the Right of Information Network Dissemination, the Provisions of the Supreme People's Court on Several Issues Concerning the Application of Law in the Trial of Civil Dispute Cases Involving Infringement of the Right of Information Network Dissemination, and Article 1195 of the Civil Code of the People's Republic of China draw on the safe harbor rule to create a liability exemption system for network service providers. That is, if a right holder believes that a work on the platform infringes its right of information network dissemination, it may submit a written notice to the online platform; upon receiving the notice, the online platform shall immediately delete the allegedly infringing work, and shall forward the notice to the service recipient or announce it on the platform.

The Provisions on the Governance of the Online Information Content Ecosystem divide information into three categories and specify how each is to be handled: information whose dissemination is encouraged (Article 5); illegal information that must not be disseminated (Article 7, such as content endangering national security, distorting the deeds and spirit of heroes and martyrs, advocating terrorism, inciting ethnic hatred, undermining state religious policy, and other content prohibited by laws and administrative regulations); and negative information whose dissemination is to be resisted (Article 8, such as clickbait whose content is seriously inconsistent with its exaggerated title; hyped-up gossip, scandals, and misdeeds; and improper comments on natural disasters, major accidents, and other catastrophes that harm the online ecosystem).

As for the terrorist content at issue in the Gonzalez v. Google and Taamneh v. Twitter cases mentioned above, if mainland law applied, it would constitute illegal information under Article 7 of the Provisions on the Governance of the Online Information Content Ecosystem: the online platform must immediately stop transmission, take measures such as deletion to prevent the information from spreading, keep relevant records, and report to the internet information department and other relevant departments.

Lawyer's advice: from the perspective of intellectual property duty of care

1. Set the appropriate attention standard according to the content type

For the works themselves, attention standards should be set according to the type of content. Heightened attention should be paid to copyrighted works, popular film and television dramas, and high-profile content included in the copyright early warning list published by the National Copyright Administration. In addition, with the assistance of the Copyright Administration and other administrative agencies, a copyright protection database may be established where appropriate to facilitate technical comparison against uploads and prevent infringement.

2. Classify the infringement notice issued by the right holder

Classify videos into PGC (Professionally Generated Content) and UGC (User Generated Content), and focus monitoring on PGC content. If the account named in an infringement notice is a professional account such as one run by an MCN institution, content uploaded by that account should be screened a second time to prevent the same content from being re-uploaded and infringing again.
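The triage suggested above can be sketched as a simple priority queue: notices against professional (PGC/MCN) accounts and repeat infringers are reviewed first. The field names, scoring weights, and sample accounts are illustrative assumptions, not a real platform API.

```python
# Hedged sketch of infringement-notice triage: score each notice by whether
# the accused account is professional and how many prior upheld notices it
# has, then process the queue in descending priority order.

from dataclasses import dataclass

@dataclass
class Notice:
    account: str
    is_pgc: bool          # professional account, e.g. MCN-affiliated
    prior_strikes: int    # earlier upheld infringement notices

def priority(n: Notice) -> int:
    """Higher value = review sooner; repeat PGC infringers come first."""
    return (2 if n.is_pgc else 0) + min(n.prior_strikes, 3)

queue = [
    Notice("ugc_user", is_pgc=False, prior_strikes=0),
    Notice("mcn_repeat", is_pgc=True, prior_strikes=2),
    Notice("mcn_new", is_pgc=True, prior_strikes=0),
]
for n in sorted(queue, key=priority, reverse=True):
    print(n.account)  # mcn_repeat first, ugc_user last
```

Capping the strike bonus keeps a long-dormant history from permanently outranking fresh professional-account notices; a real system would tune both weights against review capacity.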

3. Technically introduce filtering measures

Article 17 of the EU's 2019 Directive on Copyright in the Digital Single Market requires providers of online content-sharing services to filter the content their users disseminate. Platforms such as YouTube have introduced copyright filtering technologies such as Content ID, which is reported to block more than 99% of infringing content through its anti-piracy matching. Introducing filtering measures at the technical level is also a way to strengthen substantive review of recommended content.
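Content ID itself is proprietary, but the general idea of fingerprint-based filtering can be sketched: hash overlapping chunks of a reference work, then flag an upload whose chunk hashes overlap the reference set beyond a threshold. The chunk size, threshold, and sample data below are assumptions for illustration only.

```python
# Toy fingerprint filter: build a set of hashes over overlapping byte windows
# of a reference work, then flag uploads whose windows largely match it.

import hashlib

def fingerprints(data: bytes, chunk: int = 8) -> set[str]:
    """Hash every overlapping window of `chunk` bytes."""
    return {hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(max(1, len(data) - chunk + 1))}

def looks_infringing(upload: bytes, reference: set[str], threshold: float = 0.5) -> bool:
    """Flag the upload if enough of its windows appear in the reference set."""
    fp = fingerprints(upload)
    overlap = len(fp & reference) / max(1, len(fp))
    return overlap >= threshold

ref = fingerprints(b"copyrighted movie soundtrack sample")
print(looks_infringing(b"copyrighted movie soundtrack sample!!", ref))  # near-copy: flagged
print(looks_infringing(b"totally original home video footage", ref))   # distinct: passes
```

Production systems fingerprint perceptual features of audio and video rather than raw bytes, precisely so that re-encoded or slightly edited copies still match; the set-overlap logic, however, is the same.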

(Source: Shanghai AllBright (Tianjin) Law Firm)
