
New regulations on algorithm recommendation are introduced, and relevant departments will implement hierarchical and classified management of service providers

On January 4, four departments, including the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation, jointly issued the Provisions on the Administration of Algorithm Recommendation in Internet Information Services (hereinafter, the "Provisions"), which take effect on March 1, 2022.

As a natural extension of the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, the Provisions address issues such as algorithm models that induce users to become addicted, traffic fraud, public opinion steering, "big data killing" (using big data to charge regular customers unfairly differentiated prices), the rights and interests of minors and the elderly, and the work scheduling of gig workers.

The Provisions make clear that algorithm recommendation service providers should publicize the basic principles of their algorithmic services and give users options that are not tailored to their personal characteristics. The Provisions also strengthen governance of the content ecosystem, with special emphasis on supervising algorithm recommendation services that "have public opinion attributes or social mobilization capabilities"; such algorithms must be filed with the relevant departments and undergo security assessment.
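As a rough illustration only (not from the Provisions or any real platform), the "option not tailored to personal characteristics" can be sketched as a ranking toggle: when the user opts out, the recommender ignores all per-user signals and ranks by a global signal, so every opted-out user sees the same order. All names below are hypothetical.

```python
# Hypothetical sketch: a recommender with a compliant non-personalized mode.
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    popularity: float                 # global signal, identical for every user
    affinity: dict = field(default_factory=dict)  # per-user personalization scores

def recommend(items, user_id, personalized=True, k=3):
    """Rank items; with personalized=False, only global popularity is used."""
    if personalized:
        key = lambda it: it.affinity.get(user_id, 0.0) + it.popularity
    else:
        key = lambda it: it.popularity  # same ranking for all users
    return [it.item_id for it in sorted(items, key=key, reverse=True)[:k]]

items = [
    Item("a", 0.9, {"u1": 0.0}),
    Item("b", 0.5, {"u1": 0.8}),
    Item("c", 0.7, {"u1": 0.1}),
]
print(recommend(items, "u1"))                      # → ['b', 'a', 'c']
print(recommend(items, "u1", personalized=False))  # → ['a', 'c', 'b']
```

The key property the rule asks for is the second call: with personalization off, the ranking function no longer reads any user-specific data.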

Some experts believe that the new regulation is not only an important guide for bringing algorithm technology under the rule of law, but will also serve, together with the Personal Information Protection Law, the E-commerce Law, and other laws, as a reference for adjudicating algorithm-related disputes. At the same time, however, some experts pointed out that supervision of algorithms must be balanced against protecting technological innovation: some provisions of the Provisions may prove vague in implementation, which could make the governance environment harsh and thereby inhibit innovation.

1

Explicitly encourage greater algorithmic transparency

Nandu reporters learned that the Provisions apply to the use of algorithm recommendation technology to provide Internet information services, that is, the use of algorithmic techniques such as generative synthesis, personalized push, ranking and selection, retrieval and filtering, and scheduling and decision-making to deliver information to users.

Among them, the Provisions set out specific requirements for improving algorithmic transparency. Articles 12 and 16 clearly encourage algorithm recommendation service providers to optimize the transparency and explainability of their rules for search, ranking, selection, push, and display; providers shall inform users in a conspicuous manner that they are providing algorithm recommendation services, and shall publicize the basic principles, purposes, and main operating mechanisms of those services in an appropriate manner.

In this regard, Zhu Wei, a professor at China University of Political Science and Law, believes that the basis for supervising algorithms is algorithm disclosure. Article 24 of the Personal Information Protection Law stipulates that where an algorithm may have a significant impact on an individual's rights and interests, the individual has the right to request an explanation from the platform. Under the Personal Information Protection Law, then, algorithm disclosure is subject to conditions rather than being a proactive obligation of online platforms. In practice, however, it is unclear which situations fall within the scope of "having a significant impact on individual rights and interests," making the rule hard to operationalize. The Provisions encourage platforms to disclose their algorithms proactively, and can work in concert with the disclosure-on-request provisions of the Personal Information Protection Law to make algorithms open and transparent.

Duan Weiwen, director of the Center for Science, Technology and Social Research at the Chinese Academy of Social Sciences, proposed that in addition to being open and transparent to users, a platform should also be open and transparent to the workers on it. He believes there is a kind of "contract" between workers and the platform; unlike the platform's sales and distribution relationship with consumers and users, workers have a greater stake in the platform, so their right to know about the algorithm recommendation service should be correspondingly greater. But he also pointed out that although the requirement of algorithmic transparency is legitimate and necessary, it is not necessarily sufficient or fully feasible at the technical level, because one fact cannot be ignored: whether or not technical means are used, any form of cognition carries ambiguity, and people's interpretation and understanding of the world is always somewhat ambiguous.

As for informing users, the Provisions not only require that users be told about the algorithm mechanism, but also that they be alerted to content generated by algorithms. In recent years, with the continuous development of technologies such as "deepfakes," abuse has become increasingly prominent, with such tools even exploited by criminals. Article 9 stipulates that where synthetic information generated by an algorithm is found to lack a conspicuous label, it must be conspicuously labeled before it may continue to be transmitted.
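A minimal sketch of the Article 9 check might look like the following: synthetic content that arrives without a conspicuous label gets one attached before it is passed on, while organic or already-labeled content passes through unchanged. The field names and label text are illustrative assumptions, not from the Provisions or any real system.

```python
# Hypothetical check-and-label step before redistributing content.
SYNTHETIC_LABEL = "[AI-generated content]"

def prepare_for_transmission(post: dict) -> dict:
    """Return the post, adding a conspicuous label if synthetic but unlabeled."""
    if post.get("is_synthetic") and not post.get("labeled"):
        post = dict(post)  # copy, so the caller's object is not mutated
        post["text"] = f"{SYNTHETIC_LABEL} {post['text']}"
        post["labeled"] = True
    return post

unlabeled = {"text": "Breaking news clip", "is_synthetic": True, "labeled": False}
out = prepare_for_transmission(unlabeled)
print(out["text"])  # → [AI-generated content] Breaking news clip
```

In a real pipeline this step would sit at the point where content leaves moderation and enters distribution, so nothing synthetic can be forwarded unmarked.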

2

New provisions on news services and anti-monopoly

The Provisions also strengthen management of the content ecosystem, requiring that algorithm recommendation service providers must not use algorithms to falsely register accounts, post fake likes, comments, or reposts, block information, manipulate rankings, or engage in similar conduct.

In addition, Article 8 of the Provisions requires that algorithm recommendation service providers periodically review, evaluate, and verify their algorithm mechanisms, models, data, and application results, and must not set up algorithm models that violate laws and regulations or go against ethics and morality, such as by inducing users to become addicted or to over-consume.

In this regard, Duan Weiwen believes that this clause may have a considerable impact on short video content distribution platforms, and that its specific operation will be difficult to pin down.

"Under what circumstances does (an algorithm recommendation service provider) induce users to become addicted or to over-consume? The threshold needs to be specified. In implementation, this rule may still be incident-driven, with heavier punishment for incidents that cause too great an impact," he said. He believes that if incidents are objectively taken as the basis for supervision, this will help create external pressure for compliance of content recommendation algorithms; at the same time, it may also produce a "chilling effect" and create uncertainty for short video platforms' existing algorithm models.

Nandu reporters noted that compared with the draft for comment issued by the Cyberspace Administration of China last August, the officially issued Provisions add specific requirements for the provision of Internet news information services.

Article 13 of the Provisions stipulates that where algorithm recommendation service providers provide Internet news information services, they shall obtain an Internet news information service license in accordance with law, must not generate or synthesize fake news information, and must not disseminate news information published by work units outside the scope provided for by the state.

Zhu Wei believes that this article will affect algorithmic recommendation of current political news and commentary. "There are many factors to consider, including qualifications and the legal compliance of information sources, which constitute a necessary threshold. If you don't even have that, you certainly can't recommend it with an algorithm."

Duan Weiwen believes this article will have a certain impact on the self-media industry. "'Units outside the scope provided for by the state' is also a rather ambiguous phrase. Content published by any unit can be called information, but can that information be called news? What counts as unofficial information? This raises the risk of disseminating information for self-media; if information comes from an unofficial source, self-media must weigh the consequences of disseminating it."

In addition, the officially promulgated Provisions add anti-monopoly provisions. Article 15 stipulates that algorithm recommendation service providers must not use algorithms to unreasonably restrict other Internet information service providers, to obstruct or disrupt the normal operation of the Internet information services they lawfully provide, or to carry out monopolistic or unfair competition conduct.

In Zhu Wei's view, in a platform economy powered by algorithms, algorithms have become a means for giant platforms to seize monopoly profits or engage in unfair competition. Most platforms that rely on algorithms for unfair competition hold dominant market positions, so the order of algorithmic competition has become an important focus of current platform anti-monopoly regulation.

3

Explicitly prohibiting the use of algorithms for "big data killing"

In terms of protecting user rights and interests, in addition to requiring that the algorithm mechanism be explained to users and that non-personalized recommendation options be provided, the Provisions make specific provisions for particular groups.

The first is the protection of minors. Article 18 stipulates that algorithm recommendation service providers must not push information to minors that might affect their physical and mental health, and must not use algorithm recommendation services to induce minors to become addicted to the Internet.

Protection of the elderly focuses on age-friendly adaptation and fraud prevention. Article 19 calls for fully considering the needs of the elderly in travel, medical care, consumption, and handling of affairs, providing intelligent services suited to the elderly in accordance with relevant national provisions, and lawfully monitoring, identifying, and disposing of information related to telecom and online fraud.

With the development of the platform economy, the situation of flexibly employed workers has drawn widespread attention in recent years, along with concern about, and even questioning of, platform algorithms. The Provisions explicitly protect workers' lawful rights and interests in remuneration, rest, and leave, and require algorithm recommendation platforms to establish and improve the relevant algorithms for order distribution, remuneration composition and payment, working hours, and rewards and punishments.

Duan Weiwen, however, raised his own concerns. He believes that protection of workers' rights and interests cannot be handed entirely to the platforms; the core problem lies in the gig economy itself, which requires a socialized insurance mechanism. Relying solely on platforms to shoulder the responsibility of protecting workers' rights and interests is beyond their capability.

Like the Personal Information Protection Law, the Provisions also explicitly prohibit "big data killing." Article 21 stipulates that algorithm recommendation service providers must not, on the basis of consumers' preferences, transaction habits, or other characteristics, use algorithms to impose unreasonable differential treatment in transaction prices or other transaction conditions, or commit other unlawful acts.
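"Big data killing" in practice means quoting different users different prices for the same good based on their profiles. A simple, purely illustrative audit (the data shapes are assumptions, not any real platform's API) can detect it by checking whether one SKU was quoted at more than one price:

```python
# Illustrative audit: flag SKUs quoted at different prices to different users.
from collections import defaultdict

def find_differential_pricing(quotes):
    """quotes: list of (user_id, sku, price). Returns SKUs with >1 distinct price."""
    by_sku = defaultdict(set)
    for _user, sku, price in quotes:
        by_sku[sku].add(price)
    return sorted(sku for sku, prices in by_sku.items() if len(prices) > 1)

quotes = [
    ("new_user",   "hotel_room", 300.0),
    ("loyal_user", "hotel_room", 360.0),  # same room, regular pays more
    ("new_user",   "taxi_ride",  25.0),
    ("loyal_user", "taxi_ride",  25.0),
]
print(find_differential_pricing(quotes))  # → ['hotel_room']
```

A real audit would of course also have to account for lawful price differences (timing, inventory, public promotions); the sketch only shows the basic comparison the prohibition targets.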

Why do the new algorithm rules place such emphasis on user rights and interests? Zhu Wei believes that, viewed from the ultimate objects of algorithm application, ordinary netizens are the direct subjects of algorithms. Netizens' algorithm-related rights mainly include the rights to personal dignity, self-determination, privacy and personal information, the right to know, and the right to inquire, and they involve many laws, including the Civil Code, the Consumer Rights and Interests Protection Law, and the Personal Information Protection Law. Many of these rights are expressed in online practice through algorithms, such as the selection and recommendation of a product or service, decisions about what content to show, and the setting of personal interest preferences.

He believes that the new regulation is not only an important guide for bringing algorithm technology under the rule of law, but will also serve, together with the Personal Information Protection Law, the E-commerce Law, and other laws, as a reference for adjudicating algorithm-related disputes.

4

Establish a hierarchical and classified security management system for algorithms

It is worth noting that the Provisions also impose special requirements on platform algorithms with public opinion attributes: providers of algorithm recommendation services with public opinion attributes or social mobilization capabilities shall undergo security assessment and filing in accordance with law.

Why the special provision for platform algorithms with public opinion attributes? Zhu Wei's analysis is that, by nature, algorithms are intellectual property and trade secrets, but in terms of their operating rules and how they manifest, algorithms are essentially network services and rules of behavior. The Internet in particular has inherent mass-communication attributes, and some algorithms touch on the public's right to know, the public interest, and national security, which is why platform algorithms with public opinion attributes must be filed. According to the Cybersecurity Law and the Provisions on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, jointly issued by the Cyberspace Administration of China and the Ministry of Public Security in 2018, the products and services whose algorithms fall within the scope of assessment may include forums, microblogs, short video, live streaming, and public accounts.

Zhu Wei believes that the Provisions' regulation of algorithm-recommended information reflects the fact that the values embedded in algorithms cannot be divorced from the rule of law in practice. "In practical terms, a great deal of vulgar and kitsch content in short videos, public accounts, and elsewhere most easily becomes a 'hit,' and the core reason it is hard to cure lies not only in content production itself but also in the push of algorithms. Whatever the algorithm, such will be the content and the users; and whatever the users and content, such an algorithm will emerge. This interlocking relationship among algorithm, audience, and content can only be thoroughly broken by starting from the algorithm itself," he said.

However, while managing the problems algorithms pose, their room for development must also be protected. Duan Weiwen believes that solving algorithm problems requires the principle of proportionality. Citing big data killing and algorithmic discrimination as examples, he noted that the substantive harm they cause is difficult to define, whether at the legal level or in real life, and that regulation of new technologies must be weighed against protecting their innovation and promoting the growth of overall wealth.

Duan Weiwen also mentioned that in algorithm auditing and accountability it should be recognized that even when developers have no discriminatory intent, unexpected biases may still arise, because data and algorithms reinforce stereotypes and prejudice and amplify existing differences.

"This provision should be read as advocating that enterprises do good. At present, implementation of the Provisions may be vague, and for a period of time the governance environment may tilt toward harshness," Duan Weiwen said. "This reflects the importance regulators attach to the problem. In the long run, as governance experience accumulates, the actual effects should be weighed comprehensively and the necessary dynamic adjustments made."

Article 23 of the Provisions also proposes that the cybersecurity and informatization departments, together with relevant departments such as telecommunications, public security, and market regulation, establish a hierarchical and classified security management system for algorithms, and implement hierarchical and classified management of algorithm recommendation service providers based on the public opinion attributes or social mobilization capability of their services, their content categories, user scale, the importance of the data handled by the algorithm recommendation technology, and the degree of intervention in user behavior.

In addition, although the Provisions are legal rules, they also mention ethics review. Article 7 stipulates that algorithm recommendation service providers shall fulfill the primary responsibility for algorithm security and establish and improve management systems and technical measures such as algorithm mechanism review and science and technology ethics review.

Duan Weiwen believes this is a path that treats both symptoms and root causes: "Many of the problems algorithms bring still sit in a gray area. From the perspective of treating both symptoms and root causes, strengthening ethical awareness of algorithms and examining them through science and technology ethics review can reduce the harm and discrimination algorithms inflict on consumers. Institutionalizing ethics awareness and ethics review will help build an ethical soft-landing mechanism for scientific and technological innovation."

"Any disruptive innovation goes through a process of ethical correction. The key to the ethical and legal governance of science and technology lies in finding creative intermediate solutions, making overall trade-offs and real-time adjustments according to the complexity and uncertainty of scientific and technological innovation," he said.

Written by: Nandu reporter Li Yaning; trainee reporter Hu Gengshuo
