
Deep synthesis, algorithms, AIGC regulation and personality rights protection (33): AIGC ethical issues

Author: YunfangW

AIGC Ethical Risk and Governance

Two days ago I reviewed AI ethics and ethical risks; yesterday, algorithm ethics and risk governance under the algorithm provisions, the deep synthesis provisions, and the regulatory rules for specific application fields. Today I turn to the ethical guidelines and risk-governance issues related to AIGC.

Article 4 of the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments) requires that "the provision of generative AI products or services shall comply with the requirements of laws and regulations, and respect social morality, public order and good customs". This builds on Article 4 of the Deep Synthesis Provisions, with two points worth noting:

  • "Respect social morality, public order and good customs" here differs somewhat from the "social morality and ethics" that the Deep Synthesis Provisions require deep synthesis services to respect; at the level of specific legal norms, public order and good customs, ethics, and business ethics need to be distinguished.
  • The article then enumerates specific requirements, including ethical requirements related to algorithms, models, and generated content:

Article 4: The provision of generative artificial intelligence products or services shall comply with the requirements of laws and regulations, respect social morality, public order and good customs, and meet the following requirements:

(1) Content generated using generative artificial intelligence shall embody the core socialist values, and must not contain content subverting state power, overthrowing the socialist system, inciting separatism, undermining national unity, advocating terrorism or extremism, advocating ethnic hatred or ethnic discrimination, violence, obscene or pornographic information, false information, or content that may disrupt economic and social order.

(2) Take measures to prevent discrimination on the basis of race, ethnicity, belief, nationality, region, gender, age, occupation, and so forth in the process of algorithm design, training data selection, model generation and optimization, and provision of services.

(3) Respect intellectual property rights and business ethics, and must not use advantages such as algorithms, data, and platforms to carry out unfair competition.

(4) Content generated using generative artificial intelligence shall be truthful and accurate, and measures should be taken to prevent the generation of false information.

(5) Respect the lawful interests of others, prevent harm to the physical and mental health of others, damage to portrait rights, reputation rights, and personal privacy, and infringement of intellectual property rights. It is prohibited to illegally acquire, disclose, and use personal information, privacy, and trade secrets.

  • Item (1) sets guiding requirements for generated content, i.e. generated content must reflect the core socialist values. Item (2) requires entities to take measures to prevent discrimination in algorithm design, training data selection, model generation and optimization, and service provision. Item (3) requires entities to respect business ethics and prohibits using advantages in algorithms, data, and platform position to engage in unfair competition. Item (4) requires measures to prevent AIGC technology from being used to generate false information. Item (5) addresses the protection of personality rights, trade secrets, and intellectual property rights.
  • The requirements in items (1) and (4) are spelled out in more detail in the Provisions on the Governance of the Online Information Content Ecosystem and in special legal norms such as those for internet news services. The "non-discrimination" requirement in item (2) falls within the scope of AI ethical rules. Item (3) concerns business ethics, which at the level of legal norms generally refers to the norms that for-profit entities should abide by, and is distinct from "ethics" in the broader sense.

In addition to Article 4, the Draft Measures set forth generally applicable anti-addiction requirements in Article 10 and anti-discrimination requirements in Article 12:

  • Article 10: Providers shall clarify and disclose the intended users, occasions, and uses of their services, and take appropriate measures to prevent users from over-relying on or becoming addicted to generated content.
  • Article 12: Providers must not generate discriminatory content based on users' race, nationality, gender, and so forth.

Whereas the provisions above mainly require AIGC product/service providers themselves to comply with the law and respect ethics in their scientific and technological activities and service provision, Articles 18 and 19 go further from the perspective of cyberspace ecosystem governance: providers must guide users to scientifically understand and rationally use new technologies and applications, and must take timely measures not only when users violate laws and regulations but also when users violate ethical and moral standards:

  • Paragraph 1 of Article 18 stipulates that "providers shall guide users to scientifically understand and rationally use content generated by generative artificial intelligence, do not use generated content to harm the image, reputation and other legitimate rights and interests of others, and do not engage in commercial hype or improper marketing";
  • Article 19 stipulates that "when providers discover that users violate laws and regulations, business ethics and social morality in the process of using generative AI products, including engaging in network hype, malicious posting and commenting, creating spam, writing malware, and carrying out improper commercial marketing, they shall suspend or terminate services";
  • Beyond suspending or terminating service for the user, the provider itself must also take measures such as content filtering and model optimization training in accordance with Article 15 to prevent re-generation. That is, "for generated content found during operation or reported by users that does not meet the requirements of these Measures, in addition to taking measures such as content filtering, it shall be prevented from being generated again within 3 months through model optimization training and other methods".
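The Article 15 workflow — filter the flagged output now, and feed it into model optimization before a three-month deadline — can be sketched as a simple tracking structure. This is only an illustration: the class, field names, and the idea of logging a deadline per item are my assumptions, not anything prescribed by the Draft Measures.

```python
from datetime import datetime, timedelta

# "within 3 months" per Article 15 of the Draft Measures
REMEDIATION_WINDOW = timedelta(days=90)

class NonCompliantContentLog:
    """Tracks flagged generations so each is (a) filtered immediately and
    (b) fed into model optimization before its remediation deadline.
    Hypothetical sketch; names and structure are illustrative assumptions."""

    def __init__(self):
        self.entries = []

    def flag(self, content, source, found_at):
        # source: "operation" (found by the provider) or "user_report"
        entry = {
            "content": content,
            "source": source,
            "deadline": found_at + REMEDIATION_WINDOW,
            "remediated": False,  # set True once model optimization covers it
        }
        self.entries.append(entry)
        return entry

    def overdue(self, now):
        # Entries whose re-generation-prevention window has already lapsed
        return [e for e in self.entries
                if not e["remediated"] and e["deadline"] < now]

log = NonCompliantContentLog()
log.flag("non-compliant sample output", "user_report", datetime(2023, 6, 1))
print(len(log.overdue(datetime(2023, 9, 15))))  # → 1 (past the 90-day window)
```

A real provider would attach this to its moderation pipeline and retraining schedule; the point of the sketch is just that the obligation has both an immediate (filtering) and a deadline-bound (model optimization) component.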

At the beginning of this series, we analyzed the problems with the Draft Measures, especially their connection to the existing regulatory rule system, and suggested that the safer course for AIGC operators is to: 1) identify the applicable laws, regulations, and departmental rules for their specific business areas and risk matters to guide business implementation; and 2) on that basis, incorporate the algorithm provisions, the deep synthesis provisions, and the relevant procedural guidelines in the Draft Measures to improve internal compliance standards. AIGC ethical risk assessment and review is no exception, and the ethical rules listed above do not go beyond the existing rules in specific business areas.

Guidelines for the Measures for the Review of Science and Technology Ethics

Read together with the draft Measures for the Review of Science and Technology Ethics (Trial), and as with the AI ethics and algorithm ethics issues analyzed in the previous two days, the ethical guidelines, ethical risk classification, and corresponding measures in the AIGC field still need to be clarified by legislation. What is clear is that AIGC-related scientific and technological activities must strictly comply with existing laws, regulations, and departmental rules, including the Data Security Law, the Cybersecurity Law, the Personal Information Protection Law, and the Law on the Protection of Minors. At the same time, the legislative requirements on preventive measures should be considered:

  • The algorithm provisions and the deep synthesis provisions already impose security assessment and filing requirements on algorithm models, applications, and systems with public opinion or social mobilization capabilities and the ability to guide social awareness, as well as on highly autonomous automated decision-making systems used in scenarios involving safety or personal health risks.
  • If Article 6 of the Draft is not amended, a security assessment and algorithm filing must be completed before generative AI products are used to provide services to the public:

1) Report a security assessment to the state internet information department in accordance with the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities (where internet news information services are involved, this is to be implemented in accordance with the Provisions on the Management of Security Assessment of New Technologies and Applications of Internet News Information Services);

2) Perform the procedures for algorithm filing, modification, and cancellation in accordance with the "Provisions on the Administration of Internet Information Service Algorithm Recommendations".

  • The assessments above mainly concern security. If the annexed list is not adjusted when the Measures for the Review of Science and Technology Ethics (Trial) take effect, then R&D on algorithm models, applications, and systems with public opinion or social mobilization capabilities and the ability to guide social awareness, as well as R&D on highly autonomous automated decision-making systems for scenarios involving safety or personal health risks, will count as ethically high-risk matters subject to double review: the R&D entity's internal assessment plus re-review by external experts.

At present, regulators have adjusted and optimized the science and technology ethics review measures for life sciences and medical research involving humans (see the Measures for the Ethical Review of Life Sciences and Medical Research Involving Humans, issued on February 18, 2023), while the ethics review measures for the artificial intelligence field have yet to be clarified. Although the specific approach may vary from place to place, regulators are generally expected to take an inclusive and prudent attitude toward the application of new technologies.

For example, the Implementation Plan for Accelerating the Construction of a Source of Artificial Intelligence Innovation with Global Influence (2023-2025), issued by Beijing yesterday, calls for continuously strengthening science and technology ethics governance and proposes concrete measures such as building a public service platform for science and technology ethics governance, carrying out ethics reviews and related training, and exploring an inclusive and prudent regulatory environment on the basis of strengthening network security, data security, and personal data protection.

Judging from the existing rules on science and technology ethics governance, regulators hope to actively guide R&D entities and service providers to:

  • Strengthen self-discipline on AI R&D activities and avoid the use of immature technologies that may have serious negative consequences;
  • Ensure the safety and controllability of algorithms in the AI R&D process, continuously improve transparency, explainability and reliability, and gradually achieve auditability, supervision, traceability, predictability and trustworthiness;
  • Improve data quality in the AI R&D process, enhancing the completeness, timeliness, consistency, standardization, and accuracy of data;
  • Fully consider differentiated demands, avoid possible data collection and algorithm biases, and strive to achieve inclusiveness, fairness and non-discrimination of artificial intelligence systems.
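The data-quality dimensions in the list above can be made concrete as simple metrics over a training corpus. The sketch below illustrates just one of them (completeness); the record schema, field names, and sample data are my own assumptions for illustration.

```python
# Completeness: the share of records carrying a non-empty value for every
# required field. One of several data-quality dimensions (completeness,
# timeliness, consistency, standardization, accuracy) a provider might track.
def completeness(records, required_fields):
    if not records:
        return 0.0
    ok = sum(1 for r in records
             if all(r.get(f) not in (None, "") for f in required_fields))
    return ok / len(records)

# Hypothetical training samples; field names are illustrative assumptions.
training_samples = [
    {"text": "example A", "label": "compliant"},
    {"text": "example B", "label": ""},            # incomplete: empty label
    {"text": "example C", "label": "non-compliant"},
]
print(round(completeness(training_samples, ["text", "label"]), 2))  # → 0.67
```

In practice each dimension would get its own metric and threshold, with records failing the checks excluded or repaired before training.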

For enterprises, the requirements of laws, regulations, and rules are relatively clear; the more complex question is how to translate regulatory guidance, especially ethical standards, into rules for enterprise technology R&D and business operations:

  • From the early controversies over recommendation algorithms on news-feed and short-video platforms and big-data-enabled price discrimination against existing customers, to the recent debates triggered by AI "one-click undressing" and AI face swapping, it is not hard to see why R&D entities and service providers need to conduct science and technology ethics reviews before developing technologies such as deep synthesis and artificial intelligence and before launching applications and services.
  • The difficulty is that the unclear definition and grading of ethics and ethical risks, and the unclear regulatory rules on corresponding measures, are themselves a risk. The safer course is to avoid artificial intelligence technologies and applications that "obviously" violate laws, regulations, ethics, or standards; avoid technologies and applications "obviously" foreseeable to lead to infringement of personality rights or intellectual property rights; and avoid technologies and applications "obviously" likely to cause the release and dissemination of online information content to spin out of control.
  • Although such requirements are scattered across existing laws and regulations, such as the Data Security Law, the Cybersecurity Law, the Personal Information Protection Law, and relevant laws and regulations in special fields, the ethical and moral requirements in legislation themselves raise problems of legal application and interpretation; risk can only be reduced through procedural safeguards combined with a reasonable-person standard of judgment.

Technical standards

In addition, because technology development and application outpace legislative and regulatory rules, industry self-regulatory norms and technical standards are particularly important for ethical risk management. Technical standards are an indispensable part of translating legal norms into technical language, and enterprises can consider establishing an ethical risk classification and control mechanism suited to the enterprise, building on existing risk management systems for data, personal information, and privacy and drawing on the guidance of existing technical standards.
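An internal ethical-risk classification mechanism of the kind suggested above often takes the shape of a weighted factor matrix mapped to review tiers, mirroring how data and privacy risk matrices are commonly built. Everything in this sketch — the factors, weights, and tier thresholds — is an illustrative assumption, not drawn from any regulation or standard.

```python
# Hypothetical risk factors and weights; the two weight-3 factors echo the
# high-risk matters named in the ethics review draft (public opinion /
# social mobilization capability, highly autonomous automated decisions).
RISK_FACTORS = {
    "public_opinion_or_social_mobilization": 3,
    "highly_autonomous_automated_decision": 3,   # safety / personal health scenarios
    "processes_personal_information": 2,
    "minor_facing_service": 2,
    "publicly_generates_content": 1,
}

def classify(project_factors):
    """Map a project's factor set to a review tier. Thresholds are assumptions."""
    score = sum(RISK_FACTORS.get(f, 0) for f in project_factors)
    if score >= 3:
        return "high: internal review plus external expert re-review"
    if score >= 1:
        return "medium: internal ethics review"
    return "low: record-keeping only"

print(classify({"publicly_generates_content", "processes_personal_information"}))
# → "high: internal review plus external expert re-review"
```

The value of writing the mechanism down this way is that the ethics committee debates the factor list and thresholds once, and individual projects are then triaged consistently.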

If you are interested, please refer to the following standards and reports:

  • ISO/IEC AWI TS 6254 Information technology — Artificial intelligence — Objectives and approaches for explainability of ML models and AI systems
This standard describes what explainability means for machine learning models and AI systems and the explainability objectives of different stakeholders, lists approaches for achieving explainability, and identifies the explainability-related factors to consider across the AI system life cycle.
  • ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk management
On managing AI ethics risks, the standard addresses: the ethical issues that different subjects should consider when designing and applying AI products; the need for the collection, storage, and use of personal and private information to fully account for the ethical principles of respect for human values and dignity; and, at the technical level, risks that may lead to ethical issues in the product, for example when incorrect or biased data is used to train the model.
  • The Artificial Intelligence Ethical Risk Analysis Report released by the Artificial Intelligence and Social Ethics Standardization Research Group under the national AI standardization general group not only surveys AI ethical guidelines but also proposes corresponding evaluation methods; see also its Research on International Standards for Ethics and Social Concerns in Artificial Intelligence.
  • On August 5, 2020, the Ministry of Industry and Information Technology, the Standardization Administration, the Office of the Central Cyberspace Affairs Commission, the National Development and Reform Commission, and the Ministry of Science and Technology jointly issued the Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System, which covers the planning and goals for building the "safety/ethics" portion of the AI standard system.
  • The White Paper on Trustworthy AI Standardization issued by the Artificial Intelligence Subcommittee of the National Information Technology Standardization Technical Committee introduces the principles of AI ethical compliance and proposes evaluation indicators.
  • T/CESA 1193—2022 Information Technology — Artificial Intelligence — Risk Management Capability Assessment stipulates the risk management capability assessment system and process for artificial intelligence products, and sets out a number of norms on AI ethics, including requiring organizations to establish an ethics committee to manage ethical risks.
  • In January 2021, the National Information Security Standardization Technical Committee issued the Practice Guidelines for Cybersecurity Standards — Guidelines for the Prevention of Ethical Security Risks in Artificial Intelligence.
  • ……