
Generative artificial intelligence is coming, how to protect minors

Author: Fly Close to the Ground

  In response to the rapid development of powerful generative artificial intelligence, UNESCO convened the first global meeting of education ministers on the topic on 25 May this year. Comprehensively and objectively analyzing the unprecedented development opportunities and the risks and challenges of generative AI is an urgent task of our times for ensuring the healthy growth of minors, and a key part of implementing their protection and development.


  Generative artificial intelligence (AIGC) featured prominently among the top ten scientific breakthroughs of 2022 published by Science. Generative AI has demonstrated strong logical reasoning and re-learning abilities; it can better understand human needs, break through long-standing bottlenecks in natural language processing, and reshape society's knowledge-production system, talent-training system, business models, and division of labor, bringing enormous development opportunities to every field of society while also posing severe risks. Minors already interact with AI technology in many ways: it is often embedded in toys, apps, and video games, subtly affecting their healthy growth. Because minors' minds and cognitive abilities are still developing, they are highly susceptible to the influence of an intelligent environment. Comprehensively and objectively analyzing the unprecedented opportunities and the risks and challenges of generative AI is therefore an urgent task of our times for ensuring the healthy growth of minors, and a key part of implementing their protection and development.

  Opportunities and challenges presented by generative AI

  In terms of development opportunities, artificial intelligence can serve as a tool for guiding minors' learning, and generative AI can make education more targeted and effective. It can bridge individual and regional differences in education, promote inclusive and equitable access to educational resources, and to some extent narrow the digital divide between individuals and between regions. Generative AI will also strengthen minors' capacity for innovation and creation. For example, large models further lower the barrier to entry for low-code and no-code development tools and may give rise to an entirely new class of intelligent development technologies.

  Generative AI will also pose cognitive, intellectual, and behavioral challenges to minors. The first is the proliferation of false information; continuously improving factual accuracy urgently requires technological breakthroughs. The second is data leakage: leaks have already exposed the conversation data and related information of some users. In particular, generative AI applications currently lack systems to verify users' ages, which may expose minors to content "completely disproportionate to their age and awareness" and harm their physical and mental health. The third is embedded algorithmic discrimination. Some responses produced by generative AI are sexist or racially discriminatory, which may mislead users into treating discriminatory responses as "correct answers", with negative effects on social cognition and ethics. On the question of algorithmic inclusiveness in particular, the different roots and evolutionary paths of Chinese and Western cultures mean that the interpretation, evaluation, and dissemination of traditional culture and observations of reality may be ignored or deliberately amplified by generative AI technology. Fourth is intellectual property infringement. Both image models and large text models carry many copyright-related risks: their training data may infringe the copyrights of others, such as news organizations and image-library vendors. There is as yet no satisfactory way to compensate the creators of works used in AI-generated songs, articles, or other works, and if original creators' copyrights cannot be protected, the sustainable and healthy development of the AI content ecosystem will suffer. Fifth is the digital divide. Large models perform quite differently in English and in Chinese, with clearly better writing, expression, and comprehension in English contexts. The differences between the Chinese and English language families, combined with the Matthew effect in data, may widen this gap further. Sixth are network security risks, such as advice or encouragement of self-harm, graphic material such as pornography or violence, and harassing, disparaging, or hateful content.

  CI-STEP system for generative artificial intelligence

  In September 2020, UNICEF released a draft of its Policy Guidance on AI for Children, which sets out three principles for child-friendly AI: protection, i.e. "do no harm"; empowerment, i.e. "for good"; and participation, i.e. "inclusion". In November 2021, UNICEF issued version 2.0 of the guidance, which sets out three foundations for AI policies and systems that safeguard children's rights: AI policies and systems should be designed to protect children; children's needs and rights should be met equitably; and children should be empowered to contribute to the development and use of AI. How AI technology can create an enabling environment for the current and future development of minors has become a topic of global discussion. China, the United States, the European Union, and the United Kingdom have since issued laws and regulations to further standardize and improve how AI technology is managed with respect to minors.

  Drawing on Piaget's theory of the stages of children's intellectual development and Kohlberg's theory of the stages of moral development, and on China's relevant laws and policies (the Personal Information Protection Law, the Cybersecurity Law, and the Data Security Law of the People's Republic of China, among others) as well as relevant policy documents from the United States, the European Union, the United Kingdom, and other countries and regions, the Research Center for Juvenile Internet Literacy at the School of Journalism and Communication of Beijing Normal University released China's first assessment index system for the protection and development of minors in generative AI (CI-STEP), providing an important reference and evaluation indicators for this work (see figure below).

[Figure: the CI-STEP index system for the protection and development of minors in generative AI]

  The indicator model comprises six dimensions: comprehensive management, information prompts, science popularization and education, technical protection, emergency complaint and reporting mechanisms, and privacy and personal information protection. Each dimension contains two to three specific indicators, for a total of 15 evaluation indicators, providing a scientific and comprehensive evaluation framework and practical guidance for safeguarding the rights and interests of minors.

  First, comprehensive management, which includes time management, permission management, and consumption management. Time management means determining reasonable and appropriate online time based on minors' social interactions, usage scenarios, and purposes, and implementing anti-addiction requirements. Permission management means setting up an age-verification mechanism so that, under the security rules or policies set by the system, minors can access only the resources they are authorized to use. Consumption management means that generative AI products or services should prohibit unfair and deceptive business practices, take the necessary safeguards to limit bias and deception, and avoid serious risks to business, consumer, and public safety.
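The time-management and permission-management checks described above can be sketched in a few lines. This is a minimal illustration only; the function name, the 40-minute daily cap, and the age-18 threshold are assumptions for the example, not values taken from any cited regulation.

```python
from datetime import timedelta

# Illustrative sketch of "comprehensive management": an age-based
# permission gate combined with a daily time limit for minors.
# The threshold values below are placeholder assumptions.

MINOR_DAILY_LIMIT = timedelta(minutes=40)  # assumed anti-addiction cap

def may_use_service(age: int, used_today: timedelta,
                    resource_allowed_for_minors: bool) -> bool:
    """Return True if the user may access the resource right now."""
    if age >= 18:
        return True                       # adults: no minor restrictions
    if not resource_allowed_for_minors:
        return False                      # permission management
    return used_today < MINOR_DAILY_LIMIT  # time management

print(may_use_service(15, timedelta(minutes=30), True))
print(may_use_service(15, timedelta(minutes=50), True))
```

A real system would also need verified age data and persistent usage accounting; the sketch only shows how the two policy checks compose.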

  Second, information prompts, which include private information prompts, age-appropriate prompts, and risk prompts. Private information prompts mean that when minors publish private information online, the service shall promptly alert them and employ the necessary protective measures. Age-appropriate prompts mean that generative AI product or service providers shall, in accordance with relevant national provisions and standards, classify AI products, display age-appropriate notices, employ technical measures, and not allow minors to access inappropriate products or services. Some generative AI providers require users to be at least 18 years old, or at least 13 with parental consent, to use their AI tools, but the verification options and enforcement mechanisms still need improvement. Risk prompts mean that generative AI products or services should comply with laws and regulations, respect social morality and public order, and warn about potentially illegal content such as fraud, violent terrorism, bullying, violence, pornography, prejudice, discrimination, and inducement.

  Third, science popularization and education. Generative AI product or service providers, as technology leaders, shall support minors' related theme education, social practice, career experience, and internships and investigations, and, in line with the characteristics of their industry, organize science popularization and innovative practice activities for minors.

  Fourth, technical protection, which includes real-identity registration, adverse information filtering, and graded network security protection. In accordance with the Cybersecurity Law of the People's Republic of China and other provisions, users are required to provide real identity information. Where generative AI product or service providers discover that users have published or disseminated information harmful to minors' physical and mental health, they shall immediately stop its dissemination and take measures such as deletion, blocking, or disconnecting links. Providers shall perform security protection obligations under the graded network security protection system, ensuring that networks are protected from interference, destruction, and unauthorized access, and preventing network data from being leaked, stolen, or tampered with.
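The "adverse information filtering" obligation above can be illustrated with a deliberately simplified sketch: scan generated text against a blocklist and withhold flagged output. Production systems use trained classifiers and human review rather than keyword matching; the blocklist terms and function name here are stand-in assumptions for illustration only.

```python
# Illustrative sketch of adverse information filtering for generated
# output. The blocklist is a placeholder; real deployments rely on
# trained content classifiers, not keyword lists.

BLOCKLIST = {"violence", "gambling"}  # placeholder terms

def filter_output(text: str) -> str:
    """Withhold generated text that matches the blocklist."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BLOCKLIST:
        return "[content withheld: unsuitable for minors]"
    return text

print(filter_output("A story about friendship"))
print(filter_output("A scene of graphic violence"))
```

Even this toy version shows the key design point: filtering happens after generation and before delivery, so the provider can stop dissemination without retraining the model.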

  Fifth, emergency complaint and reporting mechanisms. Providers of generative AI products or services shall establish mechanisms for receiving and handling user complaints, publish complaint and reporting channels, promptly handle individuals' requests to correct, delete, or block their personal information, and formulate emergency response plans for network security incidents, promptly addressing security risks such as system vulnerabilities, computer viruses, network attacks, network intrusions, data poisoning, and prompt injection attacks. In the event of an incident endangering network security, the emergency plan shall be activated immediately and the corresponding remedial measures taken.

  Sixth, a privacy and personal information protection system. Providers of generative AI products or services should clarify their rules for collecting and using personal information, for protecting personal information and individual rights, for handling minors' personal information, for transferring personal information across borders, and for updating privacy policies. Providers must not use collected data to sell services, advertise, or build user profiles; should remove personal information from training datasets where feasible; should fine-tune models to decline requests for private information; should prevent harm to portrait rights, reputation rights, and personal privacy; and should prohibit the illegal acquisition, disclosure, and exploitation of personal information. (Produced by the Social Science Newspaper social media studio "Thought Workshop"; the full text can be found in Social Science Daily and on its official website)
