
Generative artificial intelligence is coming, how to protect minors? | Social Science Daily

Author: Social Science Daily

Artificial intelligence and the development of the protection of minors

In response to the rapid development of powerful generative artificial intelligence, UNESCO convened the first global meeting of education ministers on this topic on 25 May this year. A comprehensive and objective analysis of the unprecedented development opportunities and the risks and challenges of generative artificial intelligence is an urgent task of our times for ensuring the healthy growth of minors, and a key part of implementing their protection and development.


Original text: "Generative artificial intelligence is coming, how to protect minors"

Authors | Fang Zengquan (Researcher), Yuanying (Lecturer), Qi Xuejing (Assistant Researcher), School of Journalism and Communication, Beijing Normal University

Image | Internet

Generative artificial intelligence (AIGC) featured prominently among the top ten scientific breakthroughs of 2022 published by Science. Generative AI has shown strong logical reasoning and relearning abilities: it can better understand human needs, break through bottlenecks in natural language processing, and reshape society's knowledge production system, talent training system, business models, and division of labor, bringing enormous development opportunities to every field of society while also posing severe risks. Minors already interact with artificial intelligence technology in many ways; it is often embedded in toys, apps, and video games, imperceptibly shaping their development. Because minors' minds and cognitive abilities are still maturing, they are highly susceptible to the influence of an intelligent environment. A comprehensive and objective analysis of the unprecedented development opportunities and the risks and challenges of generative AI is therefore an urgent task of our times for ensuring minors' healthy growth, and a key part of implementing their protection and development.

Opportunities and challenges presented by generative AI


In terms of development opportunities, artificial intelligence is a tool for guiding minors' learning, and education built on generative AI can be more targeted and effective. Generative AI can bridge individual and regional differences in education, promote inclusive and equitable access to educational resources, and to some extent narrow the digital divide between individuals and regions. It will also strengthen minors' capacity for innovation and creation: large models, for example, further lower the barrier to entry for low-code and no-code development tools, and may give rise to an entirely new class of intelligent development technologies.

Generative AI also brings challenges to minors' cognition, thinking, and behavior. The first is the proliferation of false information; continuously improving factual accuracy still demands technological breakthroughs. The second is data leakage: breaches have already exposed some users' conversation data and related information. In particular, generative AI applications currently lack systems for verifying users' ages, which may expose minors to content "completely disproportionate to their age and awareness" and harm their physical and mental health. The third is embedded algorithmic discrimination. Some responses generated by these systems are sexist or racially discriminatory, and users may mistake discriminatory responses for "correct answers," with negative effects on social cognition and ethics. Algorithmic inclusiveness is especially delicate: because Chinese and Western cultures have different roots and evolutionary paths, the interpretation, evaluation, and dissemination of traditional culture and observations of reality may be overlooked or deliberately amplified by generative AI technology. Fourth is intellectual property infringement. Both image and large text models carry many copyright-related risks, and training data may infringe the copyright of others, such as news organizations and image-library vendors. There is still no satisfactory way to compensate the creators of works used in AI-generated songs, articles, and other outputs; if original creators' copyright cannot be safeguarded, the AI content ecosystem will not develop sustainably and healthily. Fifth is the digital divide. Large models perform very differently in English and Chinese, and are markedly better at writing, expression, and understanding in English contexts; the distance between the Chinese and English language families, combined with the Matthew effect in data, may widen this gap further. Sixth is network security risk, such as advice or encouragement of self-harm; graphic material such as pornography or violence; and harassing, disparaging, or hateful content.


CI-STEP system for generative artificial intelligence


In September 2020, UNICEF released a draft of its Policy Guidance on AI for Children, which sets out three principles for child-friendly AI: protection, i.e. "do no harm"; empowerment, i.e. "for good"; and participation, i.e. "inclusion." In November 2021, UNICEF issued version 2.0 of the guidance on AI policies and systems that safeguard children's rights, which rests on three foundations: AI policies and systems should be designed to protect children; children's needs and rights should be met equitably; and children should be empowered to contribute to the development and use of AI. How AI technology can create an enabling environment for the current and future development of minors has become a hot topic of global discussion. China, the United States, the European Union, and the United Kingdom have since issued laws and regulations to further standardize and improve how AI technology is managed with respect to minors.

Drawing on Piaget's theory of the stages of children's intellectual development and Kohlberg's stages of moral development, and on relevant laws and policies of mainland China (the Personal Information Protection Law, the Cybersecurity Law, and the Data Security Law of the People's Republic of China, among others) as well as policy documents from the United States, the European Union, the United Kingdom, and other countries and regions, the Research Center for Juvenile Internet Literacy at the School of Journalism and Communication of Beijing Normal University released China's first assessment index system for the protection and development of minors in generative AI (CI-STEP), providing important reference points and evaluation indicators for the protection of minors in generative AI (see figure below).


The CI-STEP indicator system for assessing the protection and development of minors in generative artificial intelligence

The indicator model comprises six dimensions: comprehensive management, information prompts, science popularization and education, technical protection, emergency complaint and reporting mechanisms, and a privacy and personal information protection system. Each dimension contains two to three specific indicators, fifteen in all, providing a scientific and comprehensive evaluation framework and practical guidance for safeguarding minors' rights and interests.

First, comprehensive management, which covers time management, permission management, and consumption management. Time management determines reasonable and appropriate online time based on minors' social interactions, usage scenarios, and purposes, and implements anti-addiction requirements. Permission management establishes an age-verification mechanism so that, under the security rules or policies the system sets, minors can access only the resources they are authorized to use. Consumption management requires that generative AI products and services prohibit unfair and deceptive business practices, adopt the safeguards needed to limit bias and deception, and avoid serious risks to commerce, consumers, and public safety.
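The interplay of permission management (age verification) and time management (an anti-addiction cap) described above can be sketched in a few lines. This is a minimal illustration, not any provider's real implementation; the age thresholds and the 60-minute daily limit are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Assumed anti-addiction cap for verified minors (illustrative value).
MINOR_DAILY_LIMIT_MINUTES = 60

@dataclass
class UserSession:
    age: int                      # age established by a verification step
    minutes_used_today: int = 0   # running total for time management

    def can_access(self) -> bool:
        """Permission check: verified age and daily time cap gate access."""
        if self.age < 13:
            return False  # under-13 users are refused entirely
        if self.age < 18 and self.minutes_used_today >= MINOR_DAILY_LIMIT_MINUTES:
            return False  # minors who hit the daily cap are locked out
        return True

    def record_usage(self, minutes: int) -> None:
        self.minutes_used_today += minutes
```

In practice the age would come from an external verification service and usage would be tracked server-side, but the gating logic follows the same shape.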

Second, information prompts, including private-information prompts, age-appropriateness prompts, and risk warnings. Private-information prompts mean that when minors publish private information online, the service shall promptly alert them and adopt the necessary protective measures. Age-appropriateness prompts mean that providers of generative AI products or services shall, in accordance with relevant national provisions and standards, classify their AI products, display age-appropriate notices, adopt technical measures, and must not allow minors to encounter unsuitable products or services. Some generative AI providers require users to be at least 18 years old, or at least 13 with parental consent, but the verification options and enforcement mechanisms still need improvement. Risk warnings mean that generative AI products or services shall comply with laws and regulations, respect social morality and public order, and flag potentially illegal content such as fraud, violent terrorism, bullying, violence, pornography, prejudice, discrimination, and inducement.

Third, science popularization and education. Providers of generative AI products or services, occupying a leading position in the technology, shall support theme-based education, social practice, career experience, and internship investigations for minors, and shall organize science popularization and innovative practice activities for minors suited to the characteristics of their industry.

Fourth, technical protection, including real-identity registration, filtering of harmful information, and graded network security protection. In accordance with the Cybersecurity Law of the People's Republic of China and other provisions, users are required to provide real identity information. Where providers of generative AI products or services discover that users have published or disseminated information endangering minors' physical and mental health, they shall immediately stop its dissemination and adopt measures such as deletion, blocking, or disconnecting links. Providers shall also perform security protection obligations under the graded network security protection system, ensuring that networks are protected from interference, destruction, or unauthorized access, and preventing network data from being leaked, stolen, or tampered with.
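The "stop dissemination, then delete or block" obligation can be sketched as a publish-time gate over a content store. Everything here is illustrative: the deny-list is a stand-in for a real moderation check, and the store is an in-memory dict rather than a platform database.

```python
# Toy deny-list standing in for a real harmful-content classifier.
BLOCKED_TERMS = {"self-harm instructions", "graphic violence"}

def is_harmful(content: str) -> bool:
    return any(term in content.lower() for term in BLOCKED_TERMS)

def publish(store: dict[int, str], post_id: int, content: str) -> bool:
    """Refuse harmful content at publish time; delete any stored copy."""
    if is_harmful(content):
        store.pop(post_id, None)  # delete / disconnect an existing copy
        return False              # stop dissemination
    store[post_id] = content
    return True
```

A real platform would also log the event and report it through the complaint and reporting channels described below, but the core gate works the same way.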

Fifth, emergency complaint and reporting mechanisms. Providers of generative AI products or services shall establish mechanisms for receiving and handling user complaints, disclose complaint and reporting channels, promptly handle individuals' requests to correct, delete, or block their personal information, and formulate emergency response plans for network security incidents, promptly addressing security risks such as system vulnerabilities, computer viruses, network attacks and intrusions, data poisoning, and prompt injection attacks. When an incident endangering network security occurs, the emergency plan shall be activated immediately and corresponding remedial measures taken.

Sixth, a privacy and personal information protection system. Providers of generative AI products or services should make clear their rules for collecting and using personal information, how personal information and individual rights are protected, how minors' personal information is handled, how personal information is transferred across borders, and how the privacy policy is updated. Providers must not use personal data to sell services, advertise, or build user profiles; should remove personal information from training datasets where feasible; should fine-tune models to refuse requests for private information; must prevent harm to portrait rights, reputation rights, and personal privacy; and must prohibit the illegal acquisition, disclosure, and exploitation of personal information and privacy.
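Removing personal information from training data, as the privacy dimension requires, is often done with a redaction pass over the text. The sketch below covers only two illustrative patterns (email addresses and 11-digit mobile numbers); real pipelines use dedicated PII-detection tooling with far broader coverage.

```python
import re

# Illustrative PII patterns only; a production scrubber would detect
# names, addresses, ID numbers, and more via specialized tooling.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\b1\d{10}\b"),              # 11-digit mobile numbers
]

def redact_pii(text: str) -> str:
    """Replace matched personal identifiers with a placeholder token."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running such a pass before text enters a training corpus reduces, though does not eliminate, the chance that a model later reproduces someone's contact details.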


In the future, we must attach equal importance to protection and development


On the important topic of generative artificial intelligence and the protection of minors, protection and development must be given equal weight. Guided by the concepts of "child-centered" design, "protection of children's rights," "responsibility," and "multi-party governance," generative AI should be promoted as a new driving force for minors' development, helping them grow up comprehensively and healthily in the era of artificial intelligence. Under the principle of technology for good, the development of generative AI technology should adhere to the best interests of minors, balance minors' cybersecurity with their digital development, ensure that applications of AI conform to scientific and technological ethics, and encourage internet enterprises to take an active part in industry co-governance.

Future applications of generative AI need to address four major issues. First, implement technical standards: the protection and development of minors in generative AI will require supervision of enterprises' R&D activities, oversight of and intervention in generative AI, and approval and testing requirements for the development and release of AI models. Second, clarify assessment obligations: R&D institutions and network platform operators must carry out security risk assessments of generative AI products before bringing them to market, ensuring the risks are controllable. Third, focus on content risk: regulators and network platform operators should further improve regulatory technology and encourage multi-party participation in the development and promotion of generative AI products, so that AI-generated content is properly governed. Fourth, build an ecosystem for cultivating AI literacy: in line with the stages of minors' cognitive, emotional, and behavioral development, innovate the content and teaching methods of courses such as information technology in compulsory education; make basic education in primary and secondary schools the main arena; form public AI service platforms led by the government and jointly built with academia and enterprises; promote AI literacy education and innovative practice activities; share high-quality educational resources in a balanced way; avoid deepening the digital divide and inequality; and create a more inclusive, fair, and open educational environment.

This article was originally published on page 2 of issue 1857 of the Social Science Daily. Reproduction without permission is prohibited. The views expressed are the author's alone and do not represent the position of this newspaper.

Responsible editor of this issue: Wang Liyao

Further reading

Talent powerhouse | The Era of Artificial Intelligence: "Learner-Centered Precision Education"

Establish a trustworthy AI development mechanism | Social Science Daily
