
肖飒团队 | A certain short-video platform cracks down on AI creation: where next for AI virtual human anchors?

Author: Lawyer 肖飒 (Xiao Sa)

On March 27, the platform's security center issued the "Announcement on the Governance of Improper Use of AI to Generate Virtual Characters", aiming to further regulate short-video and live-broadcast content created with AI technology on the platform, especially the use of AI-generated virtual characters as anchors. Because the announcement's wording is far harsher than that of earlier rules, its release left many in the industry worried about the prospects of AI virtual human anchors on self-media platforms. So, is this representative platform rule a blessing or a curse for the development of AIGC, and is the drawing of a new "forbidden zone" a good thing or a bad thing for the future application of AI virtual human anchors?

01

AI-generated virtual anchors: business opportunities and risks coexist

Compared with the early "avatar-skin" virtual anchors performed by a hidden human operator (the "person in the middle") through real-time motion capture, today's AI virtual anchors, built on AI animation generation and speech synthesis, seem bold enough to challenge their predecessors in growth momentum. Although this technology still has considerable room to improve in long-video generation and real-time interaction, in the short-video and live-streaming industry, where broadcast scenarios and content demands are relatively fixed, creators increasingly favor this lower-cost production model: using AI to create an exclusive persona tailored to a specific audience, then automatically generating voice and video content for a set period according to preset prompt words. Compared with traditional live streaming, with its rising labor costs, relatively fixed working hours, and the difficulty of shaping an anchor's image, AI virtual anchors show considerable innate advantages. Especially in the post-pandemic era, market demand for e-commerce live streaming and self-media traffic acquisition has kept rising, creating huge business opportunities. Many self-media platforms themselves provide, and can even be said to encourage, users' use of AIGC to increase the volume of created content. Take today's protagonist: although the platform does not provide a direct AI virtual character generation service, it has already launched a creation module with comprehensive "AI image" and "AI-generated video" functions. Clearly, even though the platform has just tapped the brakes on the question of standardization, it has no intention of "banning" AI creation.
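The low-cost production loop described above, with fixed scenarios, preset prompt words, and automatically generated segments, can be sketched roughly as follows. This is a hypothetical illustration, not any platform's actual pipeline; the template strings, function names, and segment logic are invented for the example, and a real system would feed each text segment to speech-synthesis and animation models.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One block of an automated live-stream script."""
    topic: str
    text: str

# Hypothetical prompt templates keyed by broadcast scenario.
TEMPLATES = {
    "product_intro": "Welcome back! Today we are looking at {product}.",
    "price_pitch": "{product} is available right now at a special price.",
}

def build_script(product: str, scenario_order: list) -> list:
    """Expand preset templates into an ordered script for one session.

    In a real AI-anchor pipeline each text would then be passed to
    speech synthesis and avatar animation; here we stop at the script.
    """
    return [
        Segment(topic=s, text=TEMPLATES[s].format(product=product))
        for s in scenario_order
    ]

script = build_script("smart kettle", ["product_intro", "price_pitch"])
for seg in script:
    print(f"[{seg.topic}] {seg.text}")
```

Because the scenario order and templates are fixed in advance, one operator can run many such "anchors" in parallel, which is exactly what makes the model cheap, and what makes compliance oversight hard.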
However, it is undeniable that AIGC technology has greatly lowered the entry threshold for content creation: work that once belonged to the realm of "art" has shifted toward the mechanical operations of "technology", and a large number of creators of uneven technical skill and compliance awareness have rushed in to seize the opportunity, creating considerable legal and compliance risks. First and foremost are the legal and social problems arising from the use of AI technology for fraud. According to relevant security reports, AI-based deepfake fraud cases grew by as much as 3,000% in 2023, and behind that alarming trend lie the enormous legal risks of improper AI use. At the same time, because AI-generated virtual characters rely on multiple AIGC technologies spanning text, image, and voice, it is hard for them to fully escape the intellectual property infringement problems that plague the current AIGC field in areas such as character design, background music, and live-broadcast copywriting. In addition, a virtual character's image may infringe others' portrait rights, privacy rights, and personal information rights and interests, and using AIGC for misleading product promotion may harm consumers' rights to know and to choose. In short, the application of AI virtual anchors and related technologies in short video and live streaming currently sits at the intersection of huge opportunity and huge risk.

02

Starting from this platform's rules: key compliance points for AI virtual anchors

Against this backdrop, it is not hard to understand why the platform, as one of the industry's pioneers, takes a cautious attitude toward compliance management of AI creation. In fact, this is not the first time it has "poured cold water" on AI virtual anchors. As early as May 9, 2023, it issued the "Platform Specifications and Industry Initiative on AI-Generated Content", which for the first time clarified behavioral norms for AI-generated videos, images, and derivative virtual human live broadcasts on the platform and, pursuant to the Provisions on the Administration of Deep Synthesis of Internet Information Services, formulated an "AI-Generated Content Identification Watermark and Metadata Specification" under its platform rules, further refining requirements such as conspicuous identification and avoidance of confusion. The present "ban" mainly targets conduct that has been "repeatedly prohibited yet keeps recurring" and is spreading: using AI-generated virtual characters to publish content that violates scientific common sense, fabricates facts, or spreads rumors. Specifically, three typical situations are named. First, creating a fake "foreigner" image and using a false overseas persona to exploit patriotic sentiment and win attention. Second, generating false images of handsome men and beautiful women to fake interactions, post emotionally manipulative content expressing false feelings, direct users to private dating chat tools, and even commit fraud.
Third, generating false "elite" personas and publishing low-quality content such as chicken-soup platitudes, "financial intelligence" tips, pseudo-traditional-culture studies, "thick black" manipulation theory, and pseudo-success studies, attracting followers with low-quality material, and even diverting traffic off the platform to profit from selling courses and paid groups. In all of the above cases, an AI-generated virtual character is used to carry out improper or illegal activities such as low-quality follower farming, traffic fraud, and even fraud-oriented traffic diversion aimed at specific social groups with emotional needs. In such schemes the AI avatar itself is not used to deceive directly; rather, it serves as a traffic funnel catering to a specific group's preferences and as the "preparatory stage" for subsequent fraud and other illegal activity, which makes it quite covert. What the platform prohibits and restricts, then, is not users' creation of content with AI virtual anchors; rather, it hopes that through its own compliance program it can work with users to control, as far as possible, the risks of violations in this field. Reading this move as a restriction on users' AI creation does the platform an injustice: its fundamental goal remains to regulate potentially risky conduct and, through compliance, to promote the healthy development of the field.

03

Users or platforms: who is responsible for AI virtual anchor compliance?

The platform's approach also reflects its responsibilities as an Internet service platform and as a provider and disseminator of AI-generated content. At present, most of mainland China's rules for managing generative AI services tend to place obligations on the platform side. For example, in the Basic Security Requirements for Generative AI Services issued by the National Cybersecurity Standardization Technical Committee, the inspection and assessment obligations for generative AI are aimed mainly at service providers. Likewise, Article 14 of the Interim Measures for the Administration of Generative AI Services limits the "safe harbor" principle for service providers, requiring them to rectify immediately and report to the competent authorities upon discovering illegal conduct, and Article 15 requires providers to establish and improve complaint and reporting mechanisms so that illegal content can be handled promptly. The platform's announcement, too, for all its harsh wording, reveals between the lines a hope that users will cooperate, share in supervision, and actively report improper use of AI-generated content. At present, platform-user cooperation in managing AI-generated content centers on the first item of the restated specification, namely the "conspicuous identification" of AI-generated content. Under Article 12 of the Interim Measures for the Administration of Generative AI Services, providers must label generated content such as images and videos in accordance with the Provisions on the Administration of Deep Synthesis of Internet Information Services and the Practice Guide to Cybersecurity Standards: Methods for Identifying Content of Generative AI Services.
Most self-media platforms that provide or host AI-generated products, including this one, have clearly defined the identification methods and requirements for AI-generated content. The platform's current official approach still relies on users declaring AI-created content, with the platform then adding a prompt label; this model breaks down for more realistic AI virtual human content unless the creator adds tags such as "virtual idol / virtual human" in the copy. Although this announcement restates the penalties for violators in harsher terms, enforcement remains limited to the current processing model, and the platform seems to hope, in the spirit of "courtesy before force", first to reach a further tacit understanding with users. Although platform rules combined with technical measures such as detection algorithms appear to be the best compliance solution available to service providers, the effectiveness of such measures still has to be tested in practice in this era where opportunity and risk coexist. It should be stressed that a virtual avatar can never become a "shield" for its creator's liability, and building AI-creation compliance is certainly not the platform's wishful thinking alone. The clearest sign of this announcement's regulatory determination is that, after listing typical violations, the platform indicates it will take down videos and ban accounts for illegal use of AI-generated virtual characters, and will report to the police any clues it uncovers that black-market gangs are improperly using AI-generated virtual characters for criminal purposes. In this sense, the "tacit understanding" mentioned above may indeed come to exist between users and the platform.
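The declare-then-label model described above can be illustrated with a minimal sketch. The field names, label text, and function below are assumptions invented for illustration, not the platform's real schema; the Deep Synthesis Provisions require a conspicuous label but do not mandate any particular format, and real platforms embed such marks in the video frames and file metadata, not in a simple record.

```python
def label_ai_content(post: dict, user_declared_ai: bool) -> dict:
    """Attach a conspicuous AI-generation notice and machine-readable
    metadata to a post record, following the declare-then-label model.

    `post` is a hypothetical content record. The label is added only
    when the creator declares the content as AI-generated, which is
    exactly the weak point the article notes: realistic AI virtual
    humans go unlabeled if the creator stays silent.
    """
    labeled = dict(post)  # do not mutate the caller's record
    if user_declared_ai:
        # Conspicuous, user-visible identification.
        labeled["display_notice"] = "Content generated with AI"
        # Machine-readable metadata for downstream audit/detection.
        labeled["metadata"] = {"aigc": True, "declared_by": "creator"}
    return labeled

post = {"id": "v123", "caption": "virtual idol live replay"}
print(label_ai_content(post, user_declared_ai=True)["display_notice"])
```

The design choice worth noting is that the trigger is the creator's declaration rather than platform-side detection, which is why the announcement pairs this model with detection algorithms and user reporting as backstops.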
Of course, a creator who engages in illegal creative conduct will not only be penalized under platform rules, but may, depending on the severity of the conduct, also bear other legal consequences, up to and including criminal liability.

04

Final thoughts

We have always emphasized that regulating new technologies and new fields does not necessarily hinder development; the promulgation of "bans" often also promotes development through compliance. After all, only by doing things right can one go far.

That is today's sharing. Thank you, readers!

If you have friends interested in new technology and the digital economy, feel free to forward this to them.


For more information, please contact our team

[email protected]

[email protected]

Sister Sa's work WeChat: 【xiaosalawyer】

Sister Sa's work phone:【 +86 171 8403 4530】
