
Virtual humans can't go to jail, but companies may go to jail for them

Produced by Tiger Sniff ESG

Author | Yuan Hike

Screenshot of the game "Detroit: Become Human"

This article is No. 021 in the #ESG Progress Watch# series

Keywords of this installment: responsible AI, trusted virtual humans

Large AIGC models have unleashed the market's imagination about virtual humans.

Put a "skin" on ChatGPT and you get a very useful virtual human. Think a little further: I could take all my historical data, train a virtual avatar of myself, and make this "me" my secretary. With distributed, customized AIGC, everyone in the world could have a virtual avatar, with unimportant work and social communication handled automatically by doppelgangers in virtual space.

Imagine further still and things get wild. A ready-made worry comes from the film "The Wandering Earth": humans hand too many decisions to artificial intelligence, and the AI runs the numbers and concludes that wiping out humanity is the best solution for the world.

MOSS calculates that the best option for preserving human civilization is to destroy humanity. Image source: still from "The Wandering Earth 2"

We are not ethically prepared for a world full of virtual humans.

This is directly relevant to the future of virtual human governance and compliance. Regulators have recently begun to focus on AI applications. One important event comes at the end of this month, when the European Parliament plans to vote on the Artificial Intelligence Act. Meanwhile, the China Academy of Information and Communications Technology (CAICT) recently launched a project to draft the standard "Technical Requirements for Trusted Virtual Human Generation Content Management System" and is preparing a "Trusted Virtual Human White Paper", both aimed squarely at virtual human AIGC.

If a virtual human obtains a more powerful intelligence core, what kind of social responsibility logic should it follow?

Governing virtual humans is essentially governing real people

A game-industry practitioner told Tiger Sniff: "Virtual humans have many definitions and application scenarios, but in the final analysis they come in two kinds, those with souls and those without."

At present, a virtual human's "soul" is very expensive. It requires technical solutions on one hand and continuous investment in content on the other. In the consumer (C-end) market, "soul" or "personality" is the core competitiveness of a virtual human IP: giving a virtual human a good-looking shell is not hard, but an interesting soul is what attracts sustained fan attention. That usually requires an entire content planning and production team behind it, as well as qualified "people inside" (the performers behind the avatar). Some virtual idols also rely on user co-creation and user-generated content to enrich their personalities, so their core is still PGC or UGC. Beyond that, virtual humans need considerable technical investment to seem lifelike in expression, voice, movement, and interaction. Under these cost constraints, even virtual humans backed by major companies cannot excel at all of the above.

What a virtual human becomes is an operational question, not a technical one, both now and for the foreseeable future. A virtual human's personality is given by the operator, the users, and the person inside; technology is only the means of realizing it. Governing virtual humans therefore mainly means governing the real people behind them.

China's Internet ecosystem has already begun to deal with early versions of this kind of problem. In 2020, for example, the well-known TV host He Jiong took a technology company to court: the company's app offered users customized chatbot services, and some users had used He Jiong's name and portrait to create a chatbot of him. The Beijing Internet Court ultimately found that the defendant's product not only violated He Jiong's portrait rights but also had a potentially negative impact on his personal freedom and dignity, and ordered the defendant to apologize and compensate for losses.

Whether chatbots or more complex AI applications are involved, many of the legal problems can be solved with existing logic. Regulators and law enforcement only need to extend the "red lines" of traditional media and Internet governance a little further to cover AIGC and virtual humans.

On January 10, 2023, the Provisions on the Administration of Deep Synthesis of Internet Information Services (hereinafter, the Provisions), jointly issued by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security, came into effect. The key term "deep synthesis" refers to technology that uses deep learning or virtual-reality methods to generate digital works, covering AI-generated text, images, and sounds as well as products such as synthetic voices, face swapping, and simulated spaces. These are mainly the AI applications that broke into the mainstream two years ago, but AIGC can also fall within the Provisions' scope.

The specific content of the Provisions is largely a necessary extension of earlier regulatory requirements. Traditional information services, for example, must ensure the information-security attributes of their products and must not infringe others' privacy, portrait rights, personality rights, or intellectual property; under the Provisions, these principles now also apply to deep-synthesis products and services. Likewise, the Provisions require content platforms to perform review obligations, ensuring that deep-synthesis works published on a platform comply with laws and regulations and do not endanger national security or social stability.

Existing new regulations and case law show that although there are disputes over copyright, personality rights, content security, and so on, in most cases these disputes raise nothing new about where the lines should be drawn. What the industry needs is a clear set of compliance principles and accountability, after which each player can keep working within its own role.

A spokesperson for SenseTime, a leader in AI digital-human technology, told Tiger Sniff: "The Provisions give a division of responsibilities more in line with current business logic, provide industry entities such as 'technical supporters' and 'content providers' with clear compliance expectations, and also help regulate the market and enhance consumer confidence."

Don't let science fiction run away with the discussion

Talk of the ethical risks of virtual humans has a ready market, but alarmism is a poor basis for government regulation. For now, the exploration of virtual humans' social responsibility needs to be carried out more spontaneously by enterprises and markets, and corporate social responsibility has plenty of room to play here.

Here is a fresh cautionary example: the EU's Artificial Intelligence Act. The law was drafted in 2021 and is scheduled for a European Parliament vote at the end of March, just days after this article's publication.

However, the vote may hardly matter, because the advent of ChatGPT has overturned part of the Act's original premises. The Act, for example, proposes banning certain AI applications that run contrary to human rights, such as some facial-recognition uses. But for general-purpose generative AI like ChatGPT, the original legislative framework struggles to assess the risk. In the spirit of the Act, the EU could classify GPT as "high-risk AI", but that would be tantamount to standing still while the technology moves on.

In fact, even before ChatGPT arrived, members of the European Parliament had objected that the Act regulated AI applications in too fine-grained a way and would hinder the EU's scientific and technological innovation.

An AI-ethics researcher told Tiger Sniff that EU legislation has in fact substantially hindered the development of AI technology ever since the General Data Protection Regulation (GDPR) came into effect in 2018; the constraints of that data-protection law have made technological innovation difficult for many technology companies.

The AI Act once again exhibits the EU legislature's style: quick intervention, protection of human rights, and thorough distrust of technology. Legislating hastily while AI technology was still evolving rapidly has produced the Act's current embarrassing situation, and it shows that AI-ethics governance at this stage still needs to be led by enterprises.

Even so, while the legislative and executive branches should not wade too deep into AI governance, the law's conservative position is consistent on one point.

On the specific issue of virtual humans, the law will not treat AI or virtual humans as persons for the foreseeable future. In other words, no matter how perfect artificial intelligence becomes, the law will not regard it as having initiative or autonomous motivation; the various risks involved in AI and virtual humans must ultimately be traced back to the responsible individuals or organizations.

This may seem deeply unromantic. The artificial intelligences of science fiction, and even the virtual idols and virtual influencers actually on the market, project an appearance of "having independent ideas and personalities and making decisions on their own". But on the current mainstream view of jurisprudence, the law will not treat virtual humans as responsible subjects. Every mistake a virtual human makes must be borne by the technology suppliers, content providers, operators, and other entities behind it. We cannot expect virtual humans to assume legal responsibilities, or to enjoy economic and political rights, anytime soon. A virtual human is simply a work, just as the eggs you fry in the morning are a work. A work has nothing to do with "jail" in the first place; if something goes wrong with a work, the responsibility can only be yours.

At this point, we can reduce the romantic question of "how to give a virtual human a qualified soul" to an issue of corporate governance and social responsibility.

Responsible AI, a topic of corporate governance

After several years of discussion and accumulation, humanity's general ethical requirements for AI are usually summarized under two labels: "responsible AI" and "trustworthy AI". The two expressions also extend to AI's various sub-products, as in the CAICT standard mentioned above, which speaks of "trusted virtual human content generation".

The general requirement of "responsibility" or "trustworthiness" can be broken down into a handful of specific principles. Different technology companies break it down differently, but the results are broadly similar. Industry pioneer Microsoft, for example, summarizes responsible AI into six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Take "The Wandering Earth" again. The MOSS AI in the film reaches the conclusion that "the best option to preserve human civilization is to destroy mankind" through calculations ordinary people cannot follow. Measured against trusted-AI principles, MOSS is first of all opaque, a computational black box; it is hard to see where its wiring went wrong to produce such an outrageous conclusion. It is also unsafe, unreliable, and uncontrollable. Finally, it is unaccountable: MOSS causes all kinds of problems for human society in the story, yet no one can do anything about it.

An AI like this makes for dramatic conflict in film. But in real AI-ethics work, rather than imagining such a world-defying AI and then feeling helpless, it is better to stamp out the signs of loss of control from the very beginning.

The larger technology suppliers in the industry chain usually have a dedicated AI-ethics department, which develops technical solutions that promote fairness, security, transparency, and other trusted-AI properties, and open-sources those solutions. IBM, for example, has open-sourced a set of algorithms for promoting AI fairness (see the figure below, and the sketch after it).

Source: IBM Trusted AI official website
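IBM's AI Fairness 360 (AIF360) toolkit is one such open-source effort. As a rough illustration of what this tooling looks like in practice, here is a minimal sketch using the `aif360` Python package; the tiny hiring dataset is invented for the example and is not IBM's demo data.

```python
# A minimal sketch using IBM's open-source AIF360 toolkit (pip install aif360).
# The toy DataFrame below is invented for illustration; real audits use real data.
import pandas as pd

from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data: "sex" is the protected attribute (1 = privileged group),
# "hired" is the favorable outcome we audit for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [7, 5, 8, 6, 7, 5, 8, 6],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
data = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Disparate impact: ratio of favorable-outcome rates between groups (1.0 = parity).
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv, privileged_groups=priv)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Reweighing adjusts sample weights so the protected attribute no longer
# predicts the outcome in the training data.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
data_rw = rw.fit_transform(data)
metric_rw = BinaryLabelDatasetMetric(data_rw, unprivileged_groups=unpriv, privileged_groups=priv)
print("Disparate impact after mitigation:", metric_rw.disparate_impact())
```

Reweighing is one of AIF360's preprocessing algorithms: it rebalances sample weights so that group membership and outcome become statistically independent before a model is ever trained.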

But technology is far from the whole story. As the "Trusted Artificial Intelligence White Paper" from CAICT and the JD Explore Academy puts it, there is no perfect technology; the key is how the technology is used.

Trusted AI is also a way of working, even a corporate culture. Microsoft explains on its Office of Responsible AI webpage that when designing and developing AI products, teams brainstorm around the six principles above, analyze possible problems and omissions, and continuously monitor the product's behavior in operation. Microsoft recommends that operators of AI do the same.

Brainstorming inside the company is not enough on its own. As OpenAI's chief technology officer Mira Murati said in an interview, because GPT is such a general-purpose tool, it is hard for its developers to know all of its potential impacts and flaws in advance. That is why the company opened some of GPT's capabilities to the public: to uncover the technology's potential problems.

What special governance issues do trusted virtual humans raise?

What special governance problems arise when virtual humans are combined with AIGC? The industry has given this some social-responsibility thought, but the thinking is not yet systematic and consensus remains thin. Against this backdrop, SenseTime, one of the main drafters of the "Technical Requirements for Trusted Virtual Human Generation Content Management System", shared with Tiger Sniff some of its reasons for joining the project.

On the special issues of trusted virtual humans, SenseTime noted that when AIGC is used to generate virtual humans, especially ones with ChatGPT-style interactive intelligence, the result can be "hyper-realistic". Virtual humans that are hard to tell from real people may amplify the associated risks and aggravate the harm done to the parties involved. In such cases, beyond current AIGC management methods, responsible companies can deploy a series of technical measures to reinforce the trustworthiness of virtual humans; one simple example is sketched below. The company hopes to push the industry toward consensus on this issue within the year.
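One concrete measure of this kind, and one the Provisions already require for synthetic content that could mislead the public, is conspicuous labeling of generated media. The following is a minimal sketch of visible labeling using the Pillow imaging library; the file names and label text are our own assumptions for illustration, not SenseTime's implementation.

```python
# A minimal sketch of conspicuously labeling a synthetic image, assuming Pillow
# (pip install Pillow). File names and label text are illustrative only.
from PIL import Image, ImageDraw

img = Image.open("virtual_human_frame.png").convert("RGB")
draw = ImageDraw.Draw(img)

label = "AI-generated content"  # the kind of conspicuous mark the Provisions call for
w, h = img.size

# Draw a contrasting banner in the bottom-left corner, then the label text on it.
draw.rectangle([(0, h - 28), (220, h)], fill=(0, 0, 0))
draw.text((8, h - 24), label, fill=(255, 255, 255))

img.save("virtual_human_frame_labeled.png")
```

Sturdier measures discussed in the industry, such as invisible watermarks and content-provenance metadata, follow the same logic: bind a machine-checkable mark of synthetic origin to the content itself.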

According to SenseTime, member companies of the Virtual Reality and Metaverse Industry Alliance (XRMA) agree that the question of "trustworthiness" matters as much as industrial development: neither technology promotion nor social responsibility can be abandoned. A further consensus is that "virtual humans are not decoupled from reality": AIGC is not decoupled from the will of real people, ownership of virtual humans is not decoupled from real-world constraints, and the industry is not decoupled from supervision. These points of consensus are expected to be elaborated further in the forthcoming white paper and in future industry standards.
