
The Future Can Be Expected | The Era of Artificial Intelligence: Why Strong Regulation Is Key

Author: The Paper

Imagine having a super assistant that can recommend the perfect restaurant, answer complex questions, and even compose a beautiful song for you in an instant. That is artificial intelligence, the magic of our time, which has swept into industry after industry in a remarkably short period and, like a "superhero," created unprecedented value for us.

At the same time, concerns about artificial intelligence are quietly accumulating. Will it spread fake news? Are our jobs still safe? Will it one day announce, "Your job? I'll do it"? Or, even more unnerving, will it become smart enough to surpass humans and become the planet's new hegemon? How to set rules and boundaries for this "superhero," so that it serves humanity and not the other way around, has become a hot topic around the world.

On June 12, 2023, UN Secretary-General António Guterres voiced support for a proposal by some AI industry executives to establish an international AI watchdog modeled on the International Atomic Energy Agency (IAEA). He also revealed plans to launch a high-level AI advisory body by the end of the year to regularly evaluate AI governance arrangements and make recommendations on aligning AI with human rights, the rule of law, and the common interests of humanity.

On July 6, 2023, the 2023 World Artificial Intelligence Conference (WAIC) opened in Shanghai. At the opening ceremony, Tesla CEO Elon Musk said in a video speech: "We need some regulatory measures to oversee artificial intelligence. Given the capabilities that deep AI may exhibit beyond humans, it could lead us into a positive future, or it could lead to some less optimistic outcomes. Our goal should be to ensure that we are moving towards that bright future."

On September 13, 2023, U.S. Senate Majority Leader Charles Schumer hosted the first "AI Insight Forum," to which a roster of technology leaders was invited. At the heart of the one-day, closed-door meeting was how to regulate increasingly powerful AI technologies to ensure that humans control the technology rather than being controlled by it.

One look at the attendee list shows the weight of the room: Tesla's Elon Musk, Meta's Mark Zuckerberg, Alphabet's Sundar Pichai, OpenAI's Sam Altman, Nvidia's Jensen Huang, current Microsoft CEO Satya Nadella and former CEO Bill Gates, and IBM's Arvind Krishna. The combined net worth of this group of tech giants approaches $550 billion, a staggering figure equivalent to roughly 4 trillion yuan.

After the meeting, Schumer shared an observation with reporters: when he asked whether government should play a role in regulating AI, everyone in the room raised their hands in agreement. Their specific views differ, but there is broad consensus on the need for regulation.

These reports make one thing clear: the need to regulate artificial intelligence resonates with politicians, scientists, and entrepreneurs alike. The advance of artificial intelligence has brought us many conveniences, but it has also created unprecedented challenges. Only through clear norms and guidance can we ensure that the technology matures rather than spins out of control, and that AI truly serves humanity's future instead of threatening it. So what problems does artificial intelligence present today?

On February 16, 2023, Porcha Woodruff, a 32-year-old American woman who was eight months pregnant, suffered an unexpected blow when facial recognition technology falsely flagged her as a suspect. Early that morning, as Porcha was getting her children ready for school, six police officers appeared at her door. Armed with a warrant, they charged her with robbery. The case had begun a month earlier, when a victim reported a robbery to the police. To find the suspect, the police ran facial recognition on the relevant surveillance footage. The system returned six photos of possible suspects, one of which was of Porcha Woodruff, and, more unfortunately, the victim misidentified her. Unwilling to accept this absurd miscarriage of justice, Porcha sued the Detroit police department and ultimately obtained justice for herself, her pregnancy at the time helping to prove that she could not have been the real culprit.

The case above reveals the problem of bias and discrimination in artificial intelligence. AI relies heavily on data to learn and make judgments, but if the training data itself carries biases, the resulting model can amplify them, producing racial or gender discrimination. In facial recognition, for example, some systems struggle to recognize non-white faces because the training data consists mainly of white faces.
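To make the mechanism concrete, here is a minimal sketch in Python. It uses entirely synthetic, hypothetical data (not a real face-recognition system or dataset): a classifier is trained on 1,000 examples from a majority "Group A" and only 50 from a minority "Group B," and its accuracy then collapses on the group the data underrepresents.

```python
# A toy illustration of sampling bias, using synthetic data only.
# Group A dominates the training set; Group B is underrepresented
# and follows a different distribution, so the learned decision
# boundary fits Group A well and Group B poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature samples centered at `shift`, labeled by that
    group's own decision threshold."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# 1,000 training examples from Group A, only 50 from Group B.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Accuracy stays high on the majority group but drops sharply on
# the minority group: bias in the data becomes bias in the model.
for name, shift in [("Group A", 0.0), ("Group B", 2.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The fix here is less about code than about data: auditing a training set's composition and measuring error rates separately for each group, a check the facial recognition systems described above apparently lacked.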

The Porcha Woodruff incident is just one of many problems posed by artificial intelligence. In the realm of information dissemination, the chief problems are "AI hallucination" and deepfake technology. "Deepfake" is a portmanteau of the English words "deep learning" and "fake."

An obvious drawback of large models such as ChatGPT is AI hallucination: they sometimes spout nonsense with complete confidence. Such a model will always respond to a user's question or request, but users must be wary that the answer may be a "lie" told by generative AI (AIGC) to please them. There have been reports that when a large model is asked for details about historical events, it may add untrue or exaggerated details.

On June 22, a federal judge in New York ruled that the law firm Levidow, Levidow & Oberman had acted in bad faith by submitting a court brief, drafted with ChatGPT, that cited fabricated cases, and fined it $5,000. The sanctioned lawyer said afterwards that he had repeatedly asked ChatGPT whether the non-existent cases were real, and that ChatGPT answered in the affirmative, claiming they could be found in multiple legal databases.

Beyond hallucinated nonsense, deepfake technology has also drawn heavy criticism. In January 2023, Deep Fake Neighbor Wars, the world's first program to use deepfake technology to synthesize celebrity faces, launched on the streaming platform of British broadcaster ITV. In this comedy, in which no stars actually appear, a group of AI-synthesized "celebrities," their faces mapped onto body doubles, become neighbors and trade gags.

Earlier this year, when former U.S. President Donald Trump was indicted in New York, social media was flooded with AI-generated fake images that appeared to show him in a physical altercation with police. If this technique is used to fabricate videos of politicians or public figures saying or doing things they never did, it could do lasting damage to their reputations. Against this backdrop, we have to ask: when celebrities' faces are synthesized and used by AI, can they claim their image rights have been violated? Are they entitled to corresponding remuneration? What legal responsibility do the programs or platforms that produce such content bear? And what regulatory measures does the government have in place for such technology-generated content?

Another key issue is the transparency and explainability of AI decisions. Many deep learning models are considered "black boxes" because of their complexity: while they achieve astonishing accuracy on a variety of tasks, their inner workings and decision-making mechanisms are often elusive even to their own developers. For example, when an AI system rejects someone's loan application, how do we determine whether the decision was based on an objective credit score, an income profile, or other, less explicit factors? This opacity can lead to biased decisions and, in some cases, unjust outcomes.
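One common, if partial, remedy is to inspect which inputs actually drive a model's outputs. Below is a minimal sketch under stated assumptions: the loan data is entirely synthetic and hypothetical, and the check uses scikit-learn's built-in global feature importances (per-decision tools such as SHAP or LIME go further, but the idea is the same).

```python
# A toy transparency check for a hypothetical loan-approval model:
# after training, ask which features the model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["credit_score", "income", "debt_ratio", "zip_bucket"]

# Synthetic applicants: by construction, approval depends only on
# credit score, income, and debt ratio -- never on zip_bucket.
X = rng.normal(size=(2000, 4))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Global importances give a first, coarse answer to "what is this
# model using?" A high score for zip_bucket would be a red flag
# for proxy discrimination worth investigating.
for name, score in zip(feature_names, model.feature_importances_):
    print(f"{name:>12}: {score:.3f}")
```

Checks like this do not open the black box completely, but they give regulators and auditors something concrete to demand: evidence of which factors a deployed model weighs, and how heavily.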

Moreover, when people rely on a company's algorithmic model to make decisions, and the model's inaccuracy causes financial loss or other harm, we have to ask: who is responsible, the model's provider or its user? And is the provider obligated to disclose the model's known risks and limitations?

In medicine, when an AI system offers a doctor a diagnostic recommendation, the doctor naturally wants to know where that recommendation comes from and what data and reasoning lie behind it. This matters not only to doctors but also to patients, because trust is paramount in decisions critical to health and even life. But if AI models operate as black boxes, how do we make sure we are not misled by their outputs? And if we come to rely too heavily on machines for critical medical decisions, we risk losing control: who can guarantee that the system will make the right call every time?

Privacy leakage is also a growing concern. To personalize services or optimize their models, companies collect large amounts of user data, and with that comes the risk that this sensitive data will be stolen or misused. According to earlier media exposures, the information of 50 million Facebook users was illegally exploited by Cambridge Analytica without users' knowledge. The company used it to deliver highly customized political ads to target audiences, aiming to canvass votes for Trump's campaign in the 2016 U.S. presidential election. The incident highlights the privacy challenges of the digital age and the ethical boundaries of how businesses use data.

These questions are just the tip of the iceberg. As the technology advances, we will likely face further challenges and dilemmas that no one has yet foreseen. In response, countries are strengthening regulation and exploring effective, forward-looking solutions.

As early as September 25, 2021, China's National New Generation AI Governance Professional Committee issued the New Generation AI Ethics Code, which aims to integrate ethics into the entire life cycle of AI and to provide ethical guidance for natural persons, legal persons, and other relevant institutions engaged in AI-related activities. Article 12 calls for enhanced safety and transparency: in the design, implementation, and application of algorithms, improve transparency, explainability, comprehensibility, reliability, and controllability; enhance the resilience, adaptability, and anti-interference capability of AI systems; and gradually achieve verifiability, auditability, supervisability, traceability, predictability, and trustworthiness. Article 13 calls for avoiding bias and discrimination: in data collection and algorithm development, strengthen ethics review, take full account of differentiated needs, avoid possible data and algorithmic bias, and strive to make AI systems universal, fair, and non-discriminatory.

Since 2021, China has successively issued three important departmental regulations: the Provisions on the Administration of Internet Information Service Algorithm Recommendations, the Provisions on the Administration of Deep Synthesis of Internet Information Services, and the Interim Measures for the Administration of Generative Artificial Intelligence Services. Together they set out regulatory requirements for providers of algorithm recommendation, deep synthesis (deepfake), and generative AI services.

The three regulations also establish the principle that China will practice "inclusive and prudent" regulation with classified, tiered supervision: the competent departments will formulate classification and grading rules or guidelines suited to the characteristics of generative AI technology and its applications in particular industries and fields.

The EU's approach to AI regulation is avowedly "human-centric": promote the development and innovation of AI while building a regulatory system that guards against risk and protects citizens' fundamental rights and safety. On 15 December 2022, the Presidents of the European Commission, the European Parliament, and the Council signed and published the European Declaration on Digital Rights and Principles. The Declaration emphasizes that AI should serve as a tool for people, with the ultimate aim of enhancing human well-being: everyone has the right to benefit from algorithms and AI systems, including making their own informed choices in the digital environment, while being protected from risks and harm to their health, safety, and fundamental rights.

The Declaration requires technology companies to commit to building human-centered, trustworthy, and ethical AI systems, which means making "six guarantees":

1. Ensure sufficient transparency in the use of algorithms and artificial intelligence.

2. Ensure that people are empowered to use algorithms and AI systems and are informed when interacting with them.

3. Ensure that algorithmic systems are based on adequate datasets to avoid discrimination, and enable human oversight of all outcomes affecting people.

4. Ensure that technologies such as artificial intelligence are not used to pre-empt people's choices, for example regarding health, education, employment, and private life.

5. Ensure that AI and digital systems are safe at all times and used with full respect for fundamental rights.

6. Ensure that AI research respects the highest ethical standards and relevant EU laws.

The United States holds that the core goal of regulation should be to promote responsible innovation in artificial intelligence, and that to achieve this, unnecessary constraints on AI development and deployment should be minimized through a mix of regulatory and non-regulatory measures. In October 2022, the White House released the Blueprint for an AI Bill of Rights, which lists five core principles and proposes a roadmap for the responsible use of AI. The document provides direction for guiding and managing the development and deployment of AI systems, with particular emphasis on preventing violations of civil and human rights.

As for the specific regulatory framework, the United States currently has no standalone regulator dedicated to artificial intelligence. Instead, it has adopted a sectoral division of labor, with the competent authority in each field managing and supervising AI within its own remit. The U.S. Food and Drug Administration (FDA), for example, is responsible for AI products in medicine, while the U.S. Department of Transportation regulates AI in self-driving cars.

Japan's AI governance rests on its "Social Principles of Human-Centric AI," a framework of seven principles: human-centricity, education and literacy, privacy protection, ensuring security, fair competition, fairness, accountability and transparency, and innovation. Japan has not, however, simply chosen the traditional model in which government alone sets, monitors, and enforces AI rules. It has instead opted for "agile governance," which encourages multi-stakeholder participation and decision-making: government, business, the public, and communities jointly analyze the current social environment, clarify the goals to be achieved, formulate strategies to reach them, and continuously evaluate and refine the measures they implement.

As artificial intelligence reaches deeper into our lives, we have come to recognize both its enormous potential and its equally prominent risks. From AI hallucination to deepfakes, from privacy leaks to opaque decision-making, each gives us cause for reflection and alarm. What is heartening is the emerging global consensus on strengthening AI regulation: countries are putting pen to paper, legislating clear boundaries for this technological behemoth to ensure that it powers humanity's future rather than threatens it.

We have every reason to believe that humanity can steer this technological revolution toward a brighter, fairer, and more beneficial future. Just as we successfully harnessed the steam engine, electricity, and the Internet, we have the power to make AI the next history-changing tool for good.

(The author, Hu Yi, is a big data practitioner who likes to imagine the future. "The Future Can Be Expected" is Hu Yi's exclusive column on The Paper.)
