
How humans can coexist with AI

Author: Wushan Melting

Panelists

Zhu Rongsheng (Special Expert, Center for Strategic and Security Studies, Tsinghua University; Senior Researcher, Meta Strategy Think Tank)

Liu Wei (Director, Laboratory of Human-Computer Interaction and Cognitive Engineering, Beijing University of Posts and Telecommunications)

Banyuetan: In the development of AI, which variables do you think are the most difficult to control, and what are the most prominent risks?

Liu Wei: The most difficult variables to control include uncertainty, data quality, and moral and ethical issues.

The complexity of AI systems makes their behavior unpredictable and often unexplainable, especially when a system faces novel situations it has not encountered before. This uncertainty can lead to unreliable decisions or unexpected results.

AI systems typically require large amounts of data for training and learning, and data quality is critical to system performance. If the data is biased, erroneous, or incomplete, the system's outputs and decisions will be affected accordingly.

Decisions made by AI systems may involve moral and ethical choices, such as how an autonomous vehicle should choose what to hit in an unavoidable emergency. Ensuring that the decisions of AI systems align with social values has become a difficult problem.

The security risks facing artificial intelligence are especially prominent. AI systems can be hacked, leading to data breaches, system failures, or misuse. These risks are most acute in areas such as critical infrastructure, the financial system, and healthcare.


ChatGPT was temporarily blocked in Italy over alleged privacy violations

Zhu Rongsheng: From the perspective of global governance, disruptive technological innovations in history have greatly liberated social productive forces, but they have also triggered social and economic problems such as unbalanced global development, uneven distribution of wealth, and lagging governance mechanisms. The globalization driven by each round of scientific and technological revolution tends to push the world economy into a period of deep adjustment, bringing negative effects such as the decline of traditional industries, large-scale unemployment, and intensified social unrest, and putting forward new requirements for reform of the global economic governance system.

The accelerated penetration of advanced technologies such as artificial intelligence into all areas of human economic and military activity has amplified the challenges of maintaining international stability and managing non-traditional security. Intelligent warfare is beginning to take shape: unmanned combat platforms empowered by AI have been widely used in the Palestinian-Israeli conflict, the Nagorno-Karabakh conflict, and the Russia-Ukraine conflict. Foreign policy circles are increasingly concerned that the rapid deployment, unchecked proliferation, and irresponsible use of military AI on the battlefield will undermine international strategic stability; the most urgent concern is that the evolution of warfare toward intelligentization may strengthen the advantage of striking first.

From the perspective of relations between states, the "strong get ahead first" character of AI development will also intensify the pressure of international competition. Although every country needs to safeguard its own "AI sovereignty" interests, large and small countries face challenges of very different magnitude because of the imbalance of power. Large countries have stronger technological reserves and greater resilience to risk, and can occupy a core position in international networks of technology exchange. Small countries, by contrast, lacking sufficient resources, are likely to become more dependent on large countries for technology supply and thus be marginalized in the development of the global AI ecosystem. This will further intensify digital geopolitical competition and reduce the already limited space for détente between major powers.

Banyuetan: How can AI be made a tool that benefits human beings and social progress while its risks are kept under control?

Liu Wei: Ensuring that the decision-making process of an AI system is transparent, and providing explanations so that people understand how it works, can increase trust in the system and help prevent potential adverse effects. At the same time, clear ethical guidelines need to be developed in a timely manner to ensure that the design and use of AI systems conform to ethical principles and social values, including ensuring that AI systems do not discriminate, do not violate privacy and individual rights, and comply with relevant laws and regulations. The data used by AI systems should be of high quality at the source, and personal privacy should be strictly protected: the sourcing, use, and storage of data should comply with relevant regulations, and security measures should be in place to prevent misuse or leakage.

An effective regulatory mechanism should be established to ensure that the development and use of AI systems comply with government regulations and standards, and the review and supervision of AI technologies and applications should be strengthened to avoid abuse and adverse effects. Public participation in the development of AI technologies and in related decision-making should be encouraged so that the interests of all stakeholders are balanced; this can be achieved through open consultation, deliberation, and multi-stakeholder collaboration. Finally, a monitoring and evaluation mechanism should be established to discover and resolve risks and problems in AI systems in a timely manner, and to continuously improve their performance and security.

Zhu Rongsheng: From the perspective of regulating the potential risks of AI, countries should promote cooperation on global AI governance while safeguarding their own interests. Governance in the field of AI is a serious issue facing global governance and is crucial to the fate of mankind. The international community needs to pay attention to the vulnerability of developing countries in the wave of emerging technologies, and technological powers should not focus narrowly on their own interests by resorting to "decoupling and severing supply chains". AI is a dual-use technology; guiding the healthy development of the global AI industry, achieving the goal of "AI for good", and thereby promoting AI technology for the benefit of all mankind is in the common interest of all countries.


On the more sensitive question of military security, the militarized application of AI has drawn growing attention from international public opinion. According to international media reports, AI technology has been used in the latest round of the Palestinian-Israeli conflict and has caused civilian casualties. From the perspective of constructing global security governance rules for AI, the international community may need to examine whether the concept of "responsible AI" proposed by the United States and the West is too idealistic, and whether it should instead explore the shared idea of some kind of "humane AI" to avoid a humanitarian catastrophe caused by algorithms deciding matters of life and death. In short, to advance the goal of "AI for mankind", each country must not only act on its own basic security and development interests, but also have the awareness, and take action, to promote global AI governance grounded in a community with a shared future for mankind.

Banyuetan: How do you think ordinary people should embrace the era of artificial intelligence?

Liu Wei: As artificial intelligence develops, some jobs may be replaced by automation, so continuously learning and upgrading your skills has become particularly important. AI performs well in much repetitive, mechanized work, but it still has limitations in creative thinking, emotional communication, and problem solving. It is therefore important to cultivate your own creativity and cooperative skills, and to develop the ability to collaborate with AI.

Applying AI involves various technological tools and data analysis skills, and people can learn how to use these tools and how to process and analyze data so as to work better alongside AI. At the same time, as AI continues to evolve, working environments and demands will change as well, so ordinary people need to remain flexible and adaptable, actively adjusting to new ways of working and to technological change.

Although AI excels in some respects, it still has many limitations in areas such as emotional communication, humanistic care, and ethical decision-making. Ordinary people can therefore cultivate emotional skills and humanistic care, and develop the ability to interact with AI on human terms.

Zhu Rongsheng: We should look rationally at the impact of AI applications on social development. Although technology and industry circles optimistically expect that AI approaching human-level intelligence will eventually come into public view, current AI technology has not yet produced disruptive changes in the social structure. From ChatGPT to Sora, the main enabling role AI has played is to improve users' efficiency; it has not reached the point where "machines replace human employment on a large scale". Rather than being anxious about an uncertain future, it is better to view the development of AI and its impact rationally.

Of course, it would be irresponsible to ignore the risks that may arise from the disorderly development of AI technology. If AI continues to improve its capabilities through the current "brute-force" approach of constantly stacking computing power, and eventually achieves an "intelligence spillover" that surpasses humans, then the potential risks of indulging the unlimited development of AI technology will increase.

Source: Banyuetan (Half Moon Talk)
