
Research on the ethical risk prevention of artificial intelligence in Western academic circles


The intelligent revolution is profoundly influencing and reshaping how humanity produces, lives, and develops its civilization. How to deal with the ethical issues and challenges raised by the development and application of artificial intelligence has become a major question facing all of humanity in the intelligent era. The international academic community has produced a substantial body of research on the ethical problems new technologies may cause and on how to prevent them, and this work deserves our attention.

  Views on whether a machine can become a moral subject

  AI clearly goes beyond traditional tools: it can learn, make decisions, and adjust its behavior to changes in the environment, producing corresponding ethical consequences. How, then, should the identity and status of artificial intelligence in human society be determined or defined, and can it become a moral or legal subject, taking responsibility for its own actions or being subject to reward and punishment? Foreign scholars hold that these questions ultimately reduce to the question of what self-awareness and free will are. From A. M. Turing's proposal of the "Turing test" in 1950, to J. R. Searle's "Chinese Room" thought experiment, to H. Dreyfus's What Computers Still Can't Do: A Critique of Artificial Reason, early AI pioneers generally held, on the basis of AI's nature and its difference from human intelligence, that AI was not humanoid.


  In recent years, with the exponential development of new technologies represented by artificial intelligence, whether autonomous machines can become subjects has become an unavoidable topic. Most scholars hold that machine intelligence rests on algorithmic programs, can hardly derive self-consciousness and free will as humans do, and therefore can hardly become a moral subject. On their view, the human mind comprises two parts: a computational consciousness that understands, grasps, and transforms the object world through formal logic and natural causal laws, and a social-emotional consciousness that confirms the nature and meaning of the subjective world through object-directed activity and communication with others; the autonomous consciousness a machine displays is merely a simulation of human computational intelligence. Margaret Boden, for example, argues that general AI is hard for humans to design because AI attends only to rational intelligence while ignoring social-emotional intelligence, and therefore lacks wisdom. Žižek stresses that we should not imagine the computer as a model of the adult brain but rather imagine the human being as a "computer of flesh and blood", for the human brain cannot be completely reduced to a computer model. Some futurists, however, believe that machines will in the future derive a consciousness of their own and surpass human intelligence, and that once superintelligent AI appears, humans will find it difficult to communicate with it or make it obey human moral rules. Ray Kurzweil, for example, advanced the "technological singularity" thesis in 2005, arguing that human subjectivity will eventually be challenged by machines. Michael Anderson and Susan Leigh Anderson, a husband-and-wife team, edited the volume Machine Ethics, which opened up the study of machine ethics with the machine as the subject of responsibility. Whether machines can, as AI technology develops exponentially, break through the limits of causal law and derive autonomous consciousness is a question that theory must continue to track.

  The debate among Western scholars over whether machines can become moral subjects prompts us to refocus on and re-examine, under the conditions of artificial intelligence, the questions of what a person is, how we ought to treat people, and what the nature and limits of technology are.

  Moral and ethical risks arising from artificial intelligence

  Artificial intelligence technology has been inextricably linked with human beings from the very beginning of its development. As early as 1950, Norbert Wiener, the American founder of cybernetics, observed that robot technology could be used for good or for evil, and that robots taking over human work might cause a "devaluation" of the human brain. Western scholars have since conducted deep and systematic research into the moral and ethical risks artificial intelligence may generate.

  First, discussion of how artificial intelligence technology leads to unemployment among workers, new social injustices, and technological divides. Many Western scholars hold that artificial intelligence carries risks such as large-scale unemployment and a widening gap between rich and poor. Yuval Harari, for example, argues that as technology evolves, most people will see their work replaced by artificial intelligence and become a "useless class", while only a small elite that controls technology and resources will evolve into superhumans, so that social classes become fixed and polarized. On how better to protect people's rights to survival and development, scholars such as James Hughes have proposed establishing a comprehensive basic income system, funded through taxation and the public ownership of wealth, to cope with the unemployment and social injustice that smart technology brings.

  Second, debate over the ethical risks of uncertainty in AI technology. "Who exactly should be held responsible for the behavior of machines?" has become an increasingly pressing question of responsibility ethics. Some scholars argue that designers, manufacturers, programmers, and users should control the social consequences of robots, emphasizing the ethical responsibility of robotics engineers. Others advocate designing algorithms with morality embedded, making machines moral agents with built-in ethical systems, so as to guard against the ethical risks artificial intelligence poses at the design and application stages. In 2009, the American scholars Wendell Wallach and Colin Allen co-authored Moral Machines: Teaching Robots Right from Wrong, a fairly systematic analysis of how to design moral machines. Ethical algorithms, however, face value choices and conflicts. Human society contains many moral norms and ethical principles, agreement among the various systems is hard to reach, and which moral norms should inform algorithm design becomes a problem. Moreover, designers' own ethical commitments are not monolithic, and how to make value trade-offs when designing moral machines is a further problem. On this basis, scholars such as J. Bryson have discussed how to rank values, resolve value conflicts, and seek universal ethical consensus as a theoretical framework for designing moral machines; they generally regard keeping machines harmless and friendly to humans as the primary ethical principle.
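  One way such value ranking can be made concrete is a lexicographic priority ordering, in which a higher-ranked principle (such as harmlessness to humans) always overrides lower-ranked ones. The sketch below is a minimal illustration of that idea; the principle names, scores, and candidate actions are our own hypothetical assumptions, not drawn from Bryson or any deployed system.

# Illustrative sketch of lexicographic value ranking for a "moral machine".
# The principles, their ordering, and the scores are hypothetical assumptions.

from dataclasses import dataclass

# Principles in descending priority: non-harm outranks everything else.
PRINCIPLES = ["avoid_harm", "respect_autonomy", "fairness", "utility"]

@dataclass
class Action:
    name: str
    scores: dict  # principle -> score in [0, 1]; higher is better

def choose_action(candidates):
    """Pick the action that wins on the highest-priority principle,
    breaking ties by moving down the ranking (lexicographic order)."""
    return max(candidates, key=lambda a: tuple(a.scores[p] for p in PRINCIPLES))

if __name__ == "__main__":
    actions = [
        Action("proceed", {"avoid_harm": 0.4, "respect_autonomy": 0.9,
                           "fairness": 0.8, "utility": 0.9}),
        Action("defer_to_human", {"avoid_harm": 0.9, "respect_autonomy": 0.7,
                                  "fairness": 0.8, "utility": 0.3}),
    ]
    # "defer_to_human" wins: it scores higher on the top-ranked principle,
    # even though "proceed" has higher overall utility.
    print(choose_action(actions).name)

  Under such an ordering, an action with high utility but a poor harm score can never be chosen over a safer alternative, which is precisely the "harmless and friendly first" principle the scholars above describe.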

  Third, concern that artificial intelligence technology may indiscriminately break through the boundaries of traditional human morality and ethics. Beyond the issues above, some scholars worry about the following: excessive human dependence on intelligent technology can easily lead to technological hegemony and technological enslavement, creating risks and crises of social uncertainty; the use of care robots risks objectifying the elderly and young children and weakening or violating their dignity, freedom, and privacy; and the use of autonomous combat robots risks breaching the international community's laws and norms, increasing the likelihood of regional conflicts, wars, and large-scale killing.

  Approaches to preventing the ethical risks of artificial intelligence

  In view of the various ethical problems artificial intelligence may cause, Western scholars generally hold that its ethical risks should be prevented and averted along several paths: the value-oriented design of machine ethics, the setting of industry standards, and legislation.

  The most influential ideas in the international academic community are top-down ethical coding and bottom-up ethical learning as approaches to machine design. The former advocates embedding the moral rules of human society into algorithms as program code, enabling machines to make moral decisions through computation and reasoning. The latter holds that human moral behavior is learned in concrete moral situations and in interaction with others, so no pre-coding is needed: machines become moral actors by observing moral cases, interacting with other moral agents, and so on.
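  To make the contrast concrete, the following minimal sketch pairs the two approaches: a top-down filter that checks an action against pre-coded rules, and a bottom-up classifier that imitates the verdicts of previously observed moral cases. All rules, features, and cases here are hypothetical assumptions for illustration, not any published system.

# Hypothetical contrast between the two design approaches. The rules,
# features, and example cases are invented for illustration only.

# --- Top-down: human moral rules coded in advance ---
def permitted_top_down(action: dict) -> bool:
    """Reject any action that violates a pre-coded rule."""
    rules = [
        lambda a: not a["causes_harm"],   # never harm a human
        lambda a: a["has_consent"],       # act only with consent
    ]
    return all(rule(action) for rule in rules)

# --- Bottom-up: the verdict is induced from observed cases ---
def permitted_bottom_up(action, cases):
    """Imitate the verdict of the most similar previously observed case
    (1-nearest-neighbour); no moral rule is coded anywhere."""
    distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    _, verdict = min(cases, key=lambda c: distance(c[0], action))
    return verdict

if __name__ == "__main__":
    print(permitted_top_down({"causes_harm": False, "has_consent": True}))  # True

    # Observed cases: (feature vector, was it judged permissible?)
    cases = [([0.9, 0.1], False), ([0.1, 0.9], True)]
    print(permitted_bottom_up([0.2, 0.8], cases))  # True: nearest case was permissible

  The limitations described next follow directly from this structure: the top-down rules must be chosen and may not cover complex scenarios, while the bottom-up verdicts depend entirely on the cases the machine has seen.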

  Both designs have limitations. The former faces the questions of which ethical values to embed and how to handle complex moral scenarios; the latter, lacking any guiding ethical system and relying solely on machine learning, cannot predict what results feeding morally sensitive data into the machine will produce. Mark Coeckelbergh argues that current machine learning is essentially a statistical process, so machines can hardly be trained into full ethical actors, and he advocates designing ethical machines from a relational approach grounded in human-machine interaction.

  A second approach sets countermeasures against the ethical risks of artificial intelligence through industry standards. In recent years, the European Robotics Research Network (EURON), NASA and the National Science Foundation in the United States, and South Korea's Ministry of Trade, Industry and Energy have guided AI ethics research at the national level. Industry bodies have acted as well: the British Standards Institution (BSI) has issued its guide to the ethical design and application of robots and robotic systems, and the Institute of Electrical and Electronics Engineers (IEEE) has proposed its "Ethically Aligned Design" (EAD) framework to address problems such as algorithmic discrimination and social injustice arising from designers' cognitive biases.

  Institutional norms offer a further practice for resolving the ethical risks of artificial intelligence. In April 2021, the European Commission adopted its legislative proposal for an Artificial Intelligence Act, which distinguishes unacceptable, high, limited, and minimal risk according to elements such as an AI system's function and use, and proposes a clear, specific, and correspondingly tiered system of governance and regulation.
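  As a rough illustration of how such a tiered scheme operates, the sketch below maps a system's declared use case to one of the four risk tiers named in the proposal. The tier names follow the proposal, but the example use cases and the mapping logic are simplified assumptions of our own, not the legal text.

# Simplified illustration of a four-tier risk classification. The tier
# names follow the 2021 proposal; the use-case mapping is our own
# simplification, not the legal text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and human oversight required"
    LIMITED = "transparency obligations (e.g. disclose that it is an AI)"
    MINIMAL = "no additional obligations"

# Hypothetical, non-exhaustive mapping of declared use cases to tiers.
USE_CASE_TIER = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the governance tier for a declared use case,
    defaulting to MINIMAL when the use case is not listed."""
    return USE_CASE_TIER.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for use_case in USE_CASE_TIER:
        print(f"{use_case}: {classify(use_case).value}")

  A real system would, of course, be classified under the legal criteria of the proposal itself rather than a lookup table; the sketch only shows the tiered-obligation structure.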

  The international academic community's exploration of the feasibility of "ethically designed" moral machines offers methodological guidance for our own development and design of trustworthy artificial intelligence. In a present and future made unpredictable by the pandemic, humanity shares a common destiny; the deep integration of artificial intelligence with every field is accelerating, turning the world into a pan-intelligent global village. It is therefore necessary, urgent, and feasible to set aside controversy, seek global consensus, design ethical machines around the greatest common denominator of values, and jointly resist the risks of future uncertainty. These scholars, however, have concentrated their research on how to avert technological risks while neglecting the polarizing ethical risks that follow when humanistic values are absent from technology's application. To solve such problems, we should always adhere to a people-centered law of scientific and technological development and, in developing and applying technology, always take the defense of human dignity and the protection of human value as the fundamental goal and precondition.

  (Author affiliations: School of Marxism, South China University of Technology; Institute of International Studies, Guangdong Academy of Social Sciences)

Source: China Social Science Network - China Social Science Daily. Authors: Cao Yanna, Liu Wei
