Integrate the strengths of various ethical concepts to build an algorithm-friendly collaborative governance network

Zhongxin Jingwei, December 22 (Xue Yufei) Recently, the Department of Sociology of the School of Social Sciences at Tsinghua University and the Academic Divisions of the Chinese Academy of Sciences-Tsinghua University Research Center for the Collaborative Development of Science and Society hosted a seminar on "Ethical Positions, Algorithm Design and Corporate Social Responsibility," at which more than ten experts and scholars from Tsinghua University, the Chinese Academy of Sciences, and Dalian University of Technology exchanged views on the ethical issues raised by algorithms, algorithm governance, and corporate responsibility. At the meeting, the Department of Sociology released a research report, "Ethical Position, Algorithm Design and Corporate Social Responsibility" (hereinafter referred to as the report), together with a "Guide to the Ethical Practice of Enterprise Algorithms."

Participating scholars argued that while algorithms promote social progress, ethical issues such as algorithmic fairness, bias, and discrimination have also become matters of public concern. At the same time, algorithms are value-plastic: algorithm design needs to break through traditional thinking structures and social conceptions, promote social development from a value position of doing good, and foster public order, good customs, and new social ethics. For algorithm governance, a "governance network" should be established that integrates the strengths of diverse ethical concepts and the participation of multiple actors, achieving collaborative governance in which government, professional institutions, social groups, enterprises, and the public all take part.

Ethical issues accompany the application value of algorithms

Seminar on "Ethical Positions, Algorithm Design and Corporate Social Responsibility". Courtesy of the organizer

Today, algorithms are applied across all walks of life and are closely bound up with people's work and daily lives. The report points out that algorithms provide a new technological path for human thinking, decision-making, and action; to a certain extent they avoid the limitations of subjective choices driven by human emotion, and they also help improve quality of life and social administration. At the seminar, Chen Changfeng, executive vice dean of the School of Journalism and Communication at Tsinghua University, noted that algorithmic technology is no longer a simple tool and now shapes how people communicate and how information is disseminated: "Algorithms and data technology are constantly enhancing people's cognitive abilities and supporting social life."

Chen Changfeng, executive vice dean of the School of Journalism and Communication of Tsinghua University. Courtesy of the organizer

The report points out that the rapid development of algorithmic technology has also been accompanied by ethical problems, mainly at three levels: the rights of human subjects, social ethics, and technical ethics. The rights of human subjects involve fairness and autonomy: the definition of fairness is largely contextual and subjective, and the transfer of decision-making power bears on subjects' autonomy. Social ethics involves privacy and information security: unaudited, unrestricted, and unauthorized algorithms must not be deployed, nor should automated data analysis be carried out without the knowledge of data subjects. Technical ethics involves opacity, prejudice, and discrimination: algorithms may carry biases from their designers, their users, and the data they learn from.

Li Zhengfeng, professor of the Department of Sociology at Tsinghua University. Courtesy of the organizer

Li Zhengfeng, a professor in the Department of Sociology at Tsinghua University, said at the seminar: "Algorithms may reproduce ethical problems that already exist in society; especially once large amounts of data are amassed and mined in depth, the ethical problems hidden in society are displayed more clearly. For example, the fairness issues people now discuss, as well as algorithmic bias, algorithmic discrimination, and privacy protection, are ethical issues that already exist in society, but they are made more explicit through algorithms."

Chen Ling, Associate Professor, School of Public Policy and Management, Tsinghua University. Courtesy of the organizer

The fairness of algorithms has been one of the topics of greatest public concern as algorithms are applied, yet there is no consensus on what fairness is or how to achieve it. Chen Ling, an associate professor at Tsinghua University's School of Public Policy and Management, pointed out that there is no commonly accepted definition of algorithmic fairness. Fairness can be divided into fairness of starting point, fairness of process, and fairness of results, and corresponding fairness guidelines should be formulated for each stage. Among these, result fairness is the most intuitive and straightforward fairness demand, but it does not mean that final outcomes are equal; rather, it refers to the results recommended by an algorithm being computable, predictable, and interpretable.
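Purely as an illustrative aside (not drawn from the report or from Chen Ling's remarks), "result fairness" is often operationalized in the machine-learning literature with computable metrics such as demographic parity. The short Python sketch below uses two hypothetical groups and binary recommendation outcomes to show what such a computable, interpretable check can look like.

    # Minimal sketch (hypothetical, not from the report): demographic parity is
    # one common, computable notion of "result fairness" -- the rate at which an
    # algorithm recommends the positive outcome should be similar across groups.

    def positive_rate(outcomes):
        """Share of cases that received the positive recommendation (1)."""
        return sum(outcomes) / len(outcomes)

    def demographic_parity_gap(group_a, group_b):
        """Absolute difference in positive-recommendation rates between two groups."""
        return abs(positive_rate(group_a) - positive_rate(group_b))

    if __name__ == "__main__":
        # 1 = recommended / approved, 0 = not recommended (hypothetical outcomes)
        group_a = [1, 1, 0, 1, 0, 1]
        group_b = [0, 1, 0, 0, 1, 0]
        print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.2f}")

A near-zero gap would indicate that the algorithm treats the two groups similarly on this one metric; which metric is appropriate still depends on which stage of fairness is being assessed.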

On privacy and information protection, Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, noted that policy and law give users the right to revoke authorization: after they stop using an application, users can ask the company to delete their data. This seems simple, but it is nearly infeasible to implement technically. The reason is that most data is not kept in a database to be looked up on demand; instead, user features are learned through artificial intelligence and machine learning and internalized as parameters of a model. If a company uses a neural network model of the deep-learning type, it is difficult to remove the influence of any single user's data from the model.
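To make the point concrete, here is a minimal, purely hypothetical Python sketch (an assumption for illustration, not Zeng Yi's example): once user data has been distilled into model parameters, deleting the raw record no longer removes that user's influence; only retraining, or a dedicated "machine unlearning" technique, can do that.

    # Minimal sketch (hypothetical): a toy "model" whose single parameter is the
    # mean of all users' feature values. Deleting a user's raw record after
    # training does not change the deployed parameter; only retraining does.

    def train(records):
        """Return a toy model parameter summarizing every user's feature value."""
        return sum(records.values()) / len(records)

    records = {"user_1": 0.9, "user_2": 0.4, "user_3": 0.7}  # hypothetical features
    model_param = train(records)      # user_2's data is now baked into the parameter

    del records["user_2"]             # honoring a deletion request on the raw data...
    print(model_param)                # ...leaves the deployed parameter unchanged (~0.67)

    retrained_param = train(records)  # removing the influence requires retraining
    print(retrained_param)            # 0.8 -- only now is user_2's contribution gone

In a real deep-learning model the "parameter" is millions of weights rather than a single mean, which is why isolating and removing one user's contribution is so much harder in practice.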

Liang Zheng, a professor at the School of Public Policy and Management of Tsinghua University and director of its Artificial Intelligence Governance Research Center, believes that in terms of technical characteristics, machine learning is still in a "black box" state, with problems of transparency and interpretability. China has already issued a series of laws and regulations, including the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law of the People's Republic of China, establishing a basic framework; norms, guidelines, and standards are also being introduced, and a data governance system is taking shape. Once this basic institutional system is in place, the next step should focus on developing rules that are implementable and operational.

The report analyzes that the ethical problems of algorithms stem from the difficulty of explaining the technology and from some enterprises placing commercial interests above all else. The complexity, professionalism, and closed nature of algorithmic technology leave the public with an insufficient understanding of design principles and structures, operating parameters, decision-making bases, and implementation mechanisms, while some enterprises, driven by commercial interests, illegally collect data and infringe on users' privacy and information security.

Wang Yanyu, associate researcher at the Institute for the History of Natural Sciences, Chinese Academy of Sciences. Courtesy of the organizer

At the seminar, Wang Yanyu, an associate researcher at the Institute for the History of Natural Sciences of the Chinese Academy of Sciences, reviewed the history of artificial intelligence. From a philosophical point of view, he said, there may in the future be a form of artificial intelligence in which machines produce knowledge by themselves, and as that body of knowledge grows it will bring new social impacts to interpersonal relations. But strong artificial intelligence with intentionality has, at least so far, not proved feasible.

Adhere to the value position of "algorithms for good"

The report points out that the value plasticity of algorithms may give rise to two types of "moral vacuum." One is moral unconsciousness, in which algorithm design lacks any moral or ethical consideration; the other is the absence of moral rules, in which the ethical issues raised by cutting-edge technology go beyond traditional moral norms and new, specific norms need to be established.

At the same time, algorithms are not only shaped by the value judgments of human society; they also shape values in turn. People's ethical value positions are diverse, spanning four levels: corrupting customs, following customs, upholding public order and good customs, and transforming customs. Because most people sit at the level of simply following customs, unreflective and simplistic values and ethical positions tend to dominate, and algorithms may tacitly accept them as the codes of conduct society should generally observe.

Li Zhengfeng pointed out that when unreflective, simplistic values and ethical positions dominate, a lack of ethical responsibility follows, and in the relationship between technology and society this manifests as technology "abetting" such tendencies. One therefore cannot simply drift with the tide: enterprises that act for good hold a value position that deserves greater affirmation and will become the mainstay of maintaining public order and good customs. At the same time, he hopes algorithm governance can play a greater role in keeping both the source and the overall climate clean.

The ethical positions behind algorithm design can be divided into four types: utilitarianism, deontology, contract theory, and virtue theory. The utilitarian position holds that algorithm design should take actual efficacy or benefit as its basic aim, without regard to motives or means. Deontology holds that algorithm design must follow some moral principle or standard of legitimacy, emphasizing the moral obligations and responsibilities of algorithm design and the importance of fulfilling them. Contract theory advocates establishing an open, transparent, equal, and just contractual relationship between enterprises and users. Virtue theory holds that the ethical level of algorithm design is ultimately determined mainly by the virtue of the algorithm designers and algorithm companies themselves.

Li Zhengfeng said that, from the perspective of algorithm governance, the creation of new rules is itself a form of constraint and guidance, including regulatory measures issued by government departments. More important, however, is enabling the relevant actors to develop a strong sense of ethics and ethical responsibility, that is, raising the "morality" of algorithm designers and enterprises, which rises to the ethical position of virtue theory. That enterprises now attach growing importance to social responsibility is commendable.

For example, ByteDance's AI Lab has conducted basic technical research on the fairness of machine learning and other aspects of "responsible AI" (including but not limited to interpretability, controllability, diversity, authenticity, privacy, and robustness) to ensure that ByteDance's machine-learning-based intelligent systems benefit all users.

The report proposes that algorithm design needs to break through traditional thinking structures and social conceptions, promote social development from a value position of doing good, and foster public order, good customs, and new social ethics. Driven by national policy and industry practice, "algorithms for good" has entered the stage of substantive ethical norms in China. The moral embedding of algorithmic technology cannot rest on the existing ethical framework alone; it also requires a normative system that is iterated and adjusted so as to lead society as a whole toward good values.

Chen Changfeng believes that new technologies do bring risks and cause worry; as with all new technologies in the past, people will have doubts, and this is inevitable. People should learn to make good use of new technologies and draw on their most beneficial core, rather than "giving up eating for fear of choking." She said the development and use of algorithms should be guided by values, and that, at present, how China's enterprises that enjoy advantages in algorithms use them is of great importance.

Multi-party collaborative governance needs to uphold "algorithm friendliness"

The report says that no single actor's judgment has universal validity, so the logic of algorithm governance must be plural, comprising the triple logic of technical governance, regulatory governance, and the governance network. Technical governance requires respecting and giving play to enterprises' agency in algorithm governance while avoiding the simple pursuit of maximum economic benefit. Regulatory governance requires enterprises and governments to protect data security and privacy while exercising technical oversight, without blindly following users' unwarranted questioning of and judgments about algorithm problems. The governance network requires government, enterprises, and the public to participate in governance and negotiate important issues together, both meeting the public's reasonable value demands and using algorithms as a mediating intermediary among various value preferences, solving algorithmic problems through the evolution of algorithmic technology and the iterative updating of rules and norms.

Li Zhengfeng further explained that "technical governance" emphasizes the technical dimension, relying on professional institutions and personnel to play a major role in governance and stressing autonomy. However, because technological development is immature and interests intervene, relying solely on governance along the technical dimension is difficult, so it later developed into regulatory governance, which emphasizes the role of regulators. Gradually, people found that it is very difficult for either professional groups or government departments to play the regulatory role alone, so the emphasis now falls on the "governance network": a network of multiple participating actors, with collaborative governance in which government, professional institutions, social groups, enterprises, and the public all take part.

Liang Zheng believes that "at present, the relevant laws issued by the state have drawn red lines around sensitive issues such as security and personal rights and interests; the next step is to put forward more specific requirements for each particular application field, make algorithms explainable and accountable, and implement governance by level, category, and scenario. At the same time there should be in-process oversight, after-the-fact remediation, and prioritization in governance, with different governance tools applied in different areas, including control of the basic bottom line." He added: "Algorithm governance is an institutional issue, not a technical issue, and the governance of algorithms should focus on the process of algorithm use and its impact."

A number of participating experts called for giving enterprises room to develop while building a multi-party algorithm governance network. Chen Ling stressed the need to take into account innovation and the development of the digital economy, respect objective realities, and attract more engineers and experts from different fields into the rule-making process.

Cui Peng, Associate Professor, Department of Computer Science and Technology, Tsinghua University. Courtesy of the organizer

Cui Peng, an associate professor in the Department of Computer Science and Technology at Tsinghua University, also said that external supervision of algorithms should uphold the concept of "algorithm friendliness." What is being supervised is not some static object but a form of technology that will play an important role in economic development over the next two or three decades. Inappropriate regulation should be avoided lest it produce a lose-lose outcome, and enough space should be reserved for enterprises to encourage healthy competition among them and the healthy development of the algorithm economy and platform technology.

Promote the ethical practice of enterprise algorithms and the public's specialized intelligence literacy

Making the application of algorithms better meet the responsibilities society entrusts to them requires enterprises to adopt more active strategies to respond to social needs, build and strengthen trust through the participation of multiple actors, respect individuals' initiative, strengthen human-machine collaboration, and construct a people-oriented, context-aware algorithm system with positive, enabling significance.

The "Guide to the Ethical Practice of Enterprise Algorithms" released at the seminar proposed that enterprises should formulate specific work measures in four aspects: building an algorithmic ethics governance system, strengthening the construction of ethics committees, formulating basic guidelines and norms for algorithmic ethical governance, and strengthening employee ethical literacy training, among which, in terms of the basic guidelines and norms of enterprise algorithmic ethics governance, we should pay attention to fairness and justice, safety and transparency, balance and diversity, value, traceability and accountability, and compliance.

Li Lun, director of the Department of Humanities and Social Sciences at Dalian University of Technology, believes that enterprises should fulfill their digital responsibilities. Algorithm ethics is the core of corporate digital responsibility and should be embedded in the contract of corporate digital responsibility, so that corporate digital responsibility shifts from self-restraint to social constraint.

Dai Siyuan, assistant researcher at the School of Social Sciences, Tsinghua University. Courtesy of the organizer

Beyond corporate responsibility, how can individuals better participate in algorithm governance? Dai Siyuan, an assistant researcher at the School of Social Sciences of Tsinghua University, proposed the concept of "artificial intelligence literacy," which can be divided into general intelligence literacy and specialized intelligence literacy. General intelligence literacy is people's ability to consume intelligent products and services; specialized intelligence literacy is the ability to research, develop, and produce artificial intelligence technology. In his view, general intelligence literacy may directly increase concerns about information privacy, because it raises awareness of personal privacy, whereas specialized intelligence literacy may ease information privacy concerns, because it builds trust in the technology.

"In other words, to alleviate information privacy concerns, it is imperative for dedicated intelligence literacy to be popularized in the whole society." Dai Siyuan said that professional technology companies can further open the technical "black box" to the public, so that more people can understand the professional knowledge such as algorithms. "At the same time, we should also establish a social co-governance pattern of artificial intelligence technology governance and application and information privacy protection, popularize algorithm expertise to the public, and improve their technical trust in smart products." (Zhongxin Jingwei APP)

Copyright Zhongxin Jingwei. Without written authorization, no organization or individual may reprint, excerpt, or otherwise use this content.
