
Wang Yahui丨Women's employment discrimination and regulation in the context of algorithmic automated decision-making

Source: Shangguan News

Wang Yahui

Master candidate, University of Chinese Academy of Social Sciences

Contents

I. The dissolution of algorithmic discrimination in the era of weak artificial intelligence

II. The inadequacy of the mainland's current legal system and comparative-law references

III. Key aspects of algorithmic discrimination regulation in women's employment

IV. The reconstruction of benign algorithms against female employment discrimination


Algorithmic automated decision-making has broad application prospects in the field of employment, but algorithmic technology is not neutral and unbiased: it embeds bias and discrimination into decision-making systems. Against the background of algorithmic discrimination in the era of weak artificial intelligence, and through a comparative-law analysis of the mainland's current legal regimes for employment and algorithms, this article argues that the regulation of algorithmic discrimination in women's employment should focus on the underlying logic of social discrimination, the lies of big data, and the critique of algorithmic technology. On this basis, it proposes to revise society's underlying logic and release the female demographic dividend; to cross the gender data divide and move from big data to high-quality data; and, guided by the critical theory of technology, to regulate algorithmic gender discrimination in the employment field in depth throughout the whole process.


Against the background of the artificial intelligence era, algorithms' application scenarios keep multiplying. Algorithms use automated, data-driven methods to make decisions by mining information from massive data, for example through automatic recommendation, evaluation and judgment. Algorithms and code, rather than rules, increasingly determine the outcomes of all kinds of decision-making. But algorithms are not neutral and unbiased; they embed bias and discrimination into decision-making systems, quietly eroding social fairness and justice. Women have long faced discrimination in the job market, but in the context of algorithmic automated decision-making such discrimination is difficult to detect, difficult to prove and difficult to attribute, making regulation all the more necessary.

I. The dissolution of algorithmic discrimination in the era of weak artificial intelligence

To regulate algorithmic discrimination in the era of artificial intelligence, we first need to clarify the target of regulation and its stage of development. The rise of artificial intelligence can be traced back to 1950, when Alan Turing posed the question "Can machines think?" in his paper "Computing Machinery and Intelligence" and proposed the "Turing test" to determine whether a computer possesses human-like intelligence. At the Dartmouth Conference in 1956, McCarthy and other scientists first proposed the concept of "artificial intelligence", marking the birth of the discipline. The development of artificial intelligence has been full of twists and turns, but with the vigorous growth of information technologies such as big data, cloud computing, the Internet, and the Internet of Things, AI has entered a period of rapid development, achieving single-point breakthroughs in narrow AI capabilities such as information perception and machine learning, in some respects even surpassing human intelligence, while remaining weak in the capabilities required for general AI, such as conceptual abstraction and reasoned decision-making. Overall, then, artificial intelligence is still in its infancy. According to the "three levels" framework, we are still at the initial stage of development from weak to strong artificial intelligence, and at this stage we can still make a difference on algorithmic discrimination. Some scholars avoid the governance of weak AI on the grounds that the intelligent consciousness of the strong-AI era will be difficult to regulate, dismissing improvements to existing legal and technical rules as mere palliatives; this view is not advisable. On the contrary, research should take as its background the fact that the world is now, and for a considerable time will remain, in the era of weak artificial intelligence; focus on understanding and regulating one of AI's core elements, the algorithm; and treat the particularity of artificial intelligence and its impact on existing legal theory as the research focus.

The operating logic of AI algorithms forms a relationship of dominance over human behavior, posing a great challenge to modern ethics, which emphasizes individual happiness and the priority of rights. Algorithmic automated decision-making may appear gender-neutral, yet it can embed existing gender biases, become a means of distribution that hides sexism, and produce imperceptible sexist consequences. A series of cases of algorithmic sexism has sounded the alarm about recruitment and screening algorithms. In the 1980s, St George's Medical School in London screened student applications through an algorithm, but the algorithmic rules, built on previous admissions data, proved to discriminate against women and people of color. In 2018, Amazon's AI recruitment tool, which reviewed candidates and scored applicants by learning from the company's résumé data over the previous ten years, was exposed as discriminating in practice, especially against women, automatically downgrading résumés containing the word "women's". In August 2019, the algorithm behind the Apple Card was also accused of gender discrimination: a couple filed joint tax returns, yet the husband's credit limit was twenty times his wife's. Artificial intelligence can be widely deployed across fields on the strength of its scientific and objective image, but the uncertainty of its development has brought new challenges to the legal regulation of digitalized discrimination, profoundly changing the form, transmission efficiency, and scale of impact of discrimination. The discriminatory effects of artificial intelligence are therefore an important issue that regulators in every field will have to face in order to ensure its safe, reliable, and controllable development.

II. The inadequacy of the mainland's current legal system and comparative-law references

In the era of weak artificial intelligence, algorithmic discrimination in the field of women's employment urgently needs legal regulation, and the law can make a difference. Women's employment discrimination in the context of algorithmic automated decision-making involves both employment discrimination in the traditional sense and algorithmic discrimination in the digital age, so this article explains the shortcomings of legal regulation in each, with the focus on legal regulation in the field of algorithms.

Regulation of discrimination against women in the field of employment

Mainland legislation against gender discrimination mainly includes the Constitution, the Law on the Protection of Women's Rights and Interests, the Labour Law and the Trade Union Law. Articles 33, 42 and 48 of the Constitution govern the fight against gender discrimination, implementing equality between men and women and equal pay for equal work. Article 2 of the Law on the Protection of Women's Rights and Interests reaffirms that women enjoy equal rights with men, emphasizes the prohibition of discrimination against women, and devotes a special chapter to the protection of women's labor and social-security rights and interests. The Law on the Protection of Women's Rights and Interests (Revised Draft) emphasizes protection of women's equal right to employment, explicitly lists the main forms of gender discrimination, increases legal liability for gender discrimination in employment, and establishes and improves the maternity-leave system for employees. Article 3 of the Labour Law stipulates the labor rights that workers enjoy, such as the right to equal employment and to choose an occupation; Article 12 provides that workers shall not be discriminated against in employment on the basis of sex; and Article 13 provides that women enjoy employment rights equal to men's and that no woman may be refused employment, nor may recruitment standards for women be raised, on the grounds of sex. These provisions concern equal protection for women in employment and the prohibition of sex-based employment discrimination.

The above laws, however, still do not clearly define "discrimination against women" and do not distinguish between direct and indirect discrimination. In the context of algorithmic automated decision-making, employment discrimination is harder to detect, and the traditional, direct-discrimination-centred definition is stretched thin in managing the indirect discrimination of the AI era, which hinders the regulation of discrimination in real life. In addition, the provisions are too principle-based, which hampers victims' access to remedies and makes them hard to operationalize in practice.

By contrast, the principles of equality and non-discrimination are enshrined in the nine core United Nations human rights treaties. The Convention on the Elimination of All Forms of Discrimination against Women focuses on discrimination against women: Article 1 defines "discrimination against women" in terms covering both direct and indirect discrimination, the Convention urges States parties to adopt temporary special measures to accelerate de facto equality between men and women, and Article 11 requires States parties to take all appropriate measures to eliminate discrimination against women in employment. Article 1 of the International Labour Organization's Discrimination (Employment and Occupation) Convention, 1958 (No. 111) likewise defines discrimination as any distinction, exclusion or preference that has the effect of nullifying or impairing equality of opportunity or treatment in employment or occupation, and Article 2 requires member States to declare and pursue a national policy promoting equality of opportunity and treatment in employment and occupation, by methods appropriate to national conditions and practice. China ratified or acceded to the two conventions in November 1980 and August 2005, respectively.

Legislation in other countries and regions shows that foreign law has expressly provided for both direct and indirect discrimination in combating gender discrimination, regulating not only "form" but also, and more importantly, equality of "results". The most important US law prohibiting gender discrimination in employment is the Civil Rights Act of 1964, which established a national policy of equal protection of women's employment rights; its Title VII clearly defines employment-discrimination issues, including sex discrimination, and sets basic standards for employment discrimination. The Act's implementation, which called for "affirmative action to ensure equal treatment of applicants of different sexes" and the repeal of so-called "protective" legislation specifically targeting women, had an important impact on the realization of women's right to employment. The most important anti-sex-discrimination law in the UK is the Sex Discrimination Act 1975, which marked national law's formal recognition of the principle of anti-sex discrimination in employment. It expressly regulates gender discrimination in the employment field, prohibits discrimination on the grounds of sex and marital status, and contains corresponding provisions on pay and treatment discrimination, on employment-treatment discrimination, and on the burden of proof and means of relief. EU directives dealing with employment discrimination include Directives 75/117, 76/207, 2000/43, 2000/78 and 2002/73, the most important being the 1976 Equal Treatment Directive, which was framed only in principle. In 2002 the Directive was therefore amended in detail, clearly defining direct and indirect discrimination within gender discrimination and stipulating corresponding judicial remedy procedures, which strongly guaranteed the realization of equal employment rights in the EU.

Regulation of discrimination against women in the field of algorithms

Regarding the regulation of gender discrimination in the field of algorithms, the Data Security Law and the Personal Information Protection Law, both implemented on the mainland in 2021, formed the basic framework for regulating algorithmic automated decision-making. Under the relevant provisions of the Personal Information Protection Law, the processing of personal information must have a clear and reasonable purpose, follow the principle of minimum necessity, and obtain the individual's consent; where personal information is used for automated decision-making, the transparency of the decision-making and the fairness and impartiality of the results must be ensured; and individuals have the right to refuse decisions made solely by automated means. However, these provisions are rather general, and personal information necessary for human-resource management may be processed without individual consent, so many difficulties remain in regulating gender discrimination in algorithmic automated decision-making. The Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, implemented in March 2022, mainly regulate the use of algorithmic recommendation technology to provide Internet information services. Chapter III provides that algorithmic recommendation service providers shall inform users about the service and publicize its basic principles, purposes and main operating mechanisms, and requires that algorithms offer options not targeted at personal characteristics or allow users to switch the recommendation service off. However, the scope of these Provisions and the types of algorithms they regulate are limited, and they reflect no tilted protection for women.

From the perspective of foreign legislation, the United States has always attached importance to the protection of privacy, emphasizing ethics education and privacy rules to guard against AI risks. In December 2017, New York City passed a local algorithmic accountability law aimed at addressing algorithmic discrimination through transparency and accountability. In 2022, the Algorithmic Accountability Act of 2022 was proposed at the federal level; it would require companies applying automated decision-making to test their algorithms for erroneous data, bias, security risks, performance gaps and other problems, and to submit and file annual impact-assessment reports, promoting the fairness, accountability and transparency of algorithms. The European Union is actively promoting human-centred AI ethics and governance. On the one hand, under the General Data Protection Regulation, individuals' information security is fully guaranteed, and automated decision-making must be accompanied by meaningful information about the logic involved, as well as the significance and envisaged consequences of the decision; on the other hand, relying on a governance framework of algorithmic accountability and transparency, the EU treats transparency and accountability as tools for solving the problem of algorithmic fairness and establishes regulatory mechanisms at different levels. The European Commission has also published a proposal for uniform rules on artificial intelligence, aimed at achieving trustworthy AI through governance and regulation. The proposal manages artificial intelligence by risk classes with tiered obligations, clarifies the mandatory obligations of enterprises in AI development and operation, and formulates inclusive and prudent technical supervision measures, which is of far-reaching significance for addressing the potential risks and adverse impacts of artificial intelligence. In addition, the Institute of Electrical and Electronics Engineers (IEEE) has launched its Global Initiative on Ethics of Autonomous and Intelligent Systems, which proposes ensuring, through education, training and empowerment, that stakeholders in the design and development of AI systems give priority to ethical issues. The global trend of AI governance is already very pronounced, and "algorithmic hegemony" is an urgent problem for regulators everywhere; yet discrimination against women in algorithmic automated decision-making has not received attention, and national laws need adjustment and improvement to meet the needs of regulating algorithmic discrimination in the AI era.

III. Key aspects of algorithmic discrimination regulation in women's employment

To regulate algorithmic discrimination, especially in women's employment, where it has received little or insufficient attention, it is necessary to trace the origins of the discrimination against women embedded in algorithms. The three core elements of artificial intelligence are data, algorithms and computing power. Today's algorithms are data-driven: data is the algorithm's "feed", and a good model emerges only after extensive training on labeled data covering as many scenarios as possible. Big data is a kind of mirror of society, and the existing patterns and characteristics, prejudices and structural inequalities of current society are mapped into the data. Along this path, society's gender discrimination and prejudice are embedded into algorithms through data and code, so this article analyses algorithmic discrimination in women's employment from the three-dimensional perspective of society, data and technology; in other words, these are the areas on which the regulation of women's employment discrimination in algorithmic automated decision-making should focus.
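Since data is the algorithm's "feed", bias in historical records passes directly into any model fitted to them. The following is a minimal, hypothetical Python sketch of this mechanism; all records and rates are invented for illustration, and the "model" is deliberately reduced to per-group frequencies:

```python
# Hypothetical sketch: a naive data-driven "model" fitted to biased
# historical hiring records simply reproduces that bias. All records
# and rates below are invented for illustration.

from collections import Counter

# Synthetic history: (gender, hired) pairs reflecting a biased past.
history = [("F", 0)] * 70 + [("F", 1)] * 30 + [("M", 0)] * 40 + [("M", 1)] * 60

def fit_hire_rates(records):
    """Learn per-group hire rates -- the 'pattern' a data-driven model absorbs."""
    hired = Counter(g for g, h in records if h == 1)
    total = Counter(g for g, h in records)
    return {g: hired[g] / total[g] for g in total}

rates = fit_hire_rates(history)
print(rates)  # → {'F': 0.3, 'M': 0.6}: the learned rule favours the favoured group
```

Nothing in the code mentions discrimination, yet the fitted rates reproduce the historical disparity exactly, which is the sense in which biased data makes a biased algorithm.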

Starting from the underlying logic of society: algorithmic discrimination does not originate in technology itself or in technological innovation, but is the result of cutting-edge technology amplifying traditional social contradictions. First, gender discrimination and prejudice are widespread. Second, there is the employer's masculinized "ideal worker" norm: from the employer's profit-maximizing point of view, the ideal worker is an employee who begins working in early adulthood and pursues an uninterrupted, full career. Third, there is occupational gender segregation, a term first proposed by Gross: owing to systemic social factors, the sexes are concentrated in different industries and positions. Most of the female labor force is concentrated in "feminized" occupations, while occupations traditionally dominated by men are maintained as "masculine" occupations characterized by high skill, high income and high status, whereas "feminized" occupations carry low technical content, low wages and low social status. This can be seen in the finding that Google's algorithm showed women fewer advertisements for high-paying jobs than it showed men.

Starting from the analysis of data lies: big data reflects the diverse behavior of subjects and real human society, but it also risks distortion and alienation. On the one hand, the data in "big data" is only insignificant "micro-data" and shallow "surface data" compared with all the data everything emits at every moment, and most big-data resources are incomplete. At present, the neural networks used in algorithms have only correlational learning ability, not causal reasoning ability, which leaves algorithms extremely vulnerable to the heterogeneity, spurious correlation and incidental endogeneity of complex big data. The "digital gender gap" produced by traditional social-structural contradictions will therefore aggravate discrimination against women and data bias in the AI era, and the exclusion of women's development from scientific and technological processes becomes more serious. On the other hand, data is objective, but those who apply it have subjective intentions. At this stage, the data used by AI algorithms must pass through data cleaning and data annotation, work that is mainly done manually: raw data such as text, images, audio and video is manually classified, organized, edited, corrected and annotated to provide machine-readable data for training. If annotators hold one-sided technical thinking, affected by subjective factors such as neglected value presuppositions, sample-selection bias and cognitive limitations, the obstructive effects of data bottlenecks and traps will worsen, and the development of algorithms and artificial intelligence will deviate from its original intention and normal track.
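The subjectivity of manual annotation described above can at least be measured. A hypothetical Python sketch (annotator names, records and the 0.5 flagging threshold are all invented) that screens annotators for systematic gender skew in their labels:

```python
# Hypothetical sketch: measuring annotator-level gender skew in manually
# labeled data. Annotator names, records and the threshold are invented.

# Each record: (annotator, gender_of_applicant, label), label 1 = "qualified".
annotations = [
    ("ann_a", "F", 1), ("ann_a", "F", 1), ("ann_a", "M", 1), ("ann_a", "M", 0),
    ("ann_b", "F", 0), ("ann_b", "F", 0), ("ann_b", "M", 1), ("ann_b", "M", 1),
]

def positive_rate(records, annotator, gender):
    """Share of records this annotator labeled 'qualified' for one gender."""
    labels = [lab for a, g, lab in records if a == annotator and g == gender]
    return sum(labels) / len(labels)

# Flag annotators whose male/female labeling rates diverge sharply.
for ann in ("ann_a", "ann_b"):
    gap = positive_rate(annotations, ann, "M") - positive_rate(annotations, ann, "F")
    print(ann, "gap:", gap, "<- review" if abs(gap) > 0.5 else "")
```

A check of this kind does not remove annotator bias, but it makes the "subjective intentions" behind objective-looking labels visible before the data reaches a model.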

Starting from the correction of technological paradigms: algorithmic technology has become a force that cannot be ignored in the digital age, and the risk of algorithmic discrimination has caused social anxiety. At the level of theories of technology, the theory of technological neutrality is full of optimism and confidence about the current state and prospects of science and technology and advocates technological dominance; technological substantivism is pessimistic about technological development, holding that technology rejects democracy and freedom and, as a totalitarian social force, will dominate human existence and human culture; the critical theory of technology critiques the purposes, processes and design of science and technology and holds that technological development can move toward democratization. We should abandon both technology worship and technology panic, view technological development and progress rationally and objectively, and take the critical theory of technology as the philosophical guide of the AI era. From the perspective of algorithm development, an algorithm is produced through problem formulation, data processing, dataset construction, model design, model training and model testing, a process whose limitation is that it requires extensive manual intervention, such as manually choosing neural-network models, manually setting application scenarios, and manually collecting and labeling large amounts of training data. The development process is therefore the main site through which algorithmic discrimination intrudes. In application, algorithms are cloaked in the "technical" and the "objective", but their high specialization, inexplicability and opacity give rise to the "algorithmic black box". The deep learning of artificial intelligence further heightens the technical barriers the black box creates: program errors and algorithmic discrimination become hard to identify, and the legal-subject system, the transparency principle and accountability mechanisms become difficult to apply.

IV. The reconstruction of benign algorithms against female employment discrimination

The analysis of the key aspects of regulating algorithmic discrimination in women's employment shows that governance will achieve little if it focuses only on the internal structure of algorithms; the underlying social logic and the data lies rooted in them must also be regulatory focal points. This article centres on algorithmic gender discrimination in employment and hopes to put forward effective policy suggestions for it.

Revise the underlying logic of society and release the female demographic dividend

First, the revision of the Law on the Protection of Women's Rights and Interests should be taken as an opportunity to clarify the definition of "discrimination against women", deepening public understanding of gender discrimination and providing a yardstick and normative basis for accurately identifying whether a measure or act is discriminatory, with a view to correcting gender prejudice and stereotypes. Only by explicitly defining "discrimination against women" to cover both direct and indirect discrimination, attending not only to "gender" but also to the "consequences" or adverse impacts produced, can the definition play its due role in controlling the implicit discrimination of the AI era; otherwise it may be difficult to regulate the widespread and hidden discrimination caused by artificial intelligence by relying on existing provisions alone. How to understand and define discrimination against women is the core issue of anti-gender-discrimination law, and a clear definition can steer the governance of gender discrimination in employment, improving the operability and enforceability of the law.

Second, the masculinized ideal-worker norm should be abandoned, and the reproductive responsibilities borne by women affirmed and shared. In The German Ideology, Marx put forward two senses of the "production of life": producing one's own life through labor and producing the life of others through procreation. Engels further interpreted these as "the production of material means" and "the production of human beings themselves", fully affirming women's role and contribution. From the perspective of gender and development, alleviating women's work-family conflict is not women's responsibility alone; it requires the participation and sharing of diverse actors, including government, society, the business community, and both men and women. Government agencies should promote, through legislation and policy guidance, worker norms not premised on uninterrupted careers, and employers should not treat profit alone as "truth" but must also bear corresponding social responsibilities.

Bridge the gender data divide and transition to high-quality data

On the one hand, to avoid the absence of women's employment data in the big-data era, women's status must be improved at the root so as to increase women's employment data and eliminate the "gender data gap"; it is also necessary to recognize big data's inherent heterogeneity, noise accumulation, spurious correlation and incidental endogeneity. The focus should therefore shift from big data to high-quality data: updating data concepts and processing methods, fundamentally changing one-sided thinking, strengthening gender-equality testing and narrowing the scope of data targeting, so that the "gender perspective" is reflected in the data and the algorithms trained on it embody an equal and sound gender ethics. Specifically, when grouping data, men and women should not simply serve as classification criteria for excluding the effect of data heterogeneity on analytical results; rather, as many groupings as possible should be used to explore women's individual occupational characteristics and needs at the data level. When selecting variables, independent screening should prevent the spurious correlations and incidental endogeneity in past occupational data from affecting AI deep learning, avoiding analysis based on erroneous correlations between occupation and gender.
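One concrete form of the "independent screening" of variables suggested above is to test candidate features for proxy correlation with gender before training. The following hypothetical Python sketch does this with a plain Pearson correlation; feature names, values and the 0.8 cut-off are all illustrative assumptions, not recommended standards:

```python
# Hypothetical sketch of variable screening: before training, check which
# candidate features act as proxies for gender, so that spurious
# occupation-gender correlations in past data are not re-learned.
# Feature names, values and the threshold are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gender = [1, 1, 1, 0, 0, 0]  # toy encoding: 1 = female, 0 = male
features = {
    "career_gap_months": [12, 10, 14, 0, 1, 0],    # strongly gender-linked proxy
    "typing_speed":      [60, 55, 70, 65, 58, 62], # essentially unlinked
}

PROXY_THRESHOLD = 0.8  # illustrative cut-off, to be set by domain review
for name, values in features.items():
    r = abs(pearson(gender, values))
    print(name, round(r, 2), "DROP (proxy)" if r > PROXY_THRESHOLD else "keep")
```

On this toy data the career-gap feature correlates almost perfectly with gender and would be dropped or handled specially, while typing speed passes; in practice such screening would complement, not replace, substantive review of each variable.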

On the other hand, data quality inspection should be strengthened to avoid the gender discrimination that may be embedded during manual cleaning and annotation. Since the data used by AI algorithms at this stage is mostly cleaned and annotated by hand, unconscious discrimination driven by implicit human bias may be embedded in it, which makes quality inspection after cleaning and labeling all the more important. Today's data quality testing mainly assesses whether expected requirements are met in terms of completeness, consistency and accuracy. From the standpoint of gender equality, enterprises conducting tests on employment data should also carry out gender-equality testing, using analysis of women's employment data to determine whether women are unduly differentiated, excluded or restricted, or whether the data produces consequences that hinder or deny women's enjoyment and exercise of their lawful rights.
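As one possible form of the gender-equality test proposed above, an enterprise could screen selection outcomes in its employment data with the "four-fifths" (80%) rule of thumb drawn from US selection-procedure practice. This is an illustrative criterion, not a requirement of mainland law, and the synthetic numbers below are invented:

```python
# Hypothetical sketch of a gender-equality screen on hiring outcomes using
# the "four-fifths" (80%) rule of thumb. Threshold and data are illustrative.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (gender, selected) pairs -> per-gender selection rate."""
    outcomes = list(outcomes)
    selected = Counter(g for g, s in outcomes if s)
    total = Counter(g for g, s in outcomes)
    return {g: selected[g] / total[g] for g in total}

def passes_four_fifths(outcomes):
    """True if the lowest group's selection rate is at least 80% of the highest's."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()) >= 0.8

audit_sample = ([("F", True)] * 25 + [("F", False)] * 75 +
                [("M", True)] * 50 + [("M", False)] * 50)

print(selection_rates(audit_sample))     # → {'F': 0.25, 'M': 0.5}
print(passes_four_fifths(audit_sample))  # → False: the screen flags this process
```

A failed screen of this kind is only a trigger for review, not proof of unlawful discrimination; it operationalizes the article's point that quality inspection should test results, not just data completeness and accuracy.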

Regulate algorithmic discrimination in depth, guided by the critical theory of technology

The foregoing analysis establishes the critical theory of technology as the philosophical guide for regulating algorithmic discrimination in the era of weak artificial intelligence: algorithms should be viewed rationally, and algorithmic discrimination comprehensively regulated through the enactment and revision of laws. Comparatively, foreign regulation of algorithmic discrimination proceeds through algorithmic ethics education, transparency and rights of explanation, and algorithmic accountability. Building on these aspects, this article discusses how to regulate algorithmic gender discrimination in employment effectively and concretely, taking the whole process of algorithm design, application, supervision and responsibility as the object of regulation.

Ethics is the corrective mechanism of AI development. At this stage, preventing AI's ethical risks should begin with constraints on research and development personnel; where necessary, education and training can be used to impress upon developers ethical concepts such as gender equality and respect for privacy, so that they are aware that their work carries risks of malicious use and gender discrimination and that, as part of the engineering corps designing humanity's future world, they bear responsibility for designing algorithms that are gender-neutral and broadly beneficial to humankind. Cultivating an algorithmic ethics of gender equality will also benefit algorithms in the era of strong, and even super, artificial intelligence; only in this way can the healthy and sustainable development of AI be ensured.

To avoid the algorithmic discrimination bred in the "algorithmic black box", increasing algorithmic transparency and granting individuals a "right to explanation" have become credible and attractive normative measures. When labor and personnel decisions are made through algorithmic automated decision-making, the purpose of the algorithm's design, whether the application process takes gender into account, and which factors may imply gender and in what proportions, should therefore be disclosed, ensuring that the use and processing of gender information meet the principles of legality, reasonableness and minimum necessity. In addition, when individuals believe they have suffered gender-based algorithmic discrimination in a decision, they should be guaranteed the right to request case-specific explanations from the decision-maker. Through such ex ante and ex post regulation, women can avoid discrimination in labor and personnel decision-making.

Even setting gender-equality ethical norms for algorithms and carefully implementing gender-equal algorithmic logic sometimes fails to prevent gender discrimination from occurring. At that point, attention should turn to auditing the algorithm's operation: running operational tests of AI systems for gender discrimination and imposing a duty of care on employers. In addition, external audits of algorithmic gender equality can be vigorously promoted; employers that pass such audits could receive certification marks, giving them a competitive advantage in attracting talent.
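The article does not prescribe a specific audit metric. As one illustration of what an operational test for gender discrimination might look like, the sketch below applies the "four-fifths rule" adverse-impact ratio, a common heuristic in hiring audits; the function names and sample data are hypothetical.

```python
# Hypothetical audit sketch: the "four-fifths rule" adverse-impact test,
# one common heuristic for checking a hiring algorithm's outcomes for
# disparate impact by gender (illustrative only; not from the article).

def selection_rate(outcomes):
    """Fraction of applicants in a group who received a positive decision."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher group's.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    possible disparate impact and warrants closer review.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high if high > 0 else 1.0

# Example: 1 = hired, 0 = rejected, for two groups of applicants
# screened by the same automated system.
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # selection rate 0.2
men   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # selection rate 0.5

ratio = adverse_impact_ratio(women, men)
print(f"adverse impact ratio: {ratio:.2f}")  # 0.2 / 0.5 = 0.40
if ratio < 0.8:
    print("flag: possible gender-based disparate impact; audit further")
```

Such a statistical check examines outcomes only; a full audit of the kind the article envisions would also review training data, proxy features, and the employer's duty of care.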

Finally, on the attribution of liability for algorithmic gender discrimination. Algorithmic infringement presents multiple responsible subjects, difficulty in tracing causation, difficulty in determining subjective fault, and, because of the algorithm's autonomy, decisions that escape the control of existing rules. In any case, existing legal subjects such as the designers, sellers, and users of AI should bear corresponding liability for discrimination. Because of the algorithm's technical complexity and opacity, it is difficult for victims to produce evidence; the principle of presumed fault should therefore apply by reference, and the algorithm's designer or deployer should be exempted from liability only upon proving that it did not discriminate on the basis of sex, or that it did its utmost to avoid algorithmic gender discrimination but failed to achieve the expected effect.

This article's treatment of gender discrimination against women is not exhaustive; deeper in the social structure lies entrenched gender bias that is difficult to eradicate. As big data algorithms exert an ever deeper influence on our lives, and through the in-depth development of the digital economy and the intelligent society, the pattern of human rights development has entered the era of digital human rights. Yet the uncertainty of AI development has also brought legal risks of discrimination and digital harm, continually revealing the discipline that algorithmic power imposes on people, while the algorithmic "governance deficit" grows increasingly serious. Article 11, paragraph 1, of the Convention on the Elimination of All Forms of Discrimination against Women provides: "States Parties shall take all appropriate measures to eliminate discrimination against women in the field of employment in order to ensure, on a basis of equality of men and women, the same rights." Therefore, as a State party to the Convention, the mainland should focus on the governance of AI discrimination, accelerate the digital transformation of anti-discrimination law, and build a people-oriented, fair, and inclusive anti-discrimination legal system. This article hopes that, with the advent of the AI era, the development of algorithms can be guided toward good: promoting gender equality rather than aggravating gender discrimination, releasing the female demographic dividend rather than restricting women's progress and development, and respecting and protecting women's human rights rather than aggravating digital gender-based violence.
