
Collection | A 10,000-word deep dive: AI ethics practices at mainstream foreign technology companies

Cao Jianfeng, Senior Researcher, Tencent Research Institute

Liang Zhu, Assistant Researcher, Tencent Research Institute

During the 2022 Two Sessions (the annual meetings of the National People's Congress and the Chinese People's Political Consultative Conference), scientific and technological innovation and technology ethics were discussed across all sectors of society. From the perspective of industry practice, as AI ethics receives growing attention from all walks of life and AI regulatory policies and legislation advance in many countries, major technology companies have embraced AI ethics and trusted AI, treating them as one of the core engines for building competitive advantage for their AI products and services.

Microsoft, Google, IBM, Twitter, and many other mainstream foreign technology companies planned early and have built out comprehensive, deep practices in AI ethics and trusted AI, covering principles, governance bodies, technical tools and solutions, AI ethics products and services, action guidelines, employee training, and more. This article systematically reviews the practices of four representative companies, namely Microsoft, Google, IBM, and Twitter, to draw lessons from them.

Microsoft

1. Ethical principles

Microsoft is committed to the human-centered development of AI technology and has put forward six AI ethics principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

2. Governance institutions

Microsoft has three main internal bodies responsible for AI ethics practice: the Office of Responsible AI (ORA), the AI and Ethics in Engineering and Research committee (Aether Committee), and the Responsible AI Strategy in Engineering group (hereinafter RAISE).

ORA has four main functions: to develop responsible AI rules within the company; to empower teams to help the company and its customers implement these AI ethics rules; to review sensitive use cases to ensure that Microsoft's AI principles are upheld in development and deployment; and to promote legislation, norms, and standards that ensure AI technologies contribute to the well-being of society. Through these actions, the Office of Responsible AI puts Microsoft's AI ethics principles into practice.

Established in 2017, Aether is a committee made up of heads of product development, researchers, legal affairs, human resources, and other departments. It focuses on fairness and inclusiveness, reliability and safety, transparency and interpretability, privacy, and human-AI interaction and collaboration, actively formulating internal policies and deciding how to handle emerging problems responsibly. When problems arise within departments, the committee responds with research, deliberation, and recommendations that may evolve into general company philosophies, policies, and practices.

RAISE aims to integrate the requirements of responsible AI into teams' day-to-day development processes. It has three functions: to build responsible AI tools and systems that help the company and its customers implement AI ethics; to help project teams implement responsible AI rules and integrate responsible AI requirements into their daily work; and to provide compliance tools for engineering teams to monitor and enforce the requirements of the responsible AI rules.


3. Technical solutions for AI ethics

Microsoft has developed a series of technical solutions for the practice of AI ethics. These include technical tools and management tools covering the entire AI lifecycle, as well as toolkits that integrate required capabilities into AI systems according to the application scenario.


1 / Technical tools

(1) Evaluation

Fairlearn: A Python toolkit/library for assessing a given AI model against a range of fairness metrics, for example whether a "predict personal income" model performs better for male customers than for female customers, thereby identifying possible discrimination in the model; it also provides fairness constraints for model improvement.

InterpretML: A Python toolkit/library that integrates a range of cutting-edge explainable AI (XAI) methods. It allows users both to train an inherently interpretable "glass box" model from scratch and to understand/interpret a given "black box" model.

Error Analysis: A Python toolkit/library that provides a range of features for error analysis of mainstream AI models, including but not limited to creating visual heat maps of misclassified samples and building global/local explanations and causal analyses, to help people better explore data and understand models.

Counterfit: An open-source, command-line-based, general-purpose automation tool for testing the security and robustness of a given AI system.

(2) Development

SmartNoise: A set of tools based on cutting-edge differential privacy techniques: noise is added in a carefully calibrated way during data analysis and AI model training so that the sensitive personal data used by developers during development is not leaked.

Presidio: A Python toolkit/library. It can help users efficiently identify, manage and obscure sensitive information in big data, such as automatically identifying addresses and phone numbers in text.

(3) Deployment

Confidential computing for ML: On Microsoft's cloud, models and sensitive data are protected through system-level security mechanisms such as confidential computing.

SEAL Homomorphic Encryption: Open-source homomorphic encryption technology that allows computations to be performed directly on encrypted data, preventing private data from being exposed to cloud operators.

2 / Management tools

AI fairness checklist: The AI fairness checklist research project explores how to design AI ethics checklists that support the development of fairer AI products and services. The research team collaborated with the checklists' intended users, AI practitioners, to solicit their input and co-design a checklist covering the entire lifecycle of AI design, development, and deployment. The project's first studies produced a fairness checklist co-designed with practitioners, as well as insights into how organizational and team processes shape the way AI teams address fairness harms.

HAX Playbook: A tool for proactively and systematically exploring common AI interaction failures. The Playbook lists failures relevant to the AI product scenario so that developers can plan effective recovery mechanisms, and it provides practical guidance and examples for simulating system behavior at low cost for early user testing.

Datasheets for Datasets: The machine learning community currently has no standardized process for documenting datasets, which can lead to serious consequences in high-risk areas. To address this gap, Microsoft developed Datasheets for Datasets. In the electronics industry, every component, no matter how simple or complex, comes with a datasheet describing its operating characteristics, test results, recommended uses, and other information. Correspondingly, each dataset should have a datasheet that records its motivation, composition, collection process, recommended uses, and so on. Datasheets for Datasets facilitates communication between dataset creators and dataset consumers and encourages the machine learning community to prioritize transparency and accountability.

3 / Toolkit

Human AI eXperience (HAX) Toolkit: The HAX Toolkit is a suite of practical tools designed to help AI creators, including project management and engineering teams, adopt a human-centered approach in their daily work.

Responsible AI Toolbox: The Responsible AI Toolbox covers four interfaces: Error Analysis, Interpretability, Fairness, and Responsible AI. It aims to improve people's understanding of AI systems and enable developers, regulators, and other stakeholders to develop and monitor AI more responsibly and take better data-driven actions.

4. Guidelines for action


In order to enable project teams to better implement the AI principles, Microsoft has issued a series of action guidelines that offer specific suggestions and solutions during project development, such as "what data should be collected" and "how to train the AI model". The action guidelines are designed to save teams time, improve the user experience, and put the AI ethics principles into practice. Unlike a checklist, an action guideline may not suit every application scenario and is not mandatory for teams; for special cases and specialized areas, dedicated action guidelines are issued.

Microsoft has issued six action guidelines covering AI interaction, security, bias, and bot development, which run through the evaluation and development of responsible AI. Among them, the HAX Workbook, the Guidelines for Human-AI Interaction, and the HAX Design Patterns are designed to help solve AI interaction problems; the AI Security Guidance provides solutions to the security threats that AI may bring; the Inclusive Design Guidelines take full account of human diversity to address the biases AI can introduce; and the Conversational AI Guidelines focus on problems that may arise in bot development.


Google

1. Ethical principles

Google defines principles for the design and use of AI from both positive and negative angles as the basis for its current and future AI development. These principles, serving as an "ethics charter", guide the company's AI research and the development and use of its AI products. Google is also committed to adapting these principles over time. Specifically, they include:

On the positive side, the use of AI should:

(1) Be socially beneficial;

(2) Avoid creating or reinforcing unfair bias or discrimination;

(3) Be built and tested for safety;

(4) Be accountable to people;

(5) Incorporate privacy-by-design principles;

(6) Uphold high standards of scientific excellence;

(7) Be made available for uses that accord with these principles.

On the negative side, the company will not design or deploy AI in the following application areas:

(1) Technology that causes or is likely to cause harm;

(2) Weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people;

(3) Technologies that collect or use information for surveillance purposes in violation of internationally recognized norms;

(4) Technologies whose purpose is contrary to widely accepted principles of international law and human rights.

2. Governance institutions

In 2018, Google announced its AI principles and established a central Responsible Innovation team, which originally consisted of only six employees. Today the team has expanded significantly: hundreds of Google employees across dozens of teams build an ecosystem around the AI principles spanning human rights, user experience research, ethics, trust and safety, privacy, public policy, machine learning, and more. Google implements responsible AI innovation through this internal AI-principles ecosystem, helping Google's technology developers put responsible AI into practice in their work. At the heart of this ecosystem is a three-tier governance architecture:


The first tier is the product teams, which include experts dedicated to user experience (UX), privacy, and trust and safety (T&S) who provide expertise aligned with the AI principles.

The second tier consists of dedicated review bodies and expert teams, comprising four parts: the central Responsible Innovation review committee, the Privacy Advisory Council, the Health Ethics Committee (HEC), and the Product Area AI Principles Review Committees (PAAPRC).

(1) Central Responsible Innovation Review Committee

The team supports the implementation of AI principles across the company. The company encourages all employees to participate in the review of AI principles throughout the project development process. Some product areas have established vetting bodies to meet specific audiences and needs, such as hardware in enterprise products, devices and services in Google Cloud, and medical knowledge in Google Health.

(2) Privacy Advisory Council

The committee is responsible for reviewing all projects that may have potential privacy concerns, including (but not limited to) AI-related issues.

(3) Health Ethics Committee

Founded in 2020, the HEC is a forum for guidance and decision-making in the field of health. It provides guidance on ethical issues arising in health products, health research, and decision-making by health-related organizations, and protects the safety of Google users and products. The HEC is an integrated forum that includes subject-matter experts in bioethics, clinical medicine, policy, law, privacy, compliance, research, and business. In 2021, Google's bioethics program created the Health Ethics Cafe, an informal forum where anyone in the company can discuss bioethical issues at any stage of project development; thorny issues raised there are escalated to the HEC for review.

(4) Product Area AI Principles Review Committees

The PAAPRC comprises review committees established for specific product areas. These include Google Cloud's Responsible AI Product Review Committee and Responsible AI Deal Review Committee, which aim to ensure that Google Cloud's AI products and projects align with Google's AI principles in a systematic, repeatable way and embed ethics and responsibility into the design process. The product committee focuses on products built by Cloud AI and Industry Solutions; a comprehensive review of sociotechnical prospects, opportunities, and harms against the AI principles, combined with live discussions with a cross-functional, diverse committee, leads to an actionable alignment plan. The deal review committee is made up of four cross-functional senior executives; decisions require the unanimous consent of all four members and are escalated as needed. Stakeholders across Google's AI-principles ecosystem help the committee understand the issues under discussion and avoid making decisions in a vacuum.

The third tier of the AI governance structure is the Advanced Technology Review Council (ATRC), a rotating council of senior product, research, and business executives that represents the views of multiple Google divisions. The ATRC handles escalated issues as well as the most complex, precedent-setting cases, and it sets strategies affecting multiple product areas, weighing potential business opportunities against the ethical risks of certain applications.

Case 1: Google Cloud's Responsible AI Product Review Committee and Responsible AI Deal Review Committee decided to suspend the development of credit-related AI products to avoid exacerbating algorithmic unfairness or bias

In 2019, Google Cloud's Responsible AI Product Review Committee assessed products in the area of credit risk and creditworthiness. While Google hopes that AI will one day play a role in improving financial inclusion and financial health in the credit space, the product committee ultimately rejected the product: a creditworthiness product built with today's technology and data could have a disparate impact on gender, race, and other marginalized groups, conflicting with Google's AI principle of "avoid creating or reinforcing unfair bias". In mid-2020, the product committee re-evaluated and reaffirmed this decision. Over the past year, the deal review committee has evaluated several proposed custom AI engagements related to credit assessment. Each application was evaluated against its specific use case, and the committee ultimately decided to reject many of them. These accumulated lessons convinced Google that the development of custom AI solutions related to creditworthiness should be suspended until the risks can be properly mitigated. The policy took effect last year and remains in place.

Case 2: Based on technical issues and policy considerations, the Advanced Technology Review Council rejected a facial recognition proposal

In 2018, the Advanced Technology Review Council handled a review proposal for Google Cloud products. It decided not to offer a general-purpose facial recognition API until major technical and policy issues had been addressed, and it recommended that the team focus on purpose-built AI solutions. After years of effort, Google Cloud developed a highly constrained Celebrity Recognition API and sought approval from the ATRC, which eventually agreed to release the product.

Case 3: The Advanced Technology Review Council reviewed research involving large language models and concluded that it could proceed cautiously

In 2021, one of the topics reviewed by the Advanced Technology Review Council was the development of large language models. Following the review, the council decided that research involving large language models could proceed cautiously, but that such models could not be formally launched until a comprehensive AI-principles review had been completed.

3. Technical tools

(1) Fairness Indicators: Released in 2019 to assess the fairness of products. The related MinDiff technique is used to remediate fairness issues found in a growing number of product use cases, so that mitigation scales with training and equity problems can be addressed proactively.

(2) Federated learning: Used in products such as Gboard, federated learning allows models to be trained and updated from real user interactions without collecting raw data from individual users centrally, enhancing user privacy (a generic sketch of the federated-averaging idea appears after this list).

(3) Federated analytics: Uses techniques similar to federated learning to gain insight into product features and model performance for different users without collecting centralized data. At the same time, federated analytics also allows project teams to conduct fairness tests without accessing raw user data to enhance user privacy.

(4) Federated reconstruction: A model-agnostic approach that enables faster, large-scale federated learning without accessing user privacy information.

(5) Panda: A machine learning algorithm that helps Google evaluate the overall content quality of a website and adjust its search ranking accordingly.

(6) Multitask Unified Model (MUM): Enables search engines to understand information in various formats, such as text, images, and videos, and to make implicit connections between concepts, themes, and ideas in the world around us. Applying MUM will not only help people around the world find the information they need more efficiently, but will also enhance the economics of creators, publishers, start-ups, and small businesses.

(7) Real Tone: Improves features such as face detection, automatic exposure, and auto-enhancement for users with darker skin tones so that AI-driven camera systems perform better for them.

(8) Lookout: An Android application developed for blind and low-vision people, using computer vision technology to provide information about the user's surrounding environment.

(9) Project Relate: Use machine learning to help people with language barriers communicate more easily and use technology products.

(10) Privacy Sandbox: Partnering with the advertising industry to support publishers, advertisers, and content creators while enhancing user privacy through AI technology and providing a more private user experience.

4. Products and Services

(1) Google Cloud: Provides reliable infrastructure and efficient deployment solutions for the large-scale application of trusted AI models across industries, along with services such as employee training and integrated development environments, so that people in various industries can more easily master and use trusted AI tools and models.

(2) TensorFlow: One of the world's most popular machine learning frameworks, with millions of downloads and a global developer community. TensorFlow is used not only within Google but also worldwide to solve challenging real-world problems.

(3) Model Cards: Provide a visual, explanatory document describing how an AI model operates, which users can read to understand the model's working principles and performance limitations. Technically, model cards are intended to explain how an algorithm works in a plain, concise, easy-to-understand way, making two dimensions visible: the algorithm's basic mechanism of operation and its key limiting factors (a hypothetical model-card structure is sketched after this list).

(4) Explainable AI: With this service, customers can debug and improve model performance and help others understand their models' behavior. They can also generate feature attributions for model predictions in AutoML Tables and Vertex AI, and use the What-If Tool to investigate model behavior in an intuitive way.

5. Governance innovation: a focus on employee training

Compared with other companies, one distinctive feature of Google's AI ethics practice is technology ethics training for employees, which guides employees on the philosophy of following ethics in technology and on how to assess potential stakes, alongside courses explaining Google's AI principles and internal governance practices. Beyond that, in 2021 Google provided new employees with training courses on the AI principles and responsible innovation to help them understand Google's ethical code and the resources available to them, and it launched interactive online puzzles designed to build employees' awareness of the AI principles and test how well they remember them.

IBM

1. Ethical principles

IBM proposes three principles and five pillars for AI ethics. The three principles are: the purpose of artificial intelligence is to augment human intelligence; data and insights belong to their creators; and technology must be transparent and explainable. The five pillars are: fairness, explainability, robustness, transparency, and privacy.

2. Governance institutions

IBM's AI ethics practice is led primarily by its AI Ethics Board, under which all core elements of the company's AI governance framework sit. The board is responsible for developing guidelines and supporting the design, development, and deployment of AI, with the aim of supporting all project teams across the company in implementing AI ethics and urging the company and all employees to adhere to the values of responsible AI.

The committee is an interdisciplinary central body that includes representatives from various departments of the company and makes decisions on the work of the business, research, marketing, publicity and other departments. In addition, the committee helps business units understand expectations for technical characteristics and helps all departments of the company to become familiar with and understand each other in the field of AI ethics in order to better collaborate.

At the same time, the AI Ethics Committee will also review the proposals for new products or services that the business department may provide to customers based on the company's AI principles, specific core content, and technical characteristics. When reviewing possible future transactions with customers, the Committee focuses on three main areas: first, the technical characteristics, followed by the application areas of the technology, and finally the customer itself, that is, to examine whether the customer has properly followed the principle of responsible AI in the past.

Case 1: During the COVID-19 pandemic, the AI Ethics Board participated in reviewing the development and deployment of the Digital Health Pass

To assist with COVID-19 governance, IBM developed the Digital Health Pass. The pass's development team consulted the board from the earliest conceptual stage. Generic "vaccine passports" can raise privacy concerns or lead to unfair access, so IBM's solution shares personal information only with the individual's consent and is designed to benefit everyone. The board participated in the development phase and continues to review the solution as it is deployed.

3. Technical solutions

IBM proposes five targeted technical solutions based on the five pillars of AI ethics: explainability, fairness, robustness, transparency, and privacy. Correspondingly, they are: AI Explainability 360 toolkit, AI Fairness 360 toolkit, Adversarial Robustness 360 Toolbox v1.0, AI FactSheets 360, IBM Privacy Portal.

(1) AI Explainability 360 toolkit: From ordinary people to policymakers, from researchers to engineers, different industries and roles need different kinds of interpretability. To address this strong need for diverse, individualized interpretability, IBM researchers built the integrated interpretability toolbox AI Explainability 360 (AIX360). This open-source toolbox covers eight state-of-the-art interpretability methods and two evaluation metrics, and it provides guidance to help different types of users find the most appropriate method for interpretable analysis.

(2) AI Fairness 360 toolkit: Bias in AI algorithms is receiving more and more attention, and AI Fairness 360 (AIF360) is an open-source solution to this problem. The toolkit provides metrics and algorithms that let developers scan machine learning models for potential bias and mitigate it, an important and inherently complex part of combating bias (a usage sketch appears after this list).

(3) Adversarial Robustness 360 Toolbox v1.0: First released in April 2018, ART is an open-source library for adversarial machine learning that provides researchers and developers with state-of-the-art tools to defend and validate AI models against adversarial attacks. ART addresses growing concerns about trust in AI, particularly in mission-critical applications (a usage sketch also appears after this list).

(4) AI FactSheets 360: Automated documentation, exemplified by AI FactSheets, is an important way to enhance the explainability of AI; it can serve as a clear, concise communication medium between technologists and users and thus avoid ethical and legal problems in many situations. A FactSheet does not attempt to explain every technical detail or disclose proprietary information about the algorithm. Its fundamental goal is to enhance human decision-making when using, developing, and deploying AI systems, while accelerating developers' acceptance of AI ethics and encouraging a broader culture of transparency and explainability.


4. Guidelines for action

IBM has released Everyday Ethics for Artificial Intelligence to implement its AI ethics principles. The guide aims to enable designers and developers of AI systems to consider AI ethics issues systematically and to embed ethics throughout the AI lifecycle.

Twitter

1. Governance institutions

META team (Machine learning Ethics, Transparency & Accountability): This is a dedicated group of engineers, researchers, and data scientists within the company whose primary job is to assess unintentional harms caused, or likely to be caused, by the algorithms the company uses, and to help Twitter prioritize which issues to address first. The META team studies how AI systems work and how to improve people's experience on Twitter, for example by removing an image-cropping algorithm so that people have more control over how the images they post appear, or by setting new design and policy standards when such algorithms have an outsized impact on particular communities. The results of the META team's work do not always translate into visible product changes, but they lead to a higher level of understanding and discussion of important issues in how machine learning is built and applied.

Case 1: An in-depth study of gender and racial bias

The META team is conducting an "in-depth analysis and study" of gender and racial bias in the image cropping algorithm, which includes a gender and racial bias analysis of the image cropping (saliency) algorithm, a fairness assessment of the "homepage" timeline recommendations for different ethnic subgroups, and a content recommendation analysis for different political ideologies in seven countries.

2. Governance Innovation: Algorithm Bounty Challenge

Interestingly, to address fairness in ML-based image cropping, Twitter hosted an algorithmic bias bounty challenge, using a community-led approach to build better algorithms and gather feedback from different groups. In August 2021, Twitter held its first such challenge and invited the community of AI developers to take the algorithm apart to identify biases and other potential harms. The bounty challenge helped the company identify algorithmic biases against different groups within a short period and has become an important tool for soliciting feedback and understanding potential problems.

A few lessons

In a digital era in which new technologies, new applications, and new business formats grow exponentially, with ever deeper interaction between technology and people and increasing autonomy of technology, technology ethics has become the latest proposition of digital business ethics. As Brad Smith, President and Vice Chair of Microsoft, wrote in his book Tools and Weapons: "If you have the technology to change the world, then you have a responsibility to help solve the problems facing the world you create." China's relevant top-level policy documents and legislation have put forward new requirements for technology ethics, emphasizing the importance of technology ethics review and the cultivation of a culture of technology for good. In this context, the practices of foreign technology companies such as Microsoft, Google, IBM, and Twitter in AI ethics and trusted AI offer much meaningful inspiration.

First, in a highly technological, digital society, technology ethics will become as important to corporate governance as established functions such as finance and legal affairs. Technology ethics is a new piece of the business ethics puzzle, and more and more technology companies have begun to institutionalize mechanisms such as chief ethics officers and ethics committees as part of their regular organizational structure and to coordinate the related work.

Second, AI ethics and trusted AI need to be built systematically. Abstract principles and top-level frameworks matter, but actions speak louder than words: what matters more is translating ethical principles into concrete practice and integrating them into technology design to create responsible technology applications. Traditional and innovative approaches such as internal governance mechanisms, technical solutions, ethics training, ethical hacking communities (analogous to white-hat hackers in cybersecurity), and technical standards are playing an increasingly important role here, because trusted AI and AI ethics are not only philosophical principles but also courses of action.

Third, as the concepts of trusted AI and AI ethics themselves suggest, we need to reflect on development and deployment processes led solely by technologists and place more emphasis on diverse backgrounds and broad participation in technology development. Bringing people from policy, law, ethics, social science, and philosophy into development teams is the most direct and effective way to embed ethical requirements into technical design and development. Good technology is concerned not only with the result but also with the process.

Technology for good is the ultimate vision of a highly technological society. The concept includes at least two paths: outwardly, technology should be used to solve social problems and challenges; inwardly, attention must be paid to technology itself to create "good technology" (good tech). AI ethics and trusted AI focus on how to build good technology, and in doing so they lay the foundation for the outward-facing goal of technology for good.

References:

[1]https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6

[2]https://www.microsoft.com/en-us/ai/our-approach?activetab=pivot1%3aprimaryr5

[3]https://www.microsoft.com/en-us/ai/responsible-ai-resources

[4]https://azure.microsoft.com/en-us/solutions/devops/devops-at-microsoft/one-engineering-system/

[5]https://ai.google/responsibilities/

[6]https://cloud.google.com/responsible-ai

[7]https://www.tensorflow.org/responsible_ai

[8]https://blog.tensorflow.org/2020/06/responsible-ai-with-tensorflow.html

[9]https://github.com/microsoft/responsible-ai-toolbox

[10]https://www.ibm.com/artificial-intelligence/ethics

[11]https://aix360.mybluemix.net/

[12]https://www.ibm.com/blogs/research/2019/09/adversarial-robustness-360-toolbox-v1-0/

[13]https://blog.twitter.com/en_us/topics/company/2021/introducing-responsible-machine-learning-initiative

[14]https://blog.twitter.com/engineering/en_us/topics/insights/2021/algorithmic-bias-bounty-challenge
