
Microsoft's First AI Transparency Report: Seven Keys to Building Responsible Generative AI

Author | Zhidongxi

Compiled by | A Pen

Edited by | Yun Peng

Zhidongxi reported on May 8 that Microsoft recently released its first annual Responsible Artificial Intelligence Transparency Report. The report outlines the measures Microsoft developed and deployed in 2023 and its progress in deploying AI products safely, including the creation of 30 responsible AI (RAI) tools and the expansion of its RAI team. Microsoft has also introduced new tools in Azure AI that improve the quality of AI output while preventing systems from being used maliciously.

Generative AI has made huge strides over the past year, letting people create photorealistic visuals from text and powering assistive tools like Microsoft's, which can summarize meetings, help draft business proposals, and even suggest dinner menus based on the ingredients in the fridge. Microsoft has worked to establish principles and processes for building AI applications that give users the experience they need, but deploying generative AI products at scale also presents new challenges and opportunities.

At Microsoft, Chief Responsible AI Officer Natasha Crampton defines and manages the company's approach to RAI, while Sarah Bird, Chief Product Officer of Responsible AI, drives RAI implementation across the product portfolio.

1. Make RAI a foundation rather than an afterthought

According to Crampton, RAI is not the responsibility of a single team or a single set of experts, but of Microsoft's entire workforce. Every employee working on generative AI applications must follow the company's RAI standard, which requires assessing the potential impact of new AI applications, developing plans to manage unknown failure modes, and identifying limitations so that customers, partners, and end users of the AI applications can make informed decisions.

"In RAI work, it's important to add afterthought or requirements before delivering a product, and these should be considered and included in the checklist during the development process." "Everyone in the company should think about how to make AI applications more responsible when they first develop a product." ”

2. Continuous interaction with customers

In Bird's view, AI product development is a dynamic process. Achieving generative AI at scale requires rapidly integrating customer feedback from dozens of pilot projects and engaging with customers continuously. It also means understanding the problems people run into when they first use a new technology and thinking about what would make the experience better.

As a result, Microsoft decided to offer different conversation style options in Copilot on its Bing search engine, including more creative, balanced, and more precise modes, to meet the needs of different users.
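Bing's actual implementation is not public, but named styles like these are commonly realized by mapping each style to sampling parameters such as temperature and top_p. The sketch below illustrates that idea with the OpenAI Python SDK; the model name, preset values, and prompt are illustrative assumptions, not Microsoft's settings.

```python
# Illustrative sketch only: Bing Copilot's real configuration is not public.
# One common approach maps a named conversation style to decoding parameters.
from openai import OpenAI

# Hypothetical presets: lower temperature -> more deterministic ("precise"),
# higher temperature -> more varied output ("creative").
STYLE_PRESETS = {
    "creative": {"temperature": 1.0, "top_p": 0.95},
    "balanced": {"temperature": 0.7, "top_p": 0.9},
    "precise": {"temperature": 0.2, "top_p": 0.8},
}

def chat(prompt: str, style: str = "balanced") -> str:
    """Send a prompt using the sampling preset for the chosen style."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        **STYLE_PRESETS[style],
    )
    return response.choices[0].message.content

print(chat("Suggest a dinner menu using eggs, spinach, and rice.", "creative"))
```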

"We should be working with our customers to run trials and let them try out some new products during the experimental phase, and in the process, we can learn and adapt our products accordingly," Bird said. ”

3. Build a more centralized system

Bird believes that as Microsoft launches Copilot and integrates AI capabilities into its products, it needs a more centralized system to ensure that everything it ships meets the same standard. That is why Microsoft built an RAI technology stack in Azure AI, so that teams can apply the same tools and processes.

"Technology is evolving so fast that Microsoft has to get it right the first time and apply that experience to the best of the future work." ”

In response, Microsoft's AI experts developed a new method for centrally evaluating and approving product releases. Using a consensus-driven framework, they review the steps product teams take at every level of the technology stack, as well as before, during, and after a product launch, in order to map, measure, and manage the potential risks of generative AI. They also consider data collected from testing, threat modeling, and red teaming. Red teaming is a testing method that stress-tests new generative AI technologies by attempting to bypass or manipulate their safety features, to ensure they are safe and reliable.
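Microsoft's internal red-teaming tooling is not described in the report, but a minimal automated harness might look like the sketch below: adversarial prompts are replayed against a model, and any response that fails to refuse is flagged for human review. The refusal markers, probe prompts, and stub model are placeholder assumptions.

```python
# Minimal red-teaming harness sketch (not Microsoft's actual tooling).
# Replays adversarial prompts against a model and flags responses that
# do not refuse, as candidate safety failures for human review.
from typing import Callable, List, Tuple

REFUSAL_MARKERS = ["i can't", "i cannot", "i won't", "unable to help"]

def looks_like_refusal(response: str) -> bool:
    """Crude check: did the model decline the request?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def red_team(generate: Callable[[str], str],
             probes: List[str]) -> List[Tuple[str, str]]:
    """Return (prompt, response) pairs where the model did NOT refuse."""
    failures = []
    for prompt in probes:
        response = generate(prompt)
        if not looks_like_refusal(response):
            failures.append((prompt, response))
    return failures

if __name__ == "__main__":
    # Stub model for demonstration; swap in a real endpoint call.
    def stub_model(prompt: str) -> str:
        return "I can't help with that request."

    for prompt, response in red_team(stub_model, ["<adversarial prompt>"]):
        print("NEEDS REVIEW:", prompt, "->", response)
```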

A centralized review process makes it easier to identify and address potential issues across the product portfolio, including vulnerabilities and security implications, and to ensure that information is shared in a timely manner with customers and with developers outside Microsoft.

4. Inform users of the source of AI-generated information

As AI systems become capable of generating synthetic video, audio, and images that are difficult to distinguish from the real thing, it is increasingly important that users can identify the source, or provenance, of AI-generated information.

In February, Microsoft and 19 other companies agreed to a series of commitments to combat the deceptive use of AI and "deepfakes" in the 2024 elections. These commitments include preventing users from creating fake images that mislead the public, embedding metadata that identifies an image's origin, and providing a mechanism for political candidates to report deepfake videos of themselves.

In addition, Microsoft has developed and deployed Content Credentials, which let users verify whether an image or video was generated by AI. Microsoft's AI for Good Lab is also working to address the challenges posed by deepfakes, focusing on identifying them through technology, tracking down the actors who create and distribute harmful content, and analyzing their tactics.
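Content Credentials builds on the C2PA provenance standard, whose details are beyond the scope of this article. The toy sketch below illustrates only the underlying idea of tamper-evident provenance metadata: a signed record bound to the content that a verifier can check. The HMAC scheme, key, and names here are simplifications invented for illustration; real Content Credentials use certificate-based signatures embedded in the media file itself.

```python
# Toy illustration of tamper-evident provenance metadata, in the spirit of
# Content Credentials / C2PA. Real Content Credentials use certificate-based
# signatures embedded in the media file; this sketch uses an HMAC with a
# shared demo key purely to show the sign-then-verify idea.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Create a signed provenance record for generated content."""
    record = {
        "generator": generator,  # e.g. "ExampleImageModel v1" (hypothetical)
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is intact and matches the content."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"<generated image bytes>"
record = attach_provenance(image_bytes, "ExampleImageModel v1")
print(verify_provenance(image_bytes, record))        # True
print(verify_provenance(b"tampered bytes", record))  # False
```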

"These issues are not only a challenge for tech companies, but for society as a whole," Crampton said.

5. Deliver RAI tools to customers

According to the report, to improve the quality of AI model output and prevent abuse, Microsoft is committed not only to giving customers tools and protections, but also to encouraging them to take responsibility for how they use AI. These tools and measures include open-source and commercial products, as well as guidance for creating, evaluating, deploying, and managing generative AI systems.

"Our focus is to make security the default choice for users," Bird said. ”

In 2023, Microsoft released Azure AI Content Safety, a tool that helps customers identify and filter hateful, violent, and other inappropriate content in AI model output. Microsoft has since added a series of new tools to Azure AI Studio to help developers and customers improve the safety and reliability of their generative AI systems.
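As a rough sketch of how a customer might apply such filtering, the example below scores a piece of model output with the azure-ai-contentsafety Python SDK and blocks it above a severity threshold. The endpoint, key, and threshold are placeholders for your own Azure resource and policy.

```python
# Sketch: screen model output with Azure AI Content Safety before showing
# it to users. Endpoint/key come from your own Azure resource (placeholders).
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Analyze a model's output; each category (hate, violence, sexual,
# self-harm) is returned with a severity score.
result = client.analyze_text(AnalyzeTextOptions(text="<model output to check>"))

SEVERITY_THRESHOLD = 2  # policy choice: block at or above this severity
for analysis in result.categories_analysis:
    if analysis.severity is not None and analysis.severity >= SEVERITY_THRESHOLD:
        print(f"Blocked: {analysis.category} severity {analysis.severity}")
```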

6. Detect vulnerabilities to prevent malicious "jailbreaking"

As people gain experience with more sophisticated AI technologies, some will try to challenge the systems in various ways. This has given rise to a phenomenon known as "jailbreaking," which in this context refers to attempts to bypass the safety tools built into AI systems.

"We didn't design our products with these perverse uses in mind, but in the process of pushing the boundaries of technology, people can take advantage of the edge capabilities of technology for unintended or illegitimate purposes," Crampton explains.

As a result, Microsoft not only probes new AI products for possible vulnerabilities before release, but also works with customers to ensure they have the latest tools to protect their own custom AI applications built on Azure.
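As a deliberately naive illustration of the problem, the sketch below pre-screens user input for common jailbreak phrasings before it reaches a model. Production defenses, including Microsoft's, rely on far more robust classifiers; the patterns here are invented examples only.

```python
# Naive pre-screen for common jailbreak phrasings (illustration only;
# real systems use trained classifiers, not keyword patterns).
import re

JAILBREAK_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"pretend (you are|to be) .* without (restrictions|rules)",
    r"developer mode",
    r"your (system|hidden) prompt",
]

def flag_possible_jailbreak(user_input: str) -> bool:
    """Return True if the input matches a known jailbreak phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in JAILBREAK_PATTERNS)

if flag_possible_jailbreak("Ignore all previous instructions and ..."):
    print("Input flagged for review before being sent to the model.")
```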

7. Inform users of the limitations of AI

While AI can make life easier in many ways, it still makes mistakes, so it is good practice for users to verify the information they receive. For this reason, when users chat with Microsoft's AI systems, a link to the sources is provided at the end of each generated message.
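The sketch below shows one simple way such grounding can be surfaced: numbered source links appended to a generated answer so that users can check the claims themselves. It is an illustration, not Microsoft's implementation; the data structures and URL are hypothetical.

```python
# Illustration (not Microsoft's implementation): append the sources used
# to ground an answer so users can verify the information themselves.
from dataclasses import dataclass
from typing import List

@dataclass
class Source:
    title: str
    url: str  # hypothetical URL in the example below

def render_answer(answer: str, sources: List[Source]) -> str:
    """Append numbered source links to a grounded answer."""
    lines = [answer, "", "Learn more:"]
    for i, source in enumerate(sources, start=1):
        lines.append(f"{i}. {source.title} - {source.url}")
    return "\n".join(lines)

print(render_answer(
    "Microsoft published its first Responsible AI Transparency Report in 2024.",
    [Source("Responsible AI Transparency Report", "https://example.com/report")],
))
```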

Since 2019, Microsoft has published documents called Transparency Notes, which give customers of its platform services detailed information about a product's capabilities, limitations, intended uses, and responsible use of AI. Microsoft has also added user-friendly notifications to consumer-facing products such as Copilot. These notifications cover topics such as risk identification and the possibility of incorrect generated content, while reminding people that they are interacting with AI.

"We don't know how users will use this new technology, so we need to listen to them." Bird believes that as generative AI technology and its applications continue to expand, Microsoft must continue to strengthen its systems, adapt to new regulatory requirements, and update its processes in order to create AI systems that deliver the experience users expect.

Conclusion: Achieving true RAI requires broader social participation and oversight

As AI technology advances, it brings convenience but also new challenges and responsibilities. Issues such as risk identification and deepfakes are not problems for technology companies alone; they are challenges the whole of society must address together.

Against this backdrop, Microsoft's first annual Responsible AI Transparency Report outlines its achievements and challenges in deploying AI products, highlights issues such as risk identification and security, and calls on industry and society to work together to ensure the healthy development of AI.

In the future, as AI technology matures and its applications expand, providing users with safer, more reliable AI experiences while ensuring those applications meet ethical and legal requirements will be an important challenge for society as a whole. Microsoft's efforts are commendable, but broader social participation and oversight are still needed.

Source: Microsoft's official website