
The United States announced new actions to promote responsible AI innovation and protect human rights and safety

Author: N-dimensional spacetime

The United States announced new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people's rights and safety. These steps build on the Administration's strong record of leadership in ensuring that technology improves the lives of the American people, and they break new ground in the federal government's ongoing effort to advance a cohesive and comprehensive approach to managing the risks and seizing the opportunities of AI.

AI is one of the most powerful technologies of our time, but to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center and support responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior administration officials will meet today with the CEOs of four American companies at the forefront of AI innovation, Alphabet, Anthropic, Microsoft, and OpenAI, to underscore this responsibility and the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and to society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, nonprofits, communities, international partners, and others on critical AI issues.

This work builds on the many steps the Administration has already taken to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for establishing a National AI Research Resource, both released earlier this year.

The Administration has also taken important action to protect Americans in the age of artificial intelligence. In February, President Biden signed an executive order directing federal agencies to root out bias in the design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, and the Department of Justice's Civil Rights Division issued a joint statement underscoring their collective commitment to use their existing legal authorities to protect the American people from AI-related harms.

The government is also actively working to address the national security concerns raised by AI, especially in key areas such as cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure that leading AI companies have access to best practices, including practices for safeguarding AI models and networks.

The new actions announced today include:
  • New investments to support responsible U.S. artificial intelligence research and development (R&D). The National Science Foundation announced $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of institutes nationwide to 25 and extend the network of participating organizations into nearly every state. The institutes catalyze collaboration among institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, the institutes bolster America's AI R&D infrastructure and support the development of a diverse AI workforce. The new institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, and education.
  • Public assessments of existing generative AI systems. The Administration announced that leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, have committed to participate in a public evaluation of AI systems, consistent with responsible disclosure principles, on an evaluation platform developed by Scale AI at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how they align with the principles and practices outlined in the Biden-Harris Administration's Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide researchers and the public with critical information about the impacts of these models, and it will enable AI companies and developers to take steps to fix issues found in those models. Testing AI models independently of the government and of the companies that developed them is an important component of their effective evaluation.
  • Ensure U.S. government policies lead by example in mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) announced that it will release draft policy guidance on the U.S. government's use of AI systems for public comment. The guidance will establish specific policies for federal departments and agencies to ensure that their development, procurement, and use of AI systems centers on safeguarding the American people's rights and safety. It will also empower agencies to responsibly use AI to advance their missions and strengthen their ability to serve Americans equitably, and it will serve as a model for state and local governments, businesses, and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it can benefit from input from advocates, civil society, industry, and other stakeholders before being finalized.
