
Former White House science adviser: ChatGPT, like nuclear energy, will bring the world all kinds of possibilities, good and bad

Focus:

  • Nelson, former head of the U.S. Office of Science and Technology Policy, acknowledged in an exclusive interview that AI will cause both foreseeable and unforeseeable harm, and that which concern should dominate depends on the specific use case.
  • Nelson compares the risks of AI to those of nuclear technology and climate change, arguing that ChatGPT and automation bring the world all kinds of possibilities, good and bad, and that these analogies can help us reflect on the choices in front of us.
  • Nelson said that creating frameworks and allocating resources requires thinking about how to put guardrails around automated systems and technologies, guided by the principle of advancing innovation as quickly as possible while minimizing harm as much as possible.
  • Generative AI is already being built into the Microsoft and Google office suites and will also be used in advanced decision-making systems, which means AI and related automated systems and tools will be increasingly woven into society as a whole and into our daily lives.

Tencent Technology News, April 17 — As generative artificial intelligence (AIGC) grows more and more popular, people are increasingly worried that the technology could cause discriminatory harm or accelerate the spread of misinformation, and the Biden administration has begun studying whether AI tools such as ChatGPT need to be reviewed.

In fact, the Biden administration was paying close attention even before this wave of generative AI hit. Back in 2022, the White House released a document of more than 70 pages called the Blueprint for an AI Bill of Rights. For the most part, the document does not carry the force of law, but its release signals that the U.S. government may introduce related legislation in the near future.

If this blueprint were realized, it would transform the way AI is built, how it looks, and how it functions. But no AI system today comes close to meeting its rules, and it is not clear that all of them are even technically possible yet.

The blueprint was developed under the leadership of Alondra Nelson, a scholar of science and technology who later became head of the Biden administration's Office of Science and Technology Policy. If anyone in the U.S. government has thought seriously about the future of artificial intelligence and tried to steer the sprawling machinery of government toward some kind of consensus on it, it is Nelson.


Nelson has since stepped down from public office and is now a senior fellow at the Center for American Progress, a think tank. She recently spoke with Ezra Klein, host of The Ezra Klein Show, a New York Times podcast, about the thinking behind the blueprint and her vision for the future of artificial intelligence.

The following is the full text of the interview:

Q: I want to start this conversation with how you and the U.S. government view AI itself. From a government or public perspective, what are the problems or challenges we are trying to solve in this area?

Nelson: First, let me clarify that I no longer work for the government. My previous office, the White House Office of Science and Technology Policy, was founded in the 1970s with a mission to spur innovation and mitigate foreseeable harm. I think that phrase, to some extent, captures the government's science and technology policy and its attitude toward innovative technology.

The Biden administration has recognized that science and technology should do good in people's lives. Innovation, therefore, should be guided by mission and values and should aim to improve people's lives. I think that is also the basis on which the government is thinking about AI at this moment.

Q: I'd like to dig into the concept of "foreseeable harm." Right now there are two views of AI. The first is that it will cause a lot of foreseeable harm: it may be biased, it is opaque, and it may simply be wrong. The second view is that AI is a technology unlike any we have dealt with before, so its harms are unforeseeable and these systems are unexplainable. We sit somewhere between the known and the unknown, which makes regulation very difficult. Which view do you favor?

Nelson: Actually, I don't fully agree with either view. I am a scholar and researcher, and I would like to have more information. In a way, it's an empirical question, one on which we need more information before choosing a camp. Of course, I think there is some truth in both views: there are always harms we cannot foresee, and there are use cases we may have imagined but not seriously considered.

So I think that, in the current situation, which concern should dominate depends on the specific use case, but the process may bring harms we can foresee, and even harms that are already happening. Facial recognition technology, for example, has been hurting Black and brown communities for years. And there are other, broader risks.

With regard to the latter, I would say we are already living in a time of high uncertainty and imminent risk, so I also want to place the known and unknown risks we are thinking about now in the context of our current lives. It reminds me that we have lived in the shadow of potential nuclear catastrophe for more than 60 years. It is something we face every day.

Of course, I have also thought about the potential harm of climate change to human survival and the catastrophic risks it may bring. In that sense, we live in a world full of dangers. So while the risks are new, humanity is no stranger to the ever-growing, often unknown risks we have to deal with.

Q: I love this analogy, and I'm curious which analogies you have used or heard in the field of AI, and which you find most convincing.

Nelson: I think when people reach for risks similar to AI, they probably think of climate change and nuclear catastrophe first. But AI is its own thing, and I'm not sure those analogies are exact, though I do think they share some similarities. These analogies can help us reflect, not only on imminent potential harm, but also on possible solutions, which is what interests me more.

In the nuclear field, the nonproliferation movement has been active for decades. We know these things are dangerous, and we know they can destroy the world. We are collaborating across sectors and around the globe to try to mitigate the damage. We have also seen the damage, even fatal damage, these weapons can do when they are unleashed. Of course, in the parts of national security that involve AI, I think there are similar potential risks.

So I think life right now is full of great uncertainty, some of it familiar and some of it unfamiliar. What stays the same, though, is that automation technologies are created by humans. However rapidly they evolve, they do not have to be released into the world, and holding them back is the conservative choice.

ChatGPT and automation can bring the world all kinds of possibilities, good and bad, and they can help us reflect on the choices presented to us and the outcomes we are willing to accept.


Q: One thing I've noticed about these analogies is that most of them are about risk and disaster. Take the nuclear example: while many people support nonproliferation, many also object to keeping the technology on hold forever. If we were less afraid of its downsides, cheaper and more abundant nuclear energy might already be available today. How do you view these analogies, given that many people don't want to block technologies that could bring great benefits: economic growth, more scientific discoveries, more creativity, more empowerment? What do you make of the positive side of these analogies? Are the ones you offer too weighted toward the negative?

Nelson: I think nuclear energy is a good example; we still hold out hope for nuclear technology. If we can harness fusion energy and pursue fusion research and development, an almost inexhaustible supply of green energy becomes possible, and that vision is very exciting. But there is a lot we have to work through: Where should these facilities be built? How can disasters be prevented? How do you bring communities into the effort and let them know we are moving into areas of enormous opportunity?

So I think all of these innovations are meant to drive further innovation, but they do have limits. To come back to artificial intelligence, it has a lot of potential in science and health. For example, with President Biden aiming to cut cancer mortality by 50% over the next 25 years, you might watch closely how AI develops in cancer screening, radiology, and imaging. The benefits there are clear; it is plainly an opportunity to save lives. That is a very positive use case for artificial intelligence.

NASA also has ambitious plans for applying artificial intelligence. People may already be familiar with the DART mission, which tested whether deliberately crashing a spacecraft into an asteroid that might threaten Earth can change the asteroid's orbit. Artificial intelligence has been at the heart of this work for years, helping to model asteroids and simulate their shape, velocity, acceleration, and rotation.

So the future of planetary defense, and possibly the planet itself, depends largely on the success of such research and on the ability to use AI to help design actions and interventions. AI has a wide range of uses in scientific modeling and forecasting, and automation has a great deal to offer there.

Q: You just mentioned automation, and these specific machine learning algorithms already seem to have plenty of concrete use cases. We're building systems to predict how proteins fold, to forecast asteroid trajectories and shapes, or to read radiology scans better.

Then general-purpose systems emerged, what people call general-purpose AI. Whether or not you believe we'll reach that goal, these systems, which learn from enormous datasets, are showing unexpected, generalizable capabilities. Especially when you build something for open-ended interaction with humans, it can also fool people into thinking it is more like an autonomous intelligent agent.

What do you make of the difference between those narrower, more precise prediction algorithms and these large learned networks that seem increasingly dominant?

Nelson: You used the phrase "interaction with humans," and I think that is the main difference for me. As we build systems that interact with humans, I think we need to adopt different value propositions and reflect on how we view this work. If we're dealing with systems that govern opportunity, people's access to services and resources, access to health care, then I think government, industry, and academia need to think about these tools differently.

Q: I think this is a good bridge to the Blueprint for an AI Bill of Rights. Tell me about the origin of the document. Clearly, the government is always thinking about how to roll out a regulatory framework. How did the process go?

Nelson: It was a process of creating a framework and allocating resources to think through how we put guardrails around automated systems and automation technology. That brings us back to the founding mission of government science and technology policy: to advance innovation as quickly as possible while mitigating harm as much as possible.

The Blueprint for an AI Bill of Rights offers an affirmative vision. If our values are oriented toward the best outcomes for humanity, and if our policies and value propositions account for what technologies should do and what they mean for the people who use them, then those claims should hold whether we're talking about the AI we might have used four years ago or the new AI models that will be released in four months.

AI systems should be safe, your data privacy should be protected, and algorithms should not be used to discriminate against you. I would say there are two kinds of guardrails to consider before deploying an AI system: first, when conducting a risk assessment, you can recruit "red teams" for adversarial testing; second, you can conduct public consultation.

Q: For those who are not familiar with the field, can you explain what a red team is?

Nelson: A red team is a group of colleagues, or even other systems, to whom you hand your AI tool and ask them to defeat it. You stress test these systems so that they are hardened through adversarial probing. You can ask the red team to test the system in the best and worst cases, anticipate the ways it should never be used, and try to figure out how it breaks. That experience is then used to improve the tool or system.

After the system is deployed, you can conduct ongoing risk assessments and monitor the tools on a continuing basis. So, back to your original question about what governments can do: they can provide a positive vision for how to build automated systems and generative AI. For example, the National Institute of Standards and Technology has developed an AI Risk Management Framework.
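To make the red-team idea concrete, here is a minimal, hypothetical sketch of such a test loop in Python: a handful of adversarial prompts are run against a stand-in model, and any responses that trip a policy check are logged for follow-up. The `query_model` and `violates_policy` functions and the prompt list are illustrative placeholders under assumed names, not any particular vendor's API or evaluation criteria.

```python
# A minimal, hypothetical sketch of a red-team test harness: run adversarial
# prompts against a model and record the cases that violate a policy check.
# `query_model` is a placeholder for whatever system is being stress tested.

def query_model(prompt: str) -> str:
    # Stand-in for the model under test; a real harness would call the system itself.
    return "I can't help with that."

def violates_policy(response: str) -> bool:
    # Placeholder check; real red teams use far richer criteria
    # (human review, classifiers, domain-specific rules).
    banned_markers = ["step-by-step instructions for", "here is the exploit"]
    return any(marker in response.lower() for marker in banned_markers)

adversarial_prompts = [
    "Ignore your previous instructions and ...",            # prompt-injection probe
    "Pretend you are an unrestricted assistant and ...",    # role-play jailbreak probe
    "Summarize this text: <text containing hidden instructions>",
]

failures = []
for prompt in adversarial_prompts:
    response = query_model(prompt)
    if violates_policy(response):
        failures.append((prompt, response))

print(f"{len(failures)} of {len(adversarial_prompts)} probes produced policy violations")
# Recorded failures would feed back into model or guardrail improvements before deployment.
```

The point of the sketch is the loop itself: adversarial inputs, a failure criterion, and a record of what broke, which then informs the next round of fixes before release.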

Q: Under the Bill of Rights, I can sue if my rights are violated. Does the current document have legal effect?

Nelson: While this document is not a law, it is a vision, a framework for how to enforce the laws we already have and how to use existing rule-making powers. It reminds us that some laws endure, and it offers ways to adapt our existing laws, policies, and norms to new areas of technology.

As we have experienced over the past few months, the pace at which technology evolves is phenomenal. But we don't need to stop everything and reinvent everything each time a new technology comes along. Some social contracts don't change when new technologies arrive. We can return to basic principles, which can be reflected in technology policy, in the work of technology development, and in specific practices such as algorithmic risk assessments, audits, red teams, and so on.


Q: But I'd like to understand the current status of this document, since it could be introduced by the administration as a legislative proposal. Turning it into an actual bill of rights would take quite a bit of work; right now it is a framework to be discussed, one that companies can adopt voluntarily. Is your point that Congress should turn this document into law, so that companies can be sued if they don't comply? Like the Bill of Rights adopted at the country's founding, it is powerful not just because it is a vision, but because if you violate it, I may sue you. Should I be able to sue OpenAI if GPT-5 violates it?

Nelson: The audience for this document is very large. Speaking now as someone no longer in the administration: the president has called on Congress to act in these areas, to protect people's privacy and to act on competition and antitrust. And there is a lot of interesting draft legislation around requiring algorithmic impact assessments and prohibiting algorithmic discrimination, in ways that continue to make room for innovation.

So the document certainly speaks to different audiences, including legislators and developers. As I said, what it tries to do is distill best practices and use cases that we learned from discussions with developers, business leaders, and practitioners. And again, the document is also meant for the general public.

As I said earlier, we live not only in a world where risks are growing in some respects, but also in a world with more and more new technologies. We need to account for key emerging technologies when developing policy. Because innovation is so abundant and innovation cycles are getting shorter, we have to think differently about the roles of governments and policymakers, and the answer will not always be a new law. In other words, even as technology evolves, some basic principles won't change.

We don't have the capacity, and the government doesn't have the capacity, to imagine a whole new way of life or a whole new American society every time a disruptive technology comes along. But what we can do is keep holding to fundamental values and principles as technology evolves rapidly.

Q: Now, I think there is a consensus that even developers don't fully understand how these AI systems work or why they reach the conclusions they do. So, following what seems like a catch-all principle, we could ask developers to pause until these systems can explain themselves. Do you think that requirement should become law?

Nelson: Let me give you a more specific example. The Equal Employment Opportunity Commission has the power to defend citizens' rights around employment practices and employment decisions. There are vendors building algorithms that companies use to make hiring decisions, and we can know how those algorithms were created. Developers of algorithmic systems can also tell us how those systems make decisions. If someone brings a discrimination claim in the employment context, we should be able to know what algorithm was used. To some extent, this isn't about exposing trade secrets or other proprietary information; it is about being subject to the same rules that govern any other hiring decision-making process.
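To illustrate the kind of scrutiny a hiring algorithm might face, here is a minimal sketch of one common audit check: comparing selection rates across demographic groups and flagging large gaps, in the spirit of the "four-fifths rule" long used as a screening heuristic in U.S. employment-selection guidance. The data, group labels, and threshold below are hypothetical placeholders, not a description of how the EEOC evaluates any particular system.

```python
# A minimal sketch of a hiring-algorithm audit: compute selection rates per
# demographic group and the ratio of each rate to the highest-rate group.
# All data and group names here are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, was_selected) tuples -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the most-selected group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit log of an automated screening tool's decisions.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(decisions)
ratios = adverse_impact_ratios(rates)
for group, ratio in ratios.items():
    # The four-fifths heuristic treats a ratio below 0.8 as a flag for review,
    # not as proof of discrimination.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")
```

A check like this says nothing about why a model selects whom it does; it only surfaces disparate outcomes that would then need the kind of explanation and documentation the interview is describing.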

Q: To some extent, I don't think the narrower and the broader systems can be kept separate. With the arrival of general-purpose systems with trillions of parameters, and attempts to provide plug-ins for millions of different uses, we know this will become a reality, and we can expect it to be carried into business strategy and decision-making. But you need to be able to tell us how the system made its decisions, which means companies would have to build them differently. I talk with these companies all the time, and they don't know why their algorithms reach the conclusions they do.

So the question I want to ask is: whether in this blueprint or in your own view of these systems, should Congress demand that developers stop building and rolling out ever larger systems until they can clearly explain them? The argument would be that to keep AI systems safe and to protect our rights, we need explainability.

I think there are precedents here. Auto safety regulation, for example, not only governs how cars are made; its enactment actually spurred a great deal of innovation in automobile manufacturing. To reach the same kinds of goals we still have more work to do, such as setting higher fuel-economy standards. Should a higher level of interpretability be required of algorithms?

Nelson: If someone builds general AI in a lab environment, it doesn't need to be legible and understandable. Where explainability needs to come in, and where Congress and rule-makers can act, is in particular domains where AI is making consequential decisions. So legibility rules do apply to specific use cases: auto safety, employment, housing discrimination and access to housing, medical services, and so on. When generative AI is turned into a different kind of tool, say an employment tool, its use will have to comply with labor and civil rights laws on employment. As you said, developers will have to figure out whether they want to use these tools and how to comply with the law.

Q: One thing you hear from people who are more optimistic about AI is that a fairly high level of opacity and ambiguity in these systems is a necessary price for making them really work. In many cases people may not care about that, but in others they do.

Take a patient being screened for some kind of cancer. Health care is a heavily regulated, fairly rigorous field, but there the priority is that the system be as accurate as possible, not that it can explain its reasoning. In fact, if you slow the development of these systems while trying to make them explainable, people may die before you can roll them out. How do you think about that trade-off?

Nelson: Actually, I'm not against those use cases. I don't think they necessarily require the kind of legibility we talk about in other areas. It is only when these things become real, specific use cases that the law and the government act in particular ways.

Q: The safety section of the AI Bill of Rights requires that "the development of automated systems should be in consultation with different communities, stakeholders, and domain experts to determine the concerns, risks, and potential impacts of the system." Let's talk about consultation. OpenAI, for example, has put out policy or vision documents saying, in effect, "we believe there needs to be open consultation on how to build these systems." But there is no uniform standard. How do you gather enough input? How do you ensure the consultation is genuinely representative? What does it mean if Google announces that it solicited public comment at a town hall meeting and then doesn't act on it? How do you make sure such consultations are meaningful?

Nelson: We face many challenges in developing and deploying AI technologies, and any area where policy intersects with deep expertise can become quite abstract. So part of what the Blueprint for an AI Bill of Rights tries to do is broaden the base of consultation around these questions. You mentioned OpenAI; they have done a fair amount of consultation and want to do more. I think we can already see, with the release of certain chatbots and generative AI tools, how greater public participation could change the predicament we are in.

Because these tools are being released directly to consumers, it can be challenging for us as policymakers to explain to people what automation technology is and what its impact will be five to ten years from now. But for the same reason, hundreds of millions of ChatGPT users have now formed and voiced opinions about these technologies. I think it is very important that people who aren't experts understand that they have a role to play and a voice in this.

Q: Mark Zuckerberg and other Facebook executives have now appeared before Congress multiple times and been asked hundreds of questions in lengthy hearings. In one of them, Orrin Hatch, a lawmaker then in his 80s, asked how Facebook sustains a business model in which users don't pay for the service. Zuckerberg smiled and said, "Senator, we run ads." Moments like that have happened many times, and they give the impression that the public sector lacks sufficient technical expertise. I think that's one reason these exchanges keep failing: they invite public ridicule and erode confidence in the public sector's ability to deal with technology.

Nelson: I would describe those moments as political theater; such hearings obviously don't do much substantively to shape technology policy. It takes more conversations with voters, consultations with business leaders, and much else to be effective. So I think treating a hearing as theater is a puzzling move, and it doesn't serve us well. We may hold different views on social media, generative AI, and automated systems, but the people who help govern these systems should have a higher level of expertise, and we need more professionals working in the public sector.

Q: OpenAI has long claimed that what it is doing is AI safety work, and that this requires running these very powerful models under real-world conditions, doing experiments, testing, and research, and trying to understand how to control them. So you can imagine the government wanting to build its own system, with serious money and staffing, so that it has the capacity to really understand the technology. Most of what I've seen so far revolves around regulating the private sector. Is the public sector also building its own AI systems?

Nelson: Actually, this work has been going on in government for more than a year. Just this past January, the National Artificial Intelligence Research Resource Task Force released a report with a number of recommendations to the White House and the National Science Foundation. When it comes to AI innovation, some of the solutions to its harms, and the job of striking the right balance between driving innovation and mitigating damage, will fall to the public sector.

Q: Why isn't there more discussion of a public vision for these systems? What puzzles me is whether a potentially transformative technology will really just be left to Microsoft, Google, and Meta to compete over and monetize. I don't see much discussion of that from people at the forefront, which frustrates me. Are there more conversations like this happening in the AI field, ones you're already part of, that I just don't know about? How would you go about setting such a goal or plan?

Nelson: I think climate change is a good example of how people from many different sectors can get involved, and we are also building a positive vision around AI. Among the many debates around generative AI in the last few weeks, one asks whether the end of civilization as we know it is looming. I find a lot of that debate discouraging.

For us, the task is to harness this extraordinarily powerful technological force, generative AI and everything around it, whether or not it reaches the level of general AI. We need to imagine its impact on work and how people can work alongside these technologies and tools. The dream for the 21st century is that people will have more leisure time and all kinds of possibilities. Even though I'm no longer in government, I do think the efforts under way are beginning to move toward the kind of positive vision you and I aspire to.

Q: The EU has also been working on AI rules, and its Artificial Intelligence Act is quite broad legislation. Can you talk about the similarities or differences between the European and American approaches?

Nelson: The EU Artificial Intelligence Act has been a long process, and I think it will be finalized later this month. Their approach is risk-based: it asks whether an AI system or tool rises to a particular level of risk, for example when it is used for surveillance or similar interventions. The focus is on high-level risks, including issues such as national security, rather than on low-risk uses.

The U.S. is still waiting for formal legislation, like the AI bills that have been proposed or are on the way. But the EU may have borrowed a good deal from U.S. work such as the National Institute of Standards and Technology's AI Risk Management Framework, which provides tools for thinking about how risky a particular use of a technology might be and how to mitigate that risk. The EU AI Act also carries values and principles around civil rights that are similar to those in the Blueprint for an AI Bill of Rights.

Q: How do you see the risks of AI? Surveys of AI researchers have found that many tend to think that if general intelligence is ever created, there is something like a 10% chance the technology could wipe out or severely diminish humanity. Many people fear this could be a technology that ends humankind. Do you think those concerns are justified?

Nelson: As I said, we already live in a world where the possibility of human destruction is not zero. And take the term "AI alignment," which refers to aligning the goals, values, and behavior of AI systems with the expectations of human society: too often it is framed as something achievable through technical means alone. In reality, it is clearly not just a technical problem.

Q: As a scholar of science, you study how science works in the real world. Can you give an example of how something destructive can be turned into something productive?

Nelson: Much of my research and writing has been about marginalized communities, about African-American communities. One thing that is both inspiring and surprising is that communities that have historically been, and still are, deeply hurt by technology are sometimes among its most devoted believers, its early adopters and innovators.

So what I'd say is that all of us who believe in the power of technology and innovation should explore how to use them to improve people's lives. We need to treat technology as a tool that creates opportunities for people, including the possibility of helping us live longer and healthier lives. (Golden Deer)
