
Generative AI's three giants scramble over regulation, while the father of ChatGPT's London talk draws protests

Author: The Paper

Thierry Breton, the EU's head of digital policy, discussed introducing an "AI Pact": a set of informal guidelines for AI companies to follow before the formal rules take effect.

There has probably never been an industry in human history so eager to be regulated.

After calling for a new AI regulatory body at a congressional hearing, OpenAI CEO Sam Altman embarked on a European tour, meeting heads of government and holding conversations at universities to reiterate his regulatory recommendations. In a media interview, he warned that OpenAI could withdraw its services from the European market because of the AI regulations being drafted in the European Union.

Other tech giants are not to be outdone. On May 25, local time, Microsoft President Brad Smith hosted a high-profile event in Washington with US lawmakers to launch Microsoft's recommendations on how governments should regulate AI.

The day before, Google CEO Sundar Pichai met with EU officials in Brussels to discuss AI policy, stressing the need to regulate the technology properly without stifling innovation.

Altman is optimistic about "alignment"

On May 24, local time, hundreds of people lined up at University College London to hear Altman speak. Attendees waiting in the sun chatted about their experiences using ChatGPT, while a handful of protesters at the gate issued a stark warning: OpenAI and companies like it must stop developing advanced AI systems, or they will harm humanity.


Outside University College London, where OpenAI CEO Sam Altman held his conversation, protesters carried signs opposing AGI.

One of the protesters, Gideon Futerman, a University of Oxford student who studies solar geoengineering and existential risk, told tech outlet The Verge: "He's hyping up systems with enough known harms. We should probably stop them anyway."

Still, Altman was warmly welcomed when he took the stage. In the conversation he reiterated earlier arguments, saying that people are right to worry about AI but that its potential benefits are far greater. He welcomed the prospect of regulation, but only of the right kind. He said he would like to see "something between the traditional European approach and the traditional American approach," that is, some regulation, but not too much. He stressed that too many rules could hurt small companies and the open-source movement.

Altman also touched on misinformation, saying he was particularly concerned about the "interactive, personalized, persuasive power" of AI systems in spreading it. Azeem Azhar, the writer he spoke with, suggested that an AI system could place a phone call using a synthetic voice impersonating someone. Altman said: "I think it will be a challenge, and there is still a lot of work to be done."

But Altman believes that even current AI tools will reduce inequality in the world, and that "there will be more jobs on the other side of this technological revolution." "I think the basic model of the world is that the cost of intelligence and the cost of energy are the two limiting inputs... If you can make those things cheaper and more accessible, then frankly it helps the poor more than the rich. This technology will lift up the world."

Altman is optimistic that scientists can control increasingly powerful AI systems through "alignment." "We have a lot of ideas, and we've published ideas about aligning superintelligent systems, and I believe it's a technically solvable problem. I feel more confident in that answer now than I did a few years ago. There are some paths that I don't think would be very good, and I hope we avoid those. But honestly, I'm pretty happy with the current trajectory of things."

There are many concerns about the EU's AI bill

So far on his European trip, Altman has met with French President Emmanuel Macron, British Prime Minister Rishi Sunak, Polish Prime Minister Mateusz Morawiecki and Spanish Prime Minister Pedro Sánchez. The purpose appears twofold: to quell the fears about AI sparked by ChatGPT, and to get ahead in the conversation about AI regulation.


OpenAI CEO Sam Altman met with French President Emmanuel Macron in France.


OpenAI CEO Sam Altman met with British Prime Minister Rishi Sunak, DeepMind CEO Demis Hassabis and others in London.

In an interview with the media, Altman said he had "a lot of concerns" about the EU's AI bill.

The bill, now being finalized by lawmakers, has expanded in recent months to include new obligations for developers of "foundation models" such as OpenAI. One draft provision would require creators of foundation models to disclose details of their system design, including "computing power required, training time, and other relevant information related to the size and power of the model," and to provide a summary of the copyrighted data used in training. Generative AI systems such as ChatGPT and DALL-E are trained on vast amounts of data scraped from the web, much of it copyrighted, and companies could face legal challenges if they disclose the sources of that data.

According to Time magazine, Altman's concern is that systems such as ChatGPT would be designated "high-risk" under the EU legislation, which would mean OpenAI must meet numerous safety and transparency requirements. "Either we can solve those requirements, or we can't," he said. "There are limits to what is technically possible."

"We will try to comply, but if we cannot comply, we will cease operating" in Europe, Altman told the Financial Times.

However, according to Reuters, Altman said on May 26 that OpenAI has no plans to leave Europe.

OpenAI has lately been trying to project a "responsible" image. After its leadership co-authored a post on May 22, local time, calling for an AI regulator akin to the International Atomic Energy Agency, the company announced on its official blog on May 25 that it is launching a program offering ten $100,000 grants to build a democratic process for deciding which rules AI systems should follow "within the bounds of the law." Specifically, OpenAI is seeking to fund individuals, teams, and organizations to develop proofs of concept for "democratic processes" that could answer questions about AI guardrails.

In other words, OpenAI proposes to "crowdsource" AI regulation: it wants a "broadly representative" group of people to exchange ideas, engage in "thoughtful" discussion, and reach decisions through a transparent process.

Google and Microsoft executives met with European and American lawmakers

During Altman's European tour, Google CEO Pichai also toured European capitals, seeking to influence policymakers as they define the "guardrails" for regulating AI.

Pichai met with a number of officials in Brussels, including Brando Benifei and Dragoş Tudorache, the MEPs leading work on the AI bill. According to three people present at these meetings, Pichai stressed the need to regulate the technology properly without stifling innovation.

Pichai also met with Thierry Breton, the EU's Commissioner for the Internal Market and head of digital policy, who oversees the AI bill. Breton told the Financial Times that they discussed introducing an "AI Pact," a set of informal guidelines for AI companies to follow before the formal rules take effect, because "there is no time to waste in the AI race to build a safe online environment."

On May 19, Kent Walker, Google's president of global affairs, published an article on Google's official blog titled "A policy agenda for responsible AI progress: Opportunity, Responsibility, Security," encouraging governments to focus on three key areas: unlocking opportunity by maximizing AI's economic promise; promoting responsibility while reducing the risks of misuse; and strengthening global security while preventing malicious actors from exploiting the technology.


Microsoft President Brad Smith believes the tech industry "may have more concrete ideas than Washington currently has" when it comes to AI regulation.

Microsoft President Smith delivered a speech on May 25 to six lawmakers from both parties, urging Washington to adopt five new AI policy proposals. Among them: Microsoft wants the White House to push for broad adoption of the AI Risk Management Framework released earlier this year by the National Institute of Standards and Technology; it wants lawmakers to require that AI tools controlling critical infrastructure, such as power grids and water systems, be fitted with "safety brakes"; and it calls on policymakers to promote AI transparency and ensure that academic and nonprofit researchers have access to advanced computing infrastructure. Microsoft also wants governments, through public-private partnerships, to use AI as a tool for tackling "inevitable societal challenges." Smith cited Microsoft's use of AI to help document the destruction suffered during the war in Ukraine.


Russell Wald, policy director at Stanford's Institute for Human-Centered Artificial Intelligence, said recently that he is concerned some policymakers, particularly in Washington, are focusing too heavily on the tech industry's own AI governance proposals. Of last week's Senate hearing, he said: "It was a bit of a disappointment... it was purely industry-focused." Wald argued that academia, civil society and government officials should all play a bigger role in shaping AI policy than they currently do.
