AI earthquake! The United States launches a comprehensive investigation into OpenAI! The regulatory storm is coming!

Author: Qingbo Intelligence

Remember OpenAI's confident, candid, and comfortable Q&A at the Capitol Hill hearing in May?

At the time, Sam Altman was in high spirits: facing patient, friendly members of Congress, he held forth on AI legislation and large-model safety, even proposed his own guidelines for AI regulation, and all but "pleaded" with lawmakers to regulate the industry.

Under Altman's polished rhetoric and sincere demeanor, a hearing that should have been tit-for-tat and full of tension turned into a mutual admiration session; the rapport between the two sides was downright enviable.

With all that said and done, AI regulation naturally had to be put on the agenda.

And so, with press reports and official investigations joining forces, a big fish was finally caught in the net.

On July 13, the U.S. Federal Trade Commission (FTC) formally opened an investigation into OpenAI. The investigation focuses on whether ChatGPT has harmed people by generating false information, and whether OpenAI has engaged in "unfair or deceptive" privacy and data security practices.

In fact, this isn't the first time OpenAI has been questioned about data privacy.

On June 28, Clarkson Law Firm of Northern California filed a class-action lawsuit against OpenAI and its partner Microsoft, alleging that the data OpenAI scraped to train its large models seriously infringed the copyright and privacy of the 16 plaintiffs.

In late June, writers Mona Awad and Paul Tremblay also sued in federal court in San Francisco, alleging that their novels had been used to train ChatGPT without their consent.

Regulation may look like a debuff (a negative effect) for AI companies, but since regulation is inevitable, becoming a rule-maker first confers a relative competitive advantage. That is why OpenAI keeps proclaiming "safety" and "compliance."

But now OpenAI, the self-styled "pioneer of AI regulation," has become the top offender operating on the edge of the law, and in this "AI drama," Sam Altman's true face as the "Gao Qiqiang of the AI world" (the outwardly respectable crime boss of the TV drama The Knockout) has been revealed.

01

Loopholes in the hearing

Facing angry plaintiffs and menacing regulators, OpenAI CEO Sam Altman has kept a measured tone in addressing the infringement claims, lawsuits, and investigation, but judging from his rapid-fire replies on Twitter, he may be a little panicked.

In the face of skepticism, Sam Altman sent three tweets in a row:

  1. He lamented the way the FTC's investigation began, saying it does not help build trust, and stressed that he has always taken technical safety seriously and believes everything OpenAI does complies with the law.
  2. He went on to emphasize the safety of GPT-4, saying that OpenAI built GPT-4 on years of safety research and spent more than six months after initial training making it safer and more compliant before releasing it.
  3. Altman said he has been transparent about the technology's limitations, and that OpenAI has no incentive to chase unlimited returns because its profits are capped, implying that social ethics come before profit.

To sum it up: we tried our best not to get caught, but unfortunately we were exposed anyway.

In fact, the issues now under investigation were foreshadowed as early as the May hearing.

Although Sam Altman's performance on Capitol Hill was extraordinary, it was not without loopholes.

Gary Marcus, a professor of psychology and neuroscience at New York University who was sitting on the witness stand at the time, asked two fatal questions:

  1. Why align with Microsoft?
  2. Why is GPT-4 training data not transparent?

Two months on, those same two questions have become a major source of OpenAI's trust crisis.

Clarkson Law Firm's complaint is fierce in its rhetoric, claiming that OpenAI's entire business model is built on theft and alleging that, in developing, marketing, and operating their AI products, OpenAI and Microsoft illegally collected, used, and shared the personal information of hundreds of millions of Internet users, including children. The suit covers not only ChatGPT but also other OpenAI products such as Dall-E and Codex. On that basis, many netizens have dubbed it "the first OpenAI case."

The FTC's more recent demand to OpenAI is even more detailed: it lists 49 major questions and more than 200 sub-questions, all of which OpenAI is required to answer in full. The questions cover model development and training, risk assessment and response, privacy and prompt-related risks and safeguards, API integrations and plugins, and more.

Once it answers all of these, OpenAI, which has never been especially "open," will have no choice but to open up.

02

Google saw the opportunity

And updated its privacy policy

While OpenAI and Microsoft were busy fending off a string of infringement lawsuits, Google keenly sensed the danger and chose to plug the loophole first.

On July 1, Google updated its privacy policy to make explicit that it has the right to collect any publicly available data and use it to train its AI models. The move means that anything Google can obtain publicly may be used to train its Bard model or anything else in the future.

With this clause, Google appears to reserve the right to collect and exploit all data published on public platforms, as if the entire Internet were the company's own AI playground.

However, in response to privacy concerns, Google spokeswoman Christa Muldoon stressed that Google has incorporated privacy principles and safeguards into the development of AI technology to ensure that it is consistent with Google's AI principles.

In addition, Google has also taken a series of measures to deal with the security and privacy of user data. For example, Google has committed to collecting and using users' data only with their explicit consent, and will take strict technical and administrative measures to prevent data breaches.

Google is also in talks with news organizations in the United States, Britain, and Europe about paying for news content. Bard, its AI tool, is likewise trained on "publicly available information," which could include paywalled websites. If agreements to pay for news content are reached, Google will have taken a step worth emulating on information copyright, and its database will gain a richer source of data.

However, the new policy does not explain how copyrighted works such as books, artwork, and paintings will be kept out of the training database.

03

The public-private divide in data

The future direction of AI regulation

From "the first OpenAI case" to the data privacy issues AI companies now routinely face, the next direction of AI regulation is fairly clear: drawing the line between "private data" posted on social platforms and "public data" that may be used for training.

The United States has not yet passed legislation on training data for large models, so judges' decisions rest largely on existing privacy and copyright case law. But as AI and human society grow ever more intertwined and the era of so-called "embodied intelligence" approaches, the boundaries for protecting and using private data will have to be clarified step by step.

However, drawing the line between public and private data at the legislative level is no easy task.

Social media platforms hold the largest volumes of information and data, and the public nature of the platform sits in tension with the private nature of user activity. Defining where protection ends and permissible use begins, between users and platforms, involves many subjective and objective factors, which makes a clear public-private boundary extremely hard to draw.

Even so, while the law's unexplored gray areas have at times been a source of profit for the AI giants, in the absence of clear legal rules such privacy cases are usually resolved by settlement, and in settlement negotiations AI companies like OpenAI will not necessarily hold the advantage.

The rising number of privacy cases will therefore inevitably push governments and legal institutions to take significant steps on AI regulation, and will likewise force AI companies to rein themselves in and update and improve their existing information policies.

Author: Watermelon · Typesetting: Sun Keying

Images are sourced from the Internet; in case of infringement, please contact us and they will be removed.
