
Prestigious Google AI, and a dark academic "Vanity Fair"

In American academia, shady dealings around the publication of papers are nothing unusual. To escape this academic darkness, and drawn by the high-paying olive branches that technology companies extend, some researchers choose to leave for industry and join a company like Google. Unexpectedly, however, even Google has degenerated into a dark corner of the AI academic world.

According to an exclusive report in The New York Times, Google quietly fired an AI researcher in March because he had long been at odds with colleagues and had questioned and criticized one of the company's high-profile papers.

In June of last year, Google AI published a paper, A Graph Placement Methodology for Fast Chip Design, claiming that a reinforcement learning method based on an edge-based graph neural network (EdgeGNN) had surpassed human engineers at laying out certain chip components. The paper (hereafter the "chip paper") appeared in Nature and was highly influential in the industry; Jeff Dean, the overall head of Google's AI business, is among its authors.


One researcher, Satrajit Chatterjee, was skeptical of the chip paper, so he led a team that wrote a paper of its own (hereafter the "rebuttal paper") attempting to falsify some of its important claims.

However, according to four anonymous Google employees, just as the rebuttal paper was finished, the company first refused to publish it and then quickly pushed Chatterjee out.

"We conducted a rigorous examination of some of the claims made in the refutation paper and ultimately determined that it did not meet our publication standards." Zoubin Ghahramani, vice president of Google's research division, told The New York Times.

Chatterjee appears to have since stepped away from front-line AI research to join a venture capital firm (he has not confirmed this himself).

Authors' names removed, threats to keep quiet: is Google's academic circle really this dark?

Here's how it went:

Before the chip paper appeared in Nature, Google had already published a preprint on essentially the same topic in April 2020: Chip Placement with Deep Reinforcement Learning.


According to several anonymous insiders cited by The New York Times, Google attached great importance to AI-driven chip design at the time and was eager to monetize the technology it was developing as soon as possible.

When the preprint was published, Google approached Chatterjee to ask whether the technology could be sold outright or licensed to chip design companies.

But Chatterjee, who had previously worked at Intel and has extensive experience in the chip industry, poured cold water directly on Jeff Dean. He emailed colleagues that he had "reservations" about some of the preprint's claims and questioned whether the technology had been rigorously tested.

Chatterjee was not the only Google employee on the team to question the study. Two other co-authors of the preprint, Anand Babu, founder of Google's AI Kernel team, and software engineer Sungmin Bae, supported Chatterjee's view.

Meanwhile, Google could hardly wait to make money from the paper.

Satrajit Chatterjee. Image source: personal website

Google AI repackaged the preprint, changed its title, submitted it directly to Nature, the most prestigious journal in academia, and got it published there (the aforementioned chip paper).

However, as Silicon Star has learned, republishing the paper stirred up considerable controversy inside Google AI. Some employees felt something was off:

First, why was the paper retitled and published a second time? Second, if a new version was to be published, why did it not go back through the company's internal publication review committee? Finally, and most bizarrely, why were the names of the two authors who disagreed with the study removed from the new version sent to Nature? Did they contribute nothing to the new version, so that every trace of their existence could simply be erased, as if they had never contributed to the research at all?

Above: the April 2020 preprint; below: the June 2021 Nature version (the "chip paper"), with the names of the two dissenting authors removed. Image source: arXiv, Nature

To quell the controversy, Jeff Dean authorized employees including Chatterjee, Bae, and Babu to challenge the chip paper, promising that their follow-up report (i.e., the rebuttal paper) would go through the company's established publication approval process.

Before long, Chatterjee and his colleagues finished the rebuttal paper, titled Stronger Baselines for Evaluating Deep Reinforcement Learning in Chip Placement.


In the rebuttal paper, the authors proposed several new baselines, i.e., benchmark reference algorithms: any method that performs worse than such a baseline is unacceptable and not worth publishing a paper about.

The baselines the authors proposed turned out to perform better than the implementation of the algorithm used in Google's chip paper, while relying on far less computing power. The results of an ablation study likewise pointed to weaknesses in the chip paper's algorithm.

On top of that, the authors argued that the ability of human chip designers cannot serve as a strong baseline; that is, the chip paper's comparison of its reinforcement learning algorithm against humans was unsound.
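To make the baseline argument concrete, here is a minimal sketch of the evaluation logic at stake. This is our illustration, not code from either paper; every method name and number in it is invented. The point it demonstrates: a new method's claimed advantage only stands if it beats the strongest available baseline, not merely a weak one.

```python
# Hypothetical placement-quality scores for one chip netlist
# (lower wirelength is better). All names and numbers are invented
# for illustration; they do not come from either paper.
results = {
    "random_placement":    131.8,  # weak baseline
    "simulated_annealing":  97.5,  # stronger classical baseline
    "human_engineer":      102.0,  # reference point, not necessarily strong
    "rl_placer":           101.3,  # method under evaluation
}

def strongest_baseline(scores: dict, candidate: str) -> str:
    """Return the best-performing method other than the candidate."""
    return min(
        (name for name in scores if name != candidate),
        key=scores.get,
    )

best = strongest_baseline(results, "rl_placer")
rl_score, best_score = results["rl_placer"], results[best]

if rl_score < best_score:
    print(f"rl_placer beats the strongest baseline ({best}).")
else:
    # Beating only the weak baselines (random placement, or even a human
    # reference) does not support a "surpasses humans"-style claim.
    print(f"rl_placer loses to {best}: {rl_score:.1f} vs {best_score:.1f}.")
```

Under this framing, the rebuttal's claim was that once stronger baselines (run with far less compute) enter the comparison, the chip paper's algorithm no longer looks clearly superior.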

With these findings in hand, Chatterjee and his co-authors submitted the rebuttal paper to Google's publication review committee, where it languished for months before being rejected. The response from Google AI executives was that the rebuttal paper did not meet the publication criteria.

The authors even went to CEO Sundar Pichai and Alphabet's board of directors, arguing that rejecting the paper may have violated the company's own principles on the publication and ethics of AI research.

However, their resistance was quickly put down. Before long, Chatterjee received notice of his dismissal.

Meanwhile, Anna Goldie, co-first author of the chip paper, tells a different story. She told The New York Times that Chatterjee had tried to seize control of the project three years earlier and that she had been the target of his "disinformation" attacks ever since.

We do not know the direct reason the dissenting Chatterjee was fired. But Silicon Star has learned from Google employees that some of them believe the real reason for his dismissal is that he placed himself in opposition to the company's interests and to the core executives behind Google's AI projects.

In some people's eyes, even a large company with a flat structure and fair processes like Google will, to protect the company's interests and its executives' face, bend its rules when needed and kick out those who stand in opposition.

Fired over a conflict of interests, and an author who "changed her name" in protest

This is indeed not the first time Google AI has been mired in academic disagreement and office politics.

Timnit Gebru, a former Google researcher, a veteran of the Stanford AI Lab, and an influential figure in the industry, was abruptly fired by Google two years ago, leaving a very bad impression on many of her peers.

Coincidentally, Gebru was fired from Google for essentially the same reason as Chatterjee: she stood opposed to the company's interests, and the company refused to let her publish a paper.

(Disclaimer: Timnit Gebru is recognized in the industry as an expert on AI bias, but she is herself somewhat controversial. Many believe her "social justice warrior" persona is stronger than her impartiality as a scholar, which some peers have questioned.)

Timnit Gebru. Image source: Wikimedia Commons, Creative Commons license

In 2020, Gebru got into an online standoff with Turing Award winner Yann LeCun, known as one of the three "godfathers" of AI.

At the time, someone had run PULSE, a model that reconstructs faces from low-resolution images, on a pixelated photo of Barack Obama, and the result came back white. LeCun weighed in, arguing that the inherent bias of the training dataset is what produces biased AI results.

The statement drew criticism from many people, including Gebru. Gebru said she was disappointed by LeCun's take, because bias in AI algorithms does not come only from data. She has done extensive research and published a number of papers in this area, and her consistent position is that AI bias does not come only from datasets, so fixing the datasets alone cannot fully solve the problem.
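A toy example may help separate the two positions. The sketch below is ours, not anything from the exchange, and its labels and numbers are invented. It shows one way bias arises beyond the raw data: a standard accuracy-maximizing objective can amplify a 90% skew in the data into a 100% skew in the output, so "fixing the data" without rethinking the objective would not be enough.

```python
# A toy illustration (ours, not from the LeCun/Gebru exchange): a standard
# accuracy-maximizing objective can amplify dataset imbalance rather than
# merely reflect it. Labels and numbers here are invented.
from collections import Counter

labels = ["light"] * 90 + ["dark"] * 10    # 90/10 skew in the training data

# With no informative features, the accuracy-optimal predictor is a
# constant: always output the majority label.
majority_label = Counter(labels).most_common(1)[0][0]
predictions = [majority_label] * len(labels)

train_skew = labels.count("light") / len(labels)
pred_skew = predictions.count("light") / len(predictions)
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# The model turns a 90% skew into a 100% skew while scoring 90% accuracy:
# the objective, not just the data, shapes the bias in the output.
print(f"data skew: {train_skew:.0%} -> prediction skew: {pred_skew:.0%} "
      f"(accuracy: {accuracy:.0%})")
```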


LeCun went on to post tweet after tweet explaining his point, only for Gebru and her supporters to see it as lecturing the expert: LeCun may be the "godfather of AI," but Gebru is herself an authority on AI bias.

The war of words between LeCun and his critics, including Gebru, dragged on for half a month, and ended with the former announcing he was quitting Twitter.

Gebru's outspoken social media attacks on LeCun, a veteran machine learning expert, were seen by some senior figures inside Google as damaging the company's friendly relations with academia and industry. Gebru had won a momentary "victory," but she did not fully grasp the seriousness of the matter, and clouds were already gathering over her head.

Everyone knows how hot large models (exemplified by language models with enormous parameter counts) have been in AI research in recent years. Google, OpenAI, Microsoft, Amazon, BAAI, and other institutions have all invested heavily in the area, giving birth to a series of ultra-large-scale language models and related techniques, including BERT, T5, GPT, Switch-C, and GShard.

Also in 2020, Gebru's team wrote a paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, hoping to expose the dangers of hyperscale language models in real-world use and to criticize the effects they may have on AI bias.


Research in this direction is hardly fringe; after all, studies have already found that very large language models such as GPT-2/3, when used in real-world settings, reinforce existing social biases and discrimination (including around gender and ethnicity), harming actual users.

There is nothing really wrong with the main points the Gebru team's paper makes. In Jeff Dean's view, however, its short length, its reliance on narrative and quotation rather than experimental results, and its lack of scientific empirical grounding meant it did not qualify for publication under Google's name, so it was rejected.

The reason closer to the heart of the matter may be this: publishing the paper would amount to repudiating Google's investment in large language models in recent years, which, in the eyes of Google's AI leadership, would be a severe blow to morale.

Gebru insisted that even without the company's approval, she would find a way to get the paper out. Google asked her to remove the authors' Google affiliations from the paper, signaling that the work had been done privately by the authors and that the company did not endorse it. Gebru sternly refused this request as well.

Regarding Gebru's departure, Google said she resigned of her own accord (internal employees revealed that Gebru had indeed threatened to resign at the time), while Gebru maintained that she was fired.

Samy Bengio, Gebru's manager at Google, said he was shocked at the time. Bengio, a 14-year veteran of the company and a founding member of the original Google Brain team (and the younger brother of Yoshua Bengio, another of the three "godfathers" of AI), left Google in 2021 directly out of dissatisfaction with Gebru's dismissal.


The Gebru team's paper was eventually published in March 2021 at FAccT (the ACM Conference on Fairness, Accountability, and Transparency), an interdisciplinary conference run by the ACM, but the two Google-affiliated authors among the four could not appear on the author list as Google employees.

It is worth mentioning that although Gebru had already broken with Google before the paper was published, another author, Margaret Mitchell, still worked at Google at the time of publication (she was fired later as well).

In the published version of the paper, she "changed her name," prefixing "Sh" to both her first and last names ("Shmargaret Shmitchell"), to mock the company for silencing her:

Image source: Wikimedia Commons, Creative Commons license

But something even more outrageous was still to come.

Just last month, Google AI published another paper describing PaLM, a brand-new ultra-large language model developed by the team, with 540 billion densely activated parameters.

In its Model Architecture and Ethical Considerations sections, the PaLM paper cites at least two of the Gebru team's papers, including the very one Google had rejected the year before.

In the ethical considerations section, the paper writes that since completely eliminating social bias from training data and models is not feasible, it is crucial to analyze the biases and risks a model may exhibit, and it cites the analytical methods adopted by Gebru et al. in that rejected paper.

Not to mention that Jeff Dean is himself an author of the PaLM paper. Genuinely embarrassing.

Above: ironically, the citation list also preserves a record of former employee Mitchell's jab at the company.

Gebru said:

"These [Google] AI bigwigs can do whatever they want. They don't have to think about the fact that I was fired and that my paper was blocked by the company. They don't have to think about the consequences at all; they've probably already forgotten that year."

Finally, many people may wonder: why has Google's AI research division produced so much drama in recent years, all of it tied to conflicts between employees' research directions and the company's interests?

A former Google employee familiar with Google AI offered Silicon Star the following comment:

"On the one hand, we must rely on satellites to attract more HR and PR attention, on the other hand, we must put the research results of AI into production as soon as possible, and on the other hand, we must improve the sense of social responsibility because of some controversial projects." Fish and bear paws cannot be combined. ”

(Note: by "launching satellites," the former employee meant some of Google's very large model releases that were not actually state of the art at launch. For example, Google's 1.6-trillion-parameter Switch Transformer model did not outperform similar models with fewer effective parameters, and its API was so hard to use that it could not put on the kind of impressive demonstration GPT-3 did.)

There is no doubt that Google AI has become the benchmark for basic and applied AI research among the industry's technology companies.

Considering that much of Google AI's research flows quickly into core Google products whose users number in the hundreds of millions or even billions, it is fair to say Google's AI research matters enormously to the world.

At the same time, it is undeniable that Google/Alphabet remains a for-profit public company that must answer to shareholders and deliver steady, sustained growth. And AI today is no longer novel; it is a highly commercialized, highly practical technology, and Google's internal expectations for combining AI research with industry and education are only growing.

Against this background, it is not hard to see why Jeff Dean and the other heavyweights of the research division are desperate to protect their investments and reputations in AI research.

It must be admitted that these big names were themselves pioneers of AI academia, and to say they do not respect academic ethics would be an insult. Unfortunately, facing the company's interests today, each can only do the duty of his office; when trouble comes to the door, academic integrity may have to be set aside for a while.

Jeff Dean speaks at a Google developer conference. Image source: Chen Du | Silicon Star / PinPlay

*Title image source: Google
