
The person who was robbed of his job by the AI decided to rebel against the AI


Text: Lin Weixin

Editor|Su Jianxun

AI = Cancer?

At a time when most people were celebrating AI's progress, a group of people decided to rebel against it.

Over the past year, generative AI has steadily pushed humans out of the field of painting. Shi Lu, a game concept artist with five years in the industry, says this year has brought more change than any year before it. Many game studios are using AI to shrink their art teams. Within a few months of the AI boom, fees for original character art dropped from 20,000 yuan per piece to 4,000 yuan per piece.

It is now an established fact that some junior concept artists work for the AI: most of the time, the client generates more than a hundred AI character images a day, while Shi Lu and her colleagues are responsible for retouching them.

"In the first two weeks, I was able to change the face or face of the character," Shi Lu said, "and this week's task is to paint nose hair and blackheads. Years ago, when she was an intern, she was already able to draw game characters, but now "the days are getting worse and worse." She believes that AI is the culprit behind the artist's difficult situation.

"That's cancer. She said. On the Internet, opponents have also nicknamed those who support AI, "Cancer Brother."


An anti-AI-painting logo (image from the Internet)

The conflict between humans and AI is intensifying. Many painters worry that AI is infringing their rights. They believe the principle behind text-to-image generation is "dismembering and splicing corpses": developers feed large numbers of human-created paintings to the AI model, which breaks them apart, stitches the pieces back together, and generates a new image. (Some engineers have publicly refuted this, to little effect.)

Opponents describe AI images as soulless corpses, and a liking for them as "necrophilia." The "splicing theory" is their weapon: to the intuition of ordinary people, the very process of AI image generation already involves infringement.

The big companies promoting AI have also been drawn into the controversy.

On November 29, 2023, four artists jointly sued Xiaohongshu for copyright infringement, alleging that Trik, Xiaohongshu's AI image-generation tool, had used their works to train its models.

One of the artists, "Genuine Qingtuanzi," posted two illustrations side by side: her own work and a Trik-generated image. "Both the color elements and the style of the picture are very similar to mine; I feel my heart and soul have been plagiarized," she wrote. "I hope everyone will defend their rights together."


Comparison of Qingtuanzi's work (left) and Trik AI's output (right) (screenshot)

A fan said that Trik AI's so-called Chinese-style images drew on "a huge number of pictures by domestic ancient-style 'wives'; many of the classical scenes Trik generates have a familiar 'wife' feel." ("Wife" is fan slang for a highly skilled artist.)

The artist "It's a Snow Fish" compared Trik AI's output with his own work and found the two strikingly similar, down to shared elements. He called out Xiaohongshu: "Are you happy feeding the AI with my pictures?"


Snow Fish's Weibo post


Comparison between Xueyu's work (left) and Trik's AI work (right) (picture from Xueyu's Weibo)

He announced he would stop posting on Xiaohongshu, saying the platform had "fed" his works to AI without permission and that its near-unconscionable user agreement made him uneasy.

The case against Xiaohongshu has been filed with the Beijing Internet Court. It is understood to be China's first infringement case involving an AIGC training dataset.

Xiaohongshu declined to comment, citing the ongoing judicial process. Qingtuanzi revealed that before the case was filed, people from Xiaohongshu had approached her hoping to negotiate, but the artists had agreed not to accept mediation and insisted on filing. She hopes the case will provide a reference point for copyright disputes over AI-generated images.

NetEase's LOFTER is a popular platform where artists and fans interact; illustrator Gao Yue often shares her work there and has built up a following. Last year, she noticed LOFTER was testing an AI feature, the "Laofu Pigeon Painting Machine," which lets users generate paintings from a few keywords. She could not help suspecting that the platform was quietly turning artists' work into AI training material.


The new "Laofu Pigeon Painting Machine" feature (screenshot)

"My trust in the platform plummeted and I shuddered. Gao Yue said.

As more and more users voiced their dissatisfaction, LOFTER deleted the feature's announcement a few days later, stating that the training data came from open sources and did not include users' works. But the feature was not taken offline immediately, a key reason users remained unconvinced and uneasy.

After some struggle, Gao Yue answered the call of fellow creators: she emptied her account and then deleted it in protest. In her view, leaving the platform is one of the few ways to protect herself.

"I don't want to release the work publicly, to feed the monster that might replace me. She said.

The controversy is growing in designer circles too. Last month, "Long Chenchen," the mascot of the Year of the Dragon Spring Festival Gala, was also suspected of being an AI drawing: each claw has a different number of toes, the mouth merges into the nose, and the scales are sometimes single-layered, sometimes double. The Gala's official Weibo responded late at night that "the designers have worked themselves bald," and posted the mascot's design process.


CCTV's Spring Festival Gala response (Weibo screenshot)

Illustrator Ma Qun recalls that when AI painting first took off in 2022, the results often went embarrassingly wrong, people drawn as dogs or horses, and were easy to spot. AI then advanced faster than expected, but telltale signs remained, such as its inability to draw hands.

The bad news is that those flaws are rapidly disappearing. Ma Qun believes identifying AI work now relies increasingly on subjective impressions: it "lacks spirit," or has "an inhuman texture."

Ma Qun also resists AI because it has drained her passion for painting. She has no formal training; she taught herself out of enthusiasm and became a full-time painter after university. The process of painting is sometimes torturous, but the moment she sees the result, joy overpowers everything.

Now AI has erased most of the steps of drawing; everything is automated, meaning the skills and experience Ma Qun spent years acquiring are eclipsed by efficient, powerful algorithms.

"Creation has become cheap. The herd said to 36 Krypton.

Going to court

Creators spent almost all of 2023 taking AI companies to court.

In Silicon Valley, new technology crazes are nothing new: the metaverse and cryptocurrencies drew enormous attention before quickly fading. Generative AI has made such waves in so short a time because it seems genuinely useful, useful enough to upend the old world and to make people feel threatened, among them the content creators who hold the copyrights.

The first to take on AI was an illustrator named Karla Ortiz. A believer in the "splicing theory," when she watched Stability AI conquer the field of painting, only two words came to her mind: exploitation and disgust. As some of her job opportunities were ruthlessly taken by AI, she fell into anxiety and decided to fight back.

She turned to lawyer Matthew Butterick for help. In the winter of 2022, Butterick had provided legal assistance to a group of programmers who believed Microsoft's GitHub Copilot was infringing their copyrights; accusing the tool of "stealing the work of programmers," he actively prepared the lawsuit. In January 2023, Butterick represented Ortiz in her case against Stability AI.

Similar lawsuits abound. The photo agency Getty Images sued Stability AI in the U.S. and U.K., alleging it illegally copied and processed 12 million Getty Images photos. Novelists led by Franzen have gone after OpenAI; a group of nonfiction writers has taken aim at OpenAI and Microsoft; and music labels including Universal Music claim that Anthropic illegally used their copyrighted works in training and illegally reproduced lyrics in model-generated content.

On December 27, 2023, The New York Times formally sued Microsoft and OpenAI, claiming that millions of its articles had been used as AI training data, and that the AI now competes with the newspaper as a source of news.

An all-out war has begun over AI copyright, and difficult questions have been thrown at the courts of various countries.

In China, one AI-image infringement case took several months from filing to judgment, several times longer than a typical image-infringement case.

In February 2023, plaintiff Li Yunkai found that an article on Baijiahao had used a picture he created with AI as an illustration, without permission and with his signature watermark cropped out. Li Yunkai sued the article's author in the Beijing Internet Court for infringing his right of attribution and his right of dissemination via information networks.

The case itself was not complicated; but because the image at issue was generated with the AI model Stable Diffusion, it attracted wide attention and was dubbed "China's first AI painting case" by netizens.


The infringing image was generated by Li Yunkai using AI (picture provided by the interviewee)

Li Yunkai is not a designer but an intellectual-property lawyer with ten years in practice, much of it spent studying the contest between new technologies and copyright law. In September of the previous year, he began researching AI painting tools, generating images with Stable Diffusion and, beyond his research, sharing some of them on Xiaohongshu.

One day, Li Yunkai found an image he had made with AI in a Baijiahao article, a clear infringement. Out of professional interest, he wanted to establish who owns the copyright in an AI-generated image, and more importantly, how a court would see it.

The person who appropriated Li Yunkai's picture, the defendant in the case, is a woman in her fifties or sixties who said she was seriously ill and was bewildered when the court's notice arrived. At trial, she explained that she had found the image through an internet search and could no longer identify its source.

She also argued that AI drawings are the crystallization of collective human wisdom and cannot be considered the plaintiff's work. That became the crux of the case: does an AI-generated image constitute a work, and does Li Yunkai hold its copyright?

This is an open legal question everywhere. Last August, a U.S. court ruled that machine-created content cannot be copyrighted because "human authorship is a fundamental requirement of copyright." The conclusion was quickly challenged: if a photo snapped on a phone enjoys copyright protection, shouldn't an image generated with an AI model as well?

On this question, the Beijing Internet Court reached the opposite conclusion.

The judge hearing the case asked Li Yunkai to demonstrate the entire process of generating an image with AI, from downloading the software to writing the prompts. To help the judge understand the technology, Li Yunkai consulted extensive materials and explained the principles and creative process of AI painting as best he could.

The judge ultimately held that Li Yunkai had made substantial intellectual contributions in creating the AI image at issue, which reflected the originality of a work. The image was therefore recognized as Li Yunkai's work and enjoys copyright protection.


The trial was livestreamed across the internet on August 24, 2023 (video screenshot)

Li Yunkai told 36Kr that the verdict left some domestic AI companies watching from the sidelines with "mixed feelings": pleased that AI-generated images can be copyrighted, less pleased that the copyright appears to belong to users.

AI companies aren't panicking

On the other side of the copyright controversy, AI companies show little appetite for the fight. Realistically, these disputes will not be resolved any time soon, so evasion, or outright silence, is the safest course.

In December 2023, Meitu launched a new generation of large models boasting more powerful video-generation capabilities. While company executives stressed that AI is an assistive tool rather than a replacement for professionals, many practitioners believe these new features will, over time, further threaten their jobs.

During the Q&A session, 36Kr asked how AI might give rise to copyright disputes and how the company would respond.

"The copyright issue of AIGC-generated images is controversial in practice and depends on the further regulation of the law," said one executive, "although the current law in this area is not very clear, we will protect the copyright of users in general, especially professionals." ”

Another executive cited the verdict in the "first AI painting case" and endorsed the court's reasoning: "If it goes the way of that case, we likewise believe that neither the AI company nor the AI model owns the relevant copyright."

At this year's developer conference, OpenAI announced, with some fanfare, that it would cover litigation costs for customers facing legal disputes over their use of GPT. To some creators, that looked like a provocation.

In a way, AI companies have little to fear. The suits' central claim, that using copyrighted works as training data is itself infringement, is not unassailable. Some AI companies liken AI training to human learning: an apprentice must read and even imitate a master's work to acquire the craft. If a court adopts that view, there is no infringement.

One lawyer said AI companies would very likely invoke the fair use doctrine in their defense. Roughly, fair use means that although your conduct technically copies a protected work, it is an acceptable borrowing that promotes creative expression. Scholars may quote excerpts of others' works, authors may publish adaptations, and anyone may clip scenes from a movie for a review.

In other words, if copyright is too restricted, the creativity of civilization may stagnate.

Tech companies have long leaned on this principle in copyright disputes. In 2013, when Google was sued by the Authors Guild for scanning millions of books and putting snippets online, a judge ruled the copying was fair use because it created a searchable index that served the public.

In the era of large models, fair use may still play a key role. Those who argue AI does not infringe say that generation by large models resembles human creation: when you set out to draw a picture or make a video, the pictures and films you have seen surface in your mind. Human creation builds on its predecessors, and so do large models.

In the U.S., a judge has already sided with Meta on one claim, dismissing the allegation that text generated by LLaMA infringes the authors' copyright.

The judge indicated the writers could amend and refile, but that stretches out the litigation, which also favors AI companies. As AI products spread and public acceptance rises, courts will have to rule ever more cautiously, one scholar said.

More importantly, AI's strategic position and commercial value keep climbing, and believers in the technology widely worry that overly strict copyright rules would hold back its development.

Li Yunkai has spoken with several large-model developers, who said the authorities are relatively cautious: beyond individual cases, they must also weigh China's AI development and the international competition over AI technology.

Another obstacle is that AI companies have little transparency when it comes to model training data.

A recent example: on December 7, 2023, Google published a 60-page report that repeatedly stressed how critical training data is ("We found that data quality is critical to a high-performance model"), yet said almost nothing about where the data came from, how it was filtered, or what it contained.

An algorithm engineer told 36Kr that sourcing training data comes down to crawling the internet, finding open-source datasets, and, failing that, buying data on the black market: "You can always buy it."

Some scholars quip that instead of competing over benchmark scores, AI companies should compete over who has the most legitimately sourced training data.

Then again, criticizing AI companies for opaque training data is a little harsh. Training data greatly affects model performance; it is each company's trade secret. Coca-Cola has guarded its recipe for 137 years and no one has cracked it; AI companies will not hand over their hole cards lightly either.

Under the existing law, AI companies are also not obligated or motivated to disclose training data.

Li Yunkai told 36Kr that China has no evidence-discovery system (under which a party may demand that whoever holds evidence relevant to the case produce and disclose it), which means AI companies can simply decline to disclose their training data. "Under the current system, as long as a company doesn't disclose it, no one knows whether it trained on user data. It's a dead knot," he said.

As of press time, the four artists' suit against Xiaohongshu had not yet been heard. Li Yunkai, who is following the case, said of its prospects: "There is a real possibility the painters' claims won't be supported, because they can't prove their works were used in training."

Unfortunately, it's highly unlikely that creators will win this copyright war.

But that doesn't mean AI companies can rest entirely easy: public opinion matters too. In November 2023, Kingsoft Office opened a public beta of its AI features; users soon discovered a privacy-policy clause stating that, to improve AI accuracy, documents and materials they uploaded would be de-identified and used as base material for AI training. The clause angered a large number of users.

A copyright lawyer speculated that in-house counsel had drafted the clause to "make this as compliant as possible." In dodging legal risk, they walked into a public-opinion risk, a bit of black humor.

A few days later, Kingsoft Office moved to quell the uproar, promising that "no user documents will be used for any AI training purposes." CEO Zhang Qingyuan told LatePost that the clause was outdated, applied only to beautifying PPT layouts rather than user documents, and simply had not been updated in time, leading to the misunderstanding.

Li Yunkai said the boundaries of AI-related intellectual property can only be worked out case by case in judicial practice: "The general consensus is to respect business practice; that is, the law will not over-interfere with companies' autonomous conduct. Where intellectual property exists, the general principle is that the developer allocates it."

In its terms, Tencent's Hunyuan model stipulates that copyright in generated content belongs to the user, but "only for personal learning and entertainment; it may not be used for any commercial purpose." Li Yunkai said, "Other companies aren't even that generous."


The relevant Hunyuan terms (screenshot)

One popular compromise holds that AI companies should have a scheme to compensate content creators, as Spotify pays musicians, so that creators receive a fee when their work is used as AI training data. In the short term, this would protect creators' interests; what happens in the longer term, no one can say.

2023 has been called the "Summer of AI," and AI companies have persuaded ever more people, including some content creators, that AI is the way forward. Not long ago, Zhang Lixian, editor-in-chief of "Reading Library," announced on Weibo that all of the magazine's 2024 title-page images will be painted by AI.


The announcement stirred controversy in the Weibo comments (Weibo screenshot)

An illustrator commented: "It is deeply significant that Reading Library, the most quality-obsessed of publications, has accepted AI paintings."

(At the interviewees' request, Shi Lu, Gao Yue, and Ma Qun are pseudonyms.)
