Yi Ge, reporting from Aofeisi
QbitAI | Official account QbitAI
Artists who have long been unhappy with image-generating AI have finally taken action!
This time, the artists jointly filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney, alleging that the companies' training data infringed their copyrights.
After the news broke, it caused a huge stir, and the Reddit thread instantly shot past 1,700 upvotes.
However, when netizens picked apart the complaint word by word, they found many jaw-dropping elementary mistakes, and some even predicted the lawsuit would fail.
How outrageous is it?
For example, the complaint describes Stable Diffusion as "a 21st-century collage tool" that "can remix the copyrighted work of millions of artists," and so on.
Some netizens quipped that if the lawyer makes mistakes this basic, he might as well have ChatGPT write the complaint for him.
So what is the full story behind this lawsuit, and what exactly does the complaint say?
Read on.
Who filed this lawsuit?
The class-action plaintiffs are three well-known artists in the industry: Sarah Andersen, Kelly McKernan and Karla Ortiz.
Many netizens have probably come across Sarah Andersen's webcomic "Sarah's Scribbles" online.
Their lawyer, Matthew Butterick, also led the class-action lawsuit against GitHub Copilot, which alleged that Copilot infringed the copyrights of many code authors and the privacy of many users.
In the lawsuit over Stable Diffusion, the lawyer first defines Stable Diffusion as "a 21st-century collage tool."
He notes that Stable Diffusion contains millions (and possibly billions) of copies of copyrighted images, all obtained without the artists' knowledge or consent.
Even assuming a nominal loss of just $1 per image, the value of this misappropriation would be about $5 billion.
The complaint also draws a comparison: the largest art heist of all time was the 1990 theft of 13 works from the Isabella Stewart Gardner Museum in Boston, and the lost art was valued at $500 million in total.
Next to the $5 billion allegedly taken by Stable Diffusion, that heist looks like small change.
Next comes the lawyer's list of Stable Diffusion's problems. We've picked out a few key points for you:
1. Diffusion models are a way for an AI to learn how to reconstruct copies of its training data through denoising. Because of this, in copyright terms they are no different from MP3 or JPEG: a way of storing a compressed copy of particular digital data.
2. When a user generates an image from a prompt, Stable Diffusion uses the training images to produce a seemingly new image through a mathematical software process. These "new" images are based entirely on the training images and are derivative works of particular images in Stable Diffusion's training set. At the end of the day, it is just a complex collage tool.
3. The plaintiffs want to put an end to this flagrant and massive violation of their rights before their profession is eliminated by computer programs powered entirely by their own hard work.
4. In generative AI systems like Stable Diffusion, text prompts are not part of the training data; they are part of the tool's end-user interface. A prompt is therefore more like a text query passed to an internet search engine.
5. Just as an internet search engine matches queries against its vast web database and shows us the results, a generative AI system uses a text prompt to generate output based on its vast training database.
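For readers wondering what "reconstructing training data through denoising" actually refers to, here is a minimal numpy sketch of the forward (noising) process in a DDPM-style diffusion model. This is an illustrative assumption about the general technique, not Stable Diffusion's actual code, which is far more complex (latent space, U-Net, text conditioning); the schedule values below are the common DDPM defaults, used here only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                               # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (DDPM default)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # cumulative signal-retention factor

def add_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0): the image after t noising steps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

x0 = rng.standard_normal((8, 8))       # stand-in for a training "image"
xt, eps = add_noise(x0, t=T - 1)

# At the final step almost all signal is gone: x_T is nearly pure
# Gaussian noise, which is why generation can start from random noise.
print("signal weight at t=T-1:", np.sqrt(alpha_bar[T - 1]))
```

A model trained on this process learns to predict `eps` from `xt`, i.e. to remove the noise step by step; whether that amounts to "storing compressed copies" of the training images, as the complaint asserts, is exactly what the netizens' rebuttals dispute.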
Three main reasons to sue
On this basis, the artists sued Stability AI, DeviantArt, and Midjourney, on the following grounds:
Stability AI funded an organization called LAION, which is building ever-larger image datasets for AI companies to use, without the original artists' consent or authorization, and without any compensation.
In addition, Stability AI also released DreamStudio, a paid app built on Stable Diffusion.
DeviantArt, one of the largest artist communities on the web, chose not to protect its community of artists from the AI onslaught; instead, it released DreamUp (a paid app based on Stable Diffusion), which the plaintiffs call a betrayal of the artist community.
Although Midjourney bills itself as a "research lab," it has amassed a large number of paying users for its image-generating AI.
When asked about the ethics of replicating training data at scale, David Holz, founder of Midjourney, replied:
There is no specific law in this regard.
The artists said David Holz is badly in need of a legal education, and that they look forward to "helping Mr. Holz learn more about state and federal laws that protect artists and their work."
As soon as the lawsuit was released, it sparked a lively discussion online.
However, the netizens' reaction was almost uniformly mocking.
They pointed out that many of the complaint's formulations are naïve misunderstandings of basic facts, and predicted that any fair court would dismiss the case.
One user even set up a new website to rebut the lawsuit point by point, including an explanation of why Stable Diffusion is not a simple collage tool. (Link attached at the end of the article.)
Others are upset that the complaint attributes Stable Diffusion entirely to the artists' hard work and ignores the efforts of the programmers who wrote its code.
Some more extreme netizens even feel the plaintiffs are picking only on soft targets: they went after just Midjourney and Stable Diffusion, when models from big companies like Microsoft and Google clearly have the same problem.
From the artists' point of view, however, some say this is a good way for people to unite against unethical exploitation.
Finally, what do you think of this lawsuit? Do you think the artists have a good chance of winning?