
Taylor Swift, the blocked search term

Author: Nanfengchuang NFC

In the face of AI, the idea that "a picture is proof" is becoming ever harder to sustain. And when deliberate fabrication meets the ambivalence of audiences who either believe the images are real or simply don't care whether they are, the consequences turn subtly, and genuinely, harmful.

On January 25, hundreds of explicit and violent images of international superstar Taylor Swift flooded the internet. People quickly discovered that the photos were fake, generated by users with malicious intent and vulgar taste, yet these shoddy, despicable forgeries still spread online and were viewed hundreds of millions of times.

Social media platform X was the main channel through which the fake photos circulated. On January 26, after the images had spread for more than a dozen hours, X issued a statement saying that posting non-consensual nude images is strictly prohibited, and that its team was actively removing all identified images and taking action against the accounts that posted them.

On the same day, the incident even drew the attention of the White House.

Asked about the incident, White House press secretary Karine Jean-Pierre said the fake pornographic images were very alarming, that women are the primary targets of online harassment and bullying, and that social media platforms need to play a regulatory role to prevent the spread of misinformation and of non-consensual intimate images of real people.


On January 26, White House press secretary Karine Jean-Pierre responded to the fake Taylor Swift images at a press briefing

On X, searching for "Taylor Swift" did indeed return no results for a time. Once search was restored, the results were filled with positive posts and ordinary reports from fans, and the fake, explicit photos seemed to have truly been erased.

But the problem is far from solved.

The combination of AI and the internet has made pornographic content easier than ever to produce and distribute. Whether it is generated and spread with consent is the crucial question, yet also the one most easily ignored, and that is where the harm begins.

In this sense, the fake Taylor Swift pornography highlights the many challenges posed by the misuse of AI. How to keep the technology compliant and ethical, and how to reduce the harm its abuse causes, have once again become urgent questions.

This concerns not just celebrities, but far more ordinary women.

Deepfake, the abused AI

The fake explicit photos of Taylor Swift were generated with an AI technique known as "deepfake".

The technique first appeared in 2017 and has been used for pornography from the start. That year, a user calling himself "Deepfakes" posted a series of fake pornographic videos of female celebrities, including Wonder Woman actress Gal Gadot.

But female celebrities have never lacked gossip, and the incident caused little stir at the time. Since then, people have marveled at the "novelty" of the technology in face-swapping applications, while on its dark side AI pornography quietly grew. Gadot and Taylor Swift are far from the only victims, and producing pornographic content remains the technology's main use.


Natalie Portman has her face swapped by AI

Home Security Heroes, a firm that tracks and studies deepfakes, found that in 2023 deepfake pornography accounted for 98% of all deepfake videos online.

In 2022 there were about 4,000 deepfake videos on the internet; by 2023 the number had risen to more than 20,000.

Although the technology can generate faces that do not exist in the real world, its open-source nature makes it impossible to stop people from applying it to real ones.

The name "deepfake" reflects its technical principle: using machine "deep learning" to generate or falsify images.

The process is, in effect, an imitation of how humans learn.

In layman's terms, it is like a baby who has just begun to learn: at first the baby puts every object it sees into its mouth, but after a period of learning and feedback it gains the ability to tell whether something is edible without tasting it.


AI face-swapping technology learns and gives feedback by continuously acquiring more image materials

Deepfake generation involves two algorithms: one generates fake copies of real images, while the other tries to detect whether an image is genuine. The two models are trained against each other in a continuous feedback loop, eventually yielding a generator capable of producing images that are realistic but fake.
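The adversarial loop can be sketched in miniature. The toy example below is illustrative only: a one-parameter "generator" and a one-feature logistic "discriminator" operate on 1-D numbers rather than images, and all the names, learning rates, and targets are assumptions made for the sake of a runnable sketch, not any real deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: numbers drawn near 4.0 stand in for real images
    return rng.normal(4.0, 0.5, n)

gen_offset = 0.0           # generator parameter: where fake samples are centred
disc_w, disc_b = 1.0, 0.0  # discriminator parameters

def generate(n):
    # Generator: turn noise into "fake samples"
    return rng.uniform(-1.0, 1.0, n) + gen_offset

def discriminate(x):
    # Discriminator: estimated probability that x is real
    return 1.0 / (1.0 + np.exp(-(disc_w * x + disc_b)))

lr = 0.05
for _ in range(500):
    real, fake = real_samples(32), generate(32)
    # Discriminator step: push P(real) toward 1 and P(fake) toward 0
    grad_w = np.mean((discriminate(real) - 1) * real) + np.mean(discriminate(fake) * fake)
    grad_b = np.mean(discriminate(real) - 1) + np.mean(discriminate(fake))
    disc_w -= lr * grad_w
    disc_b -= lr * grad_b
    # Generator step: nudge the offset so fakes look "real" to the discriminator
    fake = generate(32)
    gen_offset -= lr * np.mean((discriminate(fake) - 1) * disc_w)

# After training, the fake samples have drifted away from 0, toward the real data
```

In an actual deepfake pipeline the generator and discriminator are deep neural networks trained on face images, but the alternating update structure is the same.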

People often assume AI has a high technical barrier, but once the tools are built and abused, the barrier to use is very low and the productivity extremely high: with just one clear facial image, a 60-second deepfake video can be created in under 25 minutes.

One challenge posed by the misuse of AI is that pornography has never been easier to produce; and because intimate imagery touches on personal privacy and reputation, it has also become a weapon for harming others.

To reduce the risk of misuse, some tech companies are building defenses at the source.

OpenAI's DALL-E, an AI program that generates images from text descriptions, minimizes nude images in its training data, blocks certain prompt words at input, and scans outputs before images are shown to users, all to head off the risk.
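Source-side safeguards of this kind can be pictured as two checkpoints around the model call. The sketch below is purely illustrative: the deny-list, the score threshold, and every function name are assumptions for the sketch, not OpenAI's actual implementation.

```python
# Hypothetical two-checkpoint safeguard around an image generator.

BLOCKED_TERMS = {"nude", "undress", "naked"}  # placeholder deny-list

def prompt_allowed(prompt: str) -> bool:
    # Checkpoint 1: refuse risky prompts before any generation happens
    return not (set(prompt.lower().split()) & BLOCKED_TERMS)

def output_allowed(nsfw_score: float, threshold: float = 0.5) -> bool:
    # Checkpoint 2: a real system would score the image with a classifier;
    # here the score is passed in directly
    return nsfw_score < threshold

def generate_image(prompt: str, nsfw_scorer) -> str:
    if not prompt_allowed(prompt):
        return "REFUSED: prompt violates policy"
    image = f"<image for '{prompt}'>"  # stand-in for the actual model call
    if not output_allowed(nsfw_scorer(image)):
        return "REFUSED: output failed safety scan"
    return image
```

A harmless prompt passes both checkpoints; a blocked term, or a high score from the output scanner, stops the request before anything reaches the user.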


OpenAI's DALL-E

But unless developer communities and internet platforms get more involved and coordinate, open-source technology can still spin out of control.

From actresses to ordinary girls: 99% of victims are women

Since deepfakes appeared, women have suffered the most from them.

Multiple tracking reports show that the fake pornography generated with deepfakes mainly targets women.

According to Home Security Heroes' report "The State of Deepfakes 2023", 99% of the individuals targeted are women. Most work in the entertainment industry, and more than half of the content targets South Korean singers and actresses.

"Celebrities' popularity makes them more likely to be targeted, and the abundant footage of them provides ample fodder for deepfake creators," the report notes.


According to Sensity's survey analysis, the entertainment industry accounts for the largest proportion of fake content generated by deepfakes

The report offers another statistic: in 2023, deepfake-generated pornography accounted for 90% of the content on the relevant websites, suggesting a more surprising phenomenon: deepfakes transformed the adult industry before they transformed face-swapping in film and television production.

Whatever one thinks about whether this might reduce the number of women forced to sell their bodies, deepfake pornography threatens the rights of ordinary people across a far wider population.

Perhaps that is why even people in the adult industry consider deepfake pornography taboo. "Everything we do in the adult industry is built on the word 'consent'. Deepfakes are, by definition, against consent," said Adam Grayson, chief financial officer of the studio Evil Angel.

Sensity AI, a research firm that also tracks deepfakes, reports that more than 90% of deepfakes are non-consensual sexual content, the overwhelming majority of it targeting women.

The "non-consent" here begins the moment a person's images are fed into generative AI such as deepfake tools, and extends all the way through distribution.


AI-generated fake Taylor Swift giveaway ads go viral on Facebook

Compared with women in the adult industry, who however reluctantly are at least participants, deepfake-generated pornography is more brutal: it skips the crucial step of obtaining consent entirely, leaving its subjects wholly unaware of the images generated of them.

In Taylor Swift's case, researchers at Reality Defender (an organization that also works to combat deepfakes) found dozens of different AI-generated images, some depicting her covered in blood, images that not only objectified her but subjected her to violence.

This points to a new risk: such content can evolve into violence against women rather than merely "satisfying physical needs", as commonly assumed. It has led some researchers to call non-consensual deepfake pornography image-based sexual abuse; another term for it is "digital rape".

An article from the Annenberg School for Communication on this issue argues that while the internet can be a way of exploring sex, it becomes deeply harmful when fake nude images are created and distributed without people's consent.

Author Sophie Maddocks notes that these algorithms are trained to strip clothing from images of women and replace it with nude body parts. They can be run on images of men too, but because they are usually trained on images of women, they fail to render male bodies properly and simply paste on mismatched sexual organs. As the target demographic shows, these abusive deepfakes seek to silence and humiliate women by spreading false information about them.


After deepfakes were abused, a similar "one-click undressing" app, DeepNude, was also developed

Not only female celebrities but ordinary women, and even underage girls, are targeted.

Indian journalist Rana Ayyub's phone was flooded with harassing messages after her face was inserted into pornographic videos; she received nude photos from men, and someone threatened to "rip her clothes off and drag her out of the country". Noelle Martin discovered at 18 that fake photos and videos of her were circulating with her name and home address attached.

Helen Mort, a British writer and broadcaster, discovered that since 2017 netizens had been solicited to turn photos she shared on social media into violent, explicit images, upload them to pornographic sites, and tag them with her name, although she had never taken or shared any such intimate photos.

In 2020, seeking broader support, Helen petitioned online: "My ordeal has left me scared, ashamed, paranoid, and depressed. But I'm not going to be silent. I want to ask the government to act quickly to outlaw 'deepfakes' and similar malicious content, and to enact a clear law prohibiting the shooting, production, and falsification of these harmful images."

Today, some countries and regions have made it illegal to generate and share deepfake pornography without consent, but in practice more effective, institutionalized regulation is still needed to adequately restrain and punish such behavior.

Suing for redress, but on what charges?

In 2020, Anne Pechenik Gieseke argued in an opinion piece devoted to the legal issues of deepfake pornography that current law does not adequately address it.

Because the perpetrator collects images the victim has already published on social networks, it is hard for the injured party to claim invasion of privacy.

Claims based on the right of likeness require the infringer to have gained commercially from using the portrait, so they cannot effectively restrain individuals who seek only self-gratification and make no profit.

Anne worries that perpetrators who act purely for personal gratification, and who take care not to let their deepfakes be discovered, also complicate accountability for defamation.

Moreover, internet platforms bear no responsibility for the content users post or the damage it causes; at best the law holds the publishers themselves liable, which further weakens the line of defense for regulation and governance.

Deepfakes do challenge the applicability of existing law in litigation and enforcement, but a case in Taiwan, China, and its judgment offer some ideas, and some hope.

In 2020, video blogger "Xiaoyu" (Zhu Yuchen) used deepfakes to graft the faces of 119 celebrities and internet personalities, without their consent, onto actresses in pornographic videos, uploaded the results to the cloud, and profited by selling memberships and paid access.

In 2021, Zhu Yuchen and his assistant were arrested. In July 2022, the judicial authorities issued a criminal verdict for violations of the Personal Data Protection Act, and at first instance Zhu and his assistant were sentenced to prison terms of three to five years.


Zhu Yuchen was arrested

However, the sentence allowed commutation to a fine, that is, a fine in lieu of imprisonment; compared with the 12 million yuan in confiscated criminal proceeds, Xiaoyu's fine was estimated at less than 2 million yuan.

In February 2023, the injured parties argued that the face-swapped indecent images had spread widely online and could never be fully deleted, causing great damage and suffering to the victims' social image and mental state, and that no settlement had been reached with the defendant. The prosecutor then appealed on the grounds that the sentence was too light.

In December 2023, the local court held that the first-instance sentence was indeed too light. Zhu Yuchen was sentenced to five years in prison that could not be commuted to a fine, plus a further year and eight months that could be; his assistant Zhuang Xinrui, also a defendant, had his sentence increased on appeal to four years and six months, commutable to a fine. The case can still be appealed.

The second-instance judgment was still grounded in the Personal Data Protection Act, and through it personal names, stage names, online nicknames, and facial features were shown to be no less important than private personal information.

The court held that Zhu Yuchen and his assistant had collected and used such personal information far beyond any necessary scope, enabling viewers of the obscene videos to identify specific individuals and thereby damaging their reputation and social standing. They had also violated the criminal law's prohibition on distributing obscene images.

In fact, both offenses already figured in the first-instance judgment. The second-instance judgment added another: aggravated defamation, which differs from ordinary defamation in that the material is disseminated as text or images rather than orally. Zhu Yuchen not only exploited public figures' fame but seriously damaged the victims' reputations, damage that will persist on the internet for a long time.


Many public figures were turned into AI face-swapped videos by Zhu Yuchen and sold for profit

In the civil judgment on damages issued at the same time, the court did not dwell on whether the deepfake videos were easy to identify as fake; it directly upheld the plaintiffs' claims on the grounds of intentional infringement of the rights to reputation and personality.

"The plaintiff's facial features were synthesized onto the faces of actresses in Japanese pornographic films and uploaded to an internet cloud for sharing with paying members, which of course seriously degrades her standing in society," the verdict states.

But even so, the indecent images could not be completely eliminated, and because the two defendants' criminal proceeds had been confiscated and they had no other stable income, the compensation upheld by the court was never paid to the injured parties.

Social platforms: can they be exempt from liability?

Deepfakes are still imperfect; the traces of AI are not hard to spot by technical means or even the naked eye, and the law is not powerless to hold people accountable. But what victims need most is to undo the impact, and that requires effective means of stopping the spread.

Platforms are in the best position to do so, as when X blocked keyword searches for Taylor Swift and deleted the offending content.


X blocked searches for Taylor Swift for a time

But remediation after the fact is hardly ideal, so much so that commenters under X's statement asked pointed questions: Why are pornography and nudity allowed on X in the first place?

Why must users clear so many hurdles to "properly" report violations? And why is the number of posts one can report limited, when so much clearly and blatantly non-consensual AI pornography is reported only to receive the reply "no violation found"?

Anne points out the paradox: these platforms are commercial companies that rely on advertising for their revenue.

But that's not the whole story.

In fact, mainstream social platforms, whether Facebook, X, or Chinese social apps, all have content-moderation teams, departments whose employees must sign confidentiality agreements in advance. They follow detailed guidelines for filtering and blocking risky content, pornographic content included, with multiple checks across machine and human review.

Even so, parties chasing traffic will try to slip past review with borderline content. The effectiveness of platform moderation also depends on reports from ordinary users, and the reporting process and its success rate touch on the right to free expression, which demands prudence; the forces at play are complex and subtle.
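The layered review chain that platforms describe can be sketched as a small decision function. The version below is hypothetical: the thresholds, the labels, and the hash blocklist are assumptions for illustration, not any platform's actual system. Machine review auto-removes high-confidence violations, ambiguous posts go to human reviewers, and upheld user reports feed a blocklist of known-bad content.

```python
import hashlib

KNOWN_BAD_HASHES = set()  # digests of content already confirmed as violating

def moderate(image_bytes: bytes, risk_score: float) -> str:
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "remove"        # exact re-upload of removed content
    if risk_score >= 0.9:
        return "remove"        # machine review: clear violation
    if risk_score >= 0.5:
        return "human_review"  # ambiguous: queue for manual review
    return "allow"

def confirm_report(image_bytes: bytes) -> None:
    # A user report upheld by reviewers feeds back into the blocklist,
    # so identical re-uploads are caught immediately next time
    KNOWN_BAD_HASHES.add(hashlib.sha256(image_bytes).hexdigest())
```

Real platforms use perceptual hashes rather than SHA-256, so re-encoded or cropped copies are also caught; the exact-hash version here blocks only byte-identical re-uploads.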

Still, preventing the misuse of technology more effectively is a problem we must confront and solve. Platforms need to strike a balance, but they cannot be exempt from responsibility.

Returning to the Taylor Swift incident: although the star herself has not yet publicly responded or taken legal action, her fans have written the incident a sequel.

One X user, believed to be among those who uploaded the deepfake images, posted a taunt: "No matter how strong Taylor's fans are, they will never find me. I am like a clown in a mask, using fake numbers and addresses."


The troll taunts: you can't find me

But he soon tasted the consequences of his conceit.

A Taylor Swift fan doxxed the 28-year-old's personal information and replied in kind: I too hope to find this beautiful house at this exact address in Toronto, Ontario, Canada. Here is "my" (the doxxed man's) phone number, for anyone who can help.

The man later posted again and raised the white flag: "We live in a society where Taylor's fans can, and will, doxx you to the point where you have to retreat and be forgotten."

The account's last comment read: "Right now I'm dealing with Taylor fans. They're a completely different species, and I need a tactical retreat." Soon afterwards, the account was set to private.

It was the fans' payback. But all of us live within reach of this already-abused technology, and we need to stay vigilant and guard against it before anyone tries to turn it on us.

Helen, the British writer and fellow victim, recited a new poem in her petition video, saying that turning the ugliness into art was a catharsis: "It's about taking back power."

Author | Lu Ming

Edit | to the by

Duty Editor | Zhang Lai

Typography | Qianwen
