“AI Porn: Taylor Swift and Others Are Victims of Deepfake Nudes. Can the Spread Be Prevented?”
What’s happening
Late last month, a series of AI-generated, sexually explicit images of pop superstar Taylor Swift circulated across social media, drawing outrage from her fans and renewing calls for a crackdown on so-called deepfakes.
Fake nude pictures of celebrities are not a new phenomenon, but thanks to advanced and widely available artificial intelligence tools, it is now possible to quickly produce high-quality images or videos featuring anyone’s likeness in any scenario imaginable. While much attention has been paid to how deepfakes could be used to spread misinformation, research shows that 98% of deepfake videos online are pornographic and that nearly all of the individuals targeted are women.
Celebrities such as actresses, musicians, and social media influencers are the most frequent subjects of deepfake porn, but there are many examples of ordinary women and girls also being targeted. Last year, administrators at a New Jersey high school discovered that some students had used AI to create fake nude images of more than 30 of their classmates. Similar incidents have been reported at other schools in the U.S. and abroad.
It’s illegal in almost every state to share real nude images of someone without their consent, particularly if the person is a minor. But the laws around AI-generated porn are much weaker, with only about 10 states having statutes banning it. Most social media sites prohibit AI porn, but the scale of the problem and lax moderation mean it can still be rampant on their platforms. One post featuring Swift deepfakes stayed live on X, formerly Twitter, for 17 hours and gathered more than 45 million views before it was taken down.
Why there’s debate
Like so many other harmful things online, AI porn may be impossible to eradicate completely. But experts say there is plenty that can be done to make it dramatically less prevalent and to limit the damage it causes.
Several bills have been proposed in Congress that would create nationwide protections against deepfake porn, either by establishing new legal penalties for those who create or share it or by giving victims new rights to seek damages after they’ve been targeted. Supporters of these plans say that even if the new laws didn’t sweep up every bad actor, they would lead to some high-profile cases that would scare others away from creating deepfakes.
Outside of new laws, many tech industry observers argue that the public needs to put pressure on the various mainstream entities that allow people to create, find, spread, and profit from AI porn — including social media platforms, credit card companies, AI developers, and search engines. There’s also hope that fear of lawsuits from someone like Swift could create enough financial risk that these groups will begin taking deepfakes more seriously.
At the same time, some experts make the case that the war against AI porn has effectively already been lost. In their view, the technical problem of finding and blocking so many deepfakes is essentially unsolvable, and even the most aggressive new laws or policies would capture only a tiny fraction of the flood of fake explicit content that’s already out there.
What’s next
Swift is reportedly considering legal action in response to the deepfakes of her, but experts say her options may be limited with so few laws on the books. Despite the new attention brought to the issue, Congress currently appears to have no plans to vote on any of the various anti-AI-porn proposals.
Perspectives
Congress needs to finally outlaw deepfakes nationwide
“This is a rare bipartisan issue that lawmakers should seize upon to do some good before bad actors use AI to wreak more havoc on more innocent people’s lives.” — National security expert Frank Figliuzzi, MSNBC
We don’t have the tools to stop AI porn
“On the one hand, our technologies and the human teams behind them aren’t up to the task. On the other, government overcorrection might leave us with heavily restricted social networks that close off legitimate forms of commentary.” — Miles Klee, Rolling Stone
Tough laws could create examples that scare everyone else off
“There is hope for a solution. Some of the measures going through Congress are a start, and while the long-term rules are still being ironed out, authorities can get a handle on the situation for now by making examples of some of the worst perpetrators. Deterrents can work, even for people who think they can hide behind the cloak of online anonymity.” — Parmy Olson, Bloomberg
The public can force Big Tech to take deepfakes more seriously
“Swift and her fans could advocate for legal changes at the federal level to pass. But their outrage could do something else: lead platforms to take notice.” — Amanda Hoover, Wired
Any new laws should focus on helping victims as much as possible
“[Proposed bills are] not necessarily always centered around victim needs. Usually the victim’s most important need is to get content removed. So if we had a legislative landscape that was victim-centered. … It would focus on how we make online environments safer for people to actually use.” — Sophie Maddocks, AI researcher at the University of Pennsylvania, to Slate
The infrastructure that supports the AI porn economy needs to be torn down
“It’s not only the deepfake porn creators who are profiting off this abuse. Deepfake porn sites are facilitated and enabled by search engines that drive web traffic toward deepfake content. Internet service providers host them and credit card and payment companies facilitate transactions on their sites, while other companies advertise their products under these videos.” — Sophie Compton and Reuben Hamlyn, CNN
We need new laws to combat all of the dangers of AI, not just deepfakes
“Now is the time to call on Congress and state legislators to act – not just on deepfake porn and not just for Taylor Swift, but on the perils of AI more broadly, and for a more secure future for every person on the planet.” — Jill Filipovic, The Guardian
Raising awareness of the terrible harms deepfakes cause is a good place to start
“Deepfake porn and other forms of digital sexual abuse are sometimes dismissed as a ‘lesser’ harm than physical sexual assault because of their online nature. A high profile case like Swift’s could help draw attention to the genuine impact of these images on victims.” — Jade Gilbourne, The Conversation
Photo Illustration: Yahoo News, photo: Getty Images