The White House outlined earlier this month how some tech companies plan to combat image-based sexual abuse. But experts and advocates are skeptical of self-regulation without an official mechanism for accountability.
The announcement, timed to the 30th anniversary of the Violence Against Women Act, included a series of voluntary commitments to curb the spread of image-based sexual abuse, which includes nonconsensually generated or shared intimate imagery, also known as “revenge porn” or explicit deepfakes.
Voluntary commitments can be a way for companies to set the exam they are then graded on, according to Brandie Nonnecke, founding director of the CITRIS Policy Lab at the University of California, Berkeley. But at the moment, private-sector commitments and executive orders are some of the most efficient tools available for combating tech-facilitated harms. Congress has failed to pass meaningful tech policy in the past two decades, though recently there has been momentum in the Senate, particularly around online harms.
The fight against image-based sexual abuse can target many points in the technical ecosystem. Domain names can be withheld, search engines can suppress results for apps meant to create nonconsensual intimate images, and payment processors can refuse to serve those apps, according to Nonnecke.
Cutting off payments has been an effective method to spur action, and in the White House announcement, both Cash App and Square said they would curb payments to companies that host or create image-based sexual abuse.
The White House announcement follows up on an official call to action in May for industry and civil society solutions to image-based sexual abuse.
Some of the larger tech companies reiterated actions they had made public before. The announcement re-shared news from Google that it will be adjusting its search ranking for queries related to nonconsensual explicit deepfakes. Meta, the parent company of Facebook, Instagram and WhatsApp, cited its partnership with StopNCII, dating back to 2021, and its tool, Take It Down, launched in 2023. Meta also pointed to its removal in July of 63,000 Instagram accounts that were tied to financial sextortion.
This is not the first time the White House has asked tech companies to commit to managing risks related to artificial intelligence development. Last year, companies including Adobe, Amazon, Anthropic, Cohere, Google, Meta, Microsoft, and OpenAI agreed to eight commitments tied to developing AI systems rooted in safety, security and trust.
An analysis one year later by MIT Technology Review showed significant progress on many attempts to mitigate AI harms, with less progress on transparency. There is a tendency in AI safety research to devote more resources to hypothetical threats, like existential risks to humanity, than to practical and visible harms like image-based sexual abuse.
Microsoft was heavily scrutinized after its image generation tool was revealed to be the source of sexually explicit digital forgeries of Taylor Swift, which went viral on X in January. X was not mentioned in the White House roundup, but did sign on to “Principles for Combatting Image Based Sexual Abuse,” which were published the same day. The principles were developed by the Center for Democracy & Technology, the Cyber Civil Rights Initiative, and the National Network to End Domestic Violence.
Buy-in from these major players is useful, but specialized “nudify” apps are often the preferred tools for making nonconsensual sexually explicit deepfakes. The San Francisco City Attorney’s Office announced a lawsuit against 16 such apps in August.
It can be difficult to purge inappropriate material from open-source AI models because they have been widely copied and shared. Research from the Stanford Internet Observatory found that one of the largest open-source datasets used for training generative AI contained thousands of images of child sexual abuse material. While the dataset was taken down, many people had already used it to train models or downloaded it.
The White House announcement shared that GitHub, a popular open source code repository now owned by Microsoft, updated its policies to prohibit the sharing of software tools meant to create nonconsensual intimate imagery. Again, it is a useful step but does not cut down on the number of tools that have been in circulation for years.
While bills like the DEFIANCE Act, which seeks to create a federal right of action allowing victims of image-based sexual abuse to sue creators, have garnered bipartisan support, focusing on perpetrators can’t be the only solution.
A senior administration official told The 19th that these actions from tech companies are a first step to tackling a complex problem. They also highlighted the significance of these public agreements, as not all of the companies in the announcement have publicly addressed how they are working to prevent image-based sexual abuse.
At the same time, the Biden-Harris administration continues to call on Congress to pass tech legislation that will protect everyone.
The stalemate on tech policy derives from several factors. Big tech companies have extensive lobbying operations to push back against regulation, and there is a tension between regulating to offset harms while still allowing room for innovation.
Crafting policy is also tricky, as “there is a lot of confusion on terminology and how to best ensure that the legislation is scoped appropriately,” Nonnecke said. It can be difficult to make sure laws properly target the harms they purport to address, and then there is an onus on the private sector to implement technical solutions.
Many times, tech companies respond to regulations by overcomplying or by cutting off access to their product entirely. Nonnecke said she is interested in how Adobe, Anthropic, Cohere, Microsoft and OpenAI will implement the removal of inappropriate nude images, as it is difficult to weed out harmful image-based sexual abuse while still allowing content of scientific or artistic value when models constantly scrape the web for training data.
Critics of voluntary commitments have many ready examples to inform their skepticism.
“Big Tech has been resolving to be good actors when it comes to image based sexual abuse for a decade. Yet platforms like Instagram, Facebook, Google, and Snap are the single largest culprits for the dissemination of [child sexual abuse material], nonconsensual nude images, sextortion,” Carrie Goldberg, partner at C.A. Goldberg, PLLC, which specializes in representing victims of tech-facilitated abuse, wrote over email. “The amount of suffering these companies have caused in the last decade because of the abusive content proliferating on their platforms is incomprehensible.”
Arielle Geismar, co-chair of the youth-led Design It For Us campaign, said that the White House bringing these companies to the table is a step in the right direction. “Frankly, I would have loved to see them come to the conclusion by themselves, but I think it’s pretty clear that these tech companies are not quite able to regulate themselves,” she said.
Geismar cited Meta’s vehement pushback against whistleblower Frances Haugen, who shared documents showing the company knew its products negatively impacted the mental health of teenage girls. Haugen is an adviser to Design It For Us.
“We’ve seen time and time again that they’re prioritizing their bottom line and the things that are making the money,” Geismar said.
She was in the audience at the Senate Judiciary Committee hearing in January, where the heads of Discord, Meta, TikTok, Snap and X faced grueling questioning over children’s safety on their respective platforms. Geismar said it felt like she was being gaslit when companies claimed there was no link between their platforms and the mental health of young people. As a young teen, she said, seeing content on Instagram that glorified disordered eating deeply affected her.
The nonprofit Accountable Tech has published several reports showing big tech companies don’t necessarily keep promises about privacy and safety made in their news releases. Their research showed that Google was retaining location data on visits to abortion clinics despite saying that information would be deleted.
As such, Nicole Gill, co-founder and executive director of Accountable Tech, takes these voluntary commitments with “a serious grain of salt.” She noted that there is no enforcement mechanism, and there are few resources devoted to following up on these commitments.
There just aren’t any external incentive structures that push companies to prioritize mitigating harms like image-based sexual abuse, according to Gill. The companies are mostly competing with one another.