Meta’s Oversight Board says deepfake policies need update and response to explicit image fell short


Meta’s policies on non-consensual deepfake images need updating, including wording that’s “not sufficiently clear,” the company’s oversight panel said Thursday in a decision on cases involving AI-generated explicit depictions of two famous women.

Deepfake nude images of women, including celebrities such as Taylor Swift, have proliferated on social media because the technology used to make them has become more accessible and easier to use. Online platforms have been facing pressure to do more to tackle the problem.

The board, which Meta set up in 2020 to serve as a referee for content on its platforms including Facebook and Instagram, has spent months reviewing the two cases involving AI-generated images depicting famous women, one Indian and one American.

The board did not identify either woman, describing each only as a “female public figure.” Meta said it welcomed the board’s recommendations and is reviewing them. One case involved an “AI-manipulated image” posted on Instagram depicting a nude Indian woman, shown from the back with her face visible, resembling a “female public figure.”

Meta also disabled the account that posted the images and added them to a database used to automatically detect and remove images that violate its rules.

The board said both images violated Meta’s ban on “derogatory sexualized photoshop” under its bullying and harassment policy. However, it added that the policy’s wording wasn’t clear to users, and it recommended replacing the word “derogatory” with a different term such as “non-consensual” and specifying that the rule covers a broad range of editing and media-manipulation techniques beyond “photoshop.” Deepfake nude images should also fall under community standards on “adult sexual exploitation” instead of “bullying and harassment,” it said.
