Facebook: 54,000 potential cases of revenge pornography and sextortion in a single month


The 100-page handbook Facebook gives to moderators reveals the social network receives swathes of reports regarding abusive sexual material, an area where its moderators “make most mistakes”.

The documents, leaked to the Guardian, show Facebook users reported almost 54,000 incidents of sexual extortion and revenge porn in January, with the company disabling 14,130 accounts as a result. Moderators escalated 33 cases for involving children.

Revenge porn, which typically involves intimate images being shared online without consent after a relationship ends, has been a criminal offence in the UK since 2015.

Offenders face up to two years in prison if convicted of sharing “private, sexual images of someone without consent and with the intent to cause distress”.

It is not clear how many cases Facebook passed to the police.

The figures for sextortion and revenge porn, which Facebook deems as serious as child exploitation and terrorism, are international and only reflect incidents that have been reported by users.

The company relies on users to report most abusive content, meaning the real scale of the problem could be significantly greater.

But the Guardian has been told that moderators find Facebook’s policies on sexual content the hardest to follow. “Sexual policy is the one where moderators make most mistakes,” said a source.

“It is very complex.”

Facebook admitted this was a high priority area and that it was using “image-matching” software to stop explicit content getting on to the site.
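Facebook does not describe how its “image-matching” software works. Systems of this kind commonly rely on perceptual hashing: a known abusive image is reduced to a compact fingerprint, and re-uploads are caught by comparing fingerprints rather than raw files, so minor edits or recompression do not defeat the match. The sketch below is purely illustrative and assumes a simple average-hash scheme over tiny grayscale thumbnails; the function names and sample pixel data are invented for the example.

```python
def average_hash(pixels):
    """Perceptual 'average hash': bit i is 1 if pixel i is brighter than the mean.

    `pixels` is a small 2D grayscale thumbnail (values 0-255); real systems
    downscale the full image to a fixed size first.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes; small distance = likely match."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 4x4 thumbnails: a known banned image and a slightly edited re-upload.
known = [[10, 200, 30, 220], [15, 210, 25, 230],
         [12, 205, 28, 225], [11, 202, 26, 221]]
reupload = [[12, 198, 32, 218], [14, 212, 24, 228],
            [13, 203, 29, 223], [10, 204, 27, 220]]

# A small Hamming-distance threshold tolerates recompression and minor edits.
if hamming(average_hash(known), average_hash(reupload)) <= 4:
    print("match: block re-upload")
```

The key design point is that matching happens in hash space, so a platform can block re-uploads of previously reported imagery without storing or redistributing the imagery itself.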

It also acknowledged it was difficult to draw a line between acceptable and unacceptable sexual content.

“We constantly review and improve our policies,” said Monika Bickert, head of global policy management at Facebook. “These are complex areas but we are determined to get it right.”

The company declined to comment on the figures in the document. “We receive millions of reports each week but we do not release individual figures,” it said.

The use of Facebook for the proliferation of pornography as well as the rise of revenge porn and sextortion have become some of the biggest challenges for social media groups.

They are coming under huge political pressure to do more to keep abusive and illegal content off their platforms or face substantial fines.

Documents seen by the Guardian, which form part of the Facebook Files, show for the first time the detailed rules applied by the company to police sexual content published on the site – as well as the scale of the challenge faced by moderators tasked with keeping Facebook clean.

One slide showed that in January moderators alerted senior managers to 51,300 potential cases of revenge pornography, which it defines as attempts to use intimate imagery to shame, humiliate or gain revenge against an individual.

In addition, Facebook escalated 2,450 cases of potential sextortion – which it defines as attempts to extort money, or other imagery, from an individual. This led to a total of 14,130 accounts being disabled. Sixteen cases were taken on by Facebook’s internal investigations teams.

One 53-slide document explains Facebook has introduced two “hotkeys” for moderators to help them quickly identify potential cases of sextortion and revenge porn, which it refers to as “non-consensual intimate imagery”.

Besides these two areas, which Facebook ranks alongside child exploitation and terrorism in importance, the Facebook Files set out various issues facing the service when it comes to sexual content.

They explain that the social media site allows “moderate displays of sexuality, open-mouthed kissing, clothed simulated sex and pixelated sexual activity” involving adults. The documents and flowcharts then set out what is permitted on Facebook in detailed sub-categories called “arousal”, “handy-work”, “mouth work”, “penetration”, “fetish” and “groping”.

The use of sexualised language is also addressed. Facebook decides whether to allow or ban remarks based on the level of detail they contain.

One Facebook document, titled Sexual Activity, explains it is permitted for someone to say: “I’m gonna fuck you.” But if the post adds any extra detail – for instance, where this might happen or how – it should be deleted if reported.

According to this 65-slide manual, other general phrases allowed on Facebook include:

“I’m gonna eat that pussy”; and “Hello ladies, wanna suck my cock?”

Facebook also allows sexual references that have a “humorous context”.

The example it uses to illustrate the point involves a joke about a little boy interrupting his parents having sex.

Facebook said some of these examples “appear to be out of date”, but it declined to say which ones or when the policy had changed.

Until recently Facebook had allowed comments such as “I’d like to poke that bitch in the pussy” and “How about I fuck you in the ass girl?”

Asked specifically about these comments, Facebook said it would now remove them if they were reported.

“Not all disagreeable or disturbing content violates our community standards,” said Facebook.

“For this reason we offer people who use Facebook the ability to customise and control what they see by unfollowing, blocking or hiding posts, people, pages and applications they don’t want to see.

“We allow general expressions of desire but we don’t allow sexually explicit detail.”

The files also show Facebook is constantly updating certain policies – reacting to criticism that it has been too slow to delete some sexually graphic content, while simultaneously being too strict about other material.

Last September, Facebook was condemned for removing the Pulitzer-prize-winning “Napalm girl” photograph from the Vietnam war because it showed a naked child. After a row over censorship, Facebook relented.

The files reveal that Facebook has tried to avoid similar situations arising again by issuing fresh rules.

One document explains that under Facebook’s new “terror of war” guidelines, there are “newsworthiness exceptions”.

Though the documents do not define newsworthy, they say Facebook now allows, among other things, “photographs of naked babies so young they clearly cannot stand unless the photo closes in on the baby’s genitals … [and] images of adult nudity in the context of the Holocaust”.

However, Facebook says images from the Holocaust depicting naked children should be removed if users complain.

The moderators are struggling to make sense of other guidelines on sexual imagery.

Under these rules, Facebook says it will “allow all handmade and digital nudity … [and] allow handmade sexual activity”.

But moderators are told to “remove digital sexual activity” if reported.

However, the accompanying slides make clear it is sometimes difficult to draw a distinction between the two.

One allowed artwork shows a topless woman riding on a giant, erect penis.

The document explains: “We allow nudity when it is depicted in art like paintings, sculptures, and drawings. We do not allow digitally created nudity or sexual activity.

We drew this line so that we could remove a lot of very sexual digital nudity, but it also covers an increasing amount of non-sexual digitally made art.

The current line is also difficult to enforce because it is hard to differentiate between handmade art and digitally made depictions.”

In an earlier document, Facebook moderators had been warned to delete images of Giambologna’s 16th-century statue the Kidnapping of the Sabine Women in the Loggia dei Lanzi in Florence if reported.

They were also told to delete, if reported, images of the Rape of Europa – paintings that depict the mythological story of the abduction of Europa by Zeus.

The updates seen by the Guardian do not make clear whether these images are allowed or not.

Facebook has also developed detailed policies around “sexual solicitation” on the site. According to its rules, providing contact information is allowed, and solicitation using acronyms is also permissible.

But if the post includes any extra information – such as mentioning sexual acts “in a non medical/ scientific/ educational context” – then the post should be deleted if it is flagged up.

Facebook said it was “building better tools to keep our community safe”, adding: “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

