Facebook closed 583 million fake accounts in first three months of 2018
Responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook said yesterday that those closures came on top of blocking millions of attempts to create fake accounts every day.
In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts on the site in the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity.
“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice-president of public policy for Europe, the Middle East, and Africa.
The amount of content moderated by Facebook is influenced both by the company's ability to find and act on infringing material and by the sheer quantity of items posted by users. For instance, Alex Schultz, the company's vice-president of data analytics, said the amount of content moderated for graphic violence almost tripled quarter-on-quarter.