Facebook says it removed 3.2b fake accounts in six months
Facebook says it removed 3.2 billion fake accounts from its service between April and September, up slightly from 3 billion in the previous six months.
Nearly all of the bogus accounts were caught before they had a chance to become "active" users of the social network, so they are not counted in the user figures the company reports regularly. Facebook estimates that about 5 per cent of its 2.45 billion user accounts are fake.
The company said in a report Wednesday that it also removed 18.5 million instances of child nudity and sexual exploitation from its main platform in the April-September period, up from 13 million in the previous six months. It says the increase was due to improvements in detection.
In addition, Facebook said it removed 11.4 million instances of hate speech during the period, up from 7.5 million in the previous six months. The company says it is beginning to remove hate speech proactively, the way it does with some extremist content, child exploitation and other material.
Facebook expanded the data it shares on its removal of terrorist propaganda. Its earlier reports included data only on al-Qaida, ISIS and their affiliates. The latest report shows Facebook detects material posted by extremist groups other than ISIS and al-Qaida at a lower rate than material from those two organisations.
The report is Facebook's fourth on standards enforcement and the first to include data from Instagram in areas such as child nudity, illicit firearm and drug sales, and terrorist propaganda. The company said it removed 1.3 million instances of child nudity and child sexual exploitation from Instagram during the reported period, much of it before people saw it.
Still, the company's latest transparency report arrives as regulators around the world continue to call on Facebook, and the rest of Silicon Valley, to be more aggressive in stopping the viral spread of harmful content, such as disinformation, graphic violence and hate speech. A series of high-profile failures over the past year have prompted some lawmakers, including Democrats and Republicans in the United States, to threaten to pass new laws holding tech giants responsible for failing to police their sites and services.
The calls for regulation only intensified after the deadly shooting in Christchurch, New Zealand, in March. Video of the gunman attacking two mosques spread rapidly on social media, including Facebook, evading tech companies' expensive systems for stopping such content from going viral. On Wednesday, Facebook offered new data about that incident, reporting that it had removed 4.5 million pieces of content related to the attack between March 15, the day it occurred, and September 30, nearly all of which it spotted before users reported it.
AP, Washington Post
https://www.smh.com.au/business/companies/facebook-says-it-removed-3-2b-fake-accounts-in-six-months-20191114-p53afc.html