Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts


Even as it focuses on shutting down bogus accounts, Facebook has said that 3 to 4 percent of its monthly active users are fake. The company claimed it was able to detect 98.5 percent of fake accounts soon after they were created during the first three months of 2018.

But most of Facebook's removal efforts centered on spam and the fake accounts that promote it.

Facebook took down 3.4 million pieces of graphic violence during the first three months of this year, almost triple the 1.2 million it removed during the previous three months.

Facebook on Tuesday released numbers on the kinds of content, and how much of it, the company has removed in recent months. The company said removing fake accounts is key to combating that type of content.

The report and the methods it details are Facebook's first step toward sharing how it plans to safeguard the news feed in the future. AI did, however, flag 99.5 percent of terrorist content on Facebook and 95.8 percent of posts containing nudity. "Of every 10,000 content views, an estimate of 22 to 27 contained graphic violence, compared to an estimate of 16 to 19 last quarter", Facebook said in unveiling its first transparency report of this kind.

He added, "We're much more focused in this space on protecting the kids than figuring out exactly what categorisation we're going to release in the external report".


"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards", the company's report says.

You can read the full report here; Facebook has also provided a guide to the report, as well as a Hard Questions post about how it measures the impact of its enforcement. The report shows how much of that content was seen by Facebook users, how much was removed, and reports that 583 million fake Facebook accounts were deleted. When it came to hate speech, however, the company's technology flagged only around 38 percent of the posts it took action on, and Facebook notes it has more work to do there.

Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56 percent from 1.6 million in Q4.

"We continue to be deeply concerned by internet disruptions, which prevent people from communicating with family and friends and also threaten the growth of small businesses", said Sonderby.

"AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we're working on it", Zuckerberg told CNET. For hate speech, Rosen said, "our technology still doesn't work that well".

However, the social network said this was up from 1.2 million at the end of 2017, and while the majority of the increase was down to improvements in its detection technology, some of the rise was due to an increase in such content appearing on the platform. Facebook noted that while its artificial intelligence technology found and flagged many standard violations, more progress was still needed.