Facebook Reveals Just How Good It Is At Moderating The Content We See

In a first-of-its-kind report, Facebook has revealed it's far better at policing content containing graphic violence, gratuitous nudity and even terrorist propaganda than it is at weeding out hate speech.

What you need to know
  • Facebook has released a report revealing how well the company moderates content according to its Community Standards
  • The report comes amid increasing pressure on the social network for transparency
  • The company says its internal systems still have great difficulty filtering out hate speech

It's no secret to Facebook users -- of whom there are more than two billion -- that the social network has standards regarding what type of content we can and cannot post.

But, for the first time, Facebook has pulled back the curtain on just how well it manages to enforce those standards.

On Wednesday the network published its first quarterly Community Standards Preliminary Enforcement Report, detailing its efforts to crack down on content including graphic violence, nudity and sex, terrorist propaganda, hate speech, spam and fake accounts between October 2017 and March 2018.

The overwhelming majority of enforcement action during the first three months of the year targeted spam posts, of which 837 million were dealt with, and fake accounts, of which 583 million were removed.

The report also revealed that objectionable content appears to be on the rise, with 3.4 million pieces of violent content either removed or given a warning label during the same period -- a 183 percent increase on the final quarter of 2017.

Content featuring adult nudity and sexual activity is also on the rise, with 21 million items acted on by the company in the first quarter of 2018.

While the increased discovery of this content can be attributed to improved detection technology, Facebook's vice president of data analytics Alex Schultz believes the rise in violent posts could be a reflection of changes in the world.

"A really good hypothesis we have, although this is not certain, is what’s going on in Syria," Shultz said.

"When there is a war, more violent content is uploaded to Facebook.”

While Facebook says it cannot reliably estimate the prevalence of terrorist propaganda on the site -- specifically content relating to ISIS, al-Qaeda and their affiliate groups -- the report revealed the company found 99.5 percent of this actionable content through its own systems. The remaining 0.5 percent was reported by users.

In total, the company took action on 3 million pieces of terrorist-related content across the six-month period.

The Fight Against Hate Speech

While more than 95 percent of posts containing adult nudity or spam, along with fake accounts, are picked up by Facebook before a user reports them, the company says it faces a much greater task when it comes to flagging hate speech.

During the first quarter of 2018, Facebook's systems identified only 38 percent of the hate speech the company took action on; the remaining 62 percent was dealt with after user reports.

Technology is far weaker at identifying hate speech than at recognising nudity, for instance, because image-recognition software gets to work knowing exactly what it's looking for. Automated programs find it significantly more difficult to understand the context and nuances of language that serve as identifiers of hate speech.

More content containing hate speech was detected and acted on in the first quarter of 2018 compared to the final quarter of 2017. Image: Facebook Community Standards Enforcement Preliminary Report

Facebook's internal detection systems consist of both software and people. But with a 2.2 billion-strong user base -- even allowing for Facebook's plan to raise the number of cybersecurity employees and content moderators to 20,000 -- it's unrealistic to expect every potentially hateful post to pass under the eyes of an understanding human with the ability to censor it.

The ubiquity of hate speech on social media is nothing new to Australians, and with cyber-bullying an ever-present issue, there is hope now more than ever that Facebook and its peers can improve their ability to combat it.

Nigel Phair, director of UNSW's Cyber Security Research Centre, told ten daily that detection technologies will improve, but that without a shift in online social norms, progress will only go so far.

"Technology definitely will get better, but we shouldn't be relying on Facebook to get rid of hate speech," Phair said.

"We should be relying on society not to post it."

Steps Towards Transparency

The report comes after mounting pressure on Facebook to explain its inner workings and policies.

The recent Cambridge Analytica scandal, which saw the data of up to 87 million users exposed and potentially used to benefit the Trump election campaign in 2016, prompted a huge backlash against the social media giant and a discussion of Facebook's handling of users' personal data.

Facebook CEO Mark Zuckerberg testified before a combined Senate Judiciary and Commerce committee following the Cambridge Analytica scandal. Image: Getty

Last month, also in an effort to increase transparency, the company released a public version of its previously internal content moderation guidelines -- the rules used to determine what is removed, flagged or hidden on Facebook.

Phair commends Facebook for releasing its newest report in aid of transparency but acknowledges the social network "still has a long way to go".