20190524_IBT_FacebookViolations
This chart shows the types of content Facebook took action on in the first quarter of 2019. IBT / Statista

Facebook has published figures showing the amount of controversial content it took action on in the first quarter of 2019. Amid the spread of fake news and increasing levels of inflammatory content circulating online, the social network has come under immense pressure to better regulate what's happening on its watch. The content that Facebook is actively trying to keep from its site can be broken down into eight categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, bullying and harassment, child nudity and sexual exploitation, regulated goods (drugs and firearms) and, last but definitely not least, spam.

Between January and March of this year, 1.8 billion posts categorized as spam were removed from Facebook, accounting for 96 percent of all content acted on (excluding fake accounts). A further 34 million posts containing violence and graphic content were taken down or covered with a warning, 99 percent of which were found and flagged by Facebook's technology before anyone reported them. Likewise, 97 percent of all posts removed or flagged for containing adult nudity or sexual activity were identified automatically before being reported; 19 million were given warning labels or deleted in total.

Unfortunately, Facebook's technology has been significantly less successful at identifying posts containing hate speech. Of the 4 million pieces of content the company took action against for including hate speech, only 65 percent were flagged by Facebook before users reported a violation of the platform's Community Standards. Because fake accounts are a major source of spam, the content most frequently deleted, disabling them is critical: during the first quarter of the year, more than 2 billion fake accounts were disabled, most of them within minutes of registration.
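As a rough sanity check on the shares cited above, the figures quoted in this article can be combined in a few lines of Python. Note that counts for four of the eight categories (terrorist propaganda, bullying and harassment, child exploitation, regulated goods) are not cited here, so the computed spam share lands slightly above the reported 96 percent.

```python
# Q1 2019 enforcement figures as quoted in the article (pieces of content).
# Four of the eight categories are not cited, so this sum is partial.
actions = {
    "spam": 1_800_000_000,
    "violence_graphic": 34_000_000,
    "adult_nudity": 19_000_000,
    "hate_speech": 4_000_000,
}

total = sum(actions.values())  # excludes fake accounts and uncited categories
spam_share = actions["spam"] / total * 100

# Proactive detection rates cited per category (flagged before a user report)
proactive_rate = {"violence_graphic": 0.99, "adult_nudity": 0.97, "hate_speech": 0.65}
flagged_before_report = {k: round(actions[k] * r) for k, r in proactive_rate.items()}

print(f"spam share of listed actions: {spam_share:.1f}%")
for category, count in flagged_before_report.items():
    print(f"{category}: ~{count:,} flagged proactively")
```

With the missing categories included, the spam share would fall back toward the 96 percent Facebook reports; the hate-speech line (65 percent of 4 million, i.e. about 2.6 million posts flagged proactively) illustrates how far that category lags the others.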