Facebook released data showing how many fake accounts, spam posts and other types of objectionable content it removed in the first quarter of 2018. The Facebook logo is displayed at the Facebook Innovation Hub on February 24, 2016 in Berlin, Germany. Sean Gallup/Getty Images

Facebook stepped further into its new era of data transparency Tuesday with the release of its inaugural Community Standards Enforcement Report. The report provided detailed data on just how much objectionable content CEO Mark Zuckerberg’s famed social network had to moderate in recent months.

Alongside the full report, Facebook published a more concise summary on its Newsroom blog. The report separates content into six categories: graphic violence, adult nudity, terrorist propaganda, hate speech, spam and fake accounts. Each category includes bar graphs showing how many pieces of content were removed, including how many were flagged by Facebook's automatic detection technology.

The numbers are staggering. In the first quarter of 2018, Facebook deleted 837 million spam posts and disabled 583 million fake accounts, largely through its automatic detection technology. The site also removed 21 million pieces of adult nudity, 3.5 million violent posts and 2.5 million pieces of hate speech.

Hate speech was the one category Facebook's detection technology could not quite get a handle on: only 38 percent of the hate speech posts removed had been flagged by its automated systems.

Curiously, several common types of objectionable content were not included in the report. Revenge porn, suicidal ideation, harassment and other prominent categories did not have specific numbers attached to them, according to the Guardian.

Alex Schultz, Facebook's vice-president of data analytics, told the Guardian that the company still needs to work out how to categorize that content for the report.

Facebook did announce in November that it would use A.I. detection to help identify posts suggesting suicidal thoughts.

Facebook revealed on Monday that it had suspended around 200 apps pending investigations into whether they misused user data, in the wake of the Cambridge Analytica scandal. The company has spent the past several months making policy changes and introducing new features to address criticism over the spread of fake news during the 2016 presidential election.