Facebook failed to take down a comment that suggested a user should eat a bullet. The company issued an apology after a media inquiry. Reuters/Dado Ruvic

“Keep making comments like you do and the only thing going to be fed to you will be a bullet,” read a comment on stand-up comedian Hari Kondabolu’s Facebook page posted Wednesday morning.

Kondabolu hid the comment by clicking the light-gray X in its upper right corner. Then he clicked “Report” and went through a series of steps to notify the company:

Is this comment about you or a friend?

“Yes, this is about me or a friend.”

Why don’t you want to see this?

“It’s harassing me.”

What You Can Do

“Submit to Facebook for Review.”

A few hours later, Kondabolu received an email reply from Facebook stating that the text did not violate the network’s “Community Standards.”

“Sometimes you can say it’s a tricky line but a death threat is a threat. He doesn’t actually want to feed me a bullet. He’s talking about killing me,” Kondabolu told International Business Times. “We’re not talking about risking speech. We’re talking about safety.”

Facebook’s Community Standards are divided into a series of pages on the site. Under the Direct Threats section, Facebook states that it will “remove credible threats of physical harm to individuals.” However, the page continues, “We may consider things like a person’s physical location or public visibility in determining whether a threat is credible.”

A Facebook representative admitted to IBTimes that the Kondabolu review was an error. “As our team processes millions of reports each week, we occasionally make a mistake. In this case, [we] should have removed this content, and we apologize for that,” the representative wrote in an email to IBTimes after a request for comment.

The comment to Kondabolu was in response to a Facebook post Kondabolu had published that read, "Fox News host Brian Kilmeade wants to know why we aren't 'clearing the waters' of sharks. WHITE PEOPLE WANT TO GENTRIFY THE OCEAN NOW?"

Kondabolu had earlier reported another comment to Facebook, one that used the n-word against him, going through the same steps noted above. This time, a few hours later, he received a Facebook message that the post had been removed and that the user had been notified. Facebook’s Community Standards section on hate speech states that the site “removes hate speech.”

“I can remove a message. What I need is a warning. I need something for them to discourage this behavior,” Kondabolu said. In the case of the latter threat, the bullet comment, Kondabolu did not have the satisfaction of knowing the user had been warned.

'Dealing With Abuse'

Death threats have been difficult for Facebook to police. Indeed, a case in which Facebook posts served as evidence of death threats, Elonis v. U.S., reached the Supreme Court this year. On June 1, the court ruled 7-2 to overturn the conviction, finding that the posts, a set of rap lyrics, could not be treated as criminal threats based solely on how a reasonable person would read them, the Daily Beast reported.

Kondabolu evidently isn’t alone in this scenario, given the Supreme Court case and a post from a Facebook user on the company’s community forum titled “How do I report a death threat?” The question comes down to what role a social network should play in addressing these reports.

Reporting and blocking standards are an issue that other social networks, such as Twitter and YouTube, have faced as well. Former Twitter CEO Dick Costolo, who stepped down from the position July 1, wrote in an internal memo in February, “We suck at dealing with abuse and trolls on the platform and we've sucked at it for years," The Verge reported.

The memo came shortly after writer Lindy West detailed, in a Guardian editorial, the abuse she received from trolls on Twitter, and amid the still-simmering #Gamergate controversy. In the editorial, West called out Twitter’s response to a report she filed, in which the company determined that rape threats she had reported did not violate its rules.

Since Costolo’s memo, Twitter has been releasing new tools in an effort to improve safety on the site and empower its users. In April, Twitter updated its abuse policy to say, “You may not publish or post threats of violence against others or promote violence against others.” This week, Twitter released an online page called “Safety Center” to explain its safety tools and abuse policies.

Facebook has also been adjusting its safety tools. The site offers a dashboard where users can track the abuse reports they send in real time. Earlier this month, the page was redesigned, as spotted by SocialTimes.

The Current System

Both social networks, with 1.44 billion users on Facebook and 300 million monthly active users on Twitter, can fall victim to inconsistent policing when they waver on their own standards.

Twitter’s reporting policy differs slightly. Click on a tweet to report it, and you’ll also go through a series of pages. Had the death threat against Kondabolu been posted on Twitter, he would have selected “It’s abusive or harmful” > “Involves harassment or violence” > “Me” > “Threatening violence or physical harm,” and then been able to include a message in the report.

Kondabolu believes Facebook should actively remove users who make threats. “Maybe they should have a system that if this happens ‘X’ amount of times. One for my taste, but maybe two or three that immediately bans them from the site,” Kondabolu said. “When you start using racism and oppressive language, there is clearly a line somewhere.”