Facebook’s policies for what users can and can’t post are available online, billed as “community standards,” and users are expected to abide by them. They serve as public guidelines for how users should conduct themselves on the social networking site. But “secret” documents obtained by the Guardian show that the rules Facebook uses to decide whether content is acceptable, such as a Facebook Live video showing violence or revenge porn, are more complex.

The team of moderators responsible for evaluating reported posts and deciding whether they should be removed from the site is overwhelmed and stretched thin; some moderators have only seconds to decide on a post, the Guardian reported. The company announced earlier this month that it would hire 3,000 people for its community operations team over the next year to help review reported content more quickly and efficiently, lessening the workload of a team that already numbers 4,500 people.


Because Facebook’s “internal rule book” is not public, users don’t have access to the reasons why some posts get deleted. The Guardian got hold of parts of that rule book and took a look at what the social networking giant considers acceptable for the site.

Facebook and self-harm:

What the community standards say: “We don’t allow the promotion of self-injury or suicide.” But the standards also note: “People can, however, share information about self-injury and suicide that does not promote these things.”

What the internal rule book says: Facebook will allow users to live stream self-harm in the interest of not punishing people in distress, but the content will be removed once the person can no longer be helped, the Guardian reported. The exception would be a video that is particularly newsworthy. The site has seen an increase in self-harm and suicide videos in the last year and wants to give users the support they need. In a post earlier this month, Facebook CEO Mark Zuckerberg said the company is also working with local law enforcement to help address videos in which suicide is a concern. “We should build a safe community that gets them the help they need,” he wrote.

Facebook and threats of violence:

What the community standards say: “We carefully review reports of threatening language to identify serious threats of harm to public and personal safety. We remove credible threats of physical harm to individuals,” the community standards section on direct threats reads.

What the internal rule book says: Threats of self-harm are evaluated using a variety of standards. Threats made using hashtags are ignored, as are threats of self-harm made five or more days in advance and threats made with an ineffective method, the Guardian reports. Threats made toward other people are more complicated, though. What constitutes a “credible threat” varies depending on who the threat is made against and how specific it is. Moderators must consider the specificity of the target, the method, the timing and other factors before deciding whether or not to remove a post. Threats made against “vulnerable people” or “vulnerable groups” are considered credible and will be removed if reported. Vaguer threats, ones without a specific target or that only hint at a future plan, are not treated as credible by the site.


Facebook and graphic violence:

There is no specific section in Facebook’s community standards on graphic violence inflicted on people or animals, but the Guardian released the guidelines moderators must consider when deciding whether such content is acceptable for the site. Content that pairs graphic violence with sadism is not permitted and is removed. Videos of violent deaths are allowed if they promote awareness and the subject is over 18; these videos will not autoplay and will be marked with a warning screen. The guidelines also note that videos of abortion do not count as graphic violence and are removed only if they contain nudity. A photo or video of a child (a person under 18) being abused is not necessarily removed. It is removed if the harm is done or shared in a sadistic manner, but the guidelines say, “We do not action photos of child abuse,” and such videos are marked with a warning. For animal abuse, the guidelines say, “Generally, imagery of animal abuse can be shared on the site,” though the same sadism rules apply.

Facebook and sexual content and revenge porn:

What the community standards say: The community standards cover sexual violence and exploitation, including revenge porn and illegal sexual acts. The standards say: “We remove content that threatens or promotes sexual violence or exploitation.” The policy also includes protections for victims and survivors of exploitation or assault: “We also remove photographs or videos depicting incidents of sexual violence and images shared in revenge or without permission from the people in the images.”

What the internal rule book says: Images of “moderate displays of sexuality” are allowed on the site; this includes open-mouth kissing, clothed simulated sex and black-barred or pixelated images, according to Facebook slides published by the Guardian. Photos of intercourse or other sexual acts are allowed only under very specific guidelines that can be confusing, and it’s the area moderators have the most difficulty policing consistently. Photos posted as “revenge porn,” either to extort or to shame individuals, are prohibited on the site, and the rule book goes into specific detail about what constitutes that type of photo or video: such images usually show nude or nearly nude subjects and are taken in a private setting, intended only for a specific person rather than a social media audience. Additionally, the site has added photo-matching technology to help it identify images that have already been shared and flagged as non-consensual.
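Facebook has not published how its photo-matching system works, but the general technique used across the industry for catching re-uploads is perceptual hashing: a compact fingerprint is computed from a removed image, and new uploads whose fingerprints are very close to it are flagged for review. The sketch below is purely illustrative, written in Python with the Pillow library; the file names and the matching threshold are hypothetical, and this is not Facebook's actual code.

```python
from PIL import Image  # Pillow image library


def average_hash(path, hash_size=8):
    """Compute a simple perceptual 'average hash' of an image.

    The image is shrunk to hash_size x hash_size grayscale pixels;
    each bit of the hash records whether a pixel is brighter than
    the mean. Visually similar images produce similar hashes.
    """
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)


def hamming_distance(h1, h2):
    """Count the differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")


if __name__ == "__main__":
    # Hypothetical files: a previously removed image and a new upload.
    known = average_hash("previously_removed.jpg")
    candidate = average_hash("new_upload.jpg")
    # A small Hamming distance suggests the upload is a near-duplicate.
    if hamming_distance(known, candidate) <= 5:
        print("Likely a re-upload of previously flagged content")
```

In practice a platform would store hashes of removed images in a database and compare every new upload against them, so that a non-consensual photo taken down once is not simply re-posted under a different file name.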