A man is silhouetted against a video screen with a Facebook logo as he poses with a Samsung S4 smartphone in this photo illustration, Aug. 14, 2013. Reuters

Facebook content moderators who work to identify terrorist activity on the platform had their identities compromised, according to a report by the Guardian.

The security breach affected more than 1,000 workers across 22 departments who used the company’s moderation software to review and remove inappropriate posts from Facebook, including sexual material, hate speech and terrorist propaganda.


The bug, discovered in November 2016, exposed moderators’ personal details to suspected terrorist users of the social networking site. The moderators’ personal profiles automatically appeared as notifications in the activity logs of the Facebook groups they were investigating and shutting down.

Forty of the 1,000 workers whose profiles were compromised were based in a counter-terrorism unit at Facebook’s European headquarters in Dublin, Ireland. Six of them were labeled as “high priority” victims after the social media platform found their profiles were likely viewed by suspected terrorists.

One moderator in Ireland, one of hundreds of “community operations analysts” contracted through Cpl Recruitment, ended up fleeing the country fearing for his safety.

The report also noted that moderators are typically paid low wages for the grueling job of sifting through Facebook posts.

“You come in every morning and just look at beheadings, people getting butchered, stoned, executed,” said the moderator who was forced to flee. He was paid only $15 an hour to review such material.

Moderators realized something was wrong when they began receiving friend requests from individuals linked to the terrorist groups they were monitoring. Facebook launched an investigation and found that moderators’ personal profiles had been exposed. The company then contacted those it believed had been affected and set up an email address for their inquiries.


Facebook tried to downplay the situation while the investigation was underway, the report said. Craig D’Souza, the platform’s head of global investigations, sought to reassure moderators that there was a “good chance” any potential terrorists notified of their identities would fail to put the pieces together.

“Keep in mind that when the person sees your name on the list, it was in their activity log, which contains a lot of information,” D’Souza wrote, according to the Guardian. “There is a good chance that they associate you with another admin of the group or a hacker...”

The moderator who was forced to leave Ireland replied, “I understand Craig … but this is taking chances. I’m not waiting for a pipe bomb to be mailed to my address until Facebook does something about it.”

Facebook confirmed the security lapse and told the Guardian it had made technical changes to “better detect and prevent these types of issues from occurring.”

“We care deeply about keeping everyone who works for Facebook safe,” a company spokesperson told the news outlet. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”

The report comes after the company announced several initiatives Thursday, including the use of artificial intelligence technology to curb terrorism on its platform.