Photo: A man holds an iPad with a Facebook application in an office building at the Pudong financial district in Shanghai, Sept. 25, 2013. Reuters/Carlos Barri

After the presidential election in November, Facebook CEO Mark Zuckerberg said his company was committed to fighting fake news. But the fact-checkers enlisted by the company claim it is withholding information that could help them do their jobs more efficiently, according to a report.

The company has declined to share the internal data that would help fact-checkers determine which stories are fake news and should carry a “disputed” tag. The fact-checkers also lack the data to decide which of the hundreds of stories emerging on the social network at any given moment they should prioritize, according to the report published Thursday by Politico.

“I would say that the general lack of information — not only data — given by Facebook is a concern for a majority of publishers,” Adrien Sénécat, a journalist at Le Monde, one of the news organizations that has partnered with Facebook to fact-check stories, told Politico in an emailed response.

Facebook executives, on the other hand, cite “privacy concerns” as the reason for withholding raw data from “outsiders.” The social media platform came under fire after the November election for the spread of fake news. While Zuckerberg initially downplayed the criticism, the company later announced a slew of measures to curb fake news.

According to Brendan Nyhan, a political science professor at Dartmouth College, the social network was the “key vector of misinformation” during the 2016 presidential election.

The social networking site is walking a tightrope. Since it is not a news website, and much of the news content on the platform is posted by users, the standards for monitoring that content have to be very different.

“We’ve seen overall that false news has decreased on Facebook. It’s hard for us to measure because we can’t read everything that gets posted,” Adam Mosseri, the company’s vice president for News Feed, said in April.

The fact-checking mechanism can work efficiently only if fact-checkers get internal data on the visibility of the stories they flag. That data would help them determine whether the fact-checking process was effective, and whether similar stories started emerging on the social network after a story was flagged.

Moreover, a single fact check can take four to five hours, so prioritizing stories is important for the fact-checkers. “There’s 1,200, 1,500 stories that we could look at today, and we’re going to look at two,” Aaron Sharockman, executive director of fact-checking organization PolitiFact, said while explaining how real-time fact checking works.

Simply put, it is difficult for fact-checkers to do their work accurately without engagement numbers for flagged content. “This is going to sound super corny, but fact-checkers don’t really take anything at face value. You need to support with evidence,” Alexios Mantzarlis, the director of Poynter’s International Fact-Checking Network, told Politico.

Facebook product manager Sara Su claimed the current fact-checking mechanism is reducing the presence of fake news on the platform. She noted, however, that the company is working with fact-checkers to refine the tools. “I wish I could give you dates, but we are committed to working with our fact-checking partners to continue to refine the tools to be more efficient,” she said.

Experts such as Mantzarlis are hopeful the situation will improve by the end of the year. “I think this is important to people within Facebook, so I think they will share information,” he said.