Twitter is considering adding a feature to flag "fake news," sources say. Getty Images

Twitter is looking to take on “fake news” more actively by adding a feature that would let users flag tweets they deem misleading or harmful, though it’s not clear who would ultimately verify the flagged claims.

The feature, still in a prototype phase and one that may never be rolled out across the platform, would be the company’s most aggressive tactic yet against fabricated news stories, fake accounts and extremist recruiting on the service. The Washington Post reports that, according to two people familiar with the company’s plans, the feature would appear as a small tab or button in a drop-down menu alongside tweets.

The third-party service Twitter Audit reports that 59 percent of President Trump’s followers are bots or fake accounts, and that 66 percent of former presidential candidate Hillary Clinton’s following is likely fake as well.

Twitter spokeswoman Emily Horne told the Post the company has “no current plans to launch” the anti-fake-news feature, but declined to say whether it is testing the program. Two anonymous sources told the Post the company is still researching how to implement the function, out of concern that people could game the system to censor legitimate information shared on Twitter.

One source said Twitter’s crowdsourcing feature for labeling fake or malicious content is still in the design phase and would need to be tested on a much smaller user base before being launched across the platform.

Twitter’s Vice President of Policy, Colin Crowell, wrote in a blog post earlier this month that the company is “working hard to detect spammy behaviors…We’ve been doubling down on our efforts.”

The “fake news” flagging tab is part of a broader push by Facebook and Twitter to curb fraudulent or harmful content on their platforms. The term itself has become ubiquitous in the wake of the 2016 presidential election, with President Trump himself continuing to tweet about “fake news” as recently as yesterday.

Facebook has already turned to crowdsourcing in its fight against fake news. The company introduced a feature that lets users flag content they deem false; if a story draws enough “disputes,” it is sent to Facebook’s independent fact-checking partners.

Twitter has already put development effort into software that attempts to connect suspicious micro-signals, the Post reports. One example is flagging an account that sends out a hefty amount of political content in English but posts from an IP address in a foreign country. High-profile retweets – from mainstream media figures, verified accounts or political officials – could also be used to gauge the legitimacy of an account’s content.
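The Post’s description suggests a simple heuristic that combines weak signals into an overall suspicion score. The sketch below is purely illustrative: Twitter has not published how its detection software works, and the signal names, thresholds and weights here are invented for the sake of example.

```python
# Hypothetical sketch of signal-based account scoring; not Twitter's actual system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    political_tweets_per_day: float      # volume of political content
    content_language: str                # e.g. "en"
    ip_country: str                      # country inferred from the posting IP
    retweets_by_verified_or_media: int   # high-profile retweets of the account

def suspicion_score(signals: AccountSignals) -> float:
    """Return a rough 0-1 score; higher means more bot-like. Weights are invented."""
    score = 0.0
    # Heavy English-language political posting from an IP outside English-speaking countries
    if signals.content_language == "en" and signals.ip_country not in {"US", "GB", "CA", "AU"}:
        score += 0.4
    if signals.political_tweets_per_day > 50:
        score += 0.4
    # Endorsement by mainstream media, verified accounts or officials lowers suspicion
    if signals.retweets_by_verified_or_media > 10:
        score -= 0.3
    return max(0.0, min(1.0, score))

# Example: a high-volume English-language political account posting from abroad
account = AccountSignals(political_tweets_per_day=120, content_language="en",
                         ip_country="RU", retweets_by_verified_or_media=0)
print(suspicion_score(account))  # 0.8
```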

“We, as a company, should not be the arbiter of truth,” Crowell wrote in the blog post earlier this month. “Journalists, experts, and engaged citizens” should be tasked with correcting public information, he added.

An Ipsos poll conducted with Buzzfeed in December found that 75 percent of U.S. adults who were familiar with a fake news headline still viewed the story as accurate. A Harvard-Harris poll conducted in May found that 65 percent of voters say the mainstream media is littered with fake news. And a Pew Research Center survey last year found that 62 percent of American adults get their daily news from social media.

Twitter has more than 300 million monthly users, while Facebook said earlier this week that it has hit 2 billion.