The Twitter Inc. logo is shown with the U.S. flag during the company's IPO on the floor of the New York Stock Exchange, Nov. 7, 2013. REUTERS/Lucas Jackson

Polarizing “junk news” and conspiratorial content inundated Twitter during last year’s presidential election, especially in swing states, according to a recent Oxford University study.

The study’s findings come as Twitter executives prepare to brief House and Senate officials about how Russian accounts on the platform played a role in the 2016 election. The briefing follows Facebook’s meeting with Congress earlier this month.

Researchers from Oxford’s Computational Propaganda Project said Twitter users received “misinformation, polarizing and conspiratorial content” through the platform. Junk news includes fake, hyper-partisan or emotionally charged news content. The researchers said the content used to polarize voters on Twitter included divisive and inflammatory rhetoric, as well as faulty reasoning and misleading information.

Researchers looked at tweets that showed evidence of physical location and used political hashtags, collected from Nov. 1 to Nov. 11, 2016. They focused on conversations on the platform during two key moments: the presidential debates and the 10 days before the election.

Swing States Most Affected By Junk News

The amount of misinformation was higher in swing states than in uncontested states, regardless of the size of each state’s user population. President Donald Trump went on to win several of the swing states that were flooded with junk news.

Of the 16 swing states, 11 had levels of junk news higher than the national average. Arizona had the highest concentration of junk news, followed by Missouri, Nevada and Florida. The swing states with the highest levels of polarizing junk news were those with large numbers of Electoral College votes, such as Florida, Arizona and Missouri.

Nationally, the ratio of professionally produced news to junk news was one to one. However, the level of content meant to polarize populations in swing states, coming from sources such as Russian outlets and WikiLeaks, was higher than the national average.

“Adding in content from Russia Today and unverified WikiLeaks rumours means that a really large portion of the political news and information being shared over social media was misleading,” said senior researcher Philip Howard in a statement.

The study comes after U.S. officials concluded Russia used social media platforms to meddle in the presidential election.

“Social media is increasingly becoming a centre of attention of public life,” said study co-author Lisa-Maria Neudert in a statement. “Worldwide political actors and governments have been deploying a combination of algorithms and propaganda – computational propaganda – to manipulate opinion during pivotal moments of public life such as elections and referenda.”

Facebook And Fake Russian Accounts

Twitter isn't the only platform affected by Russian content. Last week, Facebook CEO Mark Zuckerberg announced the company will hand over to Congress 3,000 political ads traced back to Russian accounts. In a briefing with Congress earlier this month, Facebook said it found about $100,000 in ad spending connected to those ads between June 2015 and May 2017. The propaganda was traced back to nearly 500 inauthentic accounts and pages.

Facebook previously handed over the information it discovered to Special Counsel Robert Mueller under a search warrant. After pressure and complaints that it was not fully cooperating with lawmakers, Facebook decided to give the data to Congress as well.

The social media company said it’s possible it could find more Russian ads from fake accounts and pages.

“Using ads and other messaging to affect political discourse has become a common part of the cybersecurity arsenal for organized, advanced actors,” the company said in a post. “This means all online platforms will need to address this issue, and get smarter about how to address it, now and in the future.”