People holding mobile phones are silhouetted against a backdrop projected with the Twitter logo in this illustration picture taken on Sept. 27, 2013. Reuters

Are some of the most-followed accounts on Twitter actually not run by humans but bots? Either way, there isn’t much difference in the way they would operate, researchers from the University of Cambridge in the United Kingdom have found.

Three researchers from the university analyzed thousands of accounts on the micro-blogging platform and found that "celebrity" accounts, those with more than 10 million followers, behave much the same way an account run by a bot would. The rate at which these celebrity accounts retweet others' tweets is similar to that of bots, and even the pace at which the two categories of accounts post original tweets was found to be similar.

Read: How Many Followers Of The Most Popular Twitter Accounts Are Actually Fake?

This was in sharp contrast to Twitter accounts with smaller followings, where a large gap opened up between the behavior of human accounts and that of bots. Bot accounts on Twitter generate a lot more content than the average human user and also retweet more often, behavior that is in line with that of human Twitter accounts with more than 10 million followers.

“Tweets by human accounts receive on average 19 times more likes and 10 times more retweets than tweets by bot accounts. Bots also spend less time liking other users’ tweets,” the researchers said in a statement Tuesday.

Zafar Gilani, a Ph.D. student at the university and leader of the research, explained why bots behave the way they do.

“We think this is probably because bots aren’t that good at creating original Twitter content, so they rely a lot more on retweets and redirecting followers to external websites. While bots are getting more sophisticated all the time, they’re still pretty bad at one-on-one Twitter conversations, for instance – most of the time, a conversation with a bot will be mostly gibberish,” he said in the statement.

Bots are software programs designed to carry out specific tasks over the internet, such as crawling websites, posting content on social media or spamming forums. The researchers cited an estimate that between 40 and 60 percent of all Twitter accounts are bots, and specifically mentioned accounts such as those of BBC and CNN as being automated. They also found that while some bot accounts on Twitter have tens of millions of followers, the overall distribution was much like that of human Twitter users, with the majority of bot accounts having fewer than a thousand followers.

Gilani and his colleagues were interested in seeing how effectively these bots could be detected and what impact these bots have.

“A Twitter user can be a human and still be a spammer, and an account can be operated by a bot and still be benign,” Gilani said.

Twitter's @support bot in action. BuzzFeed News

Read: This Twitter Bot Translates Trump’s Tweets Into “Official” White House Statements

For their detection, the researchers first used an online tool called BotOMeter (earlier called BotOrNot) but were not happy with the results. They next recruited four undergraduate students to manually determine the human/bot status of over 3,500 accounts. Using characteristics such as account creation date, average tweet frequency, content posted, account description, whether the user replies to tweets, likes or favorites received and the follower to friend ratio, the students classified the accounts.

The students' assessments often contradicted the findings of the BotOMeter tool but matched the ground truth, leading the researchers to develop a bot detection algorithm trained on the dataset the four students had labeled.

Their algorithm used 21 different features to detect bots and had an 86 percent accuracy rate, the researchers claimed in the statement. Along with Ekaterina Kochmar and Jon Crowcroft, Gilani presented a paper on the subject, titled "Classification of Twitter Accounts into Automated Agents and Human Users," at the ongoing 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining in Sydney.
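To give a sense of how feature-based detection of this kind can work, here is a minimal sketch in Python. It is not the researchers' actual 21-feature algorithm; the features are a handful of the behavioral signals the study describes (tweet frequency, retweet share, likes received, likes given), and every threshold and weight below is invented purely for illustration.

```python
# Hypothetical sketch of a feature-based bot scorer. The features mirror
# signals mentioned in the study; all thresholds/weights are invented.

def bot_likelihood(account: dict) -> float:
    """Return a score between 0.0 and 1.0; higher means more bot-like."""
    points = 0
    # Bots were found to generate more content than the average human user.
    if account["tweets_per_day"] > 50:
        points += 3
    # Bots rely more heavily on retweets than on original content.
    if account["retweet_ratio"] > 0.6:  # share of activity that is retweets
        points += 3
    # Human tweets received ~19x more likes in the study, so very few likes
    # received per tweet counts as a bot-like signal here.
    if account["avg_likes_per_tweet"] < 1:
        points += 2
    # Bots spend less time liking other users' tweets.
    if account["likes_given_per_day"] < 1:
        points += 2
    return points / 10.0

suspect = {"tweets_per_day": 120, "retweet_ratio": 0.8,
           "avg_likes_per_tweet": 0.2, "likes_given_per_day": 0}
typical = {"tweets_per_day": 3, "retweet_ratio": 0.2,
           "avg_likes_per_tweet": 15, "likes_given_per_day": 10}

print(bot_likelihood(suspect))  # 1.0
print(bot_likelihood(typical))  # 0.0
```

In practice, a classifier like the one in the paper would learn its decision boundaries from labeled data (here, the 3,500 manually classified accounts) rather than use hand-picked thresholds, and as Gilani notes, a high score alone would not distinguish a malicious bot from a benign one.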

“Many people tend to think that bots are nefarious or evil, but that’s not true,” Gilani said. “They can be anything, just like a person. Some of them aren’t exactly legal or moral, but many of them are completely harmless. What I’m doing next is modeling the social cost of these bots – how are they changing the nature and quality of conversations online? What is clear though, is that bots are here to stay.”