Last week's shooting in San Bernardino, California, has put a long-simmering debate back on the front burner: Just how closely should social networks work with law enforcement to stop the spread of terrorist messages?
This question became relevant once again last week with the revelation that Tashfeen Malik posted about her allegiance to the Islamic State group on Facebook -- a post that was taken down for violating Facebook’s rule against celebrating or supporting terrorism.
Both President Barack Obama and 2016 Democratic front-runner Hillary Clinton broadly called on tech companies to do more to combat terrorism and destroy ISIS. “I will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice,” Obama said.
Usually, that’s code for so-called backdoors -- direct access to user activity -- which virtually all of Silicon Valley vehemently opposes as infeasible: any backdoor that lets law enforcement bypass encryption would also be open to hackers.
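The core of that objection can be illustrated with a toy sketch (this is deliberately not real cryptography, and the key names are invented for illustration): if one escrowed key can decrypt every message, then anyone who obtains that key -- lawfully or not -- can read all the traffic.

```python
# Toy illustration only (XOR is NOT secure encryption): the point is
# that a single escrowed "backdoor" key decrypts everything, so a
# hacker who steals it reads traffic exactly as law enforcement would.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the key (applied twice, it
    # restores the original -- encryption and decryption are the same).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

backdoor_key = b"escrowed-master-key"   # hypothetical key held in escrow
ciphertext = xor_crypt(b"private message", backdoor_key)

# A thief who exfiltrates the escrow key decrypts just as easily as
# the agency the backdoor was built for:
stolen_key = backdoor_key
print(xor_crypt(ciphertext, stolen_key))  # b'private message'
```

The asymmetry Silicon Valley points to is that the defender must protect the escrow key forever, while an attacker only needs to steal it once.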
While Obama and Clinton didn’t mention imposing any backdoor policies, their comments increase the pressure on social networks that have traditionally put user privacy first. “The question is how will companies respond to that pressure,” said Evan Kohlmann of Flashpoint Global Partners, a New York-based security consultancy.
Facebook -- one of the dozens of tech companies that came out against backdoor policies and in support of strong encryption in a letter to Obama in May -- has acknowledged that the conversation around terrorism has changed and that it is willing to listen and to continue to act.
“We share the government's goal of keeping terrorist content off our site. Facebook has zero tolerance for terrorists, terror propaganda, or the praising of terror activity and we work aggressively to remove it as soon as we become aware of it,” a Facebook spokesperson told International Business Times on Monday.
“This is an ever-evolving landscape, and we will continue to engage regularly with NGOs, industry partners, academics and government officials on how to keep Facebook and other Internet services free of terrorists or terror-promoting material,” the spokesperson said.
Facebook may have taken down Malik’s post on Friday, but there are rising concerns that prior to the shooting, she may have used Facebook -- or other online portals -- to share extremist messages. Malik “used to talk to somebody in Arabic at night on the Internet. None of our family members in Pakistan know Arabic, so we do not know what she used to discuss,” according to family members who spoke anonymously to the New York Daily News.
Facebook takes down content after it is flagged -- whether by a user or by law enforcement -- for violating its community standards. It’s inherently a human process, supported by algorithms. Without a report from a user or a request from the government, a company like Facebook would not act.
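That flag-and-review flow can be sketched roughly as follows. To be clear, this is an illustrative model, not Facebook's actual system; the class and method names are invented, and the key property it shows is that nothing is reviewed until someone files a report.

```python
from collections import deque

class ModerationQueue:
    """Illustrative sketch of report-driven moderation: content enters
    the review queue only when a user or law enforcement flags it."""

    def __init__(self):
        self.queue = deque()

    def flag(self, post_id, source):
        # source might be "user" or "law_enforcement"; unflagged posts
        # never reach a reviewer at all.
        self.queue.append((post_id, source))

    def review(self, violates_standards):
        # A human reviewer makes the call; algorithms only assist triage.
        if not self.queue:
            return None
        post_id, source = self.queue.popleft()
        action = "remove" if violates_standards(post_id) else "keep"
        return post_id, action

q = ModerationQueue()
q.flag("post-123", "user")
print(q.review(lambda pid: True))  # ('post-123', 'remove')
```

The design choice the sketch highlights is reactive rather than proactive: without a flag, the queue is empty and `review` does nothing, which is exactly the limitation critics point to.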
Backdoors, weakened encryption and transparency about how Facebook identified Malik’s post could all create new vulnerabilities. “If they knew what magic sauce went into pushing content into the News Feed, spammers or whomever would take advantage of that,” a security expert who has worked at Facebook and Twitter told Reuters.
The social network does respond to government requests and in that sense already collaborates with law enforcement. Many tech companies, including Facebook, AT&T, Apple, Google, Tumblr, Twitter, Verizon and Yahoo, have published transparency reports since 2013 disclosing the number of government requests they receive.
Twitter embraced a more aggressive role by shutting down violent content in 2014 after a video of journalist James Foley being beheaded by the Islamic State was released. "Before it was, 'You flag what you think is offensive.' All of the sudden that started changing. I think it was in the wake of what happened with Charlie Hebdo last January. Freedom of speech was under assault on a different side," said Kohlmann.
Some tech leaders have heralded their social networks as channels for free speech that connect the world and allow for open communication. Yet, with time, some have stepped back on such grandiose ideals as they work to monitor and shut down extremists.
For example, Telegram, an encrypted messaging service, previously put users’ privacy ahead of the terrorism issue. “I think that privacy, ultimately, and our right for privacy is more important than our fear of bad things happening, like terrorism,” Telegram founder Pavel Durov said at the TechCrunch Disrupt conference in September.
Following the attacks in Paris, Telegram publicly announced that the company had blocked 78 ISIS-related channels. Texas Rep. Mike McCaul, who is also the chairman of the House Homeland Security Committee, said that the attackers in Paris had used Telegram. He also announced his intention to introduce legislation creating a new panel to address terrorism, according to The Hill.
Other networks, including Facebook and Twitter, have repeatedly been open about their willingness to shut down accounts. For instance, a Twitter representative confirmed to the New York Times in April that the company had shut down 10,000 accounts in a single day “for tweeting violent threats,” and the company had previously said it suspended 2,000 ISIS-linked accounts per week.
Government and law enforcement officials have asked for more help. Clinton specifically called upon Facebook, Twitter and YouTube to step up. “If you look at the story about this woman and maybe the man, too, who got radicalized, self-radicalized, we’re going to need help from Facebook, and from YouTube and from Twitter," she told ABC Sunday.
Just because Facebook, Twitter and YouTube collaborate does not mean that online communication for terrorists, including planning attacks and spreading propaganda, will stop. “Do you think they picked up Telegram and thought this was the best? They had no problem whatsoever moving there,” said Kohlmann.
In his speech Sunday, Obama noted that terrorism is an international issue and that America’s strategy is global. Indeed, the “strategy to destroy ISIS” involves an American-led coalition of 65 countries. That strategy clearly involves technology on a global scale.
Last week, representatives from Facebook, Google, Microsoft, Twitter and Ask.fm attended the EU Internet Forum in Brussels. “Terrorists are abusing the Internet to spread their poisonous propaganda: that needs to stop,” EU Home Affairs Commissioner Dimitris Avramopoulos said.
While Clinton called out American companies, there are also several international networks that have millions of users and have previously taken action. For instance, WeChat and Weibo attract millions of Chinese users in the United States and abroad. “It’s an essential part of the lives of young Chinese millennials,” said Betsy Page Sigman, a professor at Georgetown University’s McDonough School of Business.
The Chinese government and China-based tech companies have worked together to block terrorist-related content. Last year, the Chinese government also moved to stop the spread of political rumors on Weibo.
Users, however, seemed unfazed by the implications for privacy or personal freedoms. Sigman, after conversations with 30 active WeChat users, said, “I don’t think they were overly concerned about it. They know that the government is that way."