Twitter, like Facebook and other social media companies, has been under pressure to do more to stifle terrorist activity online. Reuters/Dado Ruvic

This story has been updated with a statement from Facebook.

The free and open networks created by Silicon Valley’s social media companies have led those same companies into a battle with terrorism and violent extremism. While Facebook, Twitter and Google have denounced terrorism — and have vowed to clamp down on terrorist groups that use their sites — they are often roundly criticized when violent events erupt.

In the latest critique, Facebook, Twitter and Google were all named in a lawsuit this week over their alleged roles in fueling the rise of the Islamic State group by allowing it to use their networks for spreading propaganda, raising funds and attracting recruits. The lawsuit’s plaintiff is Reynaldo Gonzalez, the father of Nohemi Gonzalez, an American who was one of 130 victims of the attacks in Paris on Nov. 13, 2015.

The lawsuit, filed Tuesday in U.S. District Court in Northern California, highlights the ways the terrorist organization, also known as ISIS, has used each network for the past few years. For example, ISIS leader Omar Hussain tapped Facebook as a way to recruit members for attacks in the United Kingdom, according to the Mirror. On Twitter, ISIS supporters have tweeted photos of dead soldiers with the hashtag #AMessagefromISIStoUS. And beheading videos have appeared on YouTube, which is owned by Google.

Twitter is already facing a similar lawsuit. In January, the wife of another American killed in the Paris attacks sued the company over the same three alleged forms of support: spreading propaganda, raising funds and attracting recruits. In March, Twitter sought dismissal of that suit, pointing to Section 230 of the Communications Decency Act, which largely shields online services from liability for content posted by their users.

Two women take part in a vigil at London's Trafalgar Square to pay tribute to the victims of the Paris attacks, Nov. 14, 2015. Reuters/Peter Nicholls

The latest lawsuit alleges that Facebook, Twitter and Google are failing to adequately prevent ISIS from using the networks, which the lawsuit claims is a violation of the Patriot Act’s prohibition against "providing material support for terrorism." The platforms the defendants “purposefully, knowingly or with willful blindness provided to ISIS constitute material support to the preparation and carrying out of acts of international terrorism, including the attack in which Nohemi Gonzalez was killed,” the lawsuit reads.

Facebook, Twitter and Google have not sat idly by. As ISIS has grown, the networks have stepped up enforcement: Facebook and Twitter both updated their terms of service in the past year, and Twitter said in a February blog post titled “Combating Violent Extremism” that it had suspended 125,000 ISIS-related accounts over the previous six months or so.

A Google spokesperson defended the company's actions. "Our hearts go out to the victims of terrorism and their families everywhere. While we cannot comment on pending litigation, YouTube has a strong track record of taking swift action against terrorist content. We have clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies," the spokesperson told International Business Times.

A Twitter spokesperson said, “Twitter strongly condemns the ongoing acts of violence for which ISIS claims credit, and our sympathies go out to those impacted by these acts of terror. We have partnered with others in industry, NGOs and governments to find better ways to combat the online manifestations of the larger societal problem at the core of violent extremism. As we stated earlier this year, violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear. We have teams around the world actively investigating reports of rule violations, identifying violating conduct, and working with law enforcement entities when appropriate. We believe this lawsuit is without merit.”

A Facebook spokesperson told IBT: “We extend our deepest sympathy to those affected by terror attacks. There is no place for terrorists or content that promotes or supports terrorism on Facebook, and we work aggressively to remove such content as soon as we become aware of it. Anyone can report terrorist accounts or content to us, and our global team responds to these reports quickly around the clock. If we see evidence of a threat of imminent harm or a terror attack, we reach out to law enforcement. This lawsuit is without merit and we will defend ourselves.”

Each of the companies has written policies against violent and graphic content. Facebook has a section called “Dangerous Organizations” within its Community Standards. Twitter’s standards read, “Users may not make threats of violence or promote violence, including threatening or promoting terrorism.” And YouTube’s terms say, “It's not OK to post violent or gory content that's primarily intended to be shocking, sensational, or disrespectful.”

The lawsuit acknowledges some of the work the networks have done but argues that it is not enough, and Gonzalez isn't alone in that view. Following the Paris attacks, the U.S. government publicly called out the networks and demanded more action. Sen. Dianne Feinstein, D-Calif., said the tech industry should create back doors giving the government ready access to their platforms. In December, the House passed a bill demanding greater strategy and oversight regarding terrorist activity on social media.

Still, Silicon Valley has fought back, arguing in the wake of the Paris attacks that the U.S. would be less safe with back doors. As to the latest lawsuit, the companies could be protected under Section 230 of the Communications Decency Act.