Charlottesville
A protester wears a sign reading "Fight white supremacy" at a protest against white nationalists in New York City, the day after the attack on counter-protesters at the "Unite the Right" rally organized by white nationalists in Charlottesville, Virginia, Aug. 13, 2017. Reuters/Joe Penney

Tech companies have rallied together to take stringent action against white supremacist groups in the wake of the Charlottesville violence that left a woman dead.

While Facebook has taken down the “Unite The Right” page, Google and GoDaddy have de-registered the domain of the racist site Daily Stormer.

Even Apple got in on the act, cutting off Apple Pay support for websites that sell white supremacist merchandise. Apple CEO Tim Cook also publicly voiced his disagreement with President Donald Trump’s ambivalent stance of assigning blame to both sides.

“I disagree with the president and others who believe that there is a moral equivalence between white supremacists and Nazis, and those who oppose them by standing up for human rights,” Cook wrote in an open letter.

But this is not the first time tech companies have stood up against what we can call cyber evils — the use of technology for nefarious purposes.

There have been many examples in the past. Cyberbullying, for instance, has been a concern for far too long, yet such incidents have not stopped: just this June, 12-year-old Mallory Grossman killed herself after being cyberbullied.

A bigger threat emerged with the rise of the Islamic State group (ISIS), which used technology and social media not only to publicize itself, but also to spread fear and target victims.

Yet, in 2017, we are still debating whether a Facebook post of an ISIS image is legal.

The fact remains that despite tech companies’ recent proactive steps, racist content is still quite easy to find online, even though it generally violates the terms of use of the social networks and tech companies that host it.

According to a survey by the Pew Research Center, 60 percent of users have witnessed someone being called offensive names online, while 25 percent have seen someone being physically threatened. In addition, 24 percent have witnessed someone being harassed for a sustained period of time.

White supremacist memes and comments, like the one by a Massachusetts police officer attacking the victims, are widely available. The truth is that although tech companies have begun cracking down on such content, they are fighting a losing battle.

With the massive amounts of data that social networks receive, it is almost impossible to monitor all content spewing hate and inciting violence. Facebook, for example, has close to 2 billion monthly users, while YouTube has 1.5 billion. Even if each user posted a status update or a video comment just once a month, Facebook would deal with 24 billion posts a year, while YouTube would have 18 billion comments to filter. And the actual numbers are much bigger.

Even if social networks and tech companies like Apple were hypothetically able to monitor all such content, there are still concerns about freedom of speech and censorship. Social networks are generally expected to be open spaces with a high level of freedom of expression.

“If an internet service provider wants to deny me the opportunity to read offensive speech like the Daily Stormer, that’s a problem," said Nate Cardozo, senior staff attorney at the Electronic Frontier Foundation. "There doesn’t seem to be any suggestion the Daily Stormer was hosting illegal content, just terrible content. We should be able to see it and reject it."

If tech companies were to crack down hard on such content, they would probably rely on bots to do it. The problem with bots, or any other automated supervisory mechanism, is that they cannot reliably distinguish hateful posts from posts condemning such acts, or from satire.

But with the violence and deaths, pressure is mounting on tech companies. In April, Germany proposed fines of up to 50 million euros on tech firms that fail to remove hateful content from their platforms, while the Anti-Defamation League, a group that fights cyber hate, has announced plans to set up a center in Silicon Valley.

Many organizations believe the current crackdown may not be enough, as such offensive content tends to resurface in different forms, as has happened before. The Daily Stormer, for example, attempted a comeback on a Russian domain, which was also shut down, but it could yet return.

“Racism and bigotry will not be eradicated if we merely force them underground,” said Anthony Romero of the American Civil Liberties Union. And that is the challenge that the tech industry faces, and it seems to have no viable answers.

While the companies clearly need to stem the flow of hate, there seems to be no technology or framework available that would let them do so without raising fears of censorship or running into the practical limits of monitoring.

It is a problem they will continue to grapple with, until the next Charlottesville.