In the wake of multiple terror attacks over the course of the last six months, four of the biggest names in technology have signed up to a European code of conduct, which will see them commit to remove the “majority” of hate speech on their networks within 24 hours of being notified.

The commitment, made by Facebook, Twitter, Microsoft and YouTube, is part of the European Union’s push to combat illegal hate speech in the wake of the terror attacks on Brussels earlier this year and the attacks on Paris in November last year.

As well as committing to having “clear and effective processes to review notifications,” the companies have also pledged to “review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.”

While companies like Twitter and Facebook already have many of the provisions mentioned in the code of conduct in place, the volume of content they have to deal with is growing rapidly.

“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” Vĕra Jourová, EU commissioner for justice, said in a statement. “Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred.”

The four companies have also pledged to make clear in their terms of service and community guidelines that hate speech is not tolerated, and to provide adequate reporting tools that make it easy for users to flag any abuse of these guidelines.

The new code also allows companies to establish networks of “trusted reporters” who will be trained to know what is and isn’t permitted by each company’s community guidelines. The EU is encouraging the companies to create a network of these reporters in as many countries as possible by working with civil society organizations to identify individuals who could fill the roles.

In response to the rise in hate speech, Twitter has quietly updated its rules to ban such content, and in February the company announced it had suspended over 125,000 accounts for “threatening or promoting terrorist acts, primarily related to ISIS” in the previous six months.

“Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society,” Twitter’s head of public policy for Europe, Karen White, said.