Theresa May, the Prime Minister of the United Kingdom, used the annual G7 meeting to push other industrialized countries to do more to fight online extremism, including developing strategies to push social networks to remove extremist content.

May chaired a counter-terrorism session held at the G7 summit Friday in Sicily, where she pressed the United States, Canada, Germany, France, Italy and Japan to home in on the use of social media to broadcast extremist beliefs.


May wants her counterparts in other advanced economies to fight back harder against the Islamic State’s online presence, including banding together to enforce stricter rules on social media companies whose platforms have been used to spread extremist ideologies.

According to the Guardian, May’s primary efforts revolve around making social networks develop tools to automatically identify and remove potentially harmful material based on what the post contains and who posted it.

She is also hoping to push tech companies to alert law enforcement when harmful material is posted on their platforms so that action can be taken against the posters. The effort would also include creating industry guidelines that clearly define harmful material, giving companies an unambiguous line between what is acceptable and what should be removed.

The group of nations agreed during the meeting that the threat from the Islamic State is “evolving rather than disappearing,” according to May. In response, the group signed a document saying internet companies should do more to aid in the fight against extremist material.

"We showed our united commitment and our determination to continue and to strengthen our fight against terrorism," Italian Prime Minister Paolo Gentiloni said after the meeting.


The push for more action from internet companies comes just days after a terrorist attack on UK soil, in which a suicide bomber blew himself up at an Ariana Grande concert in Manchester, killing 22 people and injuring more than 100 others.

May and other UK leaders have been aggressively pushing tech companies to take more action to help law enforcement fight terrorism. Following an attack in Westminster earlier this year, Home Secretary Amber Rudd took aim at WhatsApp after it was discovered the attacker possessed a smartphone and was apparently communicating with others through the encrypted messaging service.

Rudd called for platforms like WhatsApp, which is owned by Facebook, to cooperate in investigations in cases like the Westminster attack, and suggested encrypted communications allow terrorists to hide their conversations from law enforcement.

Former U.K. Prime Minister David Cameron also suggested, following the Charlie Hebdo shooting in 2015, blocking encrypted messaging services unless they provided a government backdoor.

In a statement to TechCrunch, Facebook Head of Global Policy Management Monika Bickert said, “We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if there is an emergency involving imminent harm to someone’s safety, we notify law enforcement.”

Twitter has also taken steps in recent years to more actively fight the spread of extremist material on its platform. The company reported that it suspended a total of 376,890 accounts between July 1 and Dec. 31, 2016, with 74 percent of those suspensions surfaced by its internal automated tools rather than user reports.