Twitter has suspended nearly 300,000 accounts that have been linked to terrorism in the first six months of the year, according to the social network’s biannual transparency report published Tuesday.

According to the company, the vast majority of the suspensions—about 95 percent—were handed out after the accounts were identified by its internal tools, highlighting the company’s improving capabilities for fighting back against extremist organizations using the service for propaganda.

The company’s previous report, covering the second half of 2016, indicated that 74 percent of suspended accounts were caught by automated systems at that time.

Just two percent of the accounts deemed to have a link to terrorist organizations were suspended after being reported by government organizations around the world. In total, Twitter received 716 separate reports from governments, which resulted in 5,929 accounts being suspended. That marked an 80 percent reduction in government-reported accounts compared with the previous reporting period.

Twitter also reported that three-quarters of all the suspended accounts were caught before they were able to post their first tweet, effectively stifling the spread of extremist content before it could start.

In a blog post detailing the transparency report, Twitter said that it blocked accounts believed to "actively incite or promote violence associated with internationally recognized terrorist organizations, promote internationally recognized terrorist organizations, and accounts attempting to evade prior enforcement."

The massive purge of accounts with terrorist links is just the latest in Twitter’s ongoing attempts to curb the presence of such organizations on its platform. In previous reports, the social network said it removed 235,000 accounts in the first half of 2016 and 125,000 in the second half of 2015.

The number of suspended accounts has continued to rise relatively steadily in each report, save for a massive purge of 636,000 accounts in the second half of 2016. The trend suggests both that Twitter is improving its ability to fight back against extremists on the platform and that terrorist organizations are still making significant attempts to leverage Twitter for their own purposes.

Twitter, along with Facebook and Google, has come under fire in the past for allowing extremist content to exist and spread on its platform. Politicians in Australia, the United Kingdom and the United States have all made public calls for more action to be taken to police terrorist content on social media.

Some prominent world leaders, including British prime minister Theresa May and French president Emmanuel Macron, have suggested fines and other punishments should be levied against tech companies that fail to remove terrorist content in order to prevent the platforms from becoming “safe spaces” for extremist organizations.

In response to some of the calls for stricter content restrictions, tech companies including Twitter, Facebook, Microsoft and YouTube have agreed to create a shared database of digital fingerprints of extremist material removed from their services in order to help identify the content and make it easier to remove.