Facebook is aiming to stamp out revenge porn on its platform with a new test feature, though the approach may not make potential victims feel safer: the company is asking users to upload the illicit images themselves, in advance, so its system can quickly flag them if they are later posted.

The new system is currently being tested in Australia, though the company intends to expand it to other countries in the future if the experiment goes well.

Facebook’s new system is modeled after detection systems designed to identify and prevent the spread of child pornography—systems used by the social network and other major tech firms, including Google, Twitter, and Instagram.

The system’s first step may be a bit of a hurdle for potential victims, as it requires them to upload images of themselves that they wouldn’t want shared to the platform before their abuser can do so.

Those images are uploaded by sending the photo to oneself through Facebook Messenger. Once the photo is uploaded, the user can report it, and Facebook will create a hash of the image—a unique digital fingerprint that can be used to identify the file without storing the file itself.

When an abuser attempts to upload an explicit image of a victim to the platform, the upload will produce the same hash as the one already stored in Facebook’s database. When a match is found, Facebook will block the image from being uploaded.
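The hash-and-match flow can be sketched with a few lines of Python. This is a minimal illustration, not Facebook's actual implementation—Facebook has not published its algorithm, and production systems typically use perceptual hashes (which survive resizing and re-encoding) rather than the plain cryptographic hash used here for simplicity.

```python
import hashlib

# In-memory stand-in for Facebook's database of reported-image hashes.
# Only the hashes are stored, never the photos themselves.
reported_hashes = set()

def image_hash(image_bytes: bytes) -> str:
    """Fingerprint the raw image bytes. A real system would use a
    perceptual hash so slightly altered copies still match."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """A victim reports a photo: store its hash, discard the photo."""
    reported_hashes.add(image_hash(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Block any upload whose hash matches a reported image."""
    return image_hash(image_bytes) not in reported_hashes

# Example: a reported photo is blocked when re-uploaded.
photo = b"raw bytes of the reported photo"
report_image(photo)
print(allow_upload(photo))         # False: exact copy is blocked
print(allow_upload(b"other pic"))  # True: unrelated image passes
```

Note the trade-off this sketch glosses over: an exact cryptographic hash changes completely if even one pixel is altered, which is why real-world matching systems rely on perceptual fingerprints instead.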

The block extends beyond posts made directly to the site: Facebook can also stop the abuser from sharing the image privately with the victim or with others through Facebook Messenger.

The test feature is made possible by another change the social network put in place earlier this year, which allowed users to begin reporting images that may constitute revenge porn.

While the system is an innovative way to prevent potential victims from being exposed by an abusive former partner, it is also sure to give victims pause before participating. Facebook assures users that it does not store any copies of reported photos; the hashing system allows the social network to identify matching images while retaining only the fingerprints, not the images themselves.

Facebook also isn’t acting alone in implementing the new feature. In Australia, the tech giant has enlisted the Australian government’s Office of the eSafety Commissioner to help guide people through the process.

Facebook confirmed to the Australian Broadcasting Corporation (ABC) that the feature is being tested in three countries outside Australia, with plans to expand it further if it proves successful in protecting victims from abuse and unwanted exposure.

Facebook is just the latest tech company to take aim at the issue of revenge porn. In 2015, Google and Microsoft began cracking down on the problem in search results, which would often surface revenge porn photos of a person when their name was searched.