KEY POINTS

  • Photos of more than 680,000 women were used to create realistic fake nude images without their consent
  • Some 104,852 images of women were posted publicly to the Telegram app
  • 70% of the photos came from social media or private sources

An artificial intelligence bot was used to create more than 100,000 fake nude photos of women -- some of them underage -- without their consent; the images were then uploaded to the messaging app Telegram, The Washington Post reported.

Sensity, a visual threat intelligence company headquartered in Amsterdam, discovered the Telegram network. The network has 101,080 members, and 70% of them reside in Russia or Europe.

In all, 104,852 images derived from pictures of more than 680,000 women were posted publicly to the app, with 70% of the photos coming from social media or private sources. A small number of the victims appeared to be underage.

Even more concerning, the Telegram bot doesn’t need many images to work: a single photo will do.

Giorgio Patrini, Sensity's chief executive, said the chatbot signals a dark shift in how the technology is used, from faking images of celebrities and well-known figures to targeting unsuspecting women far from the public eye.

“The fact is that now every one of us, just by having a social media account and posting photos of ourselves and our lives publicly -- we are under threat,” Patrini told the Post. “Simply having an online persona makes us vulnerable to this kind of attack.”

“In the [entertainment] industry, at least, it is a known problem to some extent, but I really struggle to believe that at the level of private citizens it’s known by anyone,” he added.

Sensity said Telegram and “relevant law enforcement authorities” had been notified of its findings.

According to Sensity, seven Telegram channels using the bot had attracted a combined 103,585 members by the end of July. The most popular channel where the fake photos were found had 45,615 members.

Deepfakes -- manipulated videos and other distorted content produced by artificial intelligence that appear to be real -- are becoming harder to stop and are potentially dangerous.

Large corporations such as Facebook and Microsoft have launched initiatives to detect and remove deepfake content. The two companies announced earlier this year that they would collaborate with top universities across the U.S. to create a large database of fake videos for research, Reuters reported.

While companies are attempting to combat fraudulent content, John Villasenor, a senior fellow at the Brookings Institution, said detecting deepfakes is getting harder as the technology becomes more advanced and the content looks more realistic.

Villasenor warned that detection techniques “often lag behind the most advanced creation methods.” He said he fears the question in the future will be: “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?”