96% of Deepfake Videos Contain Pornographic Material


As tech firms scramble to tackle the spread of deepfake videos online, new research has claimed that 96 per cent of such videos contain pornographic material targeting female celebrities.

The researchers from Deeptrace, a Netherlands-based cybersecurity company, also found that the top four websites dedicated to deepfake pornography had received more than 134 million views on their videos.

“This significant viewership demonstrates a market for websites creating and hosting deepfake pornography, a trend that will continue to grow unless decisive action is taken,” said Giorgio Patrini, Founder, CEO and Chief Scientist at Deeptrace.

“The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable,” he added.


“Deepfakes” are video forgeries that make people appear to be saying things they never did, like the popular forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral recently.

Facebook has partnered with Microsoft, Massachusetts Institute of Technology (MIT) and other institutions to fight ‘deepfakes’ and has committed $10 million towards creating open source tools that can better detect if a video has been doctored.

“Deepfake” techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.

Since its foundation in 2018, Deeptrace has been dedicated to researching deepfakes’ evolving capabilities and threats, providing crucial intelligence for enhancing its detection technology.


The research revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months to 14,678.

This increase is supported by the growing commodification of tools and services that lower the barrier for non-experts to create deepfakes.

“Perhaps unsurprisingly, we observed a significant contribution to the creation and use of synthetic media tools from web users in China and South Korea, despite the totality of our sources coming from the English-speaking Internet,” Patrini said in a statement.

Deepfakes are also making a significant impact on the political sphere.


“Outside of politics, the weaponization of deepfakes and synthetic media is influencing the cybersecurity landscape, enhancing traditional cyber threats and enabling entirely new attack vectors,” said the company.

To fight the growing menace, Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, University of California-Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).


According to Professor Rama Chellappa from the University of Maryland, “given the recent developments in being able to generate manipulated information (text, images, videos, and audio) at scale, we need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill effects of manipulated multimedia”. (IANS)