Deepfakes are becoming more convincing because two computer algorithms interact to create near-perfect ‘fake’ images and videos, and humans are often unable to tell which are real and which are not.
Researchers now propose a new method called ‘frequency analysis’ that can efficiently expose fake images created by computer algorithms.
“In the era of fake news, it can be a problem if users don’t have the ability to distinguish computer-generated images from originals,” said Professor Thorsten Holz from the Chair for Systems Security at Ruhr-Universität Bochum in Germany.
Deepfake images are generated with the help of computer models known as Generative Adversarial Networks (GANs).
Two algorithms work together in these networks: the first algorithm (the generator) creates images from random input data, while the second (the discriminator) decides whether each image is a fake or not.
If the image is judged a fake, the second algorithm tells the first to revise the image – and the cycle repeats until the discriminator no longer recognises the result as a fake.
In recent years, this technique has helped make deepfake images more and more authentic.
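The adversarial back-and-forth described above can be sketched in a few lines of Python. This is an illustrative toy, not the researchers' code: the "images" here are single numbers near 5, the generator is a linear function, and the discriminator is a logistic classifier – all of these choices, and every parameter name, are assumptions made only for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# Toy "GAN": real data are numbers near 5; the generator g(z) = a*z + b
# must learn to produce similar numbers, guided only by the
# discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)
    x_real = random.gauss(5, 0.5)
    x_fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: revise the fake so the discriminator is fooled.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

# Mean of freshly generated samples after training.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(500)) / 500
print(fake_mean)
```

In this sketch, each generator step nudges its output toward whatever the discriminator currently accepts as real – the same feedback loop that, at scale, makes deepfake images look increasingly authentic.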
Deepfakes are video forgeries that make people appear to be saying things they never did, like the forged videos of Facebook CEO Mark Zuckerberg and of US House Speaker Nancy Pelosi that went viral last year.
To date, deepfakes have been analysed using complex statistical methods.
The Bochum group chose a different approach, converting the images into the frequency domain using the discrete cosine transform (DCT).
The generated image is thus expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions.
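The decomposition into cosine functions can be demonstrated in one dimension with the standard DCT-II formula. The smooth example signal below is an invented stand-in for a "natural" image row; the point is that nearly all of its energy lands in the lowest-frequency coefficients.

```python
import math

def dct_ii(x):
    """Discrete cosine transform (DCT-II) of a sequence x."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

# A smooth, "natural-looking" 1-D signal: a slow ramp with a gentle wave.
N = 64
signal = [n / N + 0.2 * math.sin(2 * math.pi * n / N) for n in range(N)]

coeffs = dct_ii(signal)
energy = [c * c for c in coeffs]
low_freq_share = sum(energy[:8]) / sum(energy)  # energy in the 8 lowest frequencies
print(f"{low_freq_share:.3f}")
```

For a smooth signal like this one, the low-frequency share comes out close to 1 – which is exactly why high-frequency energy is a useful place to look for forgery artefacts.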
The analysis has shown that images generated by GANs exhibit artefacts in the high-frequency range.
For example, a typical grid structure emerges in the frequency representation of fake images.
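The grid artefact can be made visible with a 2-D DCT. In this sketch, a checkerboard pattern overlaid on a smooth gradient is an assumed stand-in for a GAN upsampling artefact; measuring the fraction of spectral energy outside the low-frequency corner then separates the two images.

```python
import math

def dct2(img):
    """Naive 2-D DCT-II of a square image given as a list of rows."""
    N = len(img)
    def basis(k, n):
        return math.cos(math.pi * k * (2 * n + 1) / (2 * N))
    return [[sum(img[x][y] * basis(u, x) * basis(v, y)
                 for x in range(N) for y in range(N))
             for v in range(N)]
            for u in range(N)]

def high_freq_share(img):
    """Fraction of spectral energy outside the low-frequency corner."""
    N = len(img)
    C = dct2(img)
    energy = [[C[u][v] ** 2 for v in range(N)] for u in range(N)]
    total = sum(map(sum, energy))
    low = sum(energy[u][v] for u in range(N // 4) for v in range(N // 4))
    return (total - low) / total

N = 16
# A smooth gradient, standing in for a natural image patch.
smooth = [[(x + y) / (2 * N) for y in range(N)] for x in range(N)]
# The same gradient with a periodic grid overlaid, mimicking the kind of
# high-frequency artefact the study reports in GAN-generated images.
grid = [[smooth[x][y] + 0.3 * ((-1) ** (x + y)) for y in range(N)]
        for x in range(N)]

print(high_freq_share(smooth), high_freq_share(grid))
```

The grid image concentrates extra energy at the highest frequencies, so its high-frequency share is clearly larger than that of the smooth patch – the same signature the detection method exploits.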
“Our experiments showed that these artefacts do not only occur in GAN-generated images. They are a structural problem of all deep learning algorithms,” explained Joel Frank.
“We assume that the artefacts described in our study will always tell us whether the image is a deepfake image created by machine learning,” Frank said, adding that frequency analysis is, therefore, an effective way to automatically recognise computer-generated images.
The team presented their work at the virtual International Conference on Machine Learning (ICML) this week. (IANS)