To combat the spread of disinformation, Microsoft has unveiled a new tool to spot deepfakes, or synthetic media: photos, videos or audio files manipulated by artificial intelligence (AI) that can be very hard to distinguish from authentic content.
The tool, called Microsoft Video Authenticator, can analyse a still photo or video to provide a percentage chance, or confidence score, that the content has been artificially manipulated. In the case of a video, it can provide this percentage in real time on each frame as the video plays.
The tool works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye, Microsoft said in a blog post on Tuesday.
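Microsoft has not published the detector's internals, but the per-frame scoring it describes can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: the heuristic simply counts abrupt greyscale jumps between neighbouring pixels as a crude proxy for the blending-boundary artefacts the article mentions, and is not Microsoft's actual model.

```python
def score_frame(frame):
    """Return a 0-100 'manipulation confidence' for one frame.

    Stand-in heuristic (not the real detector): count abrupt greyscale
    jumps between adjacent pixels, a crude proxy for the hard blending
    boundary left where a fake face is pasted onto a real frame.
    `frame` is a list of rows of greyscale values (0-255).
    """
    jumps = 0
    total = 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += 1
            if abs(a - b) > 64:  # arbitrary threshold for a "hard" edge
                jumps += 1
    return 100 * jumps / total if total else 0.0


def score_video(frames):
    """Yield a confidence score for each frame as the video 'plays',
    mirroring the real-time, frame-by-frame output the article describes."""
    for frame in frames:
        yield score_frame(frame)


# Toy usage: a smoothly shaded frame vs. one with a sharp pasted-in edge.
smooth = [[10, 12, 14, 16]]
pasted = [[10, 12, 200, 202]]
scores = list(score_video([smooth, pasted]))
```

In this toy run the smooth frame scores 0 while the frame with the abrupt boundary scores above 30, showing how a per-frame percentage could be surfaced to a viewer in real time.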
Deepfakes are video forgeries that make people appear to be saying things they never did, like the widely shared forged videos of Facebook CEO Mark Zuckerberg and of US House Speaker Nancy Pelosi that went viral last year.
“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” said Tom Burt, Corporate Vice President of Customer Security and Trust.
There are few tools today to help assure readers that the media they are seeing online came from a trusted source and has not been altered. Microsoft also announced another technology that can both detect manipulated content and assure people that the media they are viewing is authentic.
This technology has two components.
The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then live with the content as metadata wherever it travels online.
“The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it,” Microsoft explained.
Fake audio or video content, also known as deepfakes, has been ranked as the most worrying use of artificial intelligence (AI) for crime or terrorism. According to a recent study published in the journal Crime Science, AI could be misused in 20 ways to facilitate crime over the next 15 years.
Deepfakes could make people appear to say things they never said or to be in places they never were, and the fact that they are generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology.
“However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes,” Microsoft said.
“No single organisation is going to be able to have a meaningful impact on combating disinformation and harmful deepfakes,” it added.
Microsoft also announced several partnerships in this regard, including with the AI Foundation, a dual commercial and nonprofit enterprise based in the US, and a consortium of media companies that will test its authenticity technology and help advance it as a standard that can be adopted broadly. (IANS)