Deepfakes are video forgeries that make people appear to be saying things they never said, just like the forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral last year.
To fight the virality of disinformation, Microsoft has unveiled a new technology to spot deepfakes, or synthetic media: photos, videos, or audio files manipulated by AI in ways that are very hard to detect.
The technology, called Microsoft Video Authenticator, can analyze a still photo or video to provide a percentage chance, or confidence score, that the content has been artificially manipulated. For videos, it can provide this percentage in real time on each frame as the video plays.
The technology works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that may not be detectable by the human eye, Microsoft said in a blog post on Tuesday.
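To make the per-frame scoring idea concrete, here is a minimal, purely illustrative sketch of how a detector could emit a "chance of manipulation" score for each frame as a video plays. The function names and the variance heuristic are hypothetical stand-ins, not Microsoft's actual model.

```python
# Hypothetical sketch of frame-by-frame confidence scoring, in the
# spirit of Video Authenticator. `score_frame` is a toy stand-in for a
# trained detector; here it just flags frames whose pixel variance is
# implausibly flat (a made-up heuristic for illustration only).

def score_frame(frame):
    """Return a 0-100 'chance of manipulation' score for one frame (toy)."""
    mean = sum(frame) / len(frame)
    variance = sum((p - mean) ** 2 for p in frame) / len(frame)
    return 100.0 if variance < 1.0 else 10.0  # arbitrary toy thresholds

def score_video(frames):
    """Yield a per-frame confidence score as the video 'plays'."""
    for index, frame in enumerate(frames):
        yield index, score_frame(frame)

if __name__ == "__main__":
    # Two fake 4-pixel "frames": one varied, one suspiciously uniform.
    video = [[10, 200, 30, 90], [50, 50, 50, 50]]
    for index, score in score_video(video):
        print(f"frame {index}: {score:.0f}% chance of manipulation")
```

A real detector would of course run a trained neural network over decoded frames, but the streaming shape (one score per frame, produced as the video advances) is the same.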
“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” said Tom Burt, Corporate Vice President of Customer Security and Trust.
There are only a few tools today to help assure readers that the content they are seeing online is from a trusted source and that it wasn't altered. Microsoft also announced a second technology that can both detect manipulated content and assure people that the content they are viewing is authentic.
This technology has two components. The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The hashes and certificates then travel with the content as metadata wherever it goes online.
“The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic and that it hasn’t been changed, as well as providing details about who produced it,” Microsoft explained.
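The two-component scheme described above can be sketched as a producer attaching a cryptographic hash to content and a reader recomputing that hash to confirm nothing changed in transit. The class and function names below are illustrative assumptions, not Microsoft's actual Azure API; a plain SHA-256 digest stands in for the full hash-and-certificate metadata.

```python
# Minimal sketch of the hash-and-verify idea: the producer attaches a
# hash to content as metadata; the reader recomputes the hash to check
# that the content has not been altered. Names are hypothetical.
import hashlib
from dataclasses import dataclass

@dataclass
class SignedContent:
    payload: bytes     # the media bytes travelling online
    content_hash: str  # hex SHA-256 attached by the producer
    producer: str      # who certified it (stands in for a certificate)

def publish(payload: bytes, producer: str) -> SignedContent:
    """Producer side: attach a digest of the content as metadata."""
    return SignedContent(payload, hashlib.sha256(payload).hexdigest(), producer)

def verify(item: SignedContent) -> bool:
    """Reader side: recompute the hash and compare with the metadata."""
    return hashlib.sha256(item.payload).hexdigest() == item.content_hash

if __name__ == "__main__":
    item = publish(b"original video bytes", producer="Example Newsroom")
    print(verify(item))         # True: untouched content checks out
    item.payload = b"tampered"  # any alteration breaks the hash match
    print(verify(item))         # False
```

In the real system the certificate would also let the reader confirm *who* produced the content, which a bare hash alone cannot do.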
Fake audio-visual content, also referred to as 'deepfakes', has been ranked as the most worrying use of AI for crime or terrorism. According to a recent study published in the journal Crime Science, AI could be misused in 20 ways to facilitate crime over the next 15 years.
Deepfakes can make people appear to say things they never said or to be in places they weren't, and the fact that they are generated by AI that continues to learn makes it inevitable that they will eventually beat conventional detection technology.
“However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes,” Microsoft said.
“No single organisation is going to be able to have a meaningful impact on combating disinformation and harmful deepfakes,” it added.