How Adobe plans to fight the scourge of deepfakes

Adobe said it has found a way to help news outlets build trust with readers by letting them check whether photos are deepfakes, or images covertly manipulated by artificial intelligence.

The tool, revealed Tuesday, lets the public review a list of the ways a photograph has been altered, essentially giving consumers an audit trail that helps vouch for an image's authenticity.

For instance, people who doubt the authenticity of a New York Times photo of President Trump exiting Air Force One in Afghanistan would see an icon indicating that they can learn specific information about the photo's origins. By clicking the icon, which will not carry Adobe branding, readers will see details such as who took the photo and where it was taken, along with a history of the photo's edits.
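The article doesn't spell out how such an audit trail works under the hood, but the basic idea can be sketched in a few lines of code: each capture or edit event is recorded in a manifest whose entries are hash-chained, so rewriting any step of the history is detectable. The sketch below is purely illustrative; the manifest fields and the `build_manifest` and `verify_manifest` helpers are hypothetical, not Adobe's actual format, and a production system would add cryptographic signatures so readers could also verify who produced the record.

```python
# Hypothetical sketch of a hash-chained provenance "audit trail" like the
# one the article describes. This is NOT Adobe's format; the field names
# and chaining scheme are illustrative assumptions only.
import hashlib
import json

def entry_hash(event: dict, prev_hash: str) -> str:
    """Hash an event together with the previous entry's hash, chaining
    the edit history so no step can be altered undetected."""
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_manifest(events: list[dict]) -> list[dict]:
    """Record capture and edit events as a hash-chained manifest."""
    manifest, prev = [], ""
    for event in events:
        h = entry_hash(event, prev)
        manifest.append({"event": event, "hash": h})
        prev = h
    return manifest

def verify_manifest(manifest: list[dict]) -> bool:
    """Recompute the chain; any tampered entry breaks every later hash."""
    prev = ""
    for item in manifest:
        if entry_hash(item["event"], prev) != item["hash"]:
            return False
        prev = item["hash"]
    return True

if __name__ == "__main__":
    history = [
        {"action": "capture", "author": "Jane Doe", "location": "Afghanistan"},
        {"action": "crop", "tool": "Photoshop"},
        {"action": "exposure_adjust", "tool": "Photoshop"},
    ]
    manifest = build_manifest(history)
    print("intact:", verify_manifest(manifest))    # True
    manifest[1]["event"]["action"] = "face_swap"   # simulate tampering
    print("tampered:", verify_manifest(manifest))  # False
```

Because each entry's hash incorporates the previous one, silently rewriting any step of the edit history, say, hiding a face swap, would invalidate every subsequent hash and fail verification.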

If the photo is indeed a deepfake, users would be able to see how it was created, including the original image used to generate it.

The tool is intended to combat the potential problems that deepfakes pose for society. If manipulated images become common, people “will not believe the lies nor the truth,” said Adobe general counsel Dana Rao.

“The media companies—they see this as an existential threat to their businesses,” he said.

Lawmakers and researchers are increasingly sounding the alarm about the potential for deepfakes to erode public trust in accurate information or to manipulate public opinion. Rapid advances in A.I. technology have made it easier to create authentic-looking but fake photos, videos, and audio clips.

The deepfake-fighting tool, which still needs additional testing, was developed as part of Adobe's Content Authenticity Initiative, a project that debuted last year with help from organizations like the New York Times, the BBC, Twitter, and Microsoft. The idea behind the group is to better coordinate the creation of tools and best practices for countering deepfakes.

Rao said there is an "arms race" between good and bad actors that makes it difficult to build deepfake detection tools that work. As he put it, bad actors are continually developing more sophisticated deepfakes that elude the best detection software, and there is no sign the race is slowing down.

The new tool works only on photographs. Future products from the initiative may include tools for verifying deepfake audio and video clips.

People will be able to see the new tool in action once publishers begin early testing. It will ultimately be up to the publishers to determine how best to use the tool and present the information to readers, Rao said.
