READ MORE: The AI deepfake apocalypse is here. These are the ideas for fighting it. (Washington Post)
Concern about deepfakes is rising worldwide, particularly around political messaging, but attempts to divorce fiction from fact may be doomed. A new study from online security provider McAfee reveals a 66% increase in concern about deepfakes over the past year, with 43% of Americans listing election influence as one of the AI-generated technology's most concerning uses.
McAfee, the study's author, expects the actual number of people exposed to political and other deepfakes to be much higher, given that the sophistication of AI technologies leaves many Americans unable to tell real content from fake.
“In many ways, democracy is on the ballot this year thanks to AI,” McAfee CTO Steve Grobman stated. “It’s not only adversarial governments creating deepfakes this election season, it is now something anyone can do in an afternoon. The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content.”
The research also found that the vast majority (72%) of American social media users find it difficult to spot AI-generated content, and that the public's greatest fears are that deepfakes will influence elections and undermine trust in media.
Tech is fighting back, with several initiatives already in play to flag fake content or to verify authentic content. Tech reporter Gerrit De Vynck details these efforts in an article in the Washington Post.
They include watermarks baked into AI-generated images themselves; Google has such a system, which it calls SynthID. “They’re not visible to the human eye but could be detected by a social media platform, which would then label them before viewers see them,” De Vynck explains.
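SynthID's actual embedding scheme is proprietary and far more robust than anything shown here, but the basic idea of a watermark that is invisible to viewers yet machine-detectable can be illustrated with a toy least-significant-bit scheme (purely illustrative; the pattern acting as a hypothetical key):

```python
# Toy illustration of an invisible watermark (NOT SynthID's real scheme).
# We hide a known bit pattern in the least-significant bits of pixel
# values, where it does not visibly change the image.
import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical key

def embed(pixels: np.ndarray) -> np.ndarray:
    """Overwrite the LSBs of the first len(PATTERN) pixels with the pattern."""
    out = pixels.copy()
    flat = out.reshape(-1)
    flat[:len(PATTERN)] = (flat[:len(PATTERN)] & 0xFE) | PATTERN
    return out

def detect(pixels: np.ndarray) -> bool:
    """Check whether the watermark pattern is present in the LSBs."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[:len(PATTERN)] & 1, PATTERN))

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
marked = embed(image)
print(detect(marked))   # True: a platform could label this image
print(detect(image))    # almost certainly False for unmarked images
```

A production watermark spreads the signal redundantly across the whole image so it survives cropping, resizing, and compression, which a naive LSB scheme does not.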
But such solutions are not rock solid, “because anything that’s digitally pieced together can be hacked or spoofed or altered,” Nico Dekens, director of intelligence at cybersecurity company ShadowDragon, tells De Vynck.
Parallel approaches for video and stills layer provenance data into the pixels from the moment an image is captured by the camera. Companies doing this include Nikon and Leica, whose metadata imprints are being adopted by initiatives like the C2PA and CAI, which are set up and run by publishers, broadcasters and technology vendors to provide a record of “provenance.”
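The C2PA standard defines a much richer manifest format, but the core mechanism of capture-time provenance can be sketched as the camera cryptographically signing a digest of the image bytes. This is a simplified stand-in, not the real standard; the helper names are hypothetical and the third-party `cryptography` package is assumed:

```python
# Simplified sketch of capture-time provenance (a stand-in for the far
# richer C2PA manifest format). The camera signs a digest of the image
# bytes; anyone with the maker's public key can verify them later.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()   # would live in camera hardware
public_key = camera_key.public_key()        # published by the manufacturer

def sign_capture(image_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the image at the moment of capture."""
    return camera_key.sign(hashlib.sha256(image_bytes).digest())

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Verify the provenance record; any pixel change breaks the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."
sig = sign_capture(original)
print(verify_capture(original, sig))          # True: provenance intact
print(verify_capture(original + b"x", sig))   # False: image was altered
```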
It’s a case of whack-a-mole here, too. De Vynck points out that hackers could still figure out how camera companies apply the metadata to the image and add it to fake images, which would then get a pass on social media because of the fake metadata.
“It’s dangerous to believe there are actual solutions against malignant attackers,” Vivien Chappelier, head of R&D at watermarking company Imatag, told De Vynck.
Going further, Reality Defender and Deep Media have turned GenAI on itself, building detection tools on the same foundational technology that powers AI image generators.
As De Vynck explains, by showing tens of millions of images labeled as real or fake to an AI algorithm, the model learns to distinguish between the two, building an internal “understanding” of which elements can give an image away as fake so that it can then identify them.
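Neither company publishes its models, but the underlying approach, a binary classifier trained on labeled real and fake images, can be sketched in a few lines of PyTorch (a generic illustration, not Reality Defender's or Deep Media's actual system):

```python
# Generic sketch of a deepfake classifier (not any vendor's real model).
# A small CNN learns to map an image to a real/fake probability from
# labeled examples.
import torch
import torch.nn as nn

class FakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # one logit: P(fake) after a sigmoid
        )

    def forward(self, x):
        return self.net(x)

model = FakeDetector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# In practice the loop runs over tens of millions of labeled images;
# here one random batch stands in for (images, labels) from a DataLoader.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = fake, 0 = real

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(torch.sigmoid(model(images)).squeeze())  # per-image fake probability
```

The catch, as the article goes on to note, is that generators improve too, so the classifier is chasing a moving target.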
Just as the media industry has had to continually invent, adapt, and apply multiple technologies over the years to combat piracy (with the courts as a backstop), it will have to bring the same vigilance and investment to detecting and preventing deepfake domination.
Adobe’s general counsel Dana Rao is right when he says that AI images are here to stay, and different methods will have to be combined to try to control them.
There are some who believe this will be successful and that scanning and filtering deepfakes from authenticated media will become as commonplace as email applications like Hotmail automatically filtering out spam.
Others think the tech battle against AI deepfake detection is lost before we start.
“If the problem is hard today, it will be much harder next year,” a researcher into the topic tells De Vynck. “It will be almost impossible in five years.”
The consequence will be a lack of trust in all media. Even McAfee, which offers its own audio deepfake detection, calls on the public to “maintain a healthy sense of skepticism” and to adopt a policy of always questioning the source of content.
If seeing is no longer believing, where does that leave the truth and the bias disseminated by CNN, Fox, CNBC, BBC, ITV and other broadcasters in election year?
“Assume nothing, believe no one and nothing, and doubt everything,” said Dekens. “If you’re in doubt, just assume it’s fake.”