It’s getting easier to manipulate images, but there’s little financial incentive for companies to invest in tools to detect fakes
(Bloomberg) — Artificial intelligence is now so powerful it can trick people into believing an image of Pope Francis wearing a white puffy Balenciaga coat is real, but the digital tools to reliably identify faked images are struggling to keep up with the pace of content generation.
Just ask the researchers at Deakin University’s School of Information Technology, outside of Melbourne. Their algorithm performed the best in identifying the altered images of celebrities in a set of so-called deepfakes last year, according to Stanford University’s Artificial Intelligence Index 2023.
“It’s a fairly good performance,” said Chang-Tsun Li, a professor at Deakin’s Centre for Cyber Resilience and Trust who developed the algorithm, which proved correct 78% of the time. “But the technology is really still under development.” Li said the method needs to be further enhanced before it’s ready for commercial use.
Deepfakes have been around, and prompting concern, for years. Former House Speaker Nancy Pelosi appeared to be slurring her words in a doctored video that circulated widely on social media in 2019. About a month later, after Facebook refused to take down the Pelosi video, Meta Platforms Inc. Chief Executive Officer Mark Zuckerberg was seen in a video altered to make it seem like he’d said something he didn’t.
While the image of the Pope in the puffer coat was a relatively harmless manipulation, the potential for deepfakes to inflict serious damage, from election manipulation to nonconsensual sexual imagery, has grown as the technology advances. Last year, a fake video of Ukrainian President Volodymyr Zelenskiy asking his soldiers to surrender to Russia could have had serious repercussions.
Big tech companies as well as a wave of startups have poured tens of billions of dollars into generative AI to claim a leading role in the technology that could change the face of everything from search engines to video games. However, the global market for technology to root out manipulated content is relatively small. According to research firm HSRC, the global market for deepfake detection was valued at $3.86 billion in 2020 and is expected to expand at a compound annual growth rate of 42% through 2026.
Experts agree there’s undue attention on AI generation and not enough on detection, said Claire Leibowicz, head of the AI and Media Integrity Program at nonprofit organization The Partnership on AI.
While the buzz around the technology, dominated by applications like OpenAI’s ChatGPT, has reached a fever pitch, executives from Tesla Inc. CEO Elon Musk to Alphabet Inc. CEO Sundar Pichai have warned of the risks of going too fast.
It will be a while before detection tools are ready to be used to fight back against the wave of realistic-looking altered images from generative AI programs like Midjourney, which produced the Pope image, and OpenAI’s DALL-E. Part of the problem is the prohibitive cost of developing accurate detection, and there’s little legal or financial incentive to do so.
“I talk to security leaders every day,” said Jeff Pollard, an analyst at Forrester Research. “They are concerned about generative AI. But when it comes to something like deepfake detection, that’s not something they spend budget on. They’ve got so many other problems.”
Still, a handful of startups such as Netherlands-based Sensity AI and Estonia-based Sentinel are developing deepfake detection technology, as are many of the big tech companies. Intel Corp. launched its FakeCatcher product last November as part of its work in responsible AI. The technology looks for authentic clues in real videos by assessing human traits such as blood flow in a video’s pixels, and can detect fakes with 96% accuracy, according to the company.
“The motivation of doing deepfake detection now is not money; it is helping to decrease online disinformation,” said Ilke Demir, senior staff research scientist at Intel.
So far, deepfake detection startups mainly serve governments and businesses that want to reduce fraud and aren’t aimed at consumers. Reality Defender, a Y-Combinator-backed startup, charges fees based on the number of scans it performs. Those costs range from tens of thousands of dollars to millions, in order to cover expensive graphics processing chips and cloud computing power.
Platforms like Facebook and Twitter aren’t required by law to detect and flag deepfake content on their platforms, leaving consumers in the dark, said Ben Colman, CEO of Reality Defender. “The only organizations that do anything are the ones like banks that have a direct connection to financial fraud.”
Current methods of detecting fake images and videos include training computers on examples to compare visual characteristics in the content, and embedding watermarks and camera fingerprints in original works. But the rapid proliferation of deepfakes requires more powerful algorithms and computing resources, said Xuequan Lu, another Deakin University professor who worked on the algorithm.
And without a commercially available and massively adopted tool to distinguish fake online content from real, there’s plenty of opportunity for bad actors.
“What I see is pretty similar to what I saw in the early days of the anti-virus industry,” said Ted Schlein, chairman and general partner at Ballistic Ventures, who invests in deepfake detection and previously backed anti-virus software. As hacks became more sophisticated and damaging, anti-virus software developed and eventually became cheap enough for consumers to download on their PCs. “We’re at the very beginning stages of deepfakes,” which so far are mostly made for entertainment purposes, Schlein said. “Now you’re just starting to see a few of the malicious cases.”
But even if it’s cheap enough, consumers might not be willing to pay for such technology, said Shuman Ghosemajumder, head of artificial intelligence at F5 Inc., a security and fraud-prevention company.
“Consumers don’t want to do any additional work themselves,” he said. “They want to automatically be protected as much as possible.”
©2023 Bloomberg L.P.