TruthScan Disrupts AI Deepfake Fraud. Here’s How.
The most convincing media you see online could be an AI-generated fake. Last May, nefarious marketers used deepfake tech to steal the likenesses of famous people, running online ads that made it appear as if those celebrities had endorsed products they were never associated with.
Last month, a Southern California woman was swindled out of $80,000 in an online romance scam. The fraudster not only impersonated a celebrity but also used custom AI-generated videos to convince the elderly victim to send money, going beyond typical catfishing tactics.
However, it’s not just scammy ads or dating-app con jobs – even multi-million-dollar companies are being targeted, with one recent AI-fraud attack resulting in a $25-million loss. The World Economic Forum projects over $200 million in AI-fraud-related losses in just the first quarter of 2025 alone.
“The problem is that generative AI has evolved so quickly, faster than people could prepare for or anticipate the downsides,” says Christian Perry, CEO of TruthScan – a new cybersecurity company specifically targeting deepfake fraud.
Perry says he’s seen the problem emerging firsthand: “To date, we’ve detected and identified over a million pieces of AI content through our detection system across our platforms.”
Perry says his company is developing a proprietary algorithm codenamed “Aletheia,” capable of detecting deepfake audio, video and text. Currently, anyone can test TruthScan’s public deepfake image detector, which uses computer-vision analysis to flag pixel-level disruption, image-file alteration and signs of AI obfuscation.
According to TruthScan’s internal benchmarks, the accuracy of spotting obfuscated deepfake media currently sits at around 96%, and for instances of non-obfuscated deepfake content, the accuracy is even higher at 99%.
Receipts, driver’s licenses, passports and even convincing counterfeit bank statements can all be generated easily by new AI tools. Perry says the team at TruthScan knows what fraudsters are using AI for, and that its tool can spot any type of AI-generated media, whether it impersonates a person or a paper receipt.
How detecting deepfakes began
Perry and his team began developing TruthScan in 2024 as a sister company to Undetectable AI, an AI text detection and humanization platform. Perry and his co-founder, Devan Leos, see the relationship between the two companies as necessary and symbiotic.
“Technically, we got our start humanizing AI content. We explored every aspect of detection systems. We found the cracks and reverse-engineered a lot of these systems to hell and back.” It’s that process, Perry and Leos say, that was vital to building robust, functional deepfake-detection software.
How deepfakes get detected
“Sometimes I think you have to solve problems by working backwards,” explains Perry. “Like, ok, instead of, ‘How do we detect AI-deepfakes?’ we ask, ‘What makes deepfake media undetectable?’”
Khuzaeymah Nasir, one of the lead researchers working on TruthScan, explains how detection works: “The novelty in image detectors is the model and the data that you have.”
Nasir describes it as a “dependent relationship between the model and the data,” where the model must be sophisticated enough to distinguish features between real and AI images. He says the training data must be comprehensive enough to represent those features clearly.
Nasir notes that they don’t always rely on the raw output from the model. Instead, they apply “post-processing techniques” to refine the results, which helps to “balance things out, reduce bias.”
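Nasir doesn’t specify which post-processing techniques TruthScan uses. As a rough, hypothetical sketch of the general idea, one common approach is to average a model’s raw per-patch outputs and apply temperature scaling to soften overconfident predictions before converting them to a probability (all names here are illustrative, not TruthScan’s):

```python
import math

def postprocess(patch_logits, temperature=2.0):
    """Combine raw per-patch model outputs (logits) into one calibrated score.

    Averaging across patches smooths out noisy regions; temperature
    scaling (T > 1) softens overconfident logits before the sigmoid,
    a standard way to "balance things out" in a classifier's output.
    """
    avg = sum(patch_logits) / len(patch_logits)
    return 1.0 / (1.0 + math.exp(-avg / temperature))
```

With a temperature above 1, a strongly positive logit still maps below the raw sigmoid value, which is the calibration effect in miniature.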
One technique the TruthScan team uses to improve the model’s accuracy is auditing where it looks in an image. “We actually generate heat maps showing us where the model focused on the image to make its final prediction,” explains Nasir.
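TruthScan’s exact tooling isn’t public, but the heat maps Nasir describes resemble Grad-CAM-style class activation maps. A minimal NumPy sketch, assuming you already have a convolutional layer’s activations and the gradients of the “AI-generated” score with respect to them (function and variable names are illustrative):

```python
import numpy as np

def gradcam_heatmap(activations, gradients):
    """Toy Grad-CAM: highlight where a model focused for its prediction.

    activations: (C, H, W) feature maps from a conv layer
    gradients:   (C, H, W) gradients of the target score w.r.t. those maps
    """
    weights = gradients.mean(axis=(1, 2))             # per-channel importance
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                          # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

The resulting (H, W) map can be upsampled and overlaid on the input image to show which regions drove the model’s final call.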
To reduce bias and eliminate inaccurate detection of AI images, Nasir explains that the research team has dedicated people working around the clock to review the AI image analysis results manually. He describes a system where new images that the model misclassifies are automatically sent to a queue, and once a certain threshold is met, “The model can retrain itself or fine-tune itself on those results.”
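The misclassification queue Nasir describes could be sketched roughly as follows; the class name, interface and threshold mechanics are assumptions for illustration, not TruthScan’s actual pipeline:

```python
class RetrainQueue:
    """Collect human-verified misclassifications until a retraining threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = []

    def record(self, image_id, predicted, actual):
        """Log a reviewed prediction; return a fine-tuning batch when full."""
        if predicted != actual:                    # only queue mistakes
            self.queue.append((image_id, actual))  # keep the corrected label
        if len(self.queue) >= self.threshold:
            batch, self.queue = self.queue, []     # hand off for fine-tuning
            return batch
        return None
```

Once `record` returns a batch, the model would be fine-tuned on those corrected examples, matching the “retrain itself or fine-tune itself” behavior Nasir outlines.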
One challenge is detecting images where a real person is placed onto a real background, a process known as inpainting. In these cases, the model must learn to focus on the boundaries. “At the borders are what we’re looking at and asking, ‘How does it actually blend?’ And that’s where we can definitely figure the fake ones out, versus real images.”
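TruthScan hasn’t disclosed how its boundary analysis works. One simple way to probe how a pasted region blends is to compare image-gradient magnitudes along the region’s border against its interior; an imperfect blend often leaves an unusually sharp (or unusually smooth) seam. A toy illustration, not the company’s method:

```python
import numpy as np

def boundary_sharpness(image, mask):
    """Mean gradient magnitude on a region's border vs. its interior.

    image: 2D grayscale array; mask: boolean array marking the pasted region.
    Assumes the mask does not touch the image edges (np.roll wraps around).
    """
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    m = mask.astype(bool)
    # border = mask pixels with at least one non-mask 4-neighbour
    border = np.zeros_like(m)
    for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        border |= m & ~np.roll(m, shift, axis=axis)
    interior = m & ~border
    return grad[border].mean(), grad[interior].mean()
```

A large gap between the two values flags a seam that doesn’t “actually blend” the way a real photograph would.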
Nasir also says the new TruthScan algorithm currently in the works, dubbed “Aletheia,” incorporates a few other proprietary detection methods – specifics, however, are being kept under wraps for now.
The rise of deepfake fraud does more than drain bank accounts; it erodes the very foundation of trust. When AI can flawlessly fabricate any image, video or document, the sense of what’s real begins to fracture.
TruthScan’s new software and mission may help stop AI-powered crime. But more than that, its founders hope to help preserve the integrity of information in an era where truth itself is under attack.