02-05-2025
The Deepfake Detector You've Never Heard Of That's 98% Accurate
As synthetic media rapidly becomes more realistic and abundant, the arms race between deepfake creators and the technology designed to detect them is intensifying. The latest available data show that deepfake fraud has increased 1,740% in the U.S. alone.
To help address this urgent and unsettling trend, a peer-reviewed article in the February 2025 edition of TEM Journal provides one of the clearest data-driven assessments of where that technological face-off currently stands.
Researchers at Slovakia's University of Ss. Cyril and Methodius compared five deepfake video detection tools — three open-source (SBI, LSDA, LIPINC) and two commercial (Bio-ID, Deepware) — against 300 manipulated videos in the Celeb-DF (v2) dataset.
Commercial browser-based tool Bio-ID topped the field with 98.00% detection accuracy. Freemium tool Deepware, accessible through both an API and a browser interface, came in second at 93.47%. The top-scoring open-source tool, SBI, achieved 84.69%, while the lowest-performing model, LSDA, hovered at about 71%.
'The findings of the study demonstrated the suitable applicability of the selected tools for standard users who require the authentication of digital content,' wrote lead author Dr. Miroslav Ölvecký in an email response. 'However, the significance of this topic extends beyond the realm of digital forensics, encompassing crucial aspects such as media credibility, the mitigation of misinformation, and the safeguarding of the public from manipulated content.'
The graphs in Figure 5 show the mean and median deepfake detection success rates for each of the tools evaluated, providing a visual overview of each tool's performance across the test sample of 300 videos.
Impressive as Bio-ID's top accuracy score is, its binary classification output ("fake" or "not fake") collapses nuance and may mask uncertainty in borderline cases.
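A minimal sketch of why that matters (this is not Bio-ID's actual implementation; the function and threshold are illustrative): once a continuous confidence score is thresholded into a label, a barely-fake 0.51 and a near-certain 0.99 become indistinguishable.

```python
def binary_verdict(fake_prob: float, threshold: float = 0.5) -> str:
    """Collapse a continuous fake-probability into a fake / not-fake label,
    discarding how close to the decision boundary the score was."""
    return "fake" if fake_prob >= threshold else "not fake"

# A borderline clip and a near-certain one produce the same verdict.
for p in (0.51, 0.99):
    print(f"score {p:.2f} -> {binary_verdict(p)}")
```

Both lines print "fake," which is exactly the information loss the researchers flag: the user never sees that one case was borderline.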
'It is possible that the accuracy of the scoring may have biased the results. This may become apparent when evaluating larger amounts of data,' noted Dr. Ölvecký. 'It would certainly be interesting to compare tools that produce consistent results based on the designed parameters. Unfortunately, this is not possible at the moment, because the different machine learning methods designed for digital content discovery are developed independently and with different architectures.'
Deepware, in contrast, reports confidence scores as percentages and is built on the EfficientNet-B7 deep learning architecture. It was trained on 120,000 videos from Facebook's Deepfake Detection Challenge (DFDC) dataset using frame-level classification, a method that boosts speed and accuracy but sacrifices temporal continuity, a drawback even its creators admit.
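The frame-level approach can be sketched as follows. This is an illustrative simplification, not Deepware's actual code: `classify_frame` stands in for a per-frame CNN inference, and simple averaging stands in for whatever aggregation the real tool uses.

```python
# Sketch of frame-level deepfake scoring: each frame is classified
# independently, then per-frame scores are aggregated into one
# video-level confidence. Note that averaging ignores frame ordering,
# which is the "temporal continuity" trade-off described above.
from statistics import mean

def classify_frame(frame: dict) -> float:
    """Stand-in for a per-frame classifier (e.g. an EfficientNet-style
    model) returning a fake-probability in [0, 1]."""
    return frame["fake_prob"]  # placeholder for real model inference

def video_fake_score(frames: list) -> float:
    """Aggregate per-frame probabilities into a single video-level score."""
    return mean(classify_frame(f) for f in frames)

frames = [{"fake_prob": p} for p in (0.91, 0.88, 0.95, 0.40)]
print(f"video fake confidence: {video_fake_score(frames):.2%}")
```

Because each frame is scored in isolation, the pipeline parallelizes well and runs fast, but an artifact that only shows up as inconsistency *between* frames is invisible to it.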
But Deepware performed well enough to surprise even the researchers.
'It was a surprise to find that even freely available deepfake detection tools (such as Deepware) can achieve high efficiency and be practically usable,' Ölvecký continued. 'It is often the case that free software products do not provide very accurate results in the free version.'
The performance gap was not entirely about brute computing power; it also reflected training data and investment. 'Commercial tools often have more resources than open-source tools to develop and train on large datasets,' Ölvecký explained. 'In our opinion, [Bio-ID and Deepware] are more advanced methods of digital content manipulation detection.'
Both services are available to the public. Deepware's web interface and API allow free testing of videos, while Bio-ID offers an easy-to-use dashboard after registration. According to Ölvecký, 'Its simplicity and ease of use make it suitable for lay users.'
Despite the strong results, the study's authors caution that the deepfake detection landscape remains fluid. New-generation deepfake tools are emerging faster than detection models can adapt.
Ölvecký said the researchers are fighting in a "dynamic field" and that constant innovation is necessary. The team is already producing both manipulated and authentic videos to add to its test set and improve accuracy testing.
One promising direction is combining spatial and temporal detection methods. Frame-based tools such as Deepware are fast but may overlook subtle anomalies that emerge over time.
'Tools like LIPINC combine spatial and temporal information,' Ölvecký noted. 'Such an approach is promising because video is inherently a temporal medium, and ignoring this dimension can lead to detection errors.'
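One simple way to picture a spatio-temporal combination (this is an illustrative sketch in the spirit of such tools, not LIPINC's actual algorithm; the weights and cues are assumptions): blend the average per-frame score with a measure of how erratically scores jump between consecutive frames, since authentic footage tends to vary smoothly over time.

```python
from statistics import mean

def spatial_score(frame_scores: list) -> float:
    """Spatial cue: average per-frame fake probability."""
    return mean(frame_scores)

def temporal_score(frame_scores: list) -> float:
    """Temporal cue: mean absolute change between consecutive frame
    scores; erratic jumps suggest frame-to-frame inconsistency."""
    diffs = [abs(b - a) for a, b in zip(frame_scores, frame_scores[1:])]
    return mean(diffs)

def combined_score(frame_scores: list,
                   w_spatial: float = 0.7,
                   w_temporal: float = 0.3) -> float:
    """Weighted blend of the two cues (weights chosen arbitrarily here)."""
    return (w_spatial * spatial_score(frame_scores)
            + w_temporal * temporal_score(frame_scores))

# A clip whose per-frame scores swing wildly earns a higher combined
# score than its plain average (0.5) would suggest.
scores = [0.2, 0.9, 0.1, 0.8]
print(round(combined_score(scores), 2))
```

The point of the sketch is the quote's argument in miniature: the averaged spatial score alone rates this clip a coin-flip, while the temporal instability pushes the combined verdict toward "manipulated."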
Beyond its technical merits, the study carries strong policy implications as well. 'Our research provides practical data on the current availability, effectiveness and reliability of deepfake detection tools that can help policy makers select and recommend technologies suitable for implementation in public services and risk assessment,' Ölvecký added.
Whether for journalists fact-checking claims, regulators screening content or social media companies curtailing disinformation, these findings offer a rare look under the hood of state-of-the-art detection capabilities. They serve as a reminder that staying ahead of AI-generated deception demands vigilance and open science.
'We also highlight the need for continuous innovation of these tools given the dynamics of deepfake technology development,' he concluded.