TrueMedia.org is the easiest way to spot deepfakes in 2024
TrueMedia.org’s free deepfake detection tool claims 90% accuracy. Read our review to see how it performs and who should use it.
{{review-toc}}
In 2024, seeing isn’t believing anymore.
Deepfakes – realistic yet fake media created using artificial intelligence – are becoming so sophisticated and widespread that anyone can be fooled by what they see online.
Data shows that deepfakes are both a big concern and a growing problem in society. Research from SumSub found a 10x increase in the number of deepfakes spread across social media in 2023, and Statista reports that 60% of US adults are concerned about deepfakes. These concerns are especially severe around politics and elections, with 26 US states having regulated, or planning to regulate, the use of generative AI in election communications. The big fear is that deepfakes could threaten the integrity of elections and sway public opinion.
While many of the leading AI companies are making an effort to set up guardrails against misinformation being created with their models, most of those guardrails can be bypassed to some degree with special prompting techniques. Not all companies have the same safety measures in place, either – xAI’s image generator lets users generate just about anything. And the problem is even bigger with open-source models, where there is virtually no moderation.
With the US elections on the horizon, and as someone interested in how our new tech impacts society, I decided to test TrueMedia.org’s platform for AI detection. It’s a free, non-profit platform claiming to provide deepfake detection with 90% accuracy.
Here’s what I found.
Who it’s for
After testing TrueMedia.org, I’d say it’s a great tool for anyone who wants to fact-check content they come across online, reliably and easily.
Even more so if you’re part of an organization that regularly combats misinformation – TrueMedia.org could quickly become an indispensable tool for your team:
- Journalists and editors can use TrueMedia.org to authenticate content they find online before using it in their publications. The platform makes it easy to add team members to the same organization and to share analyses and insights across the team.
- Organizations specifically dedicated to debunking false information will, obviously, find the platform very useful. Similarly, entities involved in policy-making, election monitoring, or public safety could use it to quickly identify and address disinformation. The shared history tab inside TrueMedia.org’s platform will definitely come in handy, as it lets you see what colleagues have already investigated and prevents duplicate work.
- Content moderation teams at the various social media platforms could incorporate TrueMedia.org into their processes to detect and remove deepfake content more effectively.
It’s really easy to use
After signing up with my email on TrueMedia.org, I was brought directly to the platform, which presented me with a clean and straightforward user interface. You essentially have three tabs to choose from:
- Query: This is where you analyse new media.
- History: Access the analyses from your previous queries.
- Notable Deepfakes: A list of highlighted deepfakes curated by TrueMedia.org.
I was pleased with how easy the platform was to use. Just paste the URL from supported platforms like X (formerly Twitter), Instagram, Reddit, or Facebook. You can also upload videos, images, or audio files up to 100MB. A caveat is that videos can’t be longer than 90 seconds. Luckily, if a video is too long there’s an easy way to trim it to 90 seconds directly inside the platform.
The only thing I was missing was support for YouTube videos. This isn’t really TrueMedia.org’s fault, though, as YouTube currently prevents third parties from downloading videos from its site. Fortunately, it’s not too difficult to download a YT video manually (using a tool like SaveFrom) and then upload it for analysis.
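If you’d rather script that step, here’s a minimal sketch of the same workaround. It assumes yt-dlp (a command-line alternative to a browser tool like SaveFrom) and ffmpeg are installed locally, and the URL is just a placeholder – you still upload the resulting file to TrueMedia.org by hand:

```python
import subprocess

# Rough local workaround: download the video, then trim it to the platform's
# 90-second limit before uploading it to TrueMedia.org manually.
# Assumes yt-dlp and ffmpeg are installed; the URL below is a placeholder.
url = "https://www.youtube.com/watch?v=EXAMPLE_ID"

# Download the best available MP4 version of the video.
subprocess.run(["yt-dlp", "-f", "mp4", "-o", "clip.mp4", url], check=True)

# Keep only the first 90 seconds; stream copy (-c copy) avoids re-encoding.
subprocess.run(
    ["ffmpeg", "-i", "clip.mp4", "-t", "90", "-c", "copy", "clip_90s.mp4"],
    check=True,
)
```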
It did a great job at detecting deepfakes
I was curious to test whether the results were as accurate as the platform claims. From what I had read, TrueMedia.org combines multiple AI detectors (both built in-house and sourced from academia) to give the most accurate results possible. They also use human analysts in some instances to correct the automatic detectors and provide an extra layer of assurance.
To test the platform’s deepfake detection, I tried it out on a series of videos I found around the web – some real and some deepfakes. Each analysis I did took between one and five minutes to complete.
Some examples of the videos I tested, and the results:
- Kim and Kanye teaching advanced maths (deepfake)
  - Successfully labelled as “substantial evidence” of face and voice manipulation
- Ukraine’s president Volodymyr Zelenskyy bellydancing (deepfake)
  - Successfully labelled as “substantial evidence” of face manipulation and little evidence of voice manipulation (there was no voice in the video)
- The Donald Trump assassination attempt (real)
  - Successfully labelled as “little evidence” of face or voice manipulation
One thing I really appreciated is how the platform gives you a detailed breakdown of the results. First off, it separates the detection into elements of the media (faces, voices and semantic inconsistencies). It then employs multiple detection methods for each element, and gives you a neat overview with individual confidence scores. This informs a final verdict for the media content queried, such as “little evidence” or “substantial evidence” of manipulation.
Overall, the results I got were spot on. It successfully recognized the deepfakes, and it did not label the real videos I tried as fake. There were a few instances where the different AI methods showed contradictory results, so the verdict came back as “uncertain”. I’d argue this is still far better than false positives – the platform seems to err on the cautious side when results are inconclusive.
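To make that behaviour concrete, here’s a minimal sketch of how several detector scores could roll up into a single verdict. This is purely my own illustration – the threshold values are assumptions, not TrueMedia.org’s actual aggregation logic:

```python
# Purely illustrative: not TrueMedia.org's actual aggregation logic.
# Each score is a detector's confidence (0-1) that the media is manipulated;
# the 0.7 / 0.3 thresholds are assumptions made up for this example.
def overall_verdict(scores: list[float], high: float = 0.7, low: float = 0.3) -> str:
    if all(s >= high for s in scores):
        return "substantial evidence"
    if all(s <= low for s in scores):
        return "little evidence"
    # Detectors disagree or sit in the middle: err on the cautious side.
    return "uncertain"

print(overall_verdict([0.92, 0.88, 0.95]))  # substantial evidence
print(overall_verdict([0.05, 0.12, 0.81]))  # uncertain (contradicting detectors)
```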
It’s easy to use for collaboration
TrueMedia.org is designed with organizations in mind. It has several features that make it great for collaborating on deepfake detection as a team.
In a few clicks, you can set up your organization, invite team members, and assign permissions to them. If you verify your website, auto-join is enabled, so any team member who signs up with their work email is automatically added to your team.
One standout feature for organizations looking to sync their efforts is the “Organization History” tab. It does exactly what it sounds like: it gives you a searchable, filterable overview of your organization’s previous queries and the analysis results – along with dates, who queried it, etc. Neat!