Latest news with #BenColman


Washington Post
08-08-2025
- Entertainment
- Washington Post
How to spot an AI video? LOL, you can't.
Have you seen the video of bunnies bouncing on a backyard trampoline?!?! They were adorable — and entirely fake.

As soon as the bunnies started ping-ponging across the internet a couple of weeks ago, internet sleuths and reporters pointed out signs that the video was generated by artificial intelligence. The original TikTok video has been watched more than 230 million times.

As AI boomed in the past few years, we've coached ourselves to look for hallmarks of AI-created images, including six-fingered humans, mismatched earrings, garbled text or blurry backgrounds. The bunny video had some of those flaws, such as a rabbit disappearing mid-hop. But these sleuthing tactics are rapidly becoming obsolete. Technology that can conjure people, animals and entire scenes is advancing so quickly that professionals warn it's hard even for them to tell what's real and what's AI.

'The tech has gotten so good that even the Ivy League PhDs on our team can't tell the difference,' said Ben Colman, CEO of AI detection company Reality Defender. 'And if they can't tell the difference with their naked eyes, how can my parents or my kids ever stand a chance?'

We must stop trying to be AI detectives. It won't work. So now what?

The creator of the TikTok bunnies video, a 22-year-old who works in marketing, said in an interview that he set out to make something that seemed authentic. Andy, who declined to give his full name to protect his privacy, said that he crafted an AI command and a female TikTok persona who appears to reveal her surprise at catching home-security footage of the rabbits in her yard. The video posted to TikTok was his first try with Google's Veo 3 technology, Andy said. He knew the video would stir debate over whether the images were real or AI-generated, and that the debate would signal TikTok's algorithm to show the video to more people. The experiment worked far beyond his expectations, he said. 'Ten minutes after I posted it, my phone started blowing up,' he said. 'I was constantly refreshing, and it was just, likes, likes, likes, likes.'

Emmanuelle Saliba, chief investigative officer of AI detection company GetReal, said she used specialized technology and her skills from investigative journalism to trace the AI bunnies' origins. Being fooled by AI fluff balls was part of the fun. But Saliba worries that the silly stuff is a harbinger of a tidal wave of AI fakes impersonating real people or making us mistrust what's real. Because 'you can actually create things out of thin air,' Saliba said, 'it's making it difficult to also prove real visual evidence.'

I don't have all the answers, but I wanted to suggest a mindset for dealing with a sea of AI content and a blueprint for what companies and governments must do to help you sort out what's human- or machine-made.

Do: Accept your limitations.

Whether it's false information created by people centuries ago or brand-new AI creations, it's helpful to acknowledge our tendency to believe things that scare us, tempt us, surprise us or support what we already believe. But you cannot, and should not be expected to, become a digital Sherlock Holmes who can identify whether that's really Taylor Swift or Oprah Winfrey pitching you something.

While the AI-spotting forensics tips sometimes work for now, they're becoming increasingly useless, even counterproductive, advice. 'The days of saying to the general public, "These are the things you should look for" — they're over,' said Henry Ajder, an AI specialist and consultant. 'I think they're becoming actively harmful.'
Don't: Ask chatbots, 'Is this real?'

Ajder said that OpenAI's ChatGPT, X's Grok and other chatbots are not programmed to tell you whether images or information were made by AI. The chatbots might get it right, but they shouldn't be trusted for that, he said. X and OpenAI didn't comment. (The Washington Post has a content partnership with OpenAI.)

Do: Expect companies to tell you what's real or AI.

Many social media companies, including TikTok, have at least experimented with technology to automatically identify whether images were largely made with AI. (In a statement, TikTok said it requires people to label realistic AI-generated content and that the company invests in capabilities to support that. The TikTok bunnies video wasn't initially labeled as AI. It is now.)

Other technology approaches seek to verify that images or audio were made by a human. It's tricky to get this right, and for internet companies to agree on a standard approach to disclosure, but it's getting there. Google's SynthID technology, which marks AI-generated material with an invisible but computer-readable code, identified Andy's video as AI. (SynthID isn't available to the public.)

Colman said that disclosures must be mandated, not left to companies' discretion. He and other AI experts say you should be able to feel confident in what you see and hear. 'We should have the right to know what is AI generated and what is not,' Ajder said.
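[Editor's note: SynthID's actual watermarking scheme is proprietary and not public. As a toy illustration of what an 'invisible but computer-readable code' means, the Python sketch below hides a short tag in an image's least-significant bits and reads it back; every name in it is invented, and nothing here reflects SynthID's real design.]

    # Toy invisible watermark: hide a byte string in the lowest bit of each
    # pixel, then read it back. NOT how SynthID works; this only illustrates
    # "invisible to the eye, trivially machine-readable".
    import numpy as np

    def embed_tag(pixels: np.ndarray, tag: bytes) -> np.ndarray:
        bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
        flat = pixels.flatten().copy()
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite lowest bit
        return flat.reshape(pixels.shape)

    def read_tag(pixels: np.ndarray, n_bytes: int) -> bytes:
        bits = pixels.flatten()[: n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    marked = embed_tag(image, b"AI:VEO3")
    assert read_tag(marked, 7) == b"AI:VEO3"

A real scheme must also survive cropping, compression and re-encoding, which is exactly what makes watermarks like SynthID hard to build and easy for naive approaches to lose.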
Yahoo
31-07-2025
- Business
- Yahoo
Reality Defender Launches Public API and Free Tier to Bring Enterprise-Grade Deepfake Detection to Every Developer
Leading AI detection platform enables developers to implement deepfake detection with just two lines of code.

NEW YORK, July 31, 2025 /PRNewswire/ -- Reality Defender, the RSA Innovation Award-winning deepfake detection platform, today announced the launch of its public developer API and SDK, along with a free tier offering 50 detections per month. The API enables any developer to integrate enterprise-grade deepfake detection into their applications with just two lines of code, marking a significant step toward natively building trust infrastructure into critical applications.

The API leverages Reality Defender's multi-model approach and introduces a context-aware detection model that looks beyond faces to identify deepfake images using proprietary, cutting-edge techniques. Currently supporting audio and image detection, the platform will expand to other modalities, including video, in the coming months.

"By opening up access to our detection platform, we hope to help build a broader ecosystem of trust and safety, combating deepfake fraud at scale," said Alex Lisle, CTO of Reality Defender. "Whether for fraud detection, identity verification, content moderation, or other purposes, developers can now build our enterprise-grade detection into their platforms."

The free tier is designed for developers and production-ready applications across multiple sectors, including:
- OSINT and media analysis tools
- Trust and safety platforms combating AI-powered fraud
- Financial services preventing sophisticated social engineering attacks
- Brand protection systems identifying synthetic content before it goes viral
- Legal and e-discovery platforms ensuring evidence integrity

Reality Defender's approach enables a distributed defense network against deepfake frauds, which are projected to cost businesses billions by 2027, according to Deloitte. The urgency for these defenses is underscored by recent successful deepfake impersonations of senior government officials and FBI warnings about AI-generated media being used in sophisticated fraud campaigns targeting those officials. By launching this API, Reality Defender aims to make detecting deepfakes as routine as filtering spam.

"We live in an AI-first world, and fraud has never been so prevalent or sophisticated," said Ben Colman, Co-Founder and CEO of Reality Defender. "Just as antivirus and spam detection became essential decades ago, today's threats demand deepfake detection to become a foundational layer of trust. Now, developers can build it in from their first line of code."

The API launch follows the release of Reality Defender's Zoom integration and a fundraising round led by Illuminate Financial, with participation from Booz Allen Ventures, IBM Ventures, the Jefferies Family Office, and Accenture, as well as original Series A lead investor DCVC and past investors The Partnership Fund for New York City and Y Combinator.

The API and free tier are available immediately. Developers can access documentation and SDKs and begin their integration today.

About Reality Defender

Reality Defender is an award-winning cybersecurity company helping enterprises and governments detect deepfakes and AI-generated media. Utilizing a patented multi-model approach, Reality Defender is robust against the bleeding edge of generative platforms producing video, audio, imagery, and text media.
Reality Defender's API-first deepfake detection platform empowers teams and developers alike to identify fraud, disinformation campaigns, and harmful deepfakes in real time.

CONTACT: Scott Steinhardt, scott@ +17188645744

SOURCE Reality Defender
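[Editor's note: the release doesn't reproduce the two lines of code themselves. As a rough sketch of what calling such a detection API could look like (the endpoint URL, headers, and response keys below are assumptions for illustration, not Reality Defender's documented API):]

    # Hypothetical sketch of a deepfake-detection REST call; endpoint and
    # response shape are invented, not documented behavior.
    import requests

    API_URL = "https://api.example.com/v1/detect"  # placeholder endpoint

    def detect(path: str, api_key: str) -> dict:
        """Upload a media file and return the detection result."""
        with open(path, "rb") as f:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {api_key}"},
                files={"file": f},
            )
        resp.raise_for_status()
        return resp.json()  # e.g. {"verdict": "manipulated", "score": 0.97}

    print(detect("suspicious_voicemail.wav", api_key="YOUR_KEY"))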


Fast Company
31-07-2025
- Business
- Fast Company
Exclusive: Reality Defender expands deepfake detection access to independent developers
New York-based cybersecurity company Reality Defender offers one of the top deepfake detection platforms for large enterprises. Now, the company is extending access to its platform to individual developers and small teams via an API, which includes a free tier offering 50 detections per month.

With the API, developers can integrate commercial-grade, real-time deepfake detection into their sites or applications using just two lines of code. This functionality can support use cases such as fraud detection, identity verification, and content moderation, among others.

The Reality Defender platform features a suite of custom AI models, each designed to detect different types of deepfakes in various ways. These models are trained on extensive datasets of known deepfake images and audio made using many different types of generative tools.

'What we're doing now is saying you don't need to be a big bank, you don't need to have a bunch of developers,' Reality Defender cofounder and CEO Ben Colman tells Fast Company. 'Anyone that's building a social media platform, a video conferencing solution, a dating platform, professional networking, brand protection—all of them can now have deepfake and generative AI detection.'

The new Deepfake Detection API currently supports audio and image detection, but the company plans to expand coverage to additional modalities in the coming months. The detection system can identify visual deepfakes based not only on faces but also on other image features and the broader context in which the media appears.

Deepfakes are a form of synthetic media created using artificial intelligence to produce convincing video, image, audio, or text representations of events that never occurred. These can be used to put sham words in a public figure's mouth or to trick someone into sending money by mimicking a relative's voice. Global losses from deepfake-enabled fraud surpassed $200 million in the first quarter of 2025, according to a report by AI voice generation company Resemble AI. The most damaging uses of deepfakes include nonconsensual explicit content (such as revenge porn), scams and fraud, political manipulation, and misinformation.

As generative AI tools advance, deepfakes are becoming increasingly difficult to detect. An unidentified imposter recently used a deepfake of Secretary of State Marco Rubio's voice to place calls to at least five senior government officials.

Colman says that as generative AI tools become more widespread and deepfakes more common, both consumers and businesses will likely start viewing protection against fake content much like they do protection against computer viruses or spam. The key difference, he adds, is that the tools required to create deepfakes are far more accessible than those needed to produce viruses or spam. 'There's thousands of tools that are free, and there's no regulation yet,' Colman says.

In other words, we're likely just seeing the beginning of the deepfake era. 'It just gets worse from there for companies, consumers, countries, elections,' Colman says. 'The risks are endless.'
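[Editor's note: the article doesn't detail how the models' outputs are combined. As a toy illustration of the general idea behind a multi-model ensemble (not Reality Defender's actual models, weights, or thresholds; every name and number below is invented):]

    # Toy multi-model ensemble: several detectors each score a piece of media,
    # and a weighted average produces the final verdict.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Detector:
        name: str
        weight: float
        score: Callable[[bytes], float]  # probability the media is synthetic

    def ensemble_verdict(media: bytes, detectors: List[Detector],
                         threshold: float = 0.5) -> Tuple[float, str]:
        total = sum(d.weight for d in detectors)
        combined = sum(d.weight * d.score(media) for d in detectors) / total
        return combined, "manipulated" if combined >= threshold else "likely authentic"

    detectors = [
        Detector("face_artifacts", 0.5, lambda m: 0.9),   # stand-in scoring functions
        Detector("frequency_domain", 0.3, lambda m: 0.7),
        Detector("context_model", 0.2, lambda m: 0.4),
    ]
    print(ensemble_verdict(b"...media bytes...", detectors))  # ~ (0.74, 'manipulated')

One practical appeal of the ensemble design: when a new generator defeats one detector, the others still contribute signal, and a new specialized model can be added without retraining the rest.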
Yahoo
24-07-2025
- Business
- Yahoo
Reality Defender Names Alex Lisle as Chief Technology Officer to Scale Deepfake Detection Globally
Cybersecurity Veteran and Innovator Joins Leading Deepfake Detection Platform

NEW YORK, July 24, 2025 /PRNewswire/ -- Reality Defender, an award-winning deepfake detection platform, today announced Alex Lisle as Chief Technology Officer. Lisle will play a crucial role in driving Reality Defender's commitment to delivering cutting-edge deepfake detection technology to developers and enterprises, as the company continues to scale new and existing offerings for leading financial institutions, government organizations, and more.

As CTO, Lisle will focus on strengthening Reality Defender's core technology and ensuring that the products built are both powerful and usable. This includes shaping a roadmap to make Reality Defender's technology accessible to a broader audience, expanding the company's impact beyond enterprise deployments.

"The stakes of digital deception grow higher every day," said Ben Colman, Co-Founder and CEO of Reality Defender. "Alex brings exactly the experience and vision we need to scale our mission globally. He's a builder who thinks in systems and scale, and brings the rare combination of strategic vision and hands-on execution that a mission like ours demands."

With more than 25 years of experience in cybersecurity and software innovation, Lisle brings deep technical expertise and leadership to the role. Most recently, he led incubation engineering at SecurityScorecard, where he focused on developing emerging cybersecurity technologies. Before that, he served as CTO at Hubble Technology, driving innovation in agentless asset visibility, and at Kryptowire, where he advanced mobile app and IoT security solutions. Lisle has consistently taken complex security technology and made it accessible at scale, building robust platforms that can handle enterprise-scale deployments. He holds several patents in cybersecurity, has helped pioneer commercial security tools, architected SaaS platforms that transformed how organizations approach threat detection, and built systems that bring enterprise-grade protection to everyone.

"As deepfakes evolve from curiosity to weapon, we're not just building detection technology; we're building the trust infrastructure for an AI-first world," said Lisle. "I immediately aligned with Reality Defender's mission of making complex security technology universally accessible, and wholeheartedly share the team's excitement to expand it."

About Reality Defender

Reality Defender is an award-winning cybersecurity company helping enterprises and governments detect deepfakes and AI-generated media. Utilizing a patented multi-model approach, Reality Defender is robust against the bleeding edge of generative platforms producing video, audio, imagery, and text media. Reality Defender's API-first deepfake detection platform empowers teams and developers alike to identify fraud, disinformation campaigns, and harmful deepfakes in real time.

CONTACT: Scott Steinhardt, scott@ +17188645744

SOURCE Reality Defender


Indian Express
20-06-2025
- Entertainment
- Indian Express
Can you trust what you see? How AI videos are taking over your social media
A few days ago, a video claiming to show a lion approaching a man asleep on the streets of Gujarat, sniffing him and walking away, took social media by storm. It looked like CCTV footage. The clip was dramatic, surreal, and completely fake. It was made using Artificial Intelligence (AI), but that didn't stop it from going viral. The video was even picked up by some news outlets and reported as a real incident, without any verification. It originated from a YouTube channel, The world of beasts, which inconspicuously mentioned 'AI-assisted designs' in its bio.

In another viral clip, a kangaroo, allegedly an emotional support animal, was seen attempting to board a flight with its human. Again, viewers were fascinated, and many believed the clip to be real. The video first appeared on the Instagram account 'Infinite Unreality,' which openly brands itself as 'Your daily dose of unreality.'

The line between fiction and reality, now more than ever, isn't always obvious to casual users. From giant anacondas swimming freely through rivers to a cheetah saving a woman from danger, AI-generated videos are flooding platforms, often blurring the boundary between the believable and the impossible. With AI tools becoming more advanced and accessible, these creations are only growing in number and sophistication.

To understand just how widespread the problem of AI-generated videos is, and why it matters, The Indian Express spoke to experts working at the intersection of technology, media, and misinformation.

'Not just the last year, not just the last month, even in the last couple of weeks, I've seen the volume of such videos increase,' said Ben Colman, CEO of deepfake detection firm Reality Defender. He gave a recent example: a 30-second commercial by betting platform Kalshi that aired a couple of weeks ago, during Game 3 of the 2025 NBA Finals. The video was made using Google's new AI video tool, Veo 3. 'It's blown past the uncanny valley, meaning it's infinitely more believable, and more videos like this are being posted to social platforms today compared to the day prior and so on,' Colman said.

Sam Gregory, executive director of WITNESS, a non-profit that trains activists in using tech for human rights, said, 'The quantity and quality of synthetic audio have rapidly increased over the past year, and now video is catching up. New tools like Veo generate photorealistic content that follows physical laws, matches visual styles like interviews or news broadcasts, and syncs with controllable audio prompts.'

The reason platforms like Instagram, Facebook, TikTok, and YouTube push AI-generated videos, beyond technical novelty, is not very complex: such videos grab user attention, something all platforms are desperate for. Colman said, 'These videos make the user do a double-take. Negative reactions on social media beget more engagement and longer time on site, which translates to more ads consumed.'

'Improvements in fidelity, motion, and audio have made it easier to create realistic memetic content. People are participating in meme culture using AI like never before,' said Gregory.

According to Ami Kumar, founder of Social & Media Matters, 'The amplification is extremely high. Unfortunately, platform algorithms prioritise quantity over quality, promoting videos that generate engagement regardless of their accuracy or authenticity.'

Gregory, however, said that demand plays a role. 'Once you start watching AI content, your algorithm feeds you more. "AI slop" is heavily monetised,' he said.

'Our own PhDs have failed to distinguish real photos or videos from deepfakes in internal tests,' Colman admitted.

Are the big platforms prepared to put labels and checks on AI-generated content? Not yet. Colman said most services rely on 'less-than-bare-minimum provenance watermark checks,' which many generators ignore or can spoof. Gregory warned that 'research increasingly shows the average person cannot distinguish between synthetic and real audio, and now, the same is becoming true for video.'

When it comes to detection, Gregory pointed to an emerging open standard, C2PA (Coalition for Content Provenance and Authenticity), that could track the origins of images, audio and video, but it is 'not yet adopted across all platforms.' Meta, he noted, has already shifted from policing the use of AI to policing only content deemed 'deceptive and harmful.'

Talking about AI-generated video detection, Kumar said, 'The gap is widening. Low-quality fakes are still detectable, but the high-end ones are nearly impossible to catch without advanced AI systems like the one we're building at Contrails.' However, he is cautiously optimistic that the regulatory tide, especially in Europe and the US, will force platforms to label AI output. 'I see the scenario improving in the next couple of years, but sadly loads of damage will be done by then,' he said.

A good question to ask is, 'Who is making all these clips?' And the answer is, 'Everyone.' 'My kids know how to create AI-generated videos, and the same tools are used by hobbyists, agencies, and state actors,' Colman said. Gregory agreed. 'We are all creators now,' he said. 'AI influencers, too, are a thing. Every new model spawns fresh personalities,' he added, noting a growing trend of commercial actors producing AI slop: cheap, fantastical content designed to monetise attention.

Kumar estimated that while 90 per cent of such content is made for fun, the remaining 10 per cent is causing real-world harm through financial, medical, or political misinformation. A case in point is United Kingdom-based activist Tommy Robinson's viral migrant-landing video, which was falsified.

Colman said AI is a creative aid, not a replacement, and insisted that intentional deception should be clearly separated from artistic expression. 'It becomes manipulation when people's emotions or beliefs are deliberately exploited,' he said. Gregory pointed out one of the challenges: satire and parody can easily be misinterpreted when stripped of context. Kumar had a pragmatic stance: 'Intent and impact matter most. If either is negative, malicious, or criminal, it's manipulation.'

The stakes leap when synthetic videos enter conflict zones and elections. Gregory recounted how AI clips have misrepresented confrontations between protesters and US troops in Los Angeles. 'One fake National Guard video racked up hundreds of thousands of views,' he said. Kumar said deepfakes have become routine in wars from Ukraine to Gaza and in election cycles from India to the US.

Colman called for forward-looking laws: 'We need proactive legislation mandating detection or prevention of AI content at the point of upload. Otherwise, we're only penalising yesterday's problems while today's spiral out of control.' Gregory advocated for tools that reveal a clip's full 'recipe' across platforms, while warning of a 'detection-equity problem': current tools often fail to catch AI content in non-English languages or compressed formats. Kumar demanded 'strict laws and heavy penalties for platforms and individuals distributing AI-generated misinformation.'

'If we lose confidence in the evidence of our eyes and ears, we will distrust everything,' Gregory warned. 'Real, critical content will become just another drop in a flood of AI slop. And this scepticism can be weaponised to discredit real journalism, real documentation, and real harm.'

Synthetic content is, clearly, here to stay. Whether it becomes a tool for creativity or a weapon of mass deception will depend on the speed at which platforms, lawmakers and technologists can build, and adopt, defences that keep the signal from being drowned out by the deepfake noise.
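[Editor's note: to make the provenance idea concrete, C2PA binds a cryptographically signed 'manifest' describing a file's origin and edit history to the file itself. The real standard embeds COSE-signed manifests with X.509 certificate chains inside the media; the toy Python sketch below substitutes a shared-key HMAC over a file hash purely to illustrate 'a signed, verifiable statement about a file', and every name in it is invented.]

    # Toy stand-in for a C2PA-style provenance manifest. NOT the real C2PA
    # wire format; it only illustrates a signed origin claim that breaks if
    # the file or the claim is altered.
    import hashlib, hmac, json

    SIGNING_KEY = b"demo-key"  # stand-in for a creator's signing key

    def file_sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def make_manifest(path: str, claim: str) -> dict:
        body = {"file_sha256": file_sha256(path), "claim": claim}
        payload = json.dumps(body, sort_keys=True).encode()
        return {"body": body,
                "signature": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

    def verify_manifest(path: str, manifest: dict) -> bool:
        payload = json.dumps(manifest["body"], sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return (hmac.compare_digest(expected, manifest["signature"])
                and manifest["body"]["file_sha256"] == file_sha256(path))

    # m = make_manifest("clip.mp4", "captured on camera, no AI generation")
    # verify_manifest("clip.mp4", m)  -> True only while the file is unchanged

This also shows why adoption matters, as Gregory notes: a manifest proves nothing unless platforms preserve it through uploads and viewers have tools that check it.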