
Pramana and ARUP Laboratories Partner to Digitize Pathology Slides and Develop AI-Powered Hematopathology Algorithms for Deployment via Edge AI
Pramana, Inc., an AI-enabled health tech company modernizing the pathology sector, and ARUP Laboratories, the largest nonprofit clinical and academic reference laboratory in the United States, announced a collaboration to digitize pathology slides and develop AI-powered algorithms to improve the assessment of bone marrow biopsies and address other key diagnostic challenges in hematopathology. The partnership combines ARUP's hematopathology expertise with Pramana's cutting-edge SpectralHT autonomous whole-slide imaging scanners to advance diagnostic precision and efficiency.
'Hematopathology involves highly complex and difficult-to-scan specimens, where traditional methods often fall short in delivering consistent and reproducible results,' said David Ng, MD, medical director of Hematologic Flow Cytometry and Applied Artificial Intelligence at ARUP Laboratories. 'By combining our deep clinical expertise with Pramana's large-scale digitization and AI-driven analysis, we have the ability to develop and clinically validate new AI algorithms that will improve diagnostic accuracy and workflow efficiency. Additionally, this collaboration lays the foundation for the broad distribution of these AI tools, ensuring greater accessibility and impact across the pathology community.'
The AI model development process will be led by ARUP, drawing on its expert hematopathologists and annotation tools to train, refine, and validate the algorithms using real-world clinical cases and pathology slides obtained for this purpose. These algorithms will be designed and tested to run efficiently on Pramana's SpectralHT scanners, demonstrating the viability of in-line edge computing for real-time AI-powered diagnostics. ARUP and Pramana will also explore commercialization strategies to enable the seamless, regulatory-compliant deployment and distribution of these algorithms, driving broader adoption in clinical diagnostics.
'This collaboration with ARUP Laboratories showcases how AI-driven pathology can redefine industry standards, making diagnostics more scalable, efficient, and interoperable,' said Prasanth Perugupalli, chief product officer at Pramana. 'By leveraging edge computing, we're accelerating the adoption of advanced diagnostic algorithms, including those for complex hematopathology cases. This approach enhances precision, removes scalability barriers, and seamlessly integrates AI-driven insights into lab workflows.'
Pramana's SpectralHT scanners feature in-line Edge AI computing, allowing real-time quality control and automated image processing. This streamlined workflow reduces the burden on lab personnel while ensuring high-quality data output. The SpectralHT scanners' volumetric imaging capabilities improve image quality and detail while enhancing scanning efficiency, even for the most challenging diagnostic slides, such as those used in microbiology, hematopathology, parasitology, and cytology.
The ongoing partnership between ARUP Laboratories and Pramana underscores a broader commitment to digitization and AI-driven pathology advancements, signaling the potential for deeper integration and expanded initiatives, including clinical applications.
About Pramana, Inc.
Pramana, Inc., an AI-powered health tech company modernizing the pathology sector, enables seamless digital adoption by pathology labs and medical centers. Built upon extensive industry experience and patented technological innovation, Pramana is a gateway for pathologists and physicians to utilize AI-enabled decision support. The company is headquartered in Cambridge, Mass., and backed by Matrix Capital, a global leader in customized investment solutions, and NTTVC, a leading firm backing diverse founders within the technology spectrum. For more information, visit www.pramana.ai.
About ARUP Laboratories
Founded in 1984, ARUP Laboratories is a leading national reference laboratory and a nonprofit enterprise of the University of Utah's Spencer Fox Eccles School of Medicine and its Department of Pathology. ARUP offers more than 3,000 tests and test combinations, ranging from routine screening tests to esoteric molecular and genetic assays. In addition, ARUP is a worldwide leader in innovative laboratory research and development, led by the efforts of the ARUP Institute for Research and Innovation in Diagnostic and Precision Medicine™. ARUP is ISO 15189 and CAP accredited. For more information, visit www.aruplab.com.
Pramana:
Andrea Sampson, Sampson PR Group
[email protected]

ARUP Laboratories:
Bonnie Stray
SOURCE: Pramana

Related Articles


Business Wire
24-07-2025
Pramana Receives Health Canada Authorization for Digital Pathology Scanners
CAMBRIDGE, Mass.--(BUSINESS WIRE)-- Pramana, an AI-enabled health tech company modernizing the pathology sector, today announced it has received Health Canada Medical Device Licenses for its flagship digital pathology scanners, the SpectralM and SpectralHT Cubiq systems. The approval authorizes Pramana to import, market, and sell its devices in Canada, expanding access to its advanced imaging and workflow solutions. Health Canada authorization is required to commercialize Class II medical devices and affirms the product's safety, effectiveness, and quality. The approval opens the door for Pramana to support Canadian hospitals, pathology labs, and research institutions, helping modernize diagnostic workflows with scalable, AI-powered digital pathology tools. This recognition marks a critical step in Pramana's international expansion, building on its growing regulatory footprint.

'Receiving Health Canada authorization is a major milestone in our commercialization strategy,' said Prasanth Perugupalli, Chief Product Officer at Pramana. 'It reflects the strength of our product development, quality, and regulatory readiness, and it opens the door to expanding access to our technology in a market known for strong clinical and academic institutions.'

Pramana's scanners are designed to digitize a wide range of pathology slides, producing high-resolution whole-slide images with automated quality control and AI-powered decision support. The platform supports both FFPE tissue and Liquid Based Cytology (LBC) samples prepared using methods such as the ThinPrep® Pap test (Hologic) and BD SurePath™ (Becton Dickinson). Unlike most digital pathology systems focused solely on anatomic pathology, Pramana expands digital workflows to include hematopathology, microbiology, and cytology, delivering flexibility across all major slide types.

'Our research at the University of Toronto highlights the need for adaptable platforms that can manage these technical demands while still supporting routine histology and cytology workflows,' said Dr. Carlo Hojilla, Consultant Pathologist at the University of Toronto. 'Pramana's technology meets that standard, and its Health Canada authorization reflects both its clinical utility and the rigorous quality required for widespread adoption.'

Prior to receiving Health Canada authorization, Pramana secured Medical Device Single Audit Program (MDSAP) certification, a requirement that validated its quality management system and streamlined regulatory access in Canada, the United States, Brazil, Australia, and Japan, highlighting Pramana's commitment to modernizing digital pathology worldwide.

To discover how Pramana's whole-slide imaging solution can help healthcare organizations across Canada, visit

About Pramana, Inc.

Pramana is a health tech company transforming digital pathology with AI-powered imaging solutions that support seamless adoption across labs, health systems, and medical centers. Pramana's Spectral scanners deliver industry-leading image quality and unprecedented accuracy. Built-in AI algorithms and automated quality control streamline workflows, increase efficiency, and capture previously undetectable tissue features, empowering pathologists with the tools needed to improve clinical diagnostics and research. The company is headquartered in Cambridge, Mass. For more information, visit


Entrepreneur
23-07-2025
AI Deepfakes Are Stealing Millions Every Year — Who's Going to Stop Them?
This story appears in the July 2025 issue of Entrepreneur. Subscribe »

Your CFO is on the video call asking you to transfer $25 million. He gives you all the bank info. Pretty routine. You got it. But, What the — ? It wasn't the CFO? How can that be? You saw him with your own eyes and heard that undeniable voice you always half-listen for. Even the other colleagues on the screen weren't really them. And yes, you already made the transaction.

Ring a bell? That's because it actually happened to an employee at the global engineering firm Arup last year, costing the firm $25 million. In other incidents, folks were scammed when "Elon Musk" and "Goldman Sachs executives" took to social media enthusing about great investment opportunities. And an agency leader at WPP, the largest advertising company in the world at the time, was almost tricked into giving money during a Teams meeting with a deepfake they thought was CEO Mark Read.

Experts have been warning for years about deepfake AI technology evolving to a dangerous point, and now it's happening. Used maliciously, these clones are infesting the culture from Hollywood to the White House. And although most businesses keep mum about deepfake attacks to prevent client concern, insiders say they're occurring with increasing frequency. Deloitte predicts fraud losses from such incidents will hit $40 billion in the United States by 2027.

Related: The Advancement Of Artificial Intelligence Is Inevitable. Here's How We Should Get Ready For It.

Obviously, we have a problem — and entrepreneurs love nothing more than finding something to solve. But this is no ordinary problem. You can't sit and study it, because it moves as fast as you can, or even faster, always showing up in a new configuration in unexpected places. The U.S. government has started to pass regulations on deepfakes, and the AI community is developing its own guardrails, including digital signatures and watermarks to identify AI-generated content. But scammers are not exactly known to stop at such roadblocks.

That's why many people have pinned their hopes on "deepfake detection" — an emerging field that holds great promise. Ideally, these tools can suss out whether something in the digital world (a voice, video, image, or piece of text) was generated by AI, and give everyone the power to protect themselves. But there is a hitch: In some ways, the tools just accelerate the problem. That's because every time a new detector comes out, bad actors can potentially learn from it — using the detector to train their own nefarious tools, and making deepfakes even harder to spot.

So now the question becomes: Who is up for this challenge? This endless cat-and-mouse game, with impossibly high stakes? If anyone can lead the way, startups may have an advantage — because compared to big firms, they can focus exclusively on the problem and iterate faster, says Ankita Mittal, senior consultant of research at The Insight Partners, which has released a report on this new market and predicts explosive growth. Here's how a few of these founders are trying to stay ahead — and building an industry from the ground up to keep us all safe.

Related: 'We Were Sucked In': How to Protect Yourself from Deepfake Phone Scams.

If deepfakes had an origin story, it might sound like this: Until the 1830s, information was physical. You could either tell someone something in person, or write it down on paper and send it, but that was it.
Then the commercial telegraph arrived — and for the first time in human history, information could be zapped over long distances instantly. This revolutionized the world. But wire transfer fraud and other scams soon followed, often sent by fake versions of real people. Western Union was one of the first telegraph companies — so it is perhaps appropriate, or at least ironic, that on the 18th floor of the old Western Union Building in lower Manhattan, you can find one of the earliest startups combating deepfakes. It's called Reality Defender, and the guys who founded it, including a former Goldman Sachs cybersecurity nut named Ben Colman, launched in early 2021, even before ChatGPT entered the scene. (The company originally set out to detect AI avatars, which he admits is "not as sexy.")

Colman, who is CEO, feels confident that this battle can be won. He claims that his platform is 99% accurate in detecting real-time voice and video deepfakes. Most clients are banks and government agencies, though he won't name any (cybersecurity types are tight-lipped like that). He initially targeted those industries because, he says, deepfakes pose a particularly acute risk to them — so they're "willing to do things before they're fully proven." Reality Defender also works with firms like Accenture, IBM Ventures, and Booz Allen Ventures — "all partners, customers, or investors, and we power some of their own forensics tools."

So that's one kind of entrepreneur involved in this race. On Zoom, a few days after visiting Colman, I meet another: He is Hany Farid, a professor at the University of California, Berkeley, and cofounder of a detection startup called GetReal Security. Its client list, according to the CEO, includes John Deere and Visa. Farid is considered an OG of digital image forensics (he was part of a team that developed PhotoDNA to help fight online child sexual abuse material, for example). And to give me the full-on sense of the risk involved, he pulls an eerie sleight-of-tech: As he talks to me on Zoom, he is replaced by a new person — an Asian punk who looks 40 years younger, but who continues to speak with Farid's voice. It's a deepfake in real time.

Related: Machines Are Surpassing Humans in Intelligence. What We Do Next Will Define the Future of Humanity, Says This Legendary Tech Leader.

Truth be told, Farid wasn't originally sure if deepfake detection was a good business. "I was a little nervous that we wouldn't be able to build something that actually worked," he says. The thing is, deepfakes aren't just one thing. They are produced in myriad ways, and their creators are always evolving and learning. One method, for example, involves using what's called a "generative adversarial network" — in short, someone builds a deepfake generator, as well as a deepfake detector, and the two systems compete against each other so that the generator becomes smarter (a minimal version of this loop is sketched below). A newer method makes better deepfakes by training a model to start with something called "noise" (imagine the visual version of static) and then sculpt the pixels into an image according to a text prompt.

Because deepfakes are so sophisticated, neither Reality Defender nor GetReal can ever definitively say that something is "real" or "fake." Instead, they come up with probabilities and descriptions like strong, medium, weak, high, low, and most likely — which critics say can be confusing, but supporters argue can put clients on alert to ask more security questions.
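For readers who want a concrete picture of the "generative adversarial network" idea described above, here is a minimal sketch in Python (PyTorch). It is illustrative only — not code from Reality Defender, GetReal, or any company mentioned — and all names, network sizes, and dimensions are invented for the example.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes, chosen for illustration

# Generator: turns random noise into a fake "image" (a vector of pixels).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Detector (discriminator): outputs a single real-vs-fake logit.
detector = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One round of the adversarial game on a batch of real images."""
    batch = real_images.size(0)
    fakes = generator(torch.randn(batch, latent_dim))

    # 1) Train the detector to label real images 1 and generated fakes 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(detector(real_images), torch.ones(batch, 1))
              + loss_fn(detector(fakes.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the detector call its fakes real --
    #    each side's progress forces the other to improve.
    g_opt.zero_grad()
    g_loss = loss_fn(detector(fakes), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

The key point is the loop: step 1 makes the detector better at catching fakes, and step 2 uses that very detector as a training signal to make the fakes harder to catch — the same dynamic the article describes when scammers learn from publicly released detection tools.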
To keep up with the scammers, both companies run at an insanely fast pace — putting out updates every few weeks. Colman spends a lot of energy recruiting engineers and researchers, who make up 80% of his team. Lately, he's been pulling hires straight out of Ph.D. programs. He also has them do ongoing research to keep the company one step ahead. Both Reality Defender and GetReal maintain pipelines coursing with tech that's deployed, in development, and ready to sunset. To do that, they're organized around different teams that go back and forth to continually test their models. Farid, for example, has a "red team" that attacks and a "blue team" that defends. Describing working with his head of research on a new product, he says, "We have this very rapid cycle where she breaks, I fix, she breaks — and then you see the fragility of the system. You do that not once, but you do it 20 times. And now you're onto something."

Additionally, they layer in non-AI sleuthing techniques to make their tools more accurate and harder to dodge. GetReal, for example, uses AI to search images and videos for what are known as "artifacts" — telltale flaws indicating that they were made by generative AI — as well as other digital forensic methods to analyze inconsistent lighting and image compression, check whether speech is properly synched to someone's moving lips, and look for the kind of details that are hard to fake (like, say, whether video of a CEO contains the acoustic reverberations that are specific to his office).

"The endgame of my world is not elimination of threats; it's mitigation of threats," Farid says. "I can defeat almost all of our systems. But it's not easy. The average knucklehead on the internet, they're going to have trouble removing an artifact even if I tell 'em it's there. A sophisticated actor, sure. They'll figure it out. But to remove all 20 of the artifacts? At least I'm gonna slow you down."

Related: Deepfake Fraud Is Becoming a Business Risk You Can't Ignore. Here's the Surprising Solution That Puts You Ahead of Threats.

All of these strategies will fail if they don't have one thing: the right data. AI, as they say, is only as good as the data it's trained on. And that's a huge hurdle for detection startups. Not only do you have to find fakes made by all the different models and customized by various AI companies (detecting one won't necessarily work on another), but you also have to compare them against images, videos, and audio of real people, places, and things. Sure, reality is all around us, but so is AI, including in our phone cameras. "Historically, detectors don't work very well once you go to real-world data," says Phil Swatton at The Alan Turing Institute, the United Kingdom's national institute for AI and data science. And high-quality, labeled datasets for deepfake detection remain scarce, notes Mittal, the senior consultant from The Insight Partners.

Colman has tackled this problem, in part, by using older datasets to capture the "real" side — say, from 2018, before generative AI. For the fake data, he mostly generates it in house. He has also focused on developing partnerships with the companies whose tools are used to make deepfakes — because, of course, not all of them are meant to be harmful. So far, his partners include ElevenLabs (which, for example, translates popular podcaster and neuroscientist Andrew Huberman's voice into Hindi and Spanish so that he can reach wider audiences) along with PlayAI and Respeecher.
These companies have mountains of real-world data — and they like sharing it, because they look good by showing that they're building guardrails and allowing Reality Defender to detect their tools. In addition, this grants Reality Defender early access to the partners' new models, which gives it a jump start in updating its platform.

Colman's team has also gotten creative. At one point, to gather fresh voice data, they partnered with a rideshare company — offering its drivers extra income for recording 60 seconds of audio when they weren't busy. "It didn't work," Colman admits. "A ridesharing car is not a good place to record crystal-clear audio. But it gave us an understanding of artificial sounds that don't indicate fraud. It also helped us develop some novel approaches to remove background noise, because one trick that a fraudster will do is use an AI-generated voice, but then try to create all kinds of noise, so that maybe it won't be as detectable."

Startups like this must also grapple with another real-world problem: How do they keep their software from getting out into the public, where deepfakers can learn from it? To start, Reality Defender's clients set a high bar for who within their organizations can access the software. But the company has also started to create some novel hardware. To show me, Colman holds up a laptop. "We're now able to run all of our magic locally, without any connection to the cloud, on this," he says. The loaded laptop, only available to high-touch clients, "helps protect our IP, so people don't use it to try to prove they can bypass it."

Related: Nearly Half of Americans Think They Could Be Duped By AI. Here's What They're Worried About.

Some founders are taking a completely different path: Instead of trying to detect fake people, they're working to authenticate real ones. That's Joshua McKenty's plan. He's a serial entrepreneur who cofounded OpenStack and worked at NASA as chief cloud architect, and this March he launched a company called Polyguard. "We said, 'Look, we're not going to focus on detection, because it's only accelerating the arms race. We're going to focus on authenticity,'" he explains. "I can't say if something is fake, but I can tell you if it's real."

To execute that, McKenty built a platform to conduct a literal reality check on the person you're talking to by phone or video. Here's how it works: A company can use Polyguard's mobile app, or integrate it into its own app and call center. When they want to create a secure call or meeting, they use that system. To join, participants must prove their identities via the app on their mobile phone (where they're verified using documents like Real ID, e-passports, and face scanning). Polyguard says this is ideal for remote interviews, board meetings, or any other sensitive communication where identity is critical.

In some cases, McKenty's solution can be used with tools like Reality Defender. "Companies might say, 'We're so big, we need both,'" he explains. His team is only five or six people at this point (whereas Reality Defender and GetReal both have about 50 employees), but he says his clients already include recruiters who are interviewing candidates remotely only to discover that they're deepfakes, law firms wanting to protect attorney-client privilege, and wealth managers. He's also making the platform available to the public so people can establish secure lines with their attorney, accountant, or kid's teacher.
This line of thinking is appealing — and gaining approval from people who watch the industry. "I like the authentication approach; it's much more straightforward," says The Alan Turing Institute's Swatton. "It's focused not on detecting something going wrong, but certifying that it's going right." After all, even when detection probabilities sound good, any margin of error can be scary: A detector that catches 95% of fakes will still let a scam through 1 out of 20 times.

That error rate is what alarmed Christian Perry, another entrepreneur who's entered the deepfake race. He saw it in the early detectors for text, where students and workers were being accused of using AI when they weren't. Authorship deceit doesn't pose the level of threat that deepfakes do, but text detectors are considered part of the scam-fighting family. Perry and his cofounder Devan Leos launched a startup called Undetectable in 2023, which now has over 19 million users and a team of 76. It began by building a sophisticated text detector, but then pivoted into image detection, and is now close to launching audio and video detectors as well. "You can use a lot of the same kind of methodology and skill sets that you pick up in text detection," says Perry. "But deepfake detection is a much more complicated problem."

Related: Despite How the Media Portrays It, AI Is Not Really Intelligent. Here's Why.

Finally, instead of trying to prevent deepfakes, some entrepreneurs are seeing the opportunity in cleaning up their mess. Luke and Rebekah Arrigoni stumbled upon this niche accidentally, by trying to solve a different terrible problem — revenge porn. It started one night a few years ago, when the married couple were watching HBO's Euphoria. In the show, a character's nonconsensual intimate image was shared online. "I guess out of hubris," Luke says, "our immediate response was like, We could fix this."

At the time, the Arrigonis were both working on facial recognition technologies. So as a side project in 2022, they put together a system specifically designed to scour the web for revenge porn — then found some victims to test it with. They'd locate the images or videos, then send takedown notices to the websites' hosts. It worked. But valuable as this was, they could see it wasn't a viable business. Clients were just too hard to find.

Then, in 2023, another path appeared. As the actors' and writers' strikes broke out, with AI being a central issue, Luke checked in with former colleagues at major talent agencies. He'd previously worked at Creative Artists Agency as a data scientist, and he was now wondering if his revenge-porn tool might be useful for their clients — though in a different way. It could also be used to identify celebrity deepfakes — to find, for example, when an actor or singer is being cloned to promote someone else's product. Along with feeling out other talent reps like William Morris Endeavor, he went to law and entertainment management firms. They were interested. So in 2023, Luke quit consulting to work with Rebekah and a third cofounder, Hirak Chhatbar, on building out their side hustle, Loti.

"We saw the desire for a product that fit this little spot, and then we listened to key industry partners early on to build all of the features that people really wanted, like impersonation," Luke says. "Now it's one of our most preferred features. Even if they deliberately typo the celebrity's name or put a fake blue checkbox on the profile photo, we can detect all of those things."

Using Loti is simple.
A new client submits three real images and eight seconds of their voice; musicians also provide 15 seconds of singing a cappella. The Loti team puts that data into their system, and then scans the internet for that same face and voice. Some celebs, like Scarlett Johansson, Taylor Swift, and Brad Pitt, have been publicly targeted by deepfakes, and Loti is ready to handle that. But Luke says most of the need right now involves low-tech stuff like impersonation and false endorsements. A recently passed law called the Take It Down Act — which criminalizes the publication of nonconsensual intimate images (including deepfakes) and requires online platforms to remove them when reported — helps this process along: Now, it's much easier to get unauthorized content off the web.

Loti doesn't have to deal with probabilities. It doesn't have to constantly iterate or get huge datasets. It doesn't have to say "real" or "fake" (although it can). It just has to ask, "Is this you?"

"The thesis was that the deepfake problem would be solved with deepfake detectors. And our thesis is that it will be solved with face recognition," says Luke, who now has a team of around 50 and a consumer product coming out. "It's this idea of, How do I show up on the internet? What things are said of me, or how am I being portrayed? I think that's its own business, and I'm really excited to be at it."

Related: Why AI is Your New Best Friend... and Worst Enemy in the Battle Against Phishing Scams

Will it all pay off? All tech aside, do these anti-deepfake solutions make for strong businesses? Many of the startups in this space are early-stage and venture-backed, so it's not yet clear how sustainable or profitable they can be. They're also "heavily investing in research and development to stay ahead of rapidly evolving generative AI threats," says The Insight Partners' Mittal. That makes you wonder about the economics of running a business that will likely always have to do that.

Then again, the market for these startups' services is just beginning. Deepfakes will impact more than just banks, government intelligence, and celebrities — and as more industries awaken to that, they may want solutions fast. The question will be: Do these startups have first-mover advantage, or will they have just laid the expensive groundwork for newer competitors to run with?

Mittal, for her part, is optimistic. She sees significant untapped opportunities for growth that go beyond preventing scams — like, for example, helping professors flag AI-generated student essays, impersonated class attendance, or manipulated academic records. Many of the current anti-deepfake companies, she predicts, will get acquired by big tech and cybersecurity firms.

Whether or not that's Reality Defender's future, Colman believes that platforms like his will become integral to a larger guardrail ecosystem. He compares it to antivirus software: Decades ago, you had to buy an antivirus program and manually scan your files. Now, these scans are just built into your email platforms, running automatically. "We're following the exact same growth story," he says. "The only problem is the problem is moving even quicker."

No doubt, the need will become glaring at some point soon. Farid at GetReal imagines a nightmare like someone creating a fake earnings call for a Fortune 500 company that goes viral. If GetReal's CEO, Matthew Moynahan, is right, then 2026 will be the year that gets the flywheel spinning for all these deepfake-fighting businesses.
"There's two things that drive sales in a really aggressive way: a clear and present danger, and compliance and regulation," he says. "The market doesn't have either right now. Everybody's interested, but not everybody's troubled." That will likely change with increased regulations that push adoption, and with deepfakes popping up in places they shouldn't be. "Executives will connect the dots," Moynahan predicts. "And they'll start saying, 'This isn't funny anymore.'" Related: AI Cloning Hoax Can Copy Your Voice in 3 Seconds—and It's Emptying Bank Accounts. Here's How to Protect Yourself.
Yahoo
16-07-2025
How Generative AI's 'Deepfake Economy' Is Hobbling Small Businesses
Over the past few years, the potential uses of generative AI, both positive and negative, have been talked to death. However, there's one application of the technology that small business owners say is often overlooked: the deepfake economy. Several small business owners told Business Insider that since ChatGPT's debut three years ago, the deepfake economy has blown up. Now, scammers are using these deepfakes to pose as employees of a company, running cons that are wreaking havoc on brands' reputations and bottom lines.

An unnamed finance clerk at the engineering firm Arup told the outlet about a time he joined a video call with AI versions of his colleagues. One of these "colleagues," supposedly the company's chief financial officer, asked him to approve a series of overseas transfers worth more than $25 million. Believing that the request came from his boss, the finance clerk approved the transactions. Only after the money had been sent did he learn that the colleagues were actually deepfake recreations of his real coworkers.

The finance clerk isn't the only one being deceived by these impressionists. According to data from Chainabuse, TRM Labs' open-source fraud reporting platform, generative AI-enabled scams rose by 456% between May 2024 and April 2025, compared with the same period the year before. Another survey, from Nationwide Insurance, released in September, found that 12% of small business owners had faced at least one deepfake scam within the previous year. Small businesses, the survey said, are more likely to fall victim to these types of scams because they lack the cybersecurity infrastructure of larger companies.

Rob Duncan, vice president of strategy at Netcraft, told Business Insider that he isn't surprised at the increase in highly personalized attacks against small businesses. Generative AI has made it much easier for inexperienced scammers to pose as brands and launch these scams. As AI continues to improve, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," he said.

Many of the platforms used by small businesses, like Teams and Zoom, are getting better at detecting AI and weeding out accounts that don't have real people behind them. However, many experts worry that improved detection tools are making the AI problem worse. Beyond Identity CEO Jasson Casey told Business Insider that the data collected by platforms like Zoom and Teams is used not only to suss out deepfakes but also to train sophisticated AI models. This creates a vicious cycle that becomes "an arms race defenders cannot win."

Casey and Robin Pugh, the executive director of the nonprofit Intelligence for Good, say that small businesses can best protect themselves from deepfake scams by focusing on confirming identities rather than disproving AI use. They also warn that these generative AI-based scams will not be going away anytime soon. Nina Etemadi, cofounder of a Philadelphia-based small business named Cake Life Bake Shop, agrees, telling Business Insider, "Doing business online gets more necessary and high risk every year. AI is just part of that."