
Latest news with #RealityDefender

Inside NYNext groundbreaking AI event at New York Tech Week

New York Post

2 days ago



On Tuesday night as part of New York Tech Week, NYNext joined forces with Tech:NYC and PensarAI to host our first-ever event. The night celebrated the key players — from scrappy startups to giants such as Google and IBM — that are making big moves in artificial intelligence. Nearly 150 people took part in NY AI Demo Night. Founders and venture capitalists snacked on figs and tuna tartare and sipped rosé — as well as our new favorite non-alcoholic beverage, Töst — at the Domino Sugar Factory in Williamsburg. The factory has gotten a major facelift and now houses a number of startups as well as a sweeping view of Manhattan. Eight AI companies presented their newest ideas to the audience with the goal of getting people to download their apps and invest in their companies.

[Photo: Julie Samuels, who runs Tech:NYC — which plays a major role in hosting Tech Week — addressed attendees. Credit: Emmy Park]

'One of the most unique aspects of the NY tech scene is the ability to bring together and showcase tech heavyweights implementing AI at scale alongside startups in deep builder mode,' Caroline McKechnie, Director of Platform at Tech:NYC, told me. 'We saw a real need for an event that gives founders and engineers a window into what's being built across the city's AI landscape — all against the iconic skyline. The energy of having established players and emerging talent demo side by side is something you can only capture in a city like New York.'

Reality Defender, which detects deepfakes, showed just how effective it is in finding AI-generated images among a slew of photos. Founder Ben Colman told me it would have made the plot of HBO's 'Mountainhead' — a film based on the premise that deepfakes are destroying the world — completely null.

[Photo: More than 150 guests came to our New York Tech Week event. Tech Week has ballooned to more than 1,000 events this year. Credit: Emmy Park]

PromptLayer, which aims to empower laypeople to create their own apps with AI, demonstrated how seamless it is for anyone to prompt AI to build a product. Founder Jared Zoneraich said, 'The best AI builders, the best prompt engineers are not machine learning engineers … they're subject matter experts.'

Representatives from IBM presented their newest insights into AI. The company also made headlines this week with its newly unveiled watsonx AI Labs in NYC. 'This isn't your typical corporate lab. watsonx AI Labs is where the best AI developers gain access to world-class engineers and resources and build new businesses and applications that will reshape AI for the enterprise,' Ritika Gunnar, General Manager of Data & AI at IBM, told me. 'By anchoring this mission in New York City, we are investing in a diverse, world-class talent pool and a vibrant community whose innovations have long shaped the tech landscape.'

[Photo: NYNext co-hosted the evening with PensarAI, Two Trees, and Tech:NYC. Credit: Emmy Park]

Other presenters included Flora, an AI tool for creatives; a podcast and newsletter network powered by AI; Superblocks, an AI platform building software; Run Loop AI, which helps companies scale coding; and Google DeepMind.

This story is part of NYNext, an indispensable insider insight into the innovations, moonshots and political chess moves that matter most to NYC's power players (and those who aspire to be).

The event was just one piece of what has become a sprawling and celebratory week for anyone in technology.

[Photo: The event was hosted in Williamsburg, where the Domino Sugar Refinery has gotten a major facelift — and now houses dozens of tech startups. Credit: Emmy Park]

The idea for a tech week came from Andreessen Horowitz (a16z). The firm launched with a Tech Week in Los Angeles in 2022. In 2023, it expanded to San Francisco and New York City.

Since the first New York Tech Week in 2023, the seven-day conference has ballooned to more than 1,000 events with 60,000 RSVPs. This year, over half of the events focused on AI. 'The energy that is in this room, the startups that we're going to hear from, these are the ideas that are going to propel New York's economy for generations to come,' Tech:NYC CEO Julie Samuels told me. 'These are the ideas that are gonna change the way we all live, we all work, we all do business, we communicate. We are on the cusp of such an exciting time for New York, and tonight is just a little bit of a flavor of that.'

Send NYNext a tip: nynextlydia@

I scammed my bank. All it took was an AI voice generator and a phone call.

Yahoo

02-05-2025



I may be a tech reporter, but I am not tech savvy. Something breaks, I turn it off and back on, and then I give up. But even I was able to deepfake my own bank with relative ease. Generative AI has made it way easier to impersonate people's voices. For years, there have been deepfakes of politicians, celebrities, and the late pope made to sow disinformation on social media. Lately, hackers have been able to deepfake people like you and me. All they need is a few seconds of your voice, which they might find in video posts on Instagram or TikTok, and maybe some information like your phone or debit card number, which they might be able to find in data leaks on the dark web. In my case — for the purposes of this story — I downloaded the audio of a radio interview I sat for a few weeks ago, trained a voice generator on it after subscribing to a service for a few dollars, and then used a text-to-voice function to chat with my bank in a voice that sounded a bit robotic but eerily similar to my own. Over the course of a five-minute call, first with the automated system and then a human representative, my deepfake seemingly triggered little to no suspicion. It's a tactic scammers are increasingly adopting. They take advantage of cheap, widely available generative-AI tools to deepfake people and gain access to their bank accounts, or even open accounts in someone else's name. These deepfakes are not only getting easier to make but also getting harder to detect. Last year, a financial worker in Hong Kong mistakenly paid out $25 million to scammers after they deepfaked the company's chief financial officer and other staff members in a video call. That's one major oopsie, but huge paydays aren't necessarily the goal. The tech allows criminal organizations to imitate people at scale, automating deepfake voice calls they use to scam smaller amounts from tons of people. 
A report from Deloitte predicts that fraud losses in the US could reach $40 billion by 2027 as generative AI bolsters fraudsters, which would be a jump from $12.3 billion in 2023. In a recent Accenture survey of 600 cybersecurity executives at banks, 80% of respondents said they believed gen AI was ramping up hackers' abilities faster than banks could respond. These scammers can take gen-AI tools and target accounts at a massive scale. "They're the best engineers, the best product managers, the best researchers," says Ben Colman, the CEO of Reality Defender, a company that makes software for governments, financial institutions, and other businesses to detect the likelihood that content was generated by AI in real time. "If they can automate fraud, they will use every single tool." In addition to stealing your voice or image, they can use gen AI to falsify documents, either to steal an identity or make an entirely new, fake one to open accounts for funneling money. The scammers are playing a numbers game. Even when a financial institution blocks them, they can try another account or another service. By automating the attempts, "the attackers don't have to be right very often to do well," Colman says. And they don't care about going after only the richest people; scamming lots of people out of small amounts of money can be even more lucrative over time. According to the FBI's Internet Crime Complaint Center, the average online scam in 2024 came out to just under $20,000 across more than 250,000 complaints the FBI received from people of all ages (those over 60 filed the most complaints and saw the biggest losses, but even people under 20 lost a combined $22.5 million). "Everybody is equally a target," he says. 
Colman says some banks have tried to get ahead of the deepfake problem in the past few years, while others didn't see it as a pressing issue. Now, more and more are using software to protect their clients. A 2024 survey of business executives (who worked across industries, not just in banking) found that more than 10% had faced an attempted or successful deepfake fraud. More than half said that their employees had not been trained to identify or address such attacks. I reached out to several of the largest banks in the US, asking them what they're doing to detect and shut down deepfake fraud. Several did not respond. Citi declined to share any details of its fraud detection methods and technology. Darius Kingsley, the head of consumer banking practices at JPMorgan Chase, told me the bank sees "the challenges posed by rapidly evolving technologies that can be exploited by bad actors" and is "committed to staying ahead by continuously advancing our security protocols and investing in cutting-edge solutions to protect our customers." Spotting deepfakes is tricky work. Even OpenAI discontinued its AI-writing detector shortly after launching it in 2023, reasoning that its accuracy was too low to even reliably detect whether something was generated by its own ChatGPT. Image, video, and audio generation have all been rapidly improving over the past two years as tools become more sophisticated: If you remember how horrifying and unrealistic AI Will Smith eating spaghetti looked just two years ago, you'll be shocked to see what OpenAI's text-to-video generator, Sora, can do now. Generative AI has gotten leaps and bounds better at covering its tracks, which is great news for scammers. On my deepfake's call with my bank, I had fake me read off information like my debit card number and the last four digits of my Social Security number. 
Obviously, this was info I had on hand, but it's disturbingly easy these days for criminals to buy this kind of personal data on the dark web, as it may have been involved in a data leak. I generated friendly phrases that asked my bank to update my email address, please, or change my PIN. Fake me repeatedly begged the automated system to connect me to a representative, and then gave a cheery, "I'm doing well today, how are you?" greeting to the person on the other end of the line. I had deepfake me ask for more time to dig up confirmation codes sent to my phone and then thank the representative for their help. Authorities are starting to sound the alarm on how easy and widespread deepfakes are becoming. In November, the Financial Crimes Enforcement Network put out an alert to financial institutions about gen AI, deepfakes, and the risk of identity fraud. Speaking at the Federal Reserve Bank of New York in April, Michael Barr, a governor of the Federal Reserve, said that the tech "has the potential to supercharge identity fraud" and that deepfake attacks had increased twentyfold in the past three years. Barr said that we'll need new policies that raise the cost for the attacker and lower the burden on banks. Right now, it's relatively low risk and low cost for scammer organizations to carry out a massive number of attacks, and impossible for banks to catch each and every one. It's not just banks getting odd calls; scammers will also use deepfakes to call up people and impersonate someone they know or a service they use. There are steps we can take if suspicious requests come our way. "These scams are a new flavor of an old-school method that relies on unexpected contact and a false sense of urgency to trick people into parting with their money," Ashwin Raghu, the head of scam policy and innovation at Citi, tells me in an email. 
Raghu says people should be suspicious of urgent requests and unexpected calls — even if they're coming from someone who sounds like a friend or family member. Try to take time to verify the caller or contact the person in a different way. If the call seems to be from your bank, you may want to hang up and call the bank back using the phone number on your card to confirm it. For all the data on you that scammers can dig up using AI, there will be things that only two people can ever know. This past summer, an executive at Ferrari was able to catch a scammer deepfaking the company CEO's voice when he asked the caller what book he had recommended just days earlier. Limiting what you share on social media and to whom is one way to cut down on the likelihood that you'll become a target, as are tools like two-factor authentication and password managers that store complex and varied passwords. But there's no foolproof way to avoid becoming a target of the scams. Barr's policy ideas included creating more consistency in cybercrime laws internationally and more coordination among law enforcement agencies, which would make it more difficult for criminal rings to operate undetected. He also called for increasing penalties on those who attempt to use generative AI for fraud. But those won't be the quickest of fixes to keep up with how rapidly the tech has changed. Even though this tech is readily available, sometimes in free apps and sometimes for just a few dollars, the problem is less a proliferation of lone-wolf hackers, says Jason Ioannides, the vice president of global fintech and sponsor banking at Alloy, a fraud-prevention platform. Instead, these attacks are often carried out by big, organized crime rings that are able to move in large numbers and are bolstered by automation to carry out thousands of attacks. 
If they try 1,000 times to get through and make it once, they'll then focus their efforts on chipping away at that same institution, until the bank notices a trend and comes up with fixes to stop it. "They look for a weakness, and then they attack it," Ioannides says. He says banks should "stay nimble" and have "layered approaches" to detect quickly evolving fraud. "You're never going to stop 100% of fraud," he says. And banks generally won't be perfect, but their defense lies in making themselves "less attractive to a bad actor" than other institutions. Ultimately, I wasn't able to totally hack my bank. I tried to change my debit card PIN and my email address during the phone calls, but I was told I had to do the first at an ATM and the second online. I was able to hear my account balance, and with a bit more prep and expertise, I may have been able to move some money. Each bank has different systems and rules in place, and some might allow people to change personal information, like emails, over the phone, which could give a scammer much easier access to the account. Whether my bank caught on to my use of a generated voice, I'm not sure, but I do sleep a little bit better knowing there are some protections in place. Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends. Read the original article on Business Insider

How to avoid deep fake scams

Yahoo

17-04-2025



Imagine getting a call from a loved one: terrified, desperate, begging for help. But what if that voice wasn't real? Scammers now use powerful AI voice-cloning apps to steal voices or mimic someone you trust to pull off convincing scams. Consumer Reports investigates the rise of deepfakes, revealing how these high-tech scams work and what you can do right now to protect yourself and your family. Deepfake technology is becoming more convincing every day. Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, says it's the number one digital risk people should be worried about. He says, 'Over the last few years, there's been an explosion of calls claiming that we have your daughter. She's in trouble, send money, or else. Well, what's happened recently is the call comes in and says, we are your daughter; hi, I'm your daughter. I'm in trouble, send money right now.' So, what exactly is a deepfake, and how does it work? A deepfake is made by taking anyone's likeness, whether it's their face, a single image from LinkedIn or elsewhere online, or a few seconds of audio, and using a pre-trained model to replicate that likeness and make them say or do anything the creator wants. Deepfakes are so advanced that it's hard even for experts to tell the difference. And what's worse, there are no federal laws to stop someone from cloning your voice without your permission. Consumer Reports reviewed six popular voice-cloning apps, uncovering a troubling trend: four of the six apps had no meaningful way to ensure that the user had the original speaker's consent to clone their voice. The two other apps had more safeguards, but CR found ways around them. While it's practically impossible to erase your digital footprint, CR says there are some steps you can take to protect yourself. The first thing is knowing that deepfake scams like this exist. 
The second thing is using two-factor authentication on all of your financial accounts. That means having an extra security feature on your smartphone that requires you to input a security code or respond to an email when trying to gain access to your bank accounts. Third, be wary of calls, texts, or emails that ask for your personal financial information or data. And finally, do a gut check: does what you're hearing or seeing make sense? By default, you should not believe everything you see online, and you should always follow standard common sense. Read more about Consumer Reports' deepfake investigation here.

Reality Defender Announces Strategic Investments from BNY, Samsung Next, and Fusion Fund

Yahoo

03-04-2025



NEW YORK, April 3, 2025 /PRNewswire/ -- Reality Defender, the RSA Innovation Sandbox-winning deepfake and AI-generated media detection platform, today announced strategic investments from BNY (NYSE: BK), Samsung Next, and Fusion Fund. These collaborations reinforce the organizations' commitment to the prevention of deepfake-enabled fraud across critical financial and communication channels. Founded in 2021, Reality Defender provides best-in-class detection against advanced communication-based threats posed by dangerous deepfake impersonations. The company's comprehensive solutions for enterprises, government, and institutional clients help protect against everything from advanced voice fraud in call centers to deepfake intrusions into highly sensitive web conferencing calls. As deepfakes continue to spread and cause tangible damages, Reality Defender's technology has become increasingly critical in the fight against fraud and disinformation. A deepfake attempt happened every five minutes in 2024 according to Entrust, while deepfakes now account for 40% of all biometric fraud as reported by the same source, showing the immediate need for Reality Defender's AI-driven real-time detection capabilities in maintaining the trust and integrity of critical financial systems. "BNY, Samsung Next, and Fusion Fund's strategic investments and global expertise will allow Reality Defender to scale our technology during a time of heightened deepfake volatility," said Ben Colman, Co-Founder and CEO of Reality Defender. "With their support and alignment with our mission of securing critical communication channels against deepfake impersonations, we can work toward defeating the most advanced fraud attacks and cyber threats of our time." 
"Financial institutions face unprecedented risks from deepfake technology that can compromise critical communication channels and undermine trust in our systems," said Marianna Lopert-Schaye, Managing Director, Strategic Partnerships, Investments and Innovation for BNY. "Our investment in Reality Defender reflects our commitment to protecting the integrity of financial transactions through industry-leading security innovations that address emerging threats at scale."

"As AI-generated content becomes increasingly sophisticated, the ability to detect deepfakes in real time is vital for maintaining trust in digital communications," said Raymond Liao, Managing Director, Samsung Next. "Reality Defender's multimodal approach and proven technology align perfectly with our focus on supporting innovations that secure the future of communication in an AI-powered world."

"Deepfake technology represents one of the most significant emerging threats to institutional trust in our digital economy," said Lu Zhang, Founder and Managing Partner of Fusion Fund. "Reality Defender's pioneering approach to multimodal detection addresses this challenge head-on with technology that not only identifies today's threats but is engineered to anticipate tomorrow's attacks. We're proud to support their mission of securing critical communication channels in an increasingly AI-powered world."

Reality Defender was named the Most Innovative Company at the 2024 RSA Innovation Sandbox competition and continues to lead the industry in developing innovative solutions for detecting and preventing AI-generated threats. Terms of the investments were not disclosed.

About Reality Defender

Reality Defender secures critical communication channels against deepfake impersonations, enabling enterprises and governments to interact with confidence in an AI-powered world. Our patented multimodal approach detects sophisticated impersonations in real time, while flexible deployment options integrate seamlessly with existing infrastructure. Through continuous engineering and rigorous testing, Reality Defender empowers security teams to stop deepfake-enabled attacks before they can compromise assets or damage institutional trust.

About BNY

BNY is a global financial services company that helps make money work for the world – managing it, moving it and keeping it safe. For 240 years BNY has partnered alongside clients, putting its expertise and platforms to work to help them achieve their ambitions. Today BNY helps over 90% of Fortune 100 companies and nearly all the top 100 banks globally to access the money they need. BNY supports governments in funding local projects and works with over 90% of the top 100 pension plans to safeguard investments for millions of individuals, and so much more. As of December 31, 2024, BNY oversees $52.1 trillion in assets under custody and/or administration and $2.0 trillion in assets under management. BNY is the corporate brand of The Bank of New York Mellon Corporation (NYSE: BK). Headquartered in New York City, BNY employs over 50,000 people globally and has been named among Fortune's World's Most Admired Companies and Fast Company's Best Workplaces for Innovators. Follow BNY on LinkedIn or visit the BNY Newsroom for the latest company news.

About Samsung Next

Samsung Next invests in bold and ambitious founders, focusing on transformative innovations in AI, intelligent machines, healthtech, consumer services, and frontier technology.

About Fusion Fund

Founded in 2015 by Managing Partner Lu Zhang, Fusion Fund has consistently supported technically driven entrepreneurs leveraging data and technology to redefine industries. Since its inception, Fusion Fund has been an early pioneer in AI, driving thought leadership and investing in early leaders. With $190M in committed capital, Fusion Fund IV builds on the success of Funds I, II, and III by focusing on next-generation technology and AI solutions to meet the rising demands for efficiency and scalability across industries.

For media inquiries, please contact:
Scott Steinhardt
Reality Defender
scott@

SOURCE Reality Defender
