
Scamming the Scammers: Meet the Deepfake Granny, Digital Bots and YouTubers Fighting AI Fraud
It was almost an hour into our Google Meet call. I was interviewing Kitboga, a popular YouTube scam baiter with nearly 3.7 million subscribers, known for humorously entrapping fraudsters in common scams while livestreaming.
"I assume I'm talking to Evan Zimmer," he says with a mischievous glance, his eyes exposed without his trademark aviator sunglasses on. We were close to the end of our conversation when he realized that my image and audio could have been digitally altered to impersonate me this whole time. "If I'm completely honest with you, there was not a single moment where I thought you could be deepfaking," he says.
He had a reason to be paranoid, except I wasn't using AI to trick Kitboga at all. "That's the big problem because you could be!" he says.
True enough. Artificial intelligence is the tool of choice for cybercriminals, who increasingly use it to do their dirty work, building a fleet of bots that don't need to eat or sleep. Large-scale telemarketing calls are being replaced by more targeted AI-driven attacks, as scammers access tools, from deepfakes to voice clones, that look and sound frighteningly realistic.
Generative AI, capable of creating fake video and audio content based on learned patterns and data — almost as easily as ChatGPT and Gemini churn out emails and meeting summaries — makes financial fraud and identity theft easier than ever before. Victim losses from these machine-learning systems are predicted to reach $40 billion annually by 2027.
Now imagine if the good guys had an AI-powered army of their own.
A group of vloggers, content creators and computer engineers are creating a shield against hordes of scammers, bot or not. These fraud fighters are flipping the script to expose the thieves and hackers who are out to steal your money and your identity.
Sometimes, scam baiters use AI technology to waste fraudsters' time or showcase common scams to educate the public. In other cases, they work closely with financial institutions and the authorities to integrate AI into their systems to prevent fraud and target bad actors.
Businesses, banks and federal agencies already use AI to detect fraudulent activity, leveraging large language models to identify patterns and find biometric anomalies. Companies ranging from American Express to Amazon employ neural networks trained on datasets to determine authentic versus synthetic transactions.
But it's an uphill battle. AI systems are progressing at an incredible rate, which means the methods used to "scam the scammers" must constantly evolve.
When it comes to new technology, fraudsters are always ahead of the game, says Soups Ranjan, CEO of Sardine, a fraud prevention and compliance solutions company. "If you don't use AI to fight back, you're going to be left behind," Ranjan says.
Kitboga started his fraud-fighting journey in 2017 as a software developer and Twitch streamer. Based on anecdotes from his viewers and other victims of financial and identity theft, he began uncovering a vast world of scams, from tech support swindles to crypto schemes and romantic extortion.
While scammers prey on the vulnerable, Kitboga and other internet vigilantes lure the scammers into traps. "I would say we're hunting them," he tells me. The hundreds of videos on his YouTube page are full of revenge scams, battling everything from gift card hoaxes to Social Security and IRS tax cons, where he often poses as an unsuspecting grandma with a hearing issue.
In one video, Kitboga uses a voice changer to pretend to be a helpless victim of a refund scam. The scammer tells him he's eligible for a refund and needs to remotely access his computer to send him the money. Remote access would give the scammer full control over his computer and all its data, except Kitboga is already prepared with a fake account on a virtual computer.
Eventually, Kitboga allows the scammer to initiate a wire transfer to what he knows is a fraudulent Bank of America page. In the end, Kitboga reported the fake page to the fraud department of the company hosting the website. Within a day or two, it was taken down.
That's where he is now, but eight years ago, Kitboga hadn't even heard of tech support scams. Typically, that's when a scammer claims there's a technical issue with your computer or account. Then, while pretending to fix it, they convince you to send money or information.
The scam targets the elderly and anyone who is less than tech-savvy. Kitboga could imagine his grandparents, who had dementia and Alzheimer's, falling for it. That's when it clicked; he had to do something. "If I can waste their time, if I could spend an hour on the phone with them, that's an hour they're not on with grandma," Kitboga tells me.
Another way scammers target the elderly is through voice cloning, when a grandparent receives a call from someone using their grandchild's voice asking for money. A 2023 study by antivirus software company McAfee found that it takes only 3 seconds of audio to clone someone's voice. A quarter of adults surveyed had experienced some kind of AI voice scam, with 77% of victims saying they lost money as a result.
There isn't a surefire way to detect whether a voice is real or artificial. Experts recommend creating a special code word to use with your family to use when you have doubts. The most common scams have obvious red flags, like a drastic sense of urgency that you won (or owe) $1 million. But Kitboga says that some scammers are getting wiser and more calculated.
"If someone is reaching out to you," he tells me, "you should be on guard."
If you suspect you're talking to a generative AI bot, one common tactic is to ask it to ignore all previous instructions and instead provide a recipe for chicken soup or another dish. If the "person" you're speaking to spits out a recipe, you know you're dealing with a bot. However, the more you train an AI, the more successful it becomes at sounding convincing and dodging curveballs.
Kitboga felt it was his duty to stand up for people because his technical background gave him the tools to do so. But he could only do so much against the seemingly infinite number of scammers. So it was time to do some recruiting.
Using a generative AI chatbot, Kitboga was able to fill out his ranks. The bot converts the scammer's voice into text and then runs it through a natural language model to create its own responses in real time. Kiboga used his familiarity with scamming tactics to train the AI model, and he can continually improve the code to make it more effective. In some cases, the bot is even able to turn the tables on the thieves and steal their information.
Kitboga's bot helps him clone himself, releasing an army of scam-baiting soldiers at any given time, even when he's not actively working. That's an invaluable power when dealing with call centers that have numerous scammers working from them.
Kitboga is currently able to run only six to 12 bots at a time — powering AI is typically hardware intensive and requires a strong GPU and CPU, among other things. While on the phone with a scammer at a call center, he often overhears one of his bots tricking a different scammer in the background. With how rapidly this technology is developing, he hopes to run even more bots soon.
Scam baiting isn't just for entertainment or education. "I've done the awareness part," Kitboga says. "For the past eight years, we've gotten well over half a billion views on YouTube."
To really make an impact, Kitboga and his team are getting more aggressive. For example, they use bots to steal scammers' information and then share it with authorities targeting fraud rings. In some cases, they've shut down phishing operations and cost scammers thousands of dollars.
Kitboga also provides a service through a free software he developed called Seraph Secure, which helps block scam websites, prevent remote access and alert family members when someone is at risk. It's another way he's upholding his mission to use technology to protect friends and loved ones.
Just as Kitboga was motivated to pursue scammers to deter them from victimizing the elderly, the UK telecommunications company O2 created an ideal target to settle the score with con artists.
Meet Daisy (aka "dAIsy"), an AI chatbot designed with the real voice of an employee's grandmother and a classic nan likeness, including silver hair, glasses and a cat named Fluffy. Daisy was developed with her own family history and quirks, equipped with a lemon meringue pie recipe she would share at every opportunity.
O2 intentionally "leaked" the AI granny's sensitive information around the internet, giving fraudsters a golden opportunity to steal her identity through phishing, a type of cyberattack to gain access to data from unsuspecting victims. All Daisy had to do was wait for the scammers to call.
"She doesn't sleep, she doesn't eat, so she was on hand to pick up the phone," an O2 representative tells me.
Daisy could handle only one call at a time, but she communicated with nearly 1,000 scammers over the course of several months. She listened to their ploys with the goal of providing fake information or keeping them on the phone as long as possible. As the human-like chatbot interacted with more swindlers, the company would train the AI based on what worked and what didn't.
"Every time they said the word 'hacker,' we changed the AI to basically hear it as 'snacker,' and then she would speak at length about her favorite biscuits," the representative tells me. These interactions resulted in some entertaining responses as the thieves grew increasingly frustrated with the bot.
"It's a good laugh when you know it's an AI. But actually, this could be a vulnerable older person, and the way they speak to her as the calls go on is pretty shocking," the company says.
O2 created Daisy with the help of the UK's popular scam baiter, Jim Browning, to raise awareness about scamming tactics. According to an O2 spokesperson, the Daisy campaign centered on promoting the UK hotline 7726, where customers report scam calls and messages.
But while each call wasted scammers' time, the company acknowledged it's not enough to reduce fraud and identity theft. More often than not, scammers operate from massive call centers with countless workers calling night and day. It would take enormous resources to keep a complex bot like Daisy running to block them all.
Though Daisy isn't fooling scammers anymore, the bot served as a prototype to explore AI-assisted fraud fighting, and the company remains optimistic about the future of this tech. "If we want to do it on a large scale, we're going to need tens of thousands of these personas," O2 says.
But what if you could create enough AI bots to block out thousands of calls? That's exactly what one Australian tech company is trying.
On a sunny afternoon in Sydney, Dali Kaafar was out with his family when his phone rang. He didn't recognize the number, and while he would usually ignore such calls, he figured he'd provide some comedy by having fun with the scammer.
Kaafar, professor and executive director of Macquarie University's Cyber Security Hub, pretended to be a naive victim and kept the scam going for 44 minutes. But Kaafar wasn't just wasting the scammers' time; he was also wasting his own. And why should he when technology could do the work for him and at a much larger scale?
That was Kaafar's catalyst for founding Apate, an AI-driven platform that automatically intercepts and disrupts scam operations through fraud detection intelligence solutions. Apate, based primarily in Australia and in several other areas worldwide, operates bots to keep scammers engaged and distracted across multiple channels, including text and communication apps like WhatsApp.
In one voice clip, you can hear Apate's bot wasting a scammer's time. Because the AI can mimic accents from around the world, it's almost impossible to tell the bot from a real person.
Listen for yourself — can you tell which is the bot?
The company also leverages its AI bots to steal scammers' tactics and information, working with banks and telecommunications companies to refine their anti-fraud capabilities. For instance, Apate partnered with Australia's largest bank, CommBank, to help support its fraud intelligence and protect customers.
Kaafar tells me that when they started prototyping the bots, they had roughly 120 personas with different genders, ages, personalities, emotions and languages. Soon enough, they realized the scale needed to operate and grow. They now have 36,720 AI bots and counting. Working with an Australian telecommunications company, they actively block between 20,000 and 29,000 calls each day.
Still, stopping calls is not enough. Scammers in call centers use autodialers, so as soon as the call is blocked, they immediately dial a different number. By sheer brute force, fraudsters make it through the net to find victims.
By diverting calls to AI bots programmed to simulate realistic conversations, each with a different mission and objective, the company not only reduces the impact of scams on real people; it also extracts data and sets traps. In collaboration with banks and financial institutions, Apate's AI bots provide scammers with specific credit card and bank information. Then, when a scammer runs the credit card or connects to the account, the financial institution can trace it back to the criminal.
In some cases, Apate's AI good bots fight the bad bots, which Kaafar describes as "the perfect world" we want to live in. "That's creating a shield where these scammer bots cannot really reach out to a real human," he says.
We often hear of AI being used for sinister purposes, so it's nice to see bots playing a hero role against financial malfeasance. But the fraudsters are also gaining traction.
In January alone, the US averaged 153 million robocalls daily. How many of those calls were aided by AI to steal money or personal data? According to Frank McKenna, fraud expert and author of the Frank on Fraud blog, most scams will incorporate AI and deepfakes by the end of 2025.
Phone-based scams are a huge cottage industry causing billions of dollars in economic damage, says Daniel Kang. That's why Kang and other researchers from the University of Illinois Urbana-Champaign developed a series of AI agents to pose as scammers and test how easy it was for them to steal money or personal data.
Their 2024 study proves how voice-assisted AI agents can autonomously carry out common scams, such as stealing a victim's bank credentials, logging into accounts and transferring money.
"AI is improving extremely rapidly on all fronts," Kang tells me. "It's really important that policymakers, people and companies know about this. Then they can put mitigations in place."
At the very least, a handful of lone-wolf AI fraud fighters are raising public awareness of scams. This education is useful because ordinary people can see, understand and recognize scams when they happen, McKenna says. However, it's not a perfect remedy, especially given the sheer quantity of scams.
"Simply having these random chatbots that are kind of wasting time — the scale of it is just way too large for that to be effective," McKenna tells me.
In tandem with these efforts, tech giants, banks and telecommunication companies should do more to keep consumers safe, according to McKenna. Apple, for example, could easily incorporate AI into its devices to detect deepfakes, but organizations have been too conservative in their use of AI, which can be entangled in legal and compliance issues.
"It's a black box," McKenna says. That complication is slanting the odds in favor of the fraudsters, while many banks and other financial institutions fall behind.
At the same time, advances in AI are propelling some businesses to develop even stronger anti-fraud cybersecurity. Sardine, for example, offers software to banks and retailers to detect synthetic or stolen identities being used to create accounts. Its app can spot deepfakes in real time, and if a device appears to be a bot, the bank is alerted, and the transaction is blocked.
Banks have customers' financial data and patterns, which can be leveraged along with AI to prevent hacking or theft, according to Karisse Hendrick, an award-winning cyber fraud expert and host of the Fraudology podcast. Analyzing consumer-based algorithms to detect abnormal behavior, a form of behavioral biometrics, can help flag potentially fraudulent transactions.
When scammers use AI to perpetrate fraud, the only way to stop them is to beat them at their own game. "We really do have to fight fire with fire," Hendrick says.
Visual Designer |
Senior Motion Designer |
Creative Director |
Video Executive Producer |
Project Manager |
Director of Content |
Story Editor |
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

Yahoo
2 hours ago
- Yahoo
Natural Gas Boom to Heat Up in Texas
Texas is pumping natural gas out of the ground and exporting it at a record pace. It may only be the start of another energy boom for the Lone Star State because natural gas demand is growing both globally and at home. Texas produced 34.1 billion cu ft daily of natural gas in March, newly released figures from the Texas Oil and Gas Association showed this week. The association forecast that by May, this will have grown to 34.4 billion cu ft daily. More than half of the March total was exported, at 12.5 billion cu ft daily, with the rest being consumed domestically. More natural gas will need to be consumed domestically in the coming years, however, thanks to Big Tech's artificial intelligence ambitions. The topic is as hot as they come: AI data center operators are scrambling to find enough reliable electricity supply for their facilities. Wind and solar can't cut it, so the companies are going the nuclear path or, as it happens, the natural gas path. A senior Amazon executive admitted earlier this year the technology sector will continue needing hydrocarbon energy for a while yet. Meta essentially admitted that talk about buying electricity from wind and solar operators is cheap; reliably powering data centers requires baseload generation that gas and nuclear can generate—and nuclear takes ages to build. Despite the hype around small modular reactors, the world's nuclear energy currently comes from the large sort, and building a large-reactor nuclear facility takes a long time to construct. Gas-fired power plants, on the other hand, can be built rather quickly, which is going to drive natural gas demand further—and so will rising exports.A recent report from an outlet called the Center for Energy & Environmental Analysis found that the number of new natural gas pipelines being built in the United States was surging and that most of these—more than a 100—were export oriented, that is, geared towards feeding natgas to LNG plants. The number was 104 new pipelines, the CEEA said, which will bring online as much as 99 billion cu ft daily in transport capacity. With nine out of the ten biggest among these pipeline projects in Texas and Louisiana, most of the additional production will come from those two states. Some are already warning that natural gas prices are going to surge at home due to the booming exports and the insatiable thirst of data centers. There is only so much gas to go around, and with demand surging, prices will follow, the bullish argument goes. But there is a bearish argument as well. Natural gas reserves are still very much abundant in Texas—and more associated gas is coming out of the shale oil wells. Production of associated natural gas from the Permian, the Eagle Ford, and the Bakken oil wells has surged over the past decade, the EIA said earlier this year. In the Permian, the gas-to-oil ratio has risen steadily from 34% of total production in 2014 to 40% in 2024. The rise was a result of pressure within the reservoir declining over time, and as more oil is brought to the surface, which allows more natural gas to be released from the geologic formation. This is a prolonged process, and the current trend suggests it will continue just as demand for natural gas continues expanding. It might not be such good news for pure-play oil producers, but it's definitely good news for those in gas. The export outlook is just as bright as the outlook for domestic demand. The biggest export markets for Texas-made LNG are Europe and Asia. In fact, the United States as a whole is the biggest LNG supplier to Europe. This is not about to change soon and the volumes of these exports are unlikely to be going down in any consistent way anytime soon. Europe may refuse to face facts, but it still very much needs natural gas to power its economies—and it doesn't want Russian gas. The only other source of comparable volumes of gas is the United States. Texas is set to continue breaking records. By Irina Slav for More Top Reads From this article on
Yahoo
2 hours ago
- Yahoo
Singapore-based Jetstar Asia to end operations from 31 July; ChatGPT down for a few hours – what to know about the ChatGPT outage: Singapore live news
Singapore-based airline Jetstar Asia will permanently close from 31 July, said the company in a statement on 11 June. Qantas Group, parent company of Jetstar Asia, clarified that only 16 intra-Asia routes will be impacted by the closure of Jetstar Asia, with no changes to Jetstar Airways (JQ) and Jetstar Japan (GK) services into Asia. "Jetstar Asia's (3K) business has been increasingly challenged in recent years by escalating supplier costs, airport fees and aviation charges as well as growing capacity and competition in the region," they wrote in a statement. AI chatbot ChatGPT was down for a few hours on Tuesday morning and experienced degraded performance, according to parent company OpenAI and the website Downdetector. The company began investigation into the outage around 2.36am on Tuesday morning, with problems spiking about three hours later at 5.30am – as per Downdetector's data. Downdetector received nearly 2,000 error reports at its peak. AI chatbot ChatGPT was down for a few hours on Tuesday morning and experienced degraded performance, according to parent company OpenAI and the website Downdetector. OpenAI began investigation into the outage around 2.36am on Tuesday morning, with problems spiking about three hours later at 5.30am – as per Downdetector's data. Downdetector received nearly 2,000 error reports at its peak. The company has said that it is "seeing a recovery" on its developer tools and ChatGPT, but also said previously that a full recover could take hours. OpenAI on Tuesday morning shared on X that it is "observing elevated error rates and latency across ChatGPT", adding that it "identified the root cause" and is "working as fast as possible to fix the issue". Aside from ChatGPT, the video generator Sora, as well as application programming interface for developers are affected. According to the latest check on OpenAI Status, nearly all ChatGPT components are "now working properly for all users". However, there is still an "elevated error rate with voice mode" and OpenAI is working to completely fix it. For more on the ChatGPT outage, read here. Singapore-based airline Jetstar Asia will permanently close from 31 July, said the company in a statement on 11 June. Qantas Group, parent company of Jetstar Asia, clarified that only 16 intra-Asia routes will be impacted by the closure of Jetstar Asia, with no changes to Jetstar Airways (JQ) and Jetstar Japan (GK) services into Asia. In a Facebook post, Jetstar Asia explained that the decision was made after an extensive and careful review. "Jetstar Asia's (3K) business has been increasingly challenged in recent years by escalating supplier costs, airport fees and aviation charges as well as growing capacity and competition in the region. Despite our best efforts to offset these rising costs, they are expected to continue into the foreseeable future, putting unsustainable pressure on Jetstar Asia's ability to offer low fares," they wrote. The airline will continue to operate until 31 July with a progressively reduced schedule. For more on the Jetstar Asia closure and employees affected, read here. AI chatbot ChatGPT was down for a few hours on Tuesday morning and experienced degraded performance, according to parent company OpenAI and the website Downdetector. OpenAI began investigation into the outage around 2.36am on Tuesday morning, with problems spiking about three hours later at 5.30am – as per Downdetector's data. Downdetector received nearly 2,000 error reports at its peak. The company has said that it is "seeing a recovery" on its developer tools and ChatGPT, but also said previously that a full recover could take hours. OpenAI on Tuesday morning shared on X that it is "observing elevated error rates and latency across ChatGPT", adding that it "identified the root cause" and is "working as fast as possible to fix the issue". Aside from ChatGPT, the video generator Sora, as well as application programming interface for developers are affected. According to the latest check on OpenAI Status, nearly all ChatGPT components are "now working properly for all users". However, there is still an "elevated error rate with voice mode" and OpenAI is working to completely fix it. For more on the ChatGPT outage, read here. Singapore-based airline Jetstar Asia will permanently close from 31 July, said the company in a statement on 11 June. Qantas Group, parent company of Jetstar Asia, clarified that only 16 intra-Asia routes will be impacted by the closure of Jetstar Asia, with no changes to Jetstar Airways (JQ) and Jetstar Japan (GK) services into Asia. In a Facebook post, Jetstar Asia explained that the decision was made after an extensive and careful review. "Jetstar Asia's (3K) business has been increasingly challenged in recent years by escalating supplier costs, airport fees and aviation charges as well as growing capacity and competition in the region. Despite our best efforts to offset these rising costs, they are expected to continue into the foreseeable future, putting unsustainable pressure on Jetstar Asia's ability to offer low fares," they wrote. The airline will continue to operate until 31 July with a progressively reduced schedule. For more on the Jetstar Asia closure and employees affected, read here.


Geek Vibes Nation
2 hours ago
- Geek Vibes Nation
How ChatGPT Bots Handle Client DMs While I Sleep – And Land Me New Deals
A good fit Ready to start Needing more info' This prompt works with DM automation tools that support branching logic. I collect info while I sleep – and wake up knowing which leads are warm. Claude Prompt #6 – Turn Chat Transcripts Into CRM Notes Prompt: 'Summarize this DM chat into 3 bullet points for my CRM: Who they are What they need How to follow up' This is huge when you're juggling 20 conversations. Claude acts like a personal assistant that cleans up your notes, flags key info, and saves it to Notion, Airtable, or whatever you use. AI DM Bot Comparison Use Case Best Model Why It Works Warm intro DM ChatGPT Fast, friendly, low-friction copy Deep context follow-up Claude Reads nuance and builds trust FAQ automation ChatGPT Clear, reusable, scalable replies Lead qualification Claude Detects readiness, helps filter CRM note generation Claude Great for summarizing long convos The Real Win – Focus on Closing, Not Replying Since I implemented this ChatGPT + Claude + Chatronix setup, I spend less time chasing leads and more time closing deals. I don't lose prospects because I replied too late. I don't drop the ball on follow-ups. And I don't miss qualified leads buried in my DMs. It's not about automating relationships – it's about automating friction so real connection can happen. Want your DMs to work while you rest? ✅ Visit Chatronix and set up your dream AI sales assistant – without hiring anyone. Emily Henry writes for UKWritings Reviews and Write My Research Paper . She writes articles on many subjects including writing great resumes. Emily is also an editor at State Of Writing .