Latest news with #AIbots
Yahoo
22-07-2025
- Yahoo
New Study Shows Teens Are Increasingly Relying on AI Chatbots for Social Interaction
This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

Yeah, this seems like it's going to be a problem in future, though maybe that's considered the cost of progress?

Last week, Common Sense Media published a new report which found that 72% of U.S. teens have already used an AI companion, with many of them now conducting regular social interactions with their chosen virtual friends. The study is based on a survey of 1,060 teens, so it's intended as an indicative measure, not as a definitive overview of AI usage. But the trends do point to some potentially significant concerns, particularly as platforms now look to introduce AI bots that can also serve as romantic partners in some capacity.

First off, as noted, the data shows that 72% of teens have tried AI companions, and 52% of them have become regular users of these bots.

What's worth noting here is that AI bots aren't anywhere near where they're likely to be in a few more years' time, with the tech companies investing billions of dollars into advancing their AI bots to make them more relatable, more conversational, and better emulators of real human engagement.

But they're not. These are bots, which respond to conversational cues based on the context that they have available, and whatever weighting system each company puts into its back-end process. So they're not an accurate simulation of actual human interaction, and they never will be, because of the real mental and physical connection that only human contact enables. Yet we're moving towards a future where this is going to become a more viable replacement for actual civic engagement.

But what if a bot gets changed, gets infected with harmful code, gets hacked, gets shut down, etc.? The broader implications of enabling and encouraging such connection are not yet known, in terms of the mental health impacts that could come as a result. But we're moving forward anyway, with the data showing that 33% of teens already use AI companions for social interaction and relationships.

Of course, some of this may well end up being highly beneficial, in varying contexts. For example, the ability to ask questions that you may not be comfortable saying to another person could be a big help, with the survey data showing that 18% of AI companion users refer to the tools for advice. Nonjudgmental interaction has clear benefits, while 39% of AI companion users have also transferred social skills that they've practiced with bots over to real-life situations (notably, 45% of female users have done this, versus 34% of male users).

So there are definitely going to be benefits. But like social media before it, the question is whether those positives will end up outweighing the potential negatives of over-reliance on non-human entities for traditionally human engagement. 31% of survey participants indicated that they find conversations with AI companions as satisfying or more satisfying than those with real-life friends, while 33% have chosen AI over humans for certain conversations.

As noted, the fact that these bots can be skewed to answer based on ideological lines is a concern in this respect, as is the tendency for AI tools to 'hallucinate' and make incorrect assumptions in their responses, which they then state as fact.
That could lead youngsters down the wrong path, which could then lead to potential harm, while again, the shift to AI companions as romantic partners opens up even more questions about the future of relationships.

It seems inevitable that this is going to become a more common usage for AI tools, and that our budding relationships with human simulators will lead to more people looking to take those understanding, non-judgmental relationships to another level. Real people will never understand you like your algorithmically-aligned AI bot can, and that could actually end up exacerbating the loneliness epidemic, as opposed to addressing it, as some have suggested.

And if young people are learning these new relationship behaviors in their formative years, what does that do for their future concept of human connection, if indeed they feel they need that?

And they do need it. Centuries of studies have underlined the importance of human connection and community, and the need to have real relationships to help shape your understanding and perspective. AI bots may be able to simulate some of that, but actual physical connection is also important, as is human proximity, real-world participation, etc.

We're steadily moving away from this over time, and you could argue, already, that increasing rates of severe loneliness, which the WHO has declared a 'pressing global health threat,' are already having major health impacts. Indeed, studies have shown that loneliness is associated with a 50% increased risk of developing dementia and a 30% increased risk of incident coronary artery disease or stroke.

Will AI bots help that? And if not, why are we pushing them so hard? Why is every app now trying to make you chat with these non-real entities, and share your deepest secrets with their evolving AI tools? Is this more beneficial to society, or to the big tech platforms that are building these AI models?

If you lean towards the latter conclusion, then progress is seemingly the bigger focus, just as it was with social media before it. AI providers are already pushing for the European Union to relax its restrictions on AI development, while the looming AI development race between nations is also increasing the pressure on all governments to loosen the reins, in favor of expediting innovation. But should we feel encouraged by Meta's quest for 'superintelligence,' or concerned at the rate at which these tools are becoming so common in elements of serious potential impact?

That's not to say that AI development in itself is bad, and there are many use cases for the latest AI tools that will indeed increase efficiency, innovation, opportunity, etc. But there do seem to be some areas in which we should probably tread more cautiously, due to the risks of over-reliance, and the impacts of such on a broad scale.

That's seemingly not going to happen, but in ten years' time, we're going to be assessing this from a whole different perspective.

You can check out Common Sense Media's 'Talk, Trust, and Trade-Offs' report here.


Crypto Insight
16-07-2025
- Business
- Crypto Insight
Can AI bots steal your crypto? The rise of digital thieves
AI bots are self-learning software that automates and continuously refines crypto cyberattacks, making them more dangerous than traditional hacking methods.

At the heart of today's AI-driven cybercrime are AI bots — self-learning software programs designed to process vast amounts of data, make independent decisions, and execute complex tasks without human intervention. While these bots have been a game-changer in industries like finance, healthcare and customer service, they have also become a weapon for cybercriminals, particularly in the world of cryptocurrency. Unlike traditional hacking methods, which require manual effort and technical expertise, AI bots can fully automate attacks, adapt to new cryptocurrency security measures, and even refine their tactics over time. This makes them far more effective than human hackers, who are limited by time, resources and error-prone processes.

Why are AI bots so dangerous? The biggest threat posed by AI-driven cybercrime is scale. A single hacker attempting to breach a crypto exchange or trick users into handing over their private keys can only do so much. AI bots, however, can launch thousands of attacks simultaneously, refining their techniques as they go.

- Speed: AI bots can scan millions of blockchain transactions, smart contracts and websites within minutes, identifying weaknesses in wallets (leading to crypto wallet hacks), decentralized finance (DeFi) protocols and exchanges.
- Scalability: A human scammer may send phishing emails to a few hundred people. An AI bot can send personalized, perfectly crafted phishing emails to millions in the same time frame.
- Adaptability: Machine learning allows these bots to improve with every failed attack, making them harder to detect and block.

This ability to automate, adapt and attack at scale has led to a surge in AI-driven crypto fraud, making crypto fraud prevention more critical than ever.

In October 2024, the X account of Andy Ayrey, developer of the AI bot Truth Terminal, was compromised by hackers. The attackers used Ayrey's account to promote a fraudulent memecoin named Infinite Backrooms (IB). The malicious campaign led to a rapid surge in IB's market capitalization, reaching $25 million. Within 45 minutes, the perpetrators liquidated their holdings, securing over $600,000.

AI-powered bots aren't just automating crypto scams — they're becoming smarter, more targeted and increasingly hard to spot. Here are some of the most dangerous types of AI-driven scams currently being used to steal cryptocurrency assets:

1. AI-powered phishing bots

Phishing attacks are nothing new in crypto, but AI has turned them into a far bigger threat. Instead of sloppy emails full of mistakes, today's AI bots create personalized messages that look exactly like real communications from platforms such as Coinbase or MetaMask. They gather personal information from leaked databases, social media and even blockchain records, making their scams extremely convincing.
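Because these messages impersonate real platforms, one low-tech defense is to compare the domain in any link you receive against a short allow-list of services you actually use, and to treat near-matches as red flags. The sketch below is a minimal illustration of that idea; the allow-list entries and the 0.75 similarity threshold are arbitrary examples, not a vetted security tool:

```python
import difflib

# Hypothetical allow-list: the handful of crypto services this user actually uses.
KNOWN_DOMAINS = ["coinbase.com", "metamask.io", "kraken.com"]

def similarity(a: str, b: str) -> float:
    """Rough string similarity between two domains (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify_domain(domain: str, threshold: float = 0.75) -> str:
    """Flag exact matches, likely lookalikes, and unknown domains."""
    domain = domain.lower()
    if domain in KNOWN_DOMAINS:
        return "known domain"
    for legit in KNOWN_DOMAINS:
        if similarity(domain, legit) >= threshold:
            return f"possible lookalike of {legit} - do not click"
    return "unknown domain - verify manually before entering credentials"

if __name__ == "__main__":
    for d in ["coinbase.com", "coinbasse.com", "rnetamask.io", "air-drop-bonus.example"]:
        print(f"{d}: {classify_domain(d)}")
```

Real anti-phishing tooling goes much further (punycode and homoglyph detection, certificate checks, reputation feeds); the point here is simply that the domain, not the polish of the message, is what you can actually verify.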
In early 2024, for instance, an AI-driven phishing attack targeted Coinbase users by sending emails about fake cryptocurrency security alerts, ultimately tricking users out of nearly $65 million. Also, after OpenAI launched GPT-4, scammers created a fake OpenAI token airdrop site to exploit the hype. They sent emails and X posts luring users to 'claim' a bogus token — the phishing page closely mirrored OpenAI's real site. Victims who took the bait and connected their wallets had all their crypto assets drained automatically.

Unlike old-school phishing, these AI-enhanced scams are polished and targeted, often free of the typos or clumsy wording that used to give a phishing scam away. Some even deploy AI chatbots posing as customer support representatives for exchanges or wallets, tricking users into divulging private keys or two-factor authentication (2FA) codes under the guise of 'verification.'

In 2022, some malware specifically targeted browser-based wallets like MetaMask: a strain called Mars Stealer could sniff out private keys for over 40 different wallet browser extensions and 2FA apps, draining any funds it found. Such malware often spreads via phishing links, fake software downloads or pirated crypto tools. Once inside your system, it might monitor your clipboard (to swap in the attacker's address when you copy-paste a wallet address), log your keystrokes, or export your seed phrase files — all without obvious signs.

2. AI-powered exploit-scanning bots

Smart contract vulnerabilities are a hacker's goldmine, and AI bots are taking advantage faster than ever. These bots continuously scan platforms like Ethereum or BNB Smart Chain, hunting for flaws in newly deployed DeFi projects. As soon as they detect an issue, they exploit it automatically, often within minutes. Researchers have demonstrated that AI chatbots, such as those powered by GPT-3, can analyze smart contract code to identify exploitable weaknesses. For instance, Stephen Tong, co-founder of Zellic, showcased an AI chatbot detecting a vulnerability in a smart contract's 'withdraw' function, similar to the flaw exploited in the Fei Protocol attack, which resulted in an $80-million loss.

3. AI-enhanced brute-force attacks

Brute-force attacks used to take forever, but AI bots have made them dangerously efficient. By analyzing previous password breaches, these bots quickly identify patterns to crack passwords and seed phrases in record time. A 2024 study on desktop cryptocurrency wallets, including Sparrow, Etherwall and Bither, found that weak passwords drastically lower resistance to brute-force attacks, emphasizing that strong, complex passwords are crucial to safeguarding digital assets. (A back-of-the-envelope comparison of password and seed-phrase keyspaces follows below, after this list of scam types.)

4. Deepfake impersonation bots

Imagine watching a video of a trusted crypto influencer or CEO asking you to invest — but it's entirely fake. That's the reality of deepfake scams powered by AI. These bots create ultra-realistic videos and voice recordings, tricking even savvy crypto holders into transferring funds.

5. Social media botnets

On platforms like X and Telegram, swarms of AI bots push crypto scams at scale. Botnets such as 'Fox8' used ChatGPT to generate hundreds of persuasive posts hyping scam tokens and replying to users in real time. In one case, scammers abused the names of Elon Musk and ChatGPT to promote a fake crypto giveaway — complete with a deepfaked video of Musk — duping people into sending funds to scammers.
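To put the brute-force numbers from item 3 in rough perspective, the sketch below compares the keyspace of a short human-chosen password with that of a 12-word seed phrase drawn from the 2,048-word BIP-39 list. The assumed guess rate of one trillion attempts per second is an arbitrary, generous figure chosen purely for illustration:

```python
# Back-of-the-envelope brute-force comparison (illustrative assumptions only).

GUESSES_PER_SECOND = 1e12          # assumed attacker speed: one trillion guesses/sec
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def time_to_exhaust(keyspace: float) -> str:
    """Human-readable worst-case time to try every combination."""
    seconds = keyspace / GUESSES_PER_SECOND
    if seconds < SECONDS_PER_YEAR:
        return f"{seconds:.2f} seconds"
    return f"{seconds / SECONDS_PER_YEAR:.2e} years"

# An 8-character, lowercase-only password: 26 options per position.
weak_password = 26 ** 8            # ~2.1e11 combinations

# A 12-word BIP-39 seed phrase: 2,048 options per word. (The built-in checksum
# trims the count of valid phrases to roughly 2**128, but the order of
# magnitude is unchanged.)
seed_phrase = 2048 ** 12           # = 2**132, ~5.4e39 combinations

print("8-char lowercase password:", time_to_exhaust(weak_password))   # fractions of a second
print("12-word seed phrase:      ", time_to_exhaust(seed_phrase))     # on the order of 1e20 years
```

The takeaway: AI-assisted pattern guessing dramatically accelerates attacks on short, human-chosen passwords, but it does not meaningfully dent a randomly generated seed phrase, which is why the study's emphasis on strong, random credentials matters.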
In 2023, Sophos researchers found crypto romance scammers using ChatGPT to chat with multiple victims at once, making their affectionate messages more convincing and scalable. Similarly, Meta reported a sharp uptick in malware and phishing links disguised as ChatGPT or AI tools, often tied to crypto fraud schemes. And in the realm of romance scams, AI is boosting so-called pig butchering operations — long-con scams where fraudsters cultivate relationships and then lure victims into fake crypto investments. A striking case occurred in Hong Kong in 2024: Police busted a criminal ring that defrauded men across Asia of $46 million via an AI-assisted romance scam.

AI is also being invoked in the arena of cryptocurrency trading bots — often as a buzzword to con investors and occasionally as a tool for technical exploits. A notable example from 2023 involved a platform that marketed an AI bot supposedly yielding 2.2% returns per day — an astronomical, implausible profit. Regulators from several states investigated and found no evidence the 'AI bot' even existed; it appeared to be a classic Ponzi scheme, using AI as a tech buzzword to suck in victims. The operation was ultimately shut down by authorities, but not before investors were duped by the slick marketing.

Even when an automated trading bot is real, it's often not the money-printing machine scammers claim. For instance, blockchain analysis firm Arkham Intelligence highlighted a case where a so-called arbitrage trading bot (likely touted as AI-driven) executed an incredibly complex series of trades, including a $200-million flash loan — and ended up netting a measly $3.24 in profit. In fact, many 'AI trading' scams will take your deposit and, at best, run it through some random trades (or not trade at all), then make excuses when you try to withdraw. Some shady operators also use social media AI bots to fabricate a track record (e.g., fake testimonials or X bots that constantly post 'winning trades') to create an illusion of success. It's all part of the ruse.

On the more technical side, criminals do use automated bots (not necessarily AI, but sometimes labeled as such) to exploit the crypto markets and infrastructure. Front-running bots in DeFi, for example, automatically insert themselves into pending transactions to steal a bit of value (a sandwich attack), and flash loan bots execute lightning-fast trades to exploit price discrepancies or vulnerable smart contracts. These require coding skills and aren't typically marketed to victims; instead, they're direct theft tools used by hackers. AI could enhance these by optimizing strategies faster than a human. However, as mentioned, even highly sophisticated bots don't guarantee big gains — the markets are competitive and unpredictable, something even the fanciest AI can't reliably foresee. Meanwhile, the risk to victims is real: If a trading algorithm malfunctions or is maliciously coded, it can wipe out your funds in seconds. There have been cases of rogue bots on exchanges triggering flash crashes or draining liquidity pools, causing users to incur huge slippage losses.
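To make the sandwich-attack mechanics above concrete, here is a toy simulation of a constant-product (x * y = k) liquidity pool, the pricing model used by Uniswap-style exchanges. All pool sizes and trade amounts are made-up round numbers, and swap fees and gas costs are ignored, so the figures only illustrate the shape of the attack, not real-world profitability:

```python
# Toy constant-product AMM (x * y = k) used to illustrate a sandwich attack.
# Reserves, trade sizes, fees and gas are all simplified/hypothetical.

def swap_usdc_for_token(pool: dict, usdc_in: float) -> float:
    """Spend USDC, receive TOKEN; pool reserves are updated in place."""
    k = pool["token"] * pool["usdc"]
    new_usdc = pool["usdc"] + usdc_in
    new_token = k / new_usdc
    token_out = pool["token"] - new_token
    pool["token"], pool["usdc"] = new_token, new_usdc
    return token_out

def swap_token_for_usdc(pool: dict, token_in: float) -> float:
    """Spend TOKEN, receive USDC; pool reserves are updated in place."""
    k = pool["token"] * pool["usdc"]
    new_token = pool["token"] + token_in
    new_usdc = k / new_token
    usdc_out = pool["usdc"] - new_usdc
    pool["token"], pool["usdc"] = new_token, new_usdc
    return usdc_out

# Baseline: the victim buys with 10,000 USDC in an untouched pool.
pool = {"token": 1_000_000.0, "usdc": 1_000_000.0}
fair_tokens = swap_usdc_for_token(pool, 10_000)

# Sandwich: the bot front-runs the victim, lets the victim trade at the worse
# price, then immediately sells back into the inflated pool.
pool = {"token": 1_000_000.0, "usdc": 1_000_000.0}
bot_tokens = swap_usdc_for_token(pool, 50_000)          # 1) bot buys first
victim_tokens = swap_usdc_for_token(pool, 10_000)       # 2) victim buys at a worse price
bot_usdc_back = swap_token_for_usdc(pool, bot_tokens)   # 3) bot sells into the spike

print(f"Victim receives without sandwich: {fair_tokens:,.0f} TOKEN")
print(f"Victim receives with sandwich:    {victim_tokens:,.0f} TOKEN")
print(f"Bot profit: {bot_usdc_back - 50_000:,.0f} USDC (before fees and gas)")
```

In this contrived scenario the victim ends up with roughly 9% fewer tokens for the same spend, and that difference (minus fees and gas) is what the front-running bot pockets.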
AI is teaching cybercriminals how to hack crypto platforms, enabling a wave of less-skilled attackers to launch credible attacks. This helps explain why crypto phishing and malware campaigns have scaled up so dramatically — AI tools let bad actors automate their scams and continuously refine them based on what works. AI is also supercharging malware threats and hacking tactics aimed at crypto users.

One concern is AI-generated malware: malicious programs that use AI to adapt and evade detection. In 2023, researchers demonstrated a proof-of-concept called BlackMamba, a polymorphic keylogger that uses an AI language model (like the tech behind ChatGPT) to rewrite its code with every execution. This means each time BlackMamba runs, it produces a new variant of itself in memory, helping it slip past antivirus and endpoint security tools. In tests, this AI-crafted malware went undetected by an industry-leading endpoint detection and response system. Once active, it could stealthily capture everything the user types — including crypto exchange passwords or wallet seed phrases — and send that data to attackers. While BlackMamba was just a lab demo, it highlights a real threat: Criminals can harness AI to create shape-shifting malware that targets cryptocurrency accounts and is much harder to catch than traditional viruses.

Even without exotic AI malware, threat actors abuse the popularity of AI to spread classic trojans. Scammers commonly set up fake 'ChatGPT' or AI-related apps that contain malware, knowing users might drop their guard due to the AI branding. For instance, security analysts observed fraudulent websites impersonating the ChatGPT site with a 'Download for Windows' button; if clicked, it silently installs a crypto-stealing Trojan on the victim's machine.

Beyond the malware itself, AI is lowering the skill barrier for would-be hackers. Previously, a criminal needed some coding know-how to craft phishing pages or viruses. Now, underground 'AI-as-a-service' tools do much of the work. Illicit AI chatbots like WormGPT and FraudGPT have appeared on dark web forums, offering to generate phishing emails, malware code and hacking tips on demand. For a fee, even non-technical criminals can use these AI bots to churn out convincing scam sites, create new malware variants, and scan for software vulnerabilities.

AI-driven threats are becoming more advanced, making strong security measures essential to protect digital assets from automated scams and hacks. Below are the most effective ways to protect crypto from hackers and defend against AI-powered phishing, deepfake scams and exploit bots:

- Use a hardware wallet: AI-driven malware and phishing attacks primarily target online (hot) wallets. By using hardware wallets — like Ledger or Trezor — you keep private keys completely offline, making them virtually impossible for hackers or malicious AI bots to access remotely. For instance, during the 2022 FTX collapse, those using hardware wallets avoided the massive losses suffered by users with funds stored on exchanges.
- Enable multifactor authentication (MFA) and strong passwords: AI bots can crack weak passwords using deep learning in cybercrime, leveraging machine learning algorithms trained on leaked data breaches to predict and exploit vulnerable credentials. To counter this, always enable MFA via authenticator apps like Google Authenticator or Authy rather than SMS-based codes — hackers have been known to exploit SIM swap vulnerabilities, making SMS verification less secure. (A minimal sketch of how these time-based codes work follows after this list.)
- Beware of AI-powered phishing scams: AI-generated phishing emails, messages and fake support requests have become nearly indistinguishable from real ones. Avoid clicking on links in emails or direct messages, always verify website URLs manually, and never share private keys or seed phrases, regardless of how convincing the request may seem.
- Verify identities carefully to avoid deepfake scams: AI-powered deepfake videos and voice recordings can convincingly impersonate crypto influencers, executives or even people you personally know. If someone is asking for funds or promoting an urgent investment opportunity via video or audio, verify their identity through multiple channels before taking action.
- Stay informed about the latest blockchain security threats: Regularly following trusted blockchain security sources such as CertiK, Chainalysis or SlowMist will keep you informed about the latest AI-powered threats and the tools available to protect yourself.
As AI-driven crypto threats evolve rapidly, proactive and AI-powered security solutions become crucial to protecting your digital assets. Looking ahead, AI's role in cybercrime is likely to escalate, becoming increasingly sophisticated and harder to detect. Advanced AI systems will automate complex cyberattacks like deepfake-based impersonations, exploit smart-contract vulnerabilities instantly upon detection, and execute precision-targeted phishing scams.

To counter these evolving threats, blockchain security will increasingly rely on real-time AI threat detection. Platforms like CertiK already leverage advanced machine learning models to scan millions of blockchain transactions daily, spotting anomalies instantly. As cyber threats grow smarter, these proactive AI systems will become essential in preventing major breaches, reducing financial losses, and combating AI-enabled financial fraud to maintain trust in crypto markets.

Ultimately, the future of crypto security will depend heavily on industry-wide cooperation and shared AI-driven defense systems. Exchanges, blockchain platforms, cybersecurity providers and regulators must collaborate closely, using AI to predict threats before they materialize. While AI-powered cyberattacks will continue to evolve, the crypto community's best defense is staying informed, proactive and adaptive — turning artificial intelligence from a threat into its strongest ally.
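The "anomaly spotting" described above generally means scoring each transaction against what normal activity looks like. The sketch below is a generic illustration of that idea using scikit-learn's IsolationForest on synthetic data; it is not CertiK's (or anyone's) actual pipeline, and the features and numbers are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions:
# columns = [value in ETH, gas price in gwei, sender account age in days]
normal = np.column_stack([
    rng.lognormal(mean=0.0, sigma=1.0, size=2000),   # mostly small transfers
    rng.normal(loc=30, scale=10, size=2000),         # typical gas prices
    rng.uniform(30, 2000, size=2000),                # established accounts
])

# A few suspicious transactions: huge value, aggressive gas, brand-new accounts
# (the kind of pattern a drainer or exploit bot might produce).
suspicious = np.array([
    [500.0, 300.0, 1.0],
    [1200.0, 450.0, 0.5],
    [800.0, 250.0, 2.0],
])

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(normal)                      # learn what "normal" looks like

labels = model.predict(suspicious)     # -1 = anomaly, 1 = looks normal
scores = model.decision_function(suspicious)
for tx, label, score in zip(suspicious, labels, scores):
    print(f"tx={tx} -> {'FLAG' if label == -1 else 'ok'} (score={score:.3f})")
```

A production system would use far richer features (counterparty graphs, contract bytecode signals, historical behavior) and stream transactions in real time, but the core loop (fit on normal behavior, score new activity, alert on outliers) is the same.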
Yahoo
16-07-2025
- Yahoo
Reddit Launches New Age Checking Requirements in the UK
This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

Reddit has announced that it will begin verifying user ages in the U.K. before enabling access to restricted subreddits, in line with evolving U.K. laws, and the approach could also expand to other regions.

In the U.K., the new Online Safety Act requires that all platforms implement measures to prevent children from accessing age-inappropriate content. As a result, beginning this week, Reddit says that it will begin verifying user ages, via third-party platform Persona, to abide by these new regulations.

As explained by Reddit:

'Reddit was built on the principle that you shouldn't need to share personal information to participate in meaningful discussions. Unlike platforms that are identity-based and cater to the famous (or those that want to become famous), Reddit has always favored upvoting great posts and comments by people who use whimsical usernames and not their real name. These conversations are often more candid and real than those that force you to share your real-world identity. However, while we still don't want to know who you are on Reddit, there are certainly situations where it would be helpful if we knew a little more about you.'

For example, whether you're actually a human being.

Reddit recently found itself in the firing line after it was revealed that researchers had unleashed a swarm of AI bots into the r/changemyview subreddit, in order to test whether AI bots were better at swaying people's opinions than actual humans (note: they are). That produced some interesting findings, but Reddit users were less than enthused about being manipulated by AI bots, without any knowledge or note about those interactions. Reddit has since been working on various solutions to address this, with identity checks, via a third party, now viewed as another option to assure people of humanity.

Age checking is the main focus here, though Reddit also notes that the U.K. is not alone on this front, with a growing number of jurisdictions now developing laws that will require platforms to verify the ages of their users. As such, this is likely an inevitable, broader shift either way, and Reddit's just getting ahead of it, and killing two birds with one stone, with all users, in all regions, potentially set to be subject to the same.

'We've tried to do this in a way that protects the privacy of UK redditors. To verify your age, we partner with a trusted third-party provider (Persona) who performs the verification on either an uploaded selfie or a photo of your government ID. Reddit will not have access to the uploaded photo, and Reddit will only store your verification status along with the birthdate you provided so you won't have to re-enter it each time you try to access restricted content. Persona promises not to retain the photo for longer than 7 days and will not have access to your Reddit data such as the subreddits you visit.'

Your birthdate also won't be visible to other users or advertisers, and will only be used to support safety features and age-appropriate experiences on Reddit.

Video age-checking has emerged as the most accurate and workable solution for age verification, with the Australian government also testing the same to align with its coming age restrictions on social media use. Though Reddit has also reportedly explored the use of eye-scanning to detect user identity.
That could be a more viable option in future, but right now, Reddit's going with video selfie checks to ensure it aligns with local requirements. That will no doubt be criticized by Redditors, but again, it could be an inevitable change either way.

Though it is interesting to see Reddit go from the app that exposed thousands of celebrity nudes, to the first platform to implement more thorough age checks.
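The flow Reddit describes is a classic data-minimization pattern: the third party sees the photo, while the platform keeps only a pass/fail result and a birthdate. The sketch below is a purely hypothetical illustration of that split; none of the field names or functions reflect Persona's or Reddit's actual APIs:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeVerificationRecord:
    """The only data the platform keeps: no photo, no ID document."""
    user_id: str
    is_verified: bool
    birthdate: date | None

def handle_verifier_callback(payload: dict) -> AgeVerificationRecord:
    """
    Process a (hypothetical) callback from the third-party verifier.
    Any image or document data is deliberately ignored and never stored.
    """
    return AgeVerificationRecord(
        user_id=payload["user_id"],
        is_verified=payload.get("status") == "approved",
        birthdate=date.fromisoformat(payload["birthdate"]) if payload.get("birthdate") else None,
    )

def can_view_restricted(record: AgeVerificationRecord, minimum_age: int = 18) -> bool:
    """Gate restricted content on verification status and age alone."""
    if not record.is_verified or record.birthdate is None:
        return False
    today = date.today()
    age = today.year - record.birthdate.year - (
        (today.month, today.day) < (record.birthdate.month, record.birthdate.day)
    )
    return age >= minimum_age

# Example callback: only status and birthdate are retained; the selfie stays
# with the verifier (and, per Reddit, is deleted there within 7 days).
record = handle_verifier_callback(
    {"user_id": "u_12345", "status": "approved", "birthdate": "2003-05-14"}
)
print(can_view_restricted(record))  # True for this over-18 example birthdate
```

The point is architectural: the platform can enforce age gates without ever holding the documents that prove them.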


Gizmodo
21-06-2025
- Business
- Gizmodo
Reddit Looks to Get in Bed With Altman's Creepy ‘World ID' Orbs for User Verification
Gaze into the Orb if you want your upvotes.

According to a report from Semafor, Reddit is actively considering partnering with World ID, the verification system co-founded by OpenAI CEO Sam Altman, to perform user verification on its platform. Per the report, Reddit's potential partnership with World ID would allow users to verify that they are human by staring into one of World ID's eye-scanning orbs. Once confirmed to be a real person, users would be able to continue using Reddit without revealing anything about their identity. Currently, Reddit only does verification via email, which has been insufficient to combat the litany of incoming AI-powered bots that are flooding the platform.

Gizmodo reached out to both Reddit and World ID for details of the potential partnership. Reddit declined to comment. A spokesperson for World said, 'We don't have anything to share at this time; however, we do see value in proof of human being a key part of online experiences, including social, and welcome all of the opportunities possible to discuss this technology with potential partners.'

For those unfamiliar, World is somewhere between a verification system and a crypto scheme. World ID is a method for verifying that a person is a human without requiring them to provide additional personal information—something the company calls 'anonymous proof of human.' It offers several verification techniques, but the most notable is its eye-scanning Orb. The company claims that neither 'verification data, nor iris photos or iris codes' are ever revealed, but going through the scan gets you a World ID, which can be used on a platform like Reddit, should it partner with World on this endeavor. Somewhere in the backend of this whole thing is a cryptocurrency called Worldcoin, which you theoretically can use at major retailers—but like, can you really? Is anyone doing that?

The founders of World, Altman and Alex Blania, launched the crypto part of the program with the intention of building an 'AI-funded' universal basic income. Mostly, it's made local governments really mad and has been at the center of legal and regulatory investigations into how it's handling user data. It has largely targeted developing nations for its early launches, and used some dubious practices along the way to get people to demo the system.

Also, it's probably not technically illegal, but it does seem pretty convenient that Sam Altman offers a 'solve' for a problem that his other company, OpenAI, is in no small part responsible for. Almost seems like he knew what issues he was about to cause and decided to cash in on both ends. Must be nice.


Daily Mail
07-05-2025
- Business
- Daily Mail
California colleges are being inundated with GHOST students who have a very sinister purpose
Artificial Intelligence (AI) bots are infiltrating California college classes in a financial aid fraud scheme, costing the state and federal government millions of dollars.

Professors across Golden State community colleges have noticed an uptick in non-participative students, specifically in virtual classes, since the pandemic. Alarmingly, a significant portion of these passive enrollees have not actually been human, but AI-generated 'ghost students.' These bots have been creeping their way into online courses and taking spots away from real students to scam money from financial aid and scholarship programs. Since community colleges have high acceptance rates, these academic institutions have been easy targets.

Over the last 12 months, state colleges reportedly handed off $10 million in federal funds and $3 million in state funds to fake students, according to CalMatters. Data collected from the start of 2025 indicates schools have already thrown away $3 million in federal aid and about $700,000 in state funds. This is a jarring increase from the period between September 2021 and December 2023, when fake students reportedly drew in more than $5 million in federal money and $1.5 million in state funds.

This increasingly prominent scam has left professors disheartened. Instead of focusing on the quality of their teaching, they must probe their students to make sure they are legitimate.

'I am very intentional about having individualized interaction with all of my students as early as possible,' City College of San Francisco professor Robin Pugh told SFGate. 'That included making phone calls to people, sending email messages, just a lot of reaching out individually to find out "Are you just overwhelmed at work and haven't gotten around to starting the class yet? Or are you not a real person?"'

In previous years, Pugh said she only had to pluck about five people from her 40-student online introductory real estate course for not engaging with her at the start of the semester. But this spring, she had to slash 11 students - most of them bots - from the class.

Roughly 20 percent of 2021 college applicants were likely fraudulent, CalMatters reported. In January 2024, the number of fake applications rose to 25 percent. The fraction shot up this year, with 34 percent of applications suspected to be 'ghost students.'

'It's been going on for quite some time,' Wendy Brill-Wynkoop, the president of the Faculty Association of California Community Colleges and a professor at College of the Canyons in Santa Clarita, told SFGate. 'I think the reason that you're hearing more about it is that it's getting harder and harder to combat or to deal with.

'I have heard from faculty friends that the bots are getting so smart, they're being programmed in a way that they can even complete some of the initial assignments in online classes so that they're not dropped...'

Berkeley City College librarian Heather Dodge realized her online course was packed with the scammer bots when she asked students to submit a brief introductory video of themselves so she could get to know them, despite never meeting in person. 'I started noticing that there would be a handful of students that wouldn't submit that assignment in the first week,' she told CalMatters. After emailing them - and not receiving a response - she dropped them from her class.

Southwestern College professor Elizabeth Smith had a similar experience this spring, when two of her online courses and their waiting lists were completely maxed out.
'Teachers get excited when there's a lot of interest in their class. I felt like, "Great, I'm going to have a whole bunch of students who are invested and learning,"' she told The Hechinger Report. 'But it quickly became clear that was not the case.' Of the 104 students in the classes and on the waitlists, only 15 of them ended up being real people.

Professors have a specified time frame to remove people from their roster so those enrollees cannot collect financial aid for the class. However, after that period, it is more difficult to remove them. Educators are also wary of becoming overzealous with their cuts - accidentally mistaking an actual student, possibly experiencing technical difficulties, for a scammer-sent agent. 'Maybe they didn't have a webcam, maybe they didn't understand the assignment. It was really hard to suss out what was going on with them,' Dodge explained.

This was the case for Martin Romero, a journalism major at East Los Angeles College, who was mistaken for a bot and wrongfully dropped from a class. On his first day of classes last fall, he failed to log onto class, so his professor swiftly removed him. 'I was freaking out,' the 20-year-old told CalMatters. He emailed the professor to try to rectify the issue, but the course had already filled up again.

In the grand scheme of college funding, the California Community Colleges Chancellor's Office estimated only a fraction of a percent of financial aid was handed out to scammers, SFGate reported. Chris Ferguson, a finance executive at the chancellor's office, told CalMatters the scope of the fraud is 'relatively small' considering that California community colleges received $1.7 billion in federal aid and $1.5 billion in state aid last year.

Catherine Grant of the Department of Education's Office of Inspector General, which is tasked with handling fraud, told CalMatters her team is 'committed to fighting student aid fraud wherever we find it.' CalMatters discovered that the FBI busted a scammer ring at Los Angeles Harbor College and West Los Angeles College in June 2022 after being tipped off by the department. These fraudsters used at least 57 AI identities to steal more than $1.1 million in federal aid and loans over four years. Another document from the education department to the FBI revealed at least 70 fake students were enrolled at Los Angeles City College 'for the sole purpose of obtaining financial aid refund money.'