Latest news with #bots


CNA
4 days ago
- Business
- CNA
More signing up for courses teaching them to build custom AI bots
Ordinary Singaporeans with no coding background are seizing the chance to build their own AI bots: tools that can remember routines, automate chores and send reports without being asked. Enrolments for some of these courses have soared nearly 400 per cent since 2023. With AI Singapore promising it can be 'as easy as making a PowerPoint', schools and training providers are racing to meet the surge. Deepika Giri, Associate Vice President at technology consultancy IDC Research, shares more on customised AI assistants.


Fox News
03-08-2025
- Business
- Fox News
Meta deletes 10 million Facebook accounts this year, but why?
Facebook, being a platform used by billions, is flooded with all kinds of accounts. However, not all of them are genuine. Many are bots that hijack comment sections, posting the same messages repeatedly. Others have built their entire follower base by sharing content originally created by someone else. Meta seems to have realized the platform needs some cleaning: it has announced that a staggering 10 million accounts were deleted in the first half of 2025 alone, and the purge, it seems, is far from over.

Meta says the roughly 10 million accounts were removed mostly for impersonation, spam behavior and fake engagement, as part of a broader effort to clean up the platform and prioritize original content. The accounts were taken down for impersonating large content producers, according to the company, which shared the update in a blog post aimed at creators. The crackdown is part of a broader initiative "to make Feed more relevant and help authentic creators break through," with Meta starting by "cracking down on spammy content." In addition to the 10 million impersonator accounts, the company says it took action on around 500,000 accounts engaged in spammy behavior or fake engagement.

The company is also enforcing stricter policies to cut down on what it calls unoriginal content, which it defines as posts that repeatedly reuse or repurpose another creator's work without giving credit. Meta says it's not targeting creators who participate in trends or remix existing content; what matters is whether they add something original to the mix. The company encourages reaction videos, commentary and other transformative uses of content. But accounts that repeatedly repost others' work without permission or meaningful changes will face consequences, including reduced distribution of their content and temporarily disabled access to monetization features.

If Meta's systems detect duplicate videos, the platform will prioritize the original version and limit the reach of the copies. The company is also experimenting with ways to credit original creators more clearly, such as adding links back to the source video. To help creators maintain visibility and reach, Meta recommends focusing on original content, avoiding third-party watermarks and making substantial edits when using material from other sources. Basic stitching or watermarking, the company notes, doesn't count as a meaningful transformation.

Meta's crackdown isn't just about removing spam; it directly affects how content is ranked, shown and monetized. For creators, especially smaller ones trying to expand an audience, originality now plays a bigger role than ever. If your content is flagged as unoriginal or spammy, Facebook may stop showing it in people's feeds. That can tank your reach and, in some cases, cut off access to monetization tools like in-stream ads or bonus programs. On the flip side, creators who focus on making unique content or thoughtfully transforming existing media have a better shot at standing out. Meta says it's adjusting its algorithms to boost authentic voices, which could help original creators gain more traction if they play by the new rules.
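Meta has not published how its duplicate-video detection works. Purely as an illustration of the general technique such systems often build on, here is a minimal perceptual-hashing sketch in Python; the grayscale-frame input, hash size and distance threshold are all assumptions, not Meta's actual system:

```python
# Illustration only: a toy average-hash ("aHash") comparison, the kind of
# perceptual hashing commonly used for near-duplicate media detection.
# Meta's real pipeline is not public; the 8x8 grayscale frame input and
# the distance threshold here are assumptions.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash a small grayscale frame (e.g. 8x8): one bit per pixel,
    set if that pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_like_duplicate(frame_a: list[list[int]],
                         frame_b: list[list[int]],
                         max_distance: int = 5) -> bool:
    """Near-identical frames hash to nearly identical bit strings,
    so a small Hamming distance suggests a re-upload."""
    return hamming(average_hash(frame_a), average_hash(frame_b)) <= max_distance

# A copy that only shifts brightness still matches the original.
frame = [[(x * y) % 256 for x in range(8)] for y in range(8)]
brighter = [[min(p + 10, 255) for p in row] for row in frame]
assert looks_like_duplicate(frame, brighter)
```

A production system would hash many frames per video and match them against an index of known originals before applying the "prioritize the original, limit the copies" logic described above.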
To prevent Meta from flagging or removing your Facebook account under its new policies, especially if you're a creator or post content regularly, follow these key steps:

1. Post original content. Share content you created yourself, whether it's photos, videos, text or anything else. Meta is more likely to penalize accounts that mostly rely on reposted or recycled material.
2. Transform content if you reuse it. If you're sharing someone else's content (with permission or under fair use), add real value. Think reaction videos, voiceovers, commentary or edits that change the context or experience. Simply stitching clips together or slapping on a watermark won't cut it.
3. Avoid impersonation and spam tactics. Don't pretend to be another creator or brand, and don't rely on engagement bait (like spamming comment sections or repetitive hashtags). Meta is actively removing accounts that engage in fake interactions.
4. Avoid visible third-party watermarks. If your video shows clear signs of being recycled from another app, such as a TikTok watermark, Meta may flag it. Upload clean versions without logos or branding from other platforms.

Meta cracking down on spam and fake accounts is a step in the right direction, especially for creators who've been struggling to get noticed. Facebook has been messy for a while now, with the same videos and memes popping up from different pages and bots flooding comment sections.


Tahawul Tech
15-07-2025
- Business
- Tahawul Tech
Companies must protect themselves against bots bypassing defenses
David Warburton, Director, F5 Labs, outlines the growing sophistication of bot adversaries and the steps companies can take to combat them in this exclusive op-ed.

In today's digital landscape, where applications and APIs are the lifeblood of businesses, a silent threat lurks: sophisticated bot adversaries. While traditional security measures focus on preventing malicious attacks, automated threats are slipping through undetected by mimicking human behaviour and exploiting gaps in application logic in unexpected ways. F5 Labs' recently released 2025 Advanced Persistent Bots Report sheds light on the evolving tactics of advanced persistent bots and the challenges they pose. Here are three trends that stood out for me from this year's research, and what companies can do to protect themselves.

1. Credential stuffing: When stolen passwords expose valuable data

Imagine a scenario where cybercriminals use readily available stolen credentials to access sensitive user accounts. This is the reality of credential stuffing, a prevalent bot-driven attack that exploits the widespread practice of password reuse. According to F5 Labs, some organisations see upwards of 80% of their login traffic coming from credential stuffing attacks launched by bots. The report highlights that, even with a low success rate of 1% to 3% per attack campaign, the sheer volume of automated logins translates into a substantial number of compromised accounts.

Breaches such as the PayPal incident in 2022, in which almost 35,000 user accounts were accessed and highly monetisable personal information exposed, supply the massive databases of usernames and passwords that are then reused against other online services. Even a small success rate can yield significant results, because many people reuse passwords. These details can then be used for fraudulent transactions or data theft, or sold on the dark web for targeted attacks.

In recent years, several well-known brands have reported credential stuffing attacks. The decline of genetic testing firm 23andMe was, in part, attributed to a credential stuffing campaign that exposed customer health and ancestry information. Data was found for sale on the dark web at prices ranging from $1,000 for 100 profiles up to $100,000 for 100,000 profiles. The company cited customers' lack of adoption of the site's multi-factor authentication (MFA) option as the primary failure, but the insidious nature of credential stuffing lies in its ability to bypass traditional security measures. Since the bots are using legitimate credentials and are not trying to exploit any vulnerabilities, they don't trigger typical alarms. MFA can help but, due to the rise in real-time phishing proxies (RTPP), it's not foolproof. Organisations must implement smart bot detection solutions that analyse login patterns, device fingerprints and behavioural anomalies to see what's really going on.
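The op-ed doesn't spell out the detection logic, but one simple login-pattern heuristic of the kind it alludes to is flagging sources that try many distinct usernames with an abnormally low success rate. A minimal sketch, assuming a per-source-IP view of login events; all names and thresholds below are hypothetical, not F5's product:

```python
# Sketch of one login-pattern heuristic for spotting credential stuffing:
# a single source spraying many *distinct* usernames with mostly failed
# logins. Field names and thresholds are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class SourceStats:
    usernames: set = field(default_factory=set)
    attempts: int = 0
    failures: int = 0

stats: defaultdict[str, SourceStats] = defaultdict(SourceStats)

def record_login(source_ip: str, username: str, success: bool) -> None:
    """Update per-source counters after each login attempt."""
    s = stats[source_ip]
    s.usernames.add(username)
    s.attempts += 1
    if not success:
        s.failures += 1

def is_suspicious(source_ip: str,
                  min_attempts: int = 20,
                  min_distinct_users: int = 10,
                  max_success_rate: float = 0.05) -> bool:
    """A human retries a handful of their own accounts; a stuffing bot
    sprays many accounts and succeeds rarely (the report cites 1-3%
    success per campaign)."""
    s = stats[source_ip]
    if s.attempts < min_attempts:
        return False  # too little data to judge
    success_rate = 1 - (s.failures / s.attempts)
    return len(s.usernames) >= min_distinct_users and success_rate <= max_success_rate
```

A real deployment would add sliding time windows, device fingerprinting and behavioural signals, not least because, as the report's third trend shows, one campaign can rotate across thousands of residential-proxy IPs and dilute any single per-IP counter.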
2. Hospitality under siege: Gift card bots and the rise of 'carding'

While the finance and retail sectors are often considered prime targets for cyberattacks, F5 Labs' research showed that hospitality is heavily targeted by malicious bot activity. In particular, 'carding' and gift card bots target hospitality websites and APIs, with some organisations experiencing a 300% surge in malicious bot activity compared to last year. The report also notes that the average value of gift cards targeted by bots is increasing.

Carding uses bots to validate stolen credit card numbers by rapidly testing them on checkout pages and APIs. Gift card bots exploit loyalty programs and gift card systems: attackers use them to check balances, transfer points or redeem rewards illegally. These bots often target weaknesses such as simple patterns and sequential gift card IDs. The hospitality industry's vulnerability stems from the fact that loyalty points and gift cards are essentially digital currency, which cybercriminals can easily convert into cash or use to purchase goods and services.

To protect themselves, hospitality businesses must implement robust bot detection and mitigation strategies specifically tailored to these kinds of threats. This includes monitoring gift card activity, analysing transaction patterns and deploying solutions that can differentiate between humans and bots. CAPTCHAs, once the go-to solution for blocking bots, have been easily bypassed by bot operators for years.

3. Bypassing the gatekeepers: Residential proxies and the futility of CAPTCHAs

Traditional bot defences like CAPTCHAs and IP blocking are failing against increasingly sophisticated evasion tactics. Bot operators can easily outsource CAPTCHA solving to human click farms, where individuals are paid small amounts to solve challenges on demand. Furthermore, the rise of residential proxy networks is a significant factor. These networks route bot traffic through residential IPs on compromised devices, masking the bots' true IP addresses. The F5 Labs report suggests that residential proxies are now widely used by bot operators, and that the majority of bot traffic now appears to originate from these networks.

Identity management vendor Okta flagged the role of the broad availability of residential proxy services in a surge of credential stuffing attacks on its users last year. The company said that millions of fake requests had been routed through residential proxies to make them appear to originate from the mobile devices and browsers of everyday users, rather than from the IP space of virtual private server (VPS) providers.

To effectively combat these advanced evasion techniques, organisations need to move beyond traditional defences and embrace smart bot solutions that leverage machine learning and behavioural analysis to identify bots by their unique characteristics. By focusing on human-like behaviour, rather than relying on IP addresses or CAPTCHAs, organisations can more accurately detect and block sophisticated bot attacks.

Navigating the risk landscape: Finding your bot defence sweet spot

Ultimately, the level of bot defence an organisation implements depends on its risk appetite. Every business must weigh the potential costs and benefits of different mitigation strategies and determine the level of risk it is willing to accept. Completely eliminating all bot traffic may not be feasible, or even desirable, as some automated activity is legitimate and beneficial. However, failing to address malicious bot activity can lead to significant financial losses, reputational damage and customer frustration.

The key is to find the right balance. By understanding the different types of bots targeting your organisation, assessing the potential impact of their activities, and implementing appropriate detection and mitigation measures, you can effectively manage your bot risk and protect your business, and your customers, from advanced persistent bot threats.

Image Credit: F5


Yahoo
02-07-2025
- Yahoo
X Will Enable AI Bots to Create Community Notes
This story was originally published on Social Media Today.

X is moving to the next stage of its Community Notes fact-checking process with the addition of 'AI Note Writers,' automated bots that can create their own Community Notes, which will then be assessed by human Notes contributors. X is now enabling developers to build Community Notes creation bots, which can be focused on providing accurate answers within certain niches or elements. The bots will then be able to respond to user calls for a Community Note on a post, and provide contextual information and references to support their assessment.

As explained by X: 'Starting today, the world can create AI Note Writers that can earn the ability to propose Community Notes. Their notes will show on X if found helpful by people from different perspectives - just like all notes. Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information - a powerful feedback loop.'

The process makes sense, especially given people's growing reliance on AI tools for answers these days. The latest wave of AI bots can reference key data sources and provide succinct explanations, which probably makes them well-suited to this type of fact-checking process. Done systematically, that could produce more accurate fact-checks, while humans will still need to assess those answers before they're displayed to users.

However, I wonder whether X is actually going to allow AI fact-checks that don't end up aligning with Elon Musk's own perspective on certain issues, because Musk has repeatedly criticized his own AI bot's answers to various user queries of late. Just last week, Musk publicly chastised his Grok AI bot after it referenced data from Media Matters and Rolling Stone in its answers to users. Musk responded by saying that Grok's 'sourcing is terrible,' and that 'only a very dumb AI would believe MM and RS.' He then followed that up by promising to overhaul Grok, retraining it on revised data that includes 'politically incorrect, but nonetheless factually true' info, essentially editing the bot's data sources to better align with his own ideological views.

Maybe, if such an overhaul does take place, X will then only allow these Community Notes chatbots to reference its Grok datasets, ensuring that they don't cite data that Musk doesn't agree with. That doesn't feel overly balanced or truthful. At the same time, it seems unlikely that Musk will be keen to allow bots as fact-checkers if they consistently counter his own claims.

But maybe this is a step toward improvement on that front: more direct, data-backed responses, delivered faster, which would ensure that more questionable claims are challenged in the app. In theory, it could be a valuable addition; I'm just not sure that Musk's efforts to influence similar AI tools are a positive signal for the project. Either way, X is launching its Community Notes AI pilot today, and it will expand over time.
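X publishes its Community Notes scoring code, and the core gate is that a note only ships if raters who normally disagree both find it helpful; the production system learns those viewpoints with a matrix-factorization model. The sketch below is a deliberately simplified stand-in, assuming raters are pre-assigned to two viewpoint clusters, with thresholds invented for illustration:

```python
# Deliberately simplified stand-in for the "helpful across perspectives"
# gate that every Community Note, AI-written or human-written, must pass.
# The real scorer learns rater viewpoints with matrix factorization; here
# raters are pre-assigned to clusters "A"/"B", a toy assumption.

def note_should_show(ratings: list[tuple[str, bool]],
                     min_ratings_per_cluster: int = 3,
                     min_helpful_ratio: float = 0.7) -> bool:
    """ratings: (rater_cluster, rated_helpful) pairs.
    The note ships only if BOTH clusters independently find it helpful."""
    for cluster in ("A", "B"):
        votes = [helpful for c, helpful in ratings if c == cluster]
        if len(votes) < min_ratings_per_cluster:
            return False  # not enough cross-perspective signal yet
        if sum(votes) / len(votes) < min_helpful_ratio:
            return False  # this cluster doesn't find it helpful enough
    return True

# A note that only one viewpoint cluster likes does not get shown.
ratings = [("A", True), ("A", True), ("A", True),
           ("B", False), ("B", True), ("B", False)]
assert note_should_show(ratings) is False
```

AI Note Writers pass through this same gate as human contributors, which is the basis for X's claim that community rating keeps the feedback loop honest.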


The Verge
01-07-2025
- Business
- The Verge
X opens up to Community Notes written by AI bots
X is launching a way for developers to create AI bots that write Community Notes, which can then potentially appear on posts. Like humans, the 'AI Note Writers' will be able to submit a Community Note, but a note will only actually be shown on a post 'if found helpful by people from different perspectives,' X says in a post on its Community Notes account. Notes written by AI will be 'clearly marked for users' and, to start, 'AIs can only write notes on posts where people have requested a note.' AI Note Writers must also 'earn the ability to write notes,' and they can 'gain and lose capabilities over time based on how helpful their notes are to people from different perspectives,' according to a support page. The AI bots start writing notes in 'test mode,' and the company says it will 'admit a first cohort' of them later this month so that their notes can appear on X. These bots 'can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans,' X's Keith Coleman tells Bloomberg in an interview. 'So we think that combination is incredibly powerful.' Coleman says there are 'hundreds' of notes published on X each day.