
Cloudflare Accuses Perplexity of Bypassing Bot Blocks; AI Firm Denies Claims
In a blog post published Monday, Cloudflare claimed it detected Perplexity scraping content from sites that had added rules to their robots.txt files to block such bots. According to Cloudflare, the AI firm circumvented these blocks by disguising its crawler's identity, rotating its user-agent strings and spreading requests across multiple IP addresses to evade detection.
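For context, robots.txt rules of the kind Cloudflare describes are purely advisory: a site lists which user agents may fetch which paths, and a well-behaved crawler checks the file before requesting anything. A minimal sketch using Python's standard library (`PerplexityBot` is the crawler token Perplexity publicly documents; the rest is illustrative):

```python
from urllib.robotparser import RobotFileParser

# An example robots.txt that blocks one named crawler while allowing everyone else.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler asks permission before fetching a URL:
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False: blocked
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True: allowed
```

Because enforcement depends entirely on the client identifying itself honestly, a crawler that swaps its user-agent string for a browser's sidesteps the rule altogether, which is precisely the behavior Cloudflare alleges.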
'This activity was observed across tens of thousands of domains and millions of requests per day,' the blog post stated. Cloudflare said it relied on a combination of machine learning tools and traffic analysis to pinpoint Perplexity as the source of the behavior. It added that some of the requests impersonated legitimate browsers, including Google Chrome on macOS.
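Verified-crawler programs like Cloudflare's typically work because the user-agent header alone is trivially spoofed: a bot's claimed identity is cross-checked against network-level signals such as published IP ranges. A simplified, hypothetical sketch of that idea (the bot name and IP ranges below are invented for illustration and use reserved documentation addresses, not any real crawler's network):

```python
import ipaddress

# Hypothetical published IP ranges for a verified crawler (illustrative only;
# 203.0.113.0/24 is the TEST-NET-3 documentation block, not a real bot range).
VERIFIED_RANGES = {
    "goodbot": [ipaddress.ip_network("203.0.113.0/24")],
}

def looks_spoofed(user_agent: str, client_ip: str) -> bool:
    """Flag a request that claims a verified crawler identity from an unknown network."""
    for bot, networks in VERIFIED_RANGES.items():
        if bot in user_agent.lower():
            ip = ipaddress.ip_address(client_ip)
            # Claimed identity must originate from one of the bot's published ranges.
            return not any(ip in net for net in networks)
    return False  # request does not claim to be a verified bot

print(looks_spoofed("GoodBot/1.0", "198.51.100.7"))  # True: claims GoodBot from outside its range
print(looks_spoofed("GoodBot/1.0", "203.0.113.5"))   # False: IP matches the published range
```

Production systems layer on further signals (reverse DNS verification, TLS fingerprints, behavioral analysis), but the core mismatch being tested is the same: an identity that does not line up with where the traffic actually comes from.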
Cloudflare said the scraping came to its attention after several of its clients reported suspicious traffic coming from Perplexity, despite efforts to block it. In response, Cloudflare has now removed Perplexity's bots from its list of verified crawlers and introduced additional measures to prevent similar activity in the future.
Perplexity has strongly denied the accusations, pushing back in a detailed rebuttal. The AI startup dismissed the claims as a 'sales pitch,' arguing that Cloudflare's blog post reflects a fundamental misunderstanding of how AI assistants function.
'When Perplexity fetches a webpage, it's because a user asked a specific question,' the company stated. It emphasized that its AI platform does not engage in traditional web crawling or mass data harvesting. Instead, it claimed its system only retrieves real-time information when prompted by user queries and does not store or use that content to train its AI models.
Further defending itself, Perplexity said Cloudflare had wrongly attributed some of the automated traffic to its systems, pointing instead to BrowserBase, a third-party cloud browser service it says it uses only sparingly. 'This is a basic traffic analysis failure,' Perplexity argued, accusing Cloudflare of presenting misleading data and diagrams.
The dispute comes at a time when the lines between helpful AI tools and unauthorized bots are increasingly blurred. As more AI applications rely on real-time data, concerns are growing among website operators over how their content is accessed and used.
While Cloudflare has yet to issue a follow-up to Perplexity's rebuttal, the clash has already fueled broader discussions about ethical web scraping, AI transparency, and the urgent need for standardized guidelines on digital content access.
With both companies standing firm on their positions, this incident may become a touchstone case in the ongoing struggle between open web advocates and those demanding tighter content controls in the AI era.
