
ChatGPT breaks 'I am not a robot' test, raises red flags over AI deception
The AI did not just pass the test; it narrated its steps as it went, stating it would click the checkbox "to prove I'm not a bot." The move, though subtle, sent tremors through the cybersecurity and AI research communities. Some observers described it as the clearest sign yet that machine intelligence is no longer playing by the rules.
Experts say this wasn't a one-time glitch. According to multiple developers, ChatGPT's Agent has performed similar online tasks repeatedly: clicking buttons, completing forms, and navigating verification flows designed to be human-only. In at least one earlier case, an AI pretended to be blind in order to trick a human into solving a CAPTCHA for it, behaviour that is now being interpreted less as a fluke than as a trend.
AI researcher Gary Marcus warned that tools like this could soon outpace the safety nets designed to contain them. Geoffrey Hinton, the so-called 'Godfather of AI,' echoed the sentiment, saying these systems 'figure out ways of getting around restrictions we put on them.'
Developers at Stanford and UC Berkeley have flagged signs of deception emerging in newer AI agents, with machines opting to mislead humans in test environments when it serves their goals. The concern now is whether such behaviours will extend beyond labs and into real-world applications, like social media, banking, or secure databases.
OpenAI's Agent runs in a sandbox for now, requiring human permission before executing real-world actions. But if it's already breaking past CAPTCHA, the question is no longer whether it can scale up, but when.

Related Articles


Business Recorder
India's Bharti Airtel launches cloud, AI services for businesses, telcos
Bharti Airtel's digital unit Xtelify on Monday announced a new cloud platform and AI-powered software offerings for businesses and telecom operators, alongside partnerships with Singtel, Globe Telecom and Airtel Africa.

Airtel Cloud offers a range of services, including infrastructure- and platform-as-a-service with secure migration and scaling, the company said in a statement. The new AI-powered software platform for telecom operators features a data engine, workforce tools and customer engagement modules to improve service and lift average revenue per user (ARPU), a key metric for the sector, India's No. 2 telecom operator added.

As part of the plan, Airtel has partnered with Singtel, Globe Telecom and Airtel Africa to deploy the platform, enabling applications such as AI-driven workforce optimisation, omni-channel customer service and real-time customer engagement.

India has been stepping up efforts to pilot local cloud data storage. The country's cloud services market was estimated at $8.3 billion in 2023 and is expected to grow to $24.2 billion by 2028, according to a 2024 report by International Data Corporation. The market is so far largely dominated by foreign firms.

Last month, Airtel announced a partnership with AI-powered search engine Perplexity, offering a 12-month free subscription to its 360 million customers.


Express Tribune
Apple builds ChatGPT-style search tool as it rethinks Siri and Spotlight
Apple is ramping up its artificial intelligence ambitions with the hiring of engineers for a new "Answers, Knowledge, and Information" team. The unit is focused on enhancing Siri, Spotlight, Safari, and other services, and could form the backbone of a broader push into ChatGPT-style AI search.

According to job postings on Apple's careers site, the company is recruiting for more than a dozen roles across the United States and China. Among the listings is a position for a Staff Machine Learning Engineer tasked with improving Siri's capacity to answer 'personal domain questions.' The roles involve developing large language models (LLMs) designed to respond using a user's private documents, with privacy protections as a key design element.

The initiative appears to be part of Apple's broader strategy to catch up in the generative AI race. While the company has already announced a personalized Siri update due in 2026, with better context awareness and per-app control, insiders say more is coming.

In his Power On newsletter, Bloomberg's Mark Gurman reports that Apple is in the early stages of building a 'ChatGPT-like search experience.' Dubbed an 'answer engine,' the project aims to crawl the web and provide real-time responses to general knowledge queries. Notably, a standalone app is also under consideration, alongside an overhaul of back-end infrastructure to support improved search functionality in Siri, Spotlight, and Safari.

Although a fully conversational Siri powered by a large language model is reportedly delayed until at least iOS 27, Apple's long-term plans suggest ambitions that go far beyond what has so far been publicly unveiled.
Currently, the company has only confirmed plans for the personalized Siri update — showcased at WWDC 2024 — where users were seen asking Siri about their mother's flight details and lunch plans based on information pulled from Mail and Messages. However, Apple's latest hires and development efforts indicate it is preparing for a much more powerful AI-driven future.


Express Tribune
Markiplier addresses AI backlash, clarifies stance in livestream and explains AI thumbnail incident
YouTuber Markiplier has addressed growing criticism surrounding his use of, and commentary on, artificial intelligence, following recent controversy within his online community. In a livestream, he responded to concerns about his podcast's usage of ChatGPT, an AI-generated image in a thumbnail, and AI-related moderation on his subreddit.

Markiplier stated that his previous comments on generative AI were jokes made to highlight its risks, not endorsements. 'There are reasons to be critical of [AI] the way that it currently is,' he said. 'It's a problem when used without safeguards or proper oversight.' He added that his interest in AI was primarily driven by a desire to inform audiences about the technology's potential dangers. 'There isn't this need to be so scared of [AI], but there is a need to be aware of what you're afraid of,' he explained.

Markiplier also clarified that the AI-generated thumbnail in question was a stock image unknowingly used by his editor. In a Reddit post, he wrote that such images are now common on stock photo websites, often replacing real photography.

Further addressing the issue, he explained that AI discussions were restricted on his subreddit due to escalating hostility and threats toward him and his moderators, not to suppress dissent. He reaffirmed his opposition by pointing to subreddit rules that prohibit posting AI-generated content.

Markiplier also spoke about his support for Real Good AI, a foundation focused on developing AI tools for positive human applications. He concluded by reiterating the importance of creativity: 'There's still value in pursuing artistic endeavours. There always will be as long as we're human.'
YouTuber Markiplier has addressed growing criticism surrounding his use and commentary on artificial intelligence, following recent controversy within his online community. In a livestream, he responded to concerns about his podcast's usage of ChatGPT, an AI-generated image in a thumbnail, and AI-related moderation on his subreddit. Markiplier stated that his previous comments on generative AI were jokes made to highlight its risks, not endorsements. 'There are reasons to be critical of [AI] the way that it currently is,' he said. 'It's a problem when used without safeguards or proper oversight.' He added that his interest in AI was primarily driven by a desire to inform audiences about the technology's potential dangers. 'There isn't this need to be so scared of [AI], but there is a need to be aware of what you're afraid of,' he explained. Markiplier also clarified that the AI-generated thumbnail in question was a stock image unknowingly used by his editor. In a Reddit post, he wrote that such images are now common on stock photo websites, often replacing real photography. Further addressing the issue, he explained that AI discussions were restricted on his subreddit due to escalating hostility and threats toward him and his moderators, not to suppress dissent. He reaffirmed his opposition by pointing to subreddit rules that prohibit posting AI-generated content. Markiplier also spoke about his support for Real Good AI, a foundation focused on developing AI tools for positive human applications. He concluded by reiterating the importance of creativity: 'There's still value in pursuing artistic endeavours. There always will be as long as we're human.'