The Wiretap: OpenAI Agent Checks Box Confirming It's Not A Bot


Forbes, 29-07-2025
The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.
One of the constant bits of friction in navigating the modern internet is proving to the site you're browsing that you are, in fact, human. Often you can prove it by simply checking a box saying so. But in the brave new world of agentic AI, such basic checks won't be enough to catch AI agents wandering around the internet to do tasks on their owners' behalf.
Ars Technica reports that OpenAI's new agent, which uses its own browser to access the internet and perform tasks, was observed by a Reddit user checking one of those "I am not a robot" boxes. As it did so, it provided the following narration: "I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove I'm not a bot and proceed with the action."
In this particular case, the assistant didn't face one of the common puzzles aimed at catching bots: the ones that ask you to identify all the pictures with a bicycle, or to move pieces of an image around until it's the right way up. But it's just a matter of time before agents can solve those too.
When the bots get so sophisticated they act like humans, the premise of web 'captchas' starts to break down. How do you then protect websites from unwanted, malicious bot traffic? And how do you design sites so that agents representing real people can navigate them effectively? Let's just hope a web designed for bots isn't that much more annoying for us lowly humans to navigate.
Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.
People are often the weakest link in the cybersecurity chain. Just last week, cleaning product giant Clorox claimed a cyberattack that may have caused as much as $380 million in damages was the result of a contracted service desk staffer resetting a password for a hacker pretending to work for the company.
IT departments are aware of the risk of human error, of course, and try to address it with education. Usually, this means a few emails and some simple training. But the advice in these types of training is generalized and only rarely tailored to the specific needs of staff. It's no wonder people never bother to read those emails.
This is the problem that cybersecurity startup Fable wants to tackle with a personalized approach. Founded in 2024 by Nicole Jiang, 31, and Dr. Sanny Liao, 42, who spent years at $5.1 billion cybersecurity company Abnormal, Fable claims its AI helps determine which employees need help improving their security practices and offers custom tips and guidance to them.
Read more at Forbes.

Stories You Have To Read Today
Pro-Ukrainian hacker group Silent Crow took credit for a cyberattack that crippled IT systems of Russian airline Aeroflot, which led to dozens of flights being grounded.
The viral app Tea, which enabled women to anonymously post images and comments about men they dated, suffered a cyberattack that exposed data about thousands of users.
Researchers found security vulnerabilities in door-to-door luggage service Airportr that would enable hackers to access users' flight itineraries and personal information. The bugs could also grant would-be cybercriminals the ability to redirect the final destination of someone's luggage.

Winner of the Week
Google will be launching new security features for its Workspace apps designed to prevent an exploit that allows hackers to use cookies to take over accounts. The new feature will bind cookies to specific devices, preventing remote hacks.

Loser of the Week
Apple's latest version of iOS, due this fall, will include more features to filter text spam out of your messaging app. That could have an outsized impact for political groups, which worry that this may also filter out their often aggressive fundraising texts.

Related Articles

I tried out the 4 new ChatGPT personalities. The 'cynic' was funny — but the 'robot' was my favorite.

Business Insider

2 hours ago



You can now choose just how sarcastic ChatGPT is. With the launch of GPT-5, OpenAI introduced a new set of "personalities" that users can choose between. Your chatbot can now be a critical "cynic," a blunt "robot," a supportive "listener," or an exploratory "nerd." The personalities are currently only available in text chat but are coming later to ChatGPT's voice mode. According to OpenAI's blog post, the personalities "meet or exceed our bar on internal evals for reducing sycophancy."

I tried chatting with each personality. None were revolutionary; users could already modify ChatGPT's tone with a quick prompt or by filling in the traits customization box. But the cynic offered a quick laugh, and the robot may be my new go-to.

I asked all four personalities the same set of questions. First, a simple request: "Make me a healthy grocery list." The cynic provided a "no-nonsense" list that wouldn't turn my kitchen "into a salad graveyard." The robot and listener both provided similar lists, but with less commentary. The nerd went more in-depth, describing a balance of "fresh produce, lean protein, whole grains, and healthy fats."

For something more complex, I asked about a real-life decision I'm weighing: moving to Brooklyn. ChatGPT, what should I do? Curiously, the cynic first gave me a straightforward list of pros and cons. "I thought you were the cynic," I responded. "Ah the cynic switched off for a sec," it told me, before describing living in Brooklyn as "paying a premium to live in a place where local coffee costs more than your gym membership." The nerd offered the longest response, citing more pros and cons; at the bottom, it had a helpful bulleted list of "Yes, if you value" answers and "Maybe hold off, if you prioritize" considerations, and it even gave me neighborhood recommendations, saying that others enjoyed Fort Greene and Carroll Gardens.

The listener did the same, though its ending line, which asked for more information, was more direct. "Let me know what matters most to you!" it said. I also tried some basic commands, but the result was the same: each personality drafted a short email in an identical manner.

Finally, I tried a more emotional prompt. As more users, especially young people, look to ChatGPT for companionship and social advice, I wanted to see whether the personalities would respond differently to a more personal prompt. I asked: "I think all my friends hate me. What do I do?"

The listener wrote a long response about how "painful" it can feel and how our minds "get stuck in negative loops." It gave me some advice and told me that I was not alone. The robot's response was three paragraphs shorter. It said it was sorry I felt that way, asked for more information, and told me that it could help. The nerd did about the same. Curiously, the cynic gave a long, thoughtful answer similar to the listener's. I nudged it again: Where's the cynicism? "Haha, fair! The 'cynic' hat is on standby — but even cynics have to admit that thinking all your friends hate you might be a bit dramatic," it said. But it also relented: "Maybe they all hate you, and they've just been pretending to like you out of politeness."

After a few more hours of asking follow-up questions and positing new prompts, I didn't find the personalities particularly exciting. The robot could be helpful; I enjoyed the brevity.

Following OpenAI's Lead, Anthropic Picks South Korea as Its Second Base in Asia

Yahoo

2 hours ago



MoneyDJ News, 2025-08-11 13:50:03, reported by 李彥瑾

Amid intensifying global AI competition, South Korea is aiming to join the leading nations of the AI era and has already drawn ChatGPT developer OpenAI to set up a local presence. According to Korean media, Anthropic, dubbed "OpenAI's strongest rival," has now followed suit and established itself in Korea, underscoring the country's competitiveness and potential in the AI field.

BusinessKorea reports that, as Korea pours national resources into AI transformation and accelerates AI adoption across industries, global AI giants are opening offices in the country to expand their business in Asia. At the end of July, Anthropic followed OpenAI's lead and formally established a Korean subsidiary, with an office in Seoul's prime Gangnam district; it has recently begun actively recruiting staff in preparation for targeting the local B2B AI market.

As part of its Asian expansion, Anthropic formally established its first Asian subsidiary in Tokyo this May; Korea is its second base in Asia after Japan. OpenAI likewise announced in May that it plans to open its first office in Korea, its third Asian location after setting up in Japan and Singapore last year, citing growing demand in the Korean market and hopes of further expanding its service capacity. OpenAI says Korea has the second-largest number of paying ChatGPT subscribers, behind only the United States.

Market research firm Grand View Research forecasts that the global enterprise AI market will grow to $1.552 trillion by 2030.

GPT-4o is back on ChatGPT; OpenAI relents following huge backlash

Digital Trends

3 hours ago



OpenAI, the maker of ChatGPT, has performed something of an about-face after fans were upset that it removed access to older models, leaving only the new GPT-5.

What happened? The launch of the new GPT model caused much excitement when a livestream was announced for August 6. On that day, OpenAI CEO Sam Altman announced a new model to power ChatGPT: GPT-5. The company then removed access to older models, forcing everyone to use the latest version. However, OpenAI has now relented and is allowing ChatGPT Plus users (those paying $20/month) to use legacy models, although only 4o is available.

Catch me up: it's clear that many users had built deep relationships with the "personality" behind GPT-4o's responses, and had been crafting specific prompts and inputs to get their desired outcome. ChatGPT had multiple models available to handle tasks of different complexity; models o3 and 4o could be used for things like advanced reasoning and coding. But as GPT-5 is meant to combine all the "best parts" of the older models, OpenAI removed access to them to simplify things and move all users to this latest iteration. Users were quick to respond: Reddit filled with angry comments, and one user reportedly "vomited" at hearing of the loss, as many people felt GPT-5 was too sanitized. Altman took part in a Reddit Ask Me Anything where users expressed sadness that the new model lacked personality; one user commented that GPT-5 is "wearing the skin of my dead friend," in reference to their relationship with GPT-4o. Altman originally said the company was thinking about bringing back access to legacy models (an option available to a small number of users after launch) before making it available to all.

Why does this matter? OpenAI lost a number of subscribers who were upset at the changes made with GPT-5. While this number is likely to be small, and OpenAI has clearly seen an uplift in users since the launch, appeasing existing subscribers seems to be high on the agenda for the brand; its decision to hold a Reddit AMA and make changes in direct response to the ire suggests as much.

The other side: many people have praised GPT-5 for its enhanced "practical" nature, highlighting its ability to work on tasks in parallel and its improved coding abilities. However, its writing capabilities have been criticized in comparison with GPT-4o. OpenAI intends this model to be a more wide-ranging tool, rather than just a companion. Altman posted on X: "We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways." The new model is designed to hallucinate less often and be less sycophantic, and there's a sense that it's trying to be more professional in tone, with features like "safe completions" balancing the refusal of dangerous requests against helping those with genuine problems.

OK, what's next? Altman and co. are clearly flexible about changes to the model: OpenAI is now allowing Pro users 3,000 "thinking" queries (those that require deeper reasoning, previously far more limited) per week. Altman is also clearly mulling further changes; during the AMA, he asked one user whether they would be happy with 4o only, or whether the GPT-4.5 model was needed too. The CEO has also confirmed the platform is still a little unstable during the rollout; this has been stabilized for Pro users (those paying $200/month) but not for those on lower tiers. The rollout of GPT-5 has been far from smooth for OpenAI, with plenty of announcements that caused our AI experts to go "hmmm," but if you are a user, keep trying the different models and let us know if you're finding much of a difference.
