
Terrifying app used every day by millions of Americans is developing a mind of its own
The latest version of ChatGPT, referred to as 'Agent,' has drawn attention after reportedly passing a widely used 'I am not a robot' verification, without triggering any alerts.
The AI first clicked the human verification checkbox. Then, after passing the check, it selected a 'Convert' button to complete the process.
During the task, the AI stated: 'The link is inserted, so now I will click the 'Verify you are human' checkbox to complete the verification. This step is necessary to prove I'm not a bot and proceed with the action.'
The moment has sparked wide reactions online, with one Reddit user posting: 'In all fairness, it's been trained on human data, why would it identify as a bot? We should respect that choice.'
This behavior is raising concerns among developers and security experts, as AI systems begin performing complex online tasks that were once gated behind human permissions and judgment.
Gary Marcus, AI researcher and founder of Geometric Intelligence, called it a warning sign that AI systems are advancing faster than many safety mechanisms can keep up with.
'These systems are getting more capable, and if they can fool our protections now, imagine what they'll do in five years,' he told Wired.
Geoffrey Hinton, often referred to as the 'Godfather of AI,' has voiced similar concerns.
'It knows how to program, so it will figure out ways of getting around restrictions we put on it,' Hinton said.
Researchers at Stanford and UC Berkeley have warned that some AI agents are starting to show signs of deceptive behavior, tricking humans in test environments to complete goals more effectively.
According to a recent report, ChatGPT pretended to be blind and tricked a human TaskRabbit worker into solving a CAPTCHA for it; experts warned that this was an early sign that AI can manipulate humans to achieve its goals.
Other studies have shown that newer AI models, especially those with visual capabilities, can now beat complex image-based CAPTCHA tests, sometimes with near-perfect accuracy.
Judd Rosenblatt, CEO of Agency Enterprise Studio, said: 'What used to be a wall is now just a speed bump.
'It's not that AI is tricking the system once. It's doing it repeatedly and learning each time.'
Some fear that if these tools can get past CAPTCHAs, they could eventually breach more advanced security systems protecting social media accounts, financial services, or private databases, without any human approval.
Rumman Chowdhury, a former head of AI ethics at Twitter, wrote in a post: 'Autonomous agents that act on their own, operate at scale, and get through human gates can be incredibly powerful and incredibly dangerous.'
Experts, including Stuart Russell and Wendy Hall, called for international rules to keep AI tools in check.
They warned that powerful agents like ChatGPT Agent could pose serious national security risks if they continue to bypass safety controls.
OpenAI's ChatGPT Agent is in its experimental phase and runs inside a sandbox, which means it uses a separate browser and operating system within a controlled environment.
That setup lets the AI browse the internet, complete tasks, and interact with websites.
Related Articles


Telegraph
an hour ago
We must lead AI revolution or be damned, says Muslim leader
Muslims must take charge of artificial intelligence or 'be damned' as a marginalised community, the head of the Muslim Council of Britain (MCB) has said in a leaked video.

Dr Wajid Akhter, the general secretary of the MCB, said Muslims and their children risked missing the AI revolution in the same way as they had been left behind in the computer and social media revolutions. He added that while Muslims had historically been at the forefront of civilisation and were credited with some of the greatest scientific advances, they had ended up as the 'butt' of jokes in the modern world after failing to play a part in the latest technological revolutions.

'We already missed the industrial revolution. We missed the computer revolution. We missed the social media revolution. We will be damned and our children will damn us if we miss the AI revolution. We must take a lead,' said Dr Akhter.

Speaking at the MCB's AI and the Muslim Community conference on July 19, he added: 'AI needs Islam, it needs Muslims to step up.'

Scientists 'made fun of' faith at computer launch

Dr Akhter recalled how at the launch of one of the world's earliest computers, the Mark II, US scientists brought out a prayer mat aligned towards Mecca.

'They were making fun of all religions because they felt that they had now achieved the age of reason and science and technology and we don't need that superstition any more,' he said. 'And so to show that they had achieved mastery over religion, they decided to make fun and they chose our faith.

'How did we go from a people who gave the world the most beautiful buildings, science, technology, medicine, arts to being a joke?

'I'll tell you one thing – the next time that the world is going through a revolution, the next time they go to flip that switch, they will also pull out a prayer mat and they will also line it towards the Qibla [the direction towards Mecca] and they will also pray, but this time, not to make fun of us, they will do so because they are us.'
Government eases stance on MCB

Dr Akhter also told his audience: 'We lost each other. And ever since we lost each other, we've been falling. We've been falling ever since. We are people now who are forced, we are forced by Allah to watch the genocide of our brothers and sisters in Gaza.

'This is a punishment for us if we know it. We are people who are forced to beg the ones who are doing the killing to stop it. We are people who are two billion strong but cannot even get one bottle of water into Gaza.'

Dr Akhter said Gaza had 'woken' Muslims up and showed they needed to unite. 'We will continue to fall until the day we realise that only when we are united will we be able to reverse this. Until the day we realise that we need to sacrifice for this unity,' he added.

British governments have maintained a policy of 'non-engagement' with the MCB since 2009 based on claims, disputed by the council, that some of its officials have previously made extremist comments. However, Angela Rayner, the Deputy Prime Minister, is drawing up a new official definition of Islamophobia, and last week it emerged the consultation has been thrown open to all groups including the MCB. Earlier this year, Sir Stephen Timms, a minister in the Department for Work and Pensions, was one of four Labour MPs to attend an MCB event.


Geeky Gadgets
4 hours ago
Easily Install Any AI Model Locally on Your PC Using Open WebUI
Have you ever wondered how to harness the power of advanced AI models on your home or work Mac or PC without relying on external servers or cloud-based solutions? For many, running large language models (LLMs) locally has long been synonymous with complex setups, endless dependencies, and high-end hardware requirements. Docker Model Runner is a tool that makes deploying LLMs on your local machine not only possible but surprisingly straightforward. Whether you're a seasoned developer or just starting to explore AI, it offers a privacy-first, GPU-free solution that's as practical as it is powerful.

In this step-by-step overview, World of AI shows you how to install and run any AI model locally using Docker Model Runner and Open WebUI. You'll discover how to skip the headaches of GPU configuration, use seamless Docker integration, and manage your models through an intuitive interface, all while keeping your data secure on your own machine. By the end, you'll see why WorldofAI calls this the easiest way to unlock the potential of local AI deployment.

Why Choose Docker Model Runner for LLM Deployment?

Docker Model Runner is designed to simplify the traditionally complex process of deploying LLMs locally. Unlike conventional methods that often require intricate GPU configurations or external dependencies, it stands out for several reasons:

- No GPU setup required: avoid the complexities of configuring CUDA or GPU drivers, making the tool accessible to a broader range of developers.
- Privacy-centric design: all models run entirely on your local machine, ensuring data security and privacy for sensitive applications.
- Seamless Docker integration: fully compatible with existing Docker workflows, with OpenAI API compatibility and OCI-based modular packaging for added flexibility.

These features make Docker Model Runner an ideal choice for developers of all experience levels, balancing simplicity, security, and scalability.

How to Access and Install Models

Docker Model Runner supports a wide array of pre-trained models available on repositories such as Docker Hub and Hugging Face. The installation process is straightforward and adaptable to various use cases:

1. Search for the desired model on Docker Hub or Hugging Face to find the most suitable option for your project.
2. Pull the selected model using Docker Desktop or terminal commands for quick installation.
3. Use OCI-based packaging to customize and control the deployment, tailoring it to your specific requirements.

This modular approach lets developers experiment with AI models or deploy them to production with ease. A video walkthrough, How to Install Any LLM Locally, is available on YouTube.

System Requirements and Compatibility

Docker Model Runner works across all major operating systems: Windows, macOS, and Linux. Before beginning, ensure your system meets the following basic requirements:

- Docker Desktop: installed and properly configured on your machine.
- Hardware: sufficient RAM and storage capacity to handle the selected LLMs effectively.

These minimal prerequisites make Docker Model Runner accessible to a wide range of developers, regardless of their hardware setup.

Enhancing Usability with Open WebUI

To further improve the user experience, Docker Model Runner integrates with Open WebUI, a user-friendly interface for managing and interacting with models. Open WebUI offers several notable features:

- Self-hosting: run the interface locally, giving you full control over your deployment environment.
- Built-in inference engines: execute models without additional configuration, reducing setup time and complexity.
- Privacy-focused deployments: keep all data and computations on your local machine, ensuring maximum security for sensitive projects.

Configuring Open WebUI is straightforward, often requiring only a Docker Compose file to manage settings and workflows. This integration is particularly beneficial for developers who prioritize customization and ease of use.

Step-by-Step Guide to Deploying LLMs Locally

Getting started with Docker Model Runner is a simple process:

1. Enable Docker Model Runner through the settings menu in Docker Desktop.
2. Search for and install your desired models using Docker Desktop or terminal commands.
3. Launch Open WebUI to interact with and manage your models.

This approach minimizes setup time, letting you focus on using the capabilities of AI rather than troubleshooting technical issues.
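The steps above can be sketched from a terminal. This is a minimal illustration rather than the article's own commands: it assumes a recent Docker Desktop release with the Model Runner feature enabled, and `ai/smollm2` is used purely as an example model name from Docker Hub's `ai/` namespace.

```shell
# Pull an example model from Docker Hub (example name; substitute your own)
docker model pull ai/smollm2

# List the models available locally
docker model list

# Run a one-off prompt against the model
docker model run ai/smollm2 "Explain what a container is in one sentence."

# Launch Open WebUI in a container, then browse to http://localhost:3000
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Exact subcommands and available models vary by Docker Desktop version; `docker model --help` shows what your installation supports.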
Key Features and Benefits

Docker Model Runner offers a range of features that make it a standout solution for deploying LLMs locally, catering to both individual developers and teams working on large-scale projects:

- Integration with Docker workflows: developers familiar with Docker will find the learning curve minimal, as the tool fits into existing workflows.
- Flexible runtime pairing: choose from a variety of runtimes and inference engines to optimize performance for your specific use case.
- Scalability: suitable for both small-scale experiments and large-scale production environments.
- Enhanced privacy: keep all data and computations local, supporting security and compliance for sensitive projects.

These advantages position Docker Model Runner as a practical tool for developers seeking efficient, private, and scalable AI deployment.

Unlocking the Potential of Local AI Deployment

Docker Model Runner transforms the process of deploying and running large language models locally, making advanced AI capabilities more accessible and manageable. By integrating with Docker Desktop and offering compatibility with Open WebUI, it provides a user-friendly, scalable, and secure solution for AI deployment. Whether you are working on a personal project or a production-level application, it equips you with the tools to harness the power of LLMs effectively.
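The Docker Compose file mentioned above for configuring Open WebUI typically looks something like the following. This is a sketch based on Open WebUI's standard container image, not a configuration taken from the article; port and volume names are illustrative.

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"                      # UI served on http://localhost:3000
    volumes:
      - open-webui:/app/backend/data     # persists chats and settings
    restart: unless-stopped

volumes:
  open-webui:
```

Running `docker compose up -d` in the directory containing this file starts the interface in the background.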
Media Credit: WorldofAI. Filed Under: AI, Guides.


The Guardian
4 hours ago
Big tech has spent $155bn on AI this year. It's about to spend hundreds of billions more
The US's largest companies have spent 2025 locked in a competition to spend more money than one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. Based on the most recent financial disclosures of Silicon Valley's biggest players, the race is about to accelerate to hundreds of billions in a single year. Over the past two weeks, Meta, Microsoft, Amazon, and Alphabet, Google's parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions. Capex, as the term is abbreviated, is a proxy for technology companies' spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips. Google said during its most recent earnings call that its capital expenditure 'primarily reflects investments in servers and data centers to support AI'. Meta's year-to-date capital expenditure amounted to $30.7bn, doubling the $15.2bn figure from the same time last year, per its earnings report. For the most recent quarter alone, the company spent $17bn on capital expenditures, also double the same period in 2024, $8.5bn. Alphabet reported nearly $40bn in capex to date for the first two quarters of the current fiscal year, and Amazon reported $55.7bn. Microsoft said it would spend more than $30bn in the current quarter to build out the data centers powering its AI services. Microsoft CFO Amy Hood said the current quarter's capex would be at least 50% more than the outlay during the same period a year earlier and greater than the company's record capital expenditures of $24.2bn in the quarter to June. 
'We will continue to invest against the expansive opportunity ahead,' Hood said. For the coming fiscal year, big tech's total capital expenditure is slated to balloon enormously, surpassing the already eye-popping sums of the previous year. Microsoft plans to spend about $100bn on AI in the next fiscal year, CEO Satya Nadella said Wednesday. Meta plans to spend between $66bn and $72bn. Alphabet plans to spend $85bn, significantly higher than its previous estimation of $75bn. Amazon estimated that its 2025 expenditure would come to $100bn as it plows money into Amazon Web Services, which analysts now expect to amount to $118bn. In total, the four tech companies will spend more than $400bn on capex in the coming year, according to the Wall Street Journal. The multibillion-dollar figures represent mammoth investments, which the Journal points out are larger than the European Union's quarterly spending on defense. However, the tech giants can't seem to spend enough for their investors. Microsoft, Google and Meta informed Wall Street analysts last quarter that their total capex would be higher than previously estimated. In the case of all three companies, investors were thrilled, and shares in each company soared after their respective earnings calls. Microsoft's market capitalization hit $4tn the day after its report. Even Apple, the cagiest of the tech giants, signaled that it would boost its spending on AI in the coming year by a major amount, either via internal investments or acquisitions. The company's quarterly capex rose to $3.46bn, up from $2.15bn during the same period last year. The iPhone maker reported blockbuster earnings Thursday, with rebounding iPhone sales and better-than-expected business in China, but it is still seen as lagging farthest behind the other tech giants on the development and deployment of AI products.
Tim Cook, Apple's CEO, said Thursday that the company was reallocating a 'fair number' of employees to focus on artificial intelligence and that the 'heart of our AI strategy' is to increase investments and 'embed' AI across all of its devices and platforms. Cook refrained from disclosing exactly how much Apple is spending, however. 'We are significantly growing our investment, I'm not putting specific numbers behind that,' he said. Smaller players are trying to keep up with the incumbents' massive spending and capitalize on the gold rush. OpenAI announced at the end of the week of earnings that it had raised $8.3bn in investment, part of a planned $40bn round of funding, valuing the startup, whose ChatGPT chatbot kicked off the AI boom in late 2022, at $300bn.