Anthropic is putting a limit on a Claude AI feature because people are using it '24/7'


Tom's Guide · 29-07-2025
Anthropic has implemented weekly rate limits on its Claude Code tool because it says programmers are using it too much.
According to a series of posts from the company, users have been running Claude Code — the agentic side of one of the best ChatGPT alternatives — "continuously in the background, 24/7."
Claude Code is only available through Anthropic's paid Pro and Max subscription plans. It sits inside the Claude chatbot and uses natural language to help with coding tasks, from generating and debugging code to building out applications with a supplied SDK and integrating with GitHub.
The most expensive Max plan runs to $200 a month, in line with other AI providers' top tiers, so you might think these customers are just trying to get their money's worth.
But running the tool continuously adds up to serious costs for Anthropic, which has to manage both the compute load and the environmental costs of providing the service.
"Some of the biggest Claude Code fans are running it continuously in the background, 24/7. These uses are remarkable and we want to enable them. But a few outlying cases are very costly to support. For example, one user consumed tens of thousands in model usage on a $200 plan," Anthropic posted on July 28, 2025.
The company says affected users will see the new weekly rate limits take effect on August 28, and estimates they'll "apply to less than 5% of subscribers based on current usage."
However, it also notes one user consumed "tens of thousands in model usage" on a $200 plan.
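Anthropic hasn't said how the weekly caps are enforced under the hood. As a rough illustration of what a rolling weekly usage cap can look like, here is a minimal sketch; the class name, unit counts, and cap values are all hypothetical, not Anthropic's actual scheme:

```python
import time
from collections import deque

class WeeklyUsageCap:
    """Illustrative rolling 7-day usage cap (hypothetical, not Anthropic's implementation)."""

    WINDOW_SECONDS = 7 * 24 * 60 * 60  # one week

    def __init__(self, cap_units, clock=time.time):
        self.cap_units = cap_units  # hypothetical weekly allowance
        self.clock = clock          # injectable clock, handy for testing
        self.events = deque()       # (timestamp, units) pairs, oldest first

    def _prune(self, now):
        # Drop usage older than one week so the window slides forward.
        while self.events and now - self.events[0][0] >= self.WINDOW_SECONDS:
            self.events.popleft()

    def try_consume(self, units):
        now = self.clock()
        self._prune(now)
        used = sum(u for _, u in self.events)
        if used + units > self.cap_units:
            return False  # over the weekly cap; a real service could offer a paid top-up here
        self.events.append((now, units))
        return True
```

A subscriber who trips the cap would then be routed to the pay-as-you-go top-up option the article describes below.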
"We're still exploring the best ways to ensure as broad access as possible to Claude Code," the company said in a follow-up tweet. "If you're a power user who has feedback on how we can support your specific use of Claude Code, we'd love to hear from you."
Anyone who does go over the limits will have the option to purchase more usage above and beyond their subscription. And if you've not used Claude yourself yet, here's why we think Anthropic's AI chatbot is quietly crushing it compared to ChatGPT or Google Gemini.

Related Articles

New Models From OpenAI, Anthropic, Google – All At The Same Time

Forbes · 39 minutes ago

It's Christmas in August – at least for those tech wonks who are interested in new model releases. Today's news is a very full stocking of brand-new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what these most recent model iterations bring to the table.

OpenAI OSS Models

First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from the company since GPT-2. Coverage from Computerworld and elsewhere points out that, although these models carry Apache licenses, they are not fully open source in the conventional sense but partly open: the weights are open, while the training data is not. Powered by a single 80GB GPU, the larger OSS model, according to the above report, "achieves parity" with the o4-mini model in reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work

Another interesting aspect of the new OSS models has to do with chain of thought (CoT), something that has revolutionized inference while raising questions about comparative methodology. We want LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to "hide" their CoT. So OpenAI has chosen not to optimize the models in this way. "OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning," writes Roger Montti at Search Engine Journal. "This, however, could result in hallucinations." Montti cites the following model card report from OpenAI: "In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having 'bad thoughts.' … In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability." So the models are allowed to have these "bad thoughts" in aid of, I suppose, transparency. OpenAI is then honest about the higher chance of hallucinations, so that users know this trade-off has been made.

Claude Opus 4.1

Here's how spokespersons rolled out the announcement of this new model on Aug. 5: "Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4." What's under the hood? The new Opus 4.1 model ups SWE-bench Verified marks and boosts agentic research skills. A breakdown of capabilities shows a two-point increase in agentic coding on SWE-bench Verified (72.5% to 74.5%) and improvement in graduate-level reasoning on GPQA Diamond (79.6% to 80.9%) over Opus 4, plus slight increases in visual reasoning and agentic tool use. For a model line that pioneered human-like user capabilities, this continues to push the envelope. As for strategy: "The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data," writes Michael Nunez at VentureBeat. "However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft's GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness." Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3

This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controlled environments – in other words, a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses. "DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution," reports Joshua Hawkins at BGR. "Additionally, the company says that the system will be able to respond to what it calls 'promptable world events' with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI." "Genie 3 is the first real-time interactive general-purpose world model," said DeepMind's Shlomi Fruchter in a press statement, according to a TechCrunch piece suggesting the lab considers Genie 3 a "stepping stone to AGI," a big claim in these interesting times. "It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between."

All of these new models are getting their first rafts of public users today! It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to already be obsolete! Stay tuned.
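The MXFP4 format mentioned above is a micro-scaled 4-bit floating-point type with a shared scale per block of values. As a rough illustration of why block-scaled low-precision quantization shrinks weights while keeping them in range, here is a simplified sketch using 4-bit signed integer codes with one scale per block; this is an assumption-laden toy, not the actual MXFP4 specification:

```python
def quantize_block(values, block_size=32):
    """Toy block-scaled 4-bit symmetric quantization.

    Real MXFP4 stores 4-bit floats with a shared per-block scale;
    this sketch uses signed integer codes in -7..7 to show the idea.
    """
    quantized = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        # One scale per block, chosen so the largest value fits the code range.
        scale = max(abs(v) for v in block) / 7 or 1.0
        codes = [max(-7, min(7, round(v / scale))) for v in block]
        quantized.append((scale, codes))
    return quantized

def dequantize(quantized):
    out = []
    for scale, codes in quantized:
        out.extend(c * scale for c in codes)
    return out
```

Each 32-value block stores one full-precision scale plus thirty-two 4-bit codes, which is roughly a 7x size reduction versus 32-bit floats, at the cost of the rounding error the shared scale introduces.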

A $30 Million Paycheck? Big Law Has Lost the Plot

Bloomberg · 2 hours ago

Opinion Newsletter · Jessica Karl, Columnist. This is Bloomberg Opinion Today, the total billable hours of Bloomberg Opinion's opinions. Sign up here. If you were a lawyer earning tens of millions of dollars a year — compensation that Chris Bryant says is not unheard of in prestigious partner circles — would you be using ChatGPT to do your job? Because this individual certainly didn't bat an eyelash when they typed this inquiry in the search bar:

Meta bans millions of WhatsApp accounts linked to scam operations

The Hill · 2 hours ago

Meta took down 6.8 million WhatsApp accounts tied to scam operations on Tuesday after victims reported financial fraud schemes. The company said many of the scams originated at criminal scam centers based in Southeast Asia.

"Based on our investigative insights into the latest enforcement efforts, we proactively detected and took down accounts before scam centers were able to operationalize them," Meta said in a Tuesday release. "These scam centers typically run many scam campaigns at once — from cryptocurrency investments to pyramid schemes. There is always a catch and it should be a red flag for everyone: you have to pay upfront to get promised returns or earnings," it wrote.

In an effort to ensure users are protected, the company said it would flag when people are added to group messages by someone who isn't in their contact list, and urge individuals to pause before engaging with unfamiliar messages that encourage them to move to other platforms.

"Scams may start with a text message or on a dating app, then move to social media, private messaging apps and ultimately payment or crypto platforms," Meta said. "In the course of just one scam, they often try to cycle people through many different platforms to ensure that any one service has only a limited view into the entire scam, making it more challenging to detect," the company added.

The Tuesday release highlighted a Cambodia-linked incident in which people were urged to enlist in a rent-a-scooter pyramid scheme via an initial text message generated with ChatGPT. The message contained a link to a WhatsApp chat, which redirected the target to Telegram, where they were told to like TikTok videos.

"We banned ChatGPT accounts that were generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole. These messages offered recipients high salaries for trivial tasks — such as liking social media posts — and encouraged them to recruit others," OpenAI wrote in its June report on disrupting malicious uses of artificial intelligence. "The operation appeared highly centralized and likely originated from Cambodia. Using AI-powered translation tools, we were able to investigate and disrupt the campaign's use of OpenAI services swiftly," the company added.

The Federal Trade Commission has reported a steady increase in social media fraud. The agency said more money was reported lost to fraud originating on social media than via any other contact method from January 2021 to June 2023, with losses totaling $2.7 billion.
