
New Claude update lets users pick up conversations where they left off: How Anthropic AI chatbot's feature works

Mint

5 days ago


Anthropic, the San Francisco-based artificial intelligence firm, has announced a new capability for its Claude chatbot that lets it retrieve and refer to earlier conversations with a user. Announced on Monday, the update is currently rolling out to paid subscribers on the Max, Team, and Enterprise plans, with the Pro plan expected to receive access in the near future. It remains unclear whether the feature will be made available to users on the free tier.

The new capability lets users continue conversations seamlessly from where they left off, eliminating the need to manually search through earlier chats to resume discussions on ongoing projects. The feature works by retrieving data from past interactions either when the user requests it or when the chatbot deems it necessary. Referencing earlier conversations is already available in other AI chatbots, such as OpenAI's ChatGPT and Google's Gemini, which offer it to all users, including those on free plans; Anthropic has been comparatively slow to add similar enhancements to Claude, introducing two-way voice conversations and web search only in May 2025.

The timing of the new feature comes shortly after Anthropic introduced weekly rate limits for paid users, a response to some individuals exploiting the previous policy, which reset limits every five hours. Reports indicated that a small group of users were running Claude Code continuously, resulting in usage costs amounting to tens of thousands of dollars. Some users have expressed concern that retrieving extensive information from earlier, information-heavy chats might cause them to hit their rate limits more quickly, and Anthropic has yet to clarify whether the new feature affects token consumption or usage quotas.

An X user named Naeem Shabir commented on Anthropic's official post, stating, 'How will this impact usage limits? I'm Excited to test it out with this advancement in persistent memory across chats. Despite it being something that ChatGPT had a while ago, I am curious whether your implementation differs from theirs at all. This resolves a big issue for me because I had to regularly start new chats when the context limit/window was reached 🙏.'
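Anthropic has not published implementation details for the feature, but the general pattern of surfacing an earlier conversation as context can be sketched against its public Messages API. The snippet below is a hypothetical illustration, not Anthropic's implementation: the local past_chats archive, the keyword-based find_relevant_chat helper, and the model name are assumptions made for this example.

```python
# Hypothetical sketch of "reference past chats": an application keeps its own
# archive of earlier conversations, finds a relevant one, and passes an excerpt
# to the model as context via the standard Anthropic Messages API.
# The archive, the search helper, and the model name are illustrative
# assumptions, not Anthropic's actual implementation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative local archive of earlier conversations (assumption).
past_chats = [
    {"title": "Landing page copy", "transcript": "User: Draft a hero headline...\nClaude: ..."},
    {"title": "Quarterly report outline", "transcript": "User: Outline the quarterly report...\nClaude: ..."},
]

def find_relevant_chat(query: str):
    """Naive keyword overlap over stored transcripts (stand-in for real retrieval)."""
    words = set(query.lower().split())
    scored = [(len(words & set(c["transcript"].lower().split())), c) for c in past_chats]
    best = max(scored, key=lambda pair: pair[0])
    return best[1] if best[0] > 0 else None

user_request = "Pick up where we left off on the quarterly report outline."
context_chat = find_relevant_chat(user_request)

messages = []
if context_chat is not None:
    # Prepend the earlier transcript so the model can continue from it.
    messages.append({
        "role": "user",
        "content": f"Earlier conversation ('{context_chat['title']}'):\n{context_chat['transcript']}",
    })
    messages.append({"role": "assistant", "content": "Understood, I have the earlier context."})
messages.append({"role": "user", "content": user_request})

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name for the example
    max_tokens=1024,
    messages=messages,
)
print(response.content[0].text)
```

Whatever the actual implementation, any retrieved transcript becomes additional input to the model, which is why some users worry that referencing long past chats could consume their usage limits faster.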
