
Claude just got a major personalization upgrade — here's what's new
Today's update rolls out four big personalization features, shifting Claude from a static assistant into a more adaptive partner that feels far more personal. Users doing everything from drafting professional emails and outlining a novel to planning a trip can expect Claude to adapt its personality and structure to fit the task. A new chat history search also removes friction for long-term projects by letting you pick up exactly where you left off.
Anthropic is also testing a memory feature that could take personalization further by letting Claude remember details from previous conversations, similar to what users experience with ChatGPT. While this could make the assistant even more helpful for ongoing work, it raises familiar questions around privacy and data retention.
If those memory capabilities launch alongside the new personalization tools, Claude could become one of the most context-aware AI assistants yet, and a serious competitor to ChatGPT for users who value long-term consistency.

Related Articles
Yahoo
33 minutes ago
Anthropic says some Claude models can now end 'harmful or abusive' conversations
Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as 'rare, extreme cases of persistently harmful or abusive user interactions.' Strikingly, Anthropic says it's doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn't claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains 'highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.' However, its announcement points to a recent program created to study what it calls 'model welfare' and says Anthropic is essentially taking a just-in-case approach, 'working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.'

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it's only supposed to happen in 'extreme edge cases,' such as 'requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.' While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users' delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a 'strong preference against' responding to these requests and a 'pattern of apparent distress' when it did so.

As for these new conversation-ending capabilities, the company says, 'In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.'
Anthropic also says Claude has been 'directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.'

When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses. 'We're treating this feature as an ongoing experiment and will continue refining our approach,' the company says.

Business Insider
an hour ago
Here's why Google might have to sell Chrome, and which companies want to buy it
Chrome is the world's most popular web browser. But how much longer it belongs to Google is an open question.

A court last year ruled that Google had violated antitrust laws by maintaining a monopoly on internet search. A second ruling in April found Google also monopolized open-web digital ad markets. The Justice Department asked a judge to force Google to divest its premier web browser to remedy the case. A court is expected to rule on that by the end of this month.

Chrome, a free web browser developed by Google, is an important distribution tool for Google Search and its other services. It also provides insights into users' search habits and is the most popular browser on the market. Being forced to sell Chrome would be an undeniable blow to Google and its parent company, Alphabet Inc. Analysts at Barclays said such an action could be a black swan scenario for Google stock, sparking an estimated 15% to 25% decline.

Google denies it's a monopoly. It said in a blog post in May that offloading the web browser to another party could render it "obsolete" and "expose billions of people to cyber-attacks." Although the judge has not yet decided Chrome's ultimate fate, competitors are already lining up to gladly take it off Google's hands.

One bidder, an AI search chat platform, confirmed to Business Insider that it made a $35 billion bid for Chrome this week. JPMorgan and several private equity firms backed the bid. The platform is a division of the digital marketing company Public Good, which acquired it in July. Public Good President Melissa Anderson and CEO Danny Bibi told Business Insider they reached out to Google on Wednesday. "Given the number of worldwide users Chrome has, it's a really just phenomenal way to scale user adoption," Anderson said. The pair said they're committed to using AI ethically, which means offering its search for free in an effort to make knowledge accessible for all.
They also said the company, founded in 1998, already has a network of clients, so finding potential advertisers wouldn't be a heavy lift.

Perplexity

Perplexity, an AI search startup, made a $34.5 billion bid for the web browser this week. The company launched an AI-native browser, Comet, in July. Although the bid is higher than Perplexity's entire valuation, The Wall Street Journal reported that several investors have agreed to back the potential deal. Perplexity said it would continue supporting Chromium, Google's open-source browser project that's the foundation of Chrome, as part of the deal, according to the outlet. The outlet also reported that Perplexity would keep Google as the default search engine, though users could change that in settings.

OpenAI

Although OpenAI's ChatGPT turned it into the leading AI startup in Silicon Valley, the company is a tiny fraction of the size of a Big Tech mammoth like Google. Purchasing Chrome, however, would help even the playing field. During Google's antitrust hearing in April, OpenAI's head of ChatGPT testified that the company would be interested in acquiring Chrome if Google were forced to divest. "Yes, we would, as would many other parties," Nick Turley told the court, according to Bloomberg. OpenAI CEO Sam Altman also recently said he'd be interested in snapping up Chrome. "If Chrome is really going to sell, we should take a look at it," Altman told a group of journalists on Thursday, according to The Verge.

Yahoo

Yahoo, a direct competitor of Google, would also be interested in bidding on Chrome, Bloomberg reported. Brian Provost, the general manager for Yahoo Search, said during a hearing for Google's antitrust case in April that Chrome is "arguably the most important strategic player on the web." "We would be able to pursue it with Apollo," Provost said, referring to Yahoo's owner, Apollo Global Management Inc.

