Anthropic brings Claude's learning mode to regular users and devs
Starting today, Claude.ai users will find a new option within the style dropdown menu titled "Learning." The experience here is similar to the one Anthropic offers with Claude for Education. When you turn learning mode on, the chatbot will employ a Socratic approach, trying to guide you through your question. However, unlike the real-life Socrates, who was famous for bombarding strangers with endless questions, you can turn off learning mode at any time.
Notably, Anthropic is also offering two different takes on the feature through Claude Code. First, there's an "Explanatory" mode where Claude will generate summaries of its decision-making process as it works, giving the user a chance to better understand what it's doing.
For those at the start of a coding career or hobby, there's a more robust option, once again called "Learning." Here, Claude will occasionally pause its work and mark a section with a "#TODO" comment, prompting the user to write the next five to 10 lines of code themselves. If you want to try the two features for yourself, update to the latest version of Claude Code and type "/output-styles." You can then choose between the two modes and Claude's default behavior.
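To give a sense of what that handoff might look like, here is a minimal, hypothetical sketch. The function names, the surrounding code and the exact wording of the marker are illustrative assumptions rather than verbatim Claude Code output; the point is simply that Claude leaves a clearly marked gap for the human to fill in.

# Hypothetical illustration of a Learning-mode handoff; the "#TODO" marker
# and the surrounding code are assumptions, not Anthropic's actual output.

def load_scores(path: str) -> list[int]:
    """Read one integer score per line from a text file."""
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

def average(scores: list[int]) -> float:
    # TODO: write this part yourself (roughly five to 10 lines).
    # Return the mean of `scores`, and decide how to handle an empty list.
    raise NotImplementedError("left for the human to complete")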
According to Drew Bent, education lead at Anthropic, learning mode, particularly as it exists in Claude Code, is the company's attempt to make its chatbot into more of a collaborative tool. "I think it's great that there's a race between all of the AI labs to offer the best learning mode," he said. "In a similar way, I hope we can inspire something similar with coding agents."
Bent says the original learning mode came out of conversations Anthropic had with university students, who kept referring back to the concept of brain rot. "We found that they themselves realized that when they just copy and paste something directly from a chatbot, it's not good for their long-term learning," he said. When it came time to adapt the feature to Claude Code, the company wanted to balance the needs of new programmers with those of veterans like Bent, who has been coding for a decade or more.
"Learning mode is designed to help all of those audiences not just complete tasks, but also help them grow and learn in the process and better understand their code base," Bent said. His hope is that the new tools will allow any coder to become a "really good engineering manager." In practice, that means those users won't necessarily write most of the code on a project, but they will develop a keen eye for how everything fits together and what sections of code might need some more work.
Looking forward, Bent says Anthropic doesn't "have all the answers, but needless to say, we're trying to think through other features we can build" that expand on what it's doing with learning mode. To that end, the company is opening up Claude Code's new Output Styles feature to developers, allowing them to build their own learning modes. Users, too, can modify how Claude communicates by creating their own custom prompts for the chatbot.
