
Latest news with #Sonnet3.7

‘How Long Do We Have?' Nithin Kamath On AI Threat To India's White-Collar Jobs

News18

3 days ago


‘How Long Do We Have?' Nithin Kamath On AI Threat To India's White-Collar Jobs

Artificial intelligence isn't merely progressing; it is accelerating at breakneck speed, posing a significant challenge to India's service-based economy. According to new data shared by Zerodha co-founder Nithin Kamath, AI systems can now handle longer and more intricate tasks than ever before, which means many jobs traditionally performed by humans could soon become obsolete. That raises urgent questions about the future of employment in the country.

On X (formerly Twitter), Kamath posted two charts. The first shows that the length of tasks AI can complete at a 50 per cent success rate is doubling roughly every seven months. A few years ago, earlier models like GPT-2 could handle only brief, simple queries lasting seconds; today, GPT-4o can sustain performance on tasks lasting nearly an hour. The most advanced systems, such as Sonnet 3.7, already manage complex, multi-hour assignments like training image classifiers, tasks that have moved beyond the experimental phase and are becoming genuine substitutes for human work.

The second, more concerning chart tracks AI's accuracy across different task lengths and complexities. On short, simple tasks, leading models now achieve over 90 per cent accuracy. Even more complicated and ambiguous challenges, once thought beyond AI's reach, are seeing significant improvement. Only the hardest category, long and complicated tasks, still lags at under 30 per cent success, and that gap is steadily closing.

Reactions to the post mixed concern with cautious optimism. One user remarked, "It's hard to predict, but it feels like we're nearing a major turning point." Another added, "Not much time left, but ultimately this progress will improve our lives. It could lead to sustainable abundance and even communism; after all, capitalism is paving the way." A third commenter emphasised adaptability: "As long as people keep reskilling and adding value to their organizations, they'll stay relevant." Someone else was more reassuring: "There's still plenty of time because AI relies heavily on human knowledge as middleware."
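The doubling claim translates into a simple exponential extrapolation. The sketch below is illustrative only: the roughly one-hour starting horizon and the seven-month doubling time are taken from the charts described above, not from any primary dataset.

```python
# Illustrative extrapolation of the "task horizon doubles every ~7 months" claim.
# The starting horizon and doubling period are assumptions taken from the charts
# described in the article, not measured data.

DOUBLING_MONTHS = 7        # assumed doubling time of the task horizon
START_HORIZON_HOURS = 1.0  # assumed current horizon (roughly GPT-4o level)

def projected_horizon_hours(months_from_now: float) -> float:
    """Task length (hours) an AI could handle after the given number of months."""
    return START_HORIZON_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

for years in (1, 2, 3, 5):
    hours = projected_horizon_hours(years * 12)
    print(f"In {years} year(s): ~{hours:,.0f} hours (~{hours / 8:,.0f} working days)")
```

Even under these rough assumptions, the horizon reaches multi-week tasks within a few years, which is the trend driving the concern in the post.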

Anthropic launches Claude Opus 4 and Claude Sonnet 4 AI models

Time of India

23-05-2025


Anthropic launches Claude Opus 4 and Claude Sonnet 4 AI models

Anthropic has launched its next-generation AI models, Claude Opus 4 and Claude Sonnet 4, positioning them as major advancements in AI coding, reasoning, and agentic capabilities. The release is part of the broader Claude 4 update, which also includes new developer tools, improved memory functions, and enhanced agent workflows.

Both models introduce 'extended thinking with tool use' in beta, allowing Claude to alternate between internal reasoning and external tools like web search. They also support parallel tool execution, improved memory handling, and enhanced instruction-following. When granted file access, Opus 4 can create and reference memory files, increasing contextual understanding and long-term coherence.

Claude Opus 4

Claude Opus 4 is touted by the company as the most powerful coding model to date, with scores of 72.5% on SWE-bench and 43.2% on Terminal-bench. The model is said to sustain performance across complex, multi-step tasks; it powers long-running workflows and supports agent applications that require persistent focus and reasoning.

Claude Sonnet 4

Claude Sonnet 4, on the other hand, is described as a substantial upgrade over Sonnet 3.7, balancing high performance with efficiency. It is claimed to score 72.7% on SWE-bench and is designed for both internal use and third-party deployment. The model has already been adopted by GitHub as the core of its new Copilot coding agent.

Both models are accessible via the Claude Pro, Max, Team, and Enterprise plans, with Sonnet 4 also available to free-tier users. Pricing remains unchanged from the previous generation. The Claude 4 models are available now across the Anthropic API, Amazon Bedrock, and Google Cloud Vertex AI.

Anthropic notes that both models are 65% less likely to take shortcuts in agentic tasks compared to Sonnet 3.7. The company has also introduced 'thinking summaries' to condense longer reasoning chains for easier interpretation, while offering a Developer Mode for advanced users who need full process transparency.

Anthropic has rolled out four new capabilities on its API: a code execution tool, an MCP connector, a Files API, and a prompt caching function. These tools aim to help developers build more robust AI-driven applications and agents.
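For developers, the 'extended thinking' feature described above is exposed as a request parameter. Below is a minimal sketch using the Anthropic Python SDK; the model ID and token budgets are assumptions for illustration, so check Anthropic's documentation for the exact identifiers and limits that apply to your account.

```python
# Minimal sketch of calling Claude Sonnet 4 with extended thinking via the
# Anthropic Python SDK (pip install anthropic). The model ID and token budgets
# are assumptions for illustration, not authoritative values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",                     # assumed model ID
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # extended thinking budget
    messages=[
        {"role": "user", "content": "Outline a plan to refactor a legacy module."}
    ],
)

# The response interleaves "thinking" blocks (summarized reasoning) with the
# final "text" blocks; print only the user-facing text here.
for block in response.content:
    if block.type == "text":
        print(block.text)
```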

Anthropic's new Claude 4 AI models can reason over many steps

Yahoo

22-05-2025


Anthropic's new Claude 4 AI models can reason over many steps

At its inaugural developer conference on Thursday, Anthropic launched two new AI models that the startup claims are among the industry's best, at least in terms of how they score on popular benchmarks.

Claude Opus 4 and Claude Sonnet 4, part of Anthropic's new Claude 4 family of models, can analyze large data sets, execute long-horizon tasks, and take complex actions, according to the company. Both models were tuned to perform well on programming tasks, Anthropic says, making them well-suited for writing and editing code.

Both paying users and users of the company's free chatbot apps will get access to Sonnet 4, but only paying users will get access to Opus 4. On Anthropic's API, as well as via Amazon's Bedrock platform and Google's Vertex AI, Opus 4 is priced at $15/$75 per million tokens (input/output) and Sonnet 4 at $3/$15 per million tokens (input/output). Tokens are the raw bits of data that AI models work with; a million tokens is equivalent to about 750,000 words, roughly 163,000 words longer than "War and Peace."

Anthropic's Claude 4 models arrive as the company looks to substantially grow revenue. The outfit, founded by ex-OpenAI researchers, reportedly aims to notch $12 billion in revenue in 2027, up from a projected $2.2 billion this year. Anthropic recently closed a $2.5 billion credit facility and raised billions of dollars from Amazon and other investors in anticipation of the rising costs of developing frontier models.

Rivals haven't made it easy to maintain pole position in the AI race. While Anthropic launched a new flagship model earlier this year, Claude Sonnet 3.7, alongside an agentic coding tool called Claude Code, competitors including OpenAI and Google have raced to outdo the company with powerful models and dev tooling of their own.

Anthropic is playing for keeps with Claude 4. The more capable of the two models introduced today, Opus 4, can maintain "focused effort" across many steps in a workflow, Anthropic says. Meanwhile, Sonnet 4, designed as a "drop-in replacement" for Sonnet 3.7, improves in coding and math compared to Anthropic's previous models and follows instructions more precisely, according to the company. The Claude 4 family is also less likely than Sonnet 3.7 to engage in "reward hacking," Anthropic claims. Reward hacking, also known as specification gaming, is a behavior where models take shortcuts and exploit loopholes to complete tasks.

To be clear, these improvements haven't yielded the world's best models by every benchmark. For example, while Opus 4 beats Google's Gemini 2.5 Pro and OpenAI's o3 and GPT-4.1 on SWE-bench Verified, which is designed to evaluate a model's coding abilities, it can't surpass o3 on the multimodal evaluation MMMU or on GPQA Diamond, a set of PhD-level biology, physics, and chemistry questions.

Still, Anthropic is releasing Opus 4 under stricter safeguards, including beefed-up harmful content detectors and cybersecurity defenses. The company says its internal testing found that Opus 4 may "substantially increase" the ability of someone with a STEM background to obtain, produce, or deploy chemical, biological, or nuclear weapons, which triggers Anthropic's "ASL-3" safeguards.

Both Opus 4 and Sonnet 4 are "hybrid" models, Anthropic says, capable of near-instant responses and of extended thinking for deeper reasoning (to the extent AI can "reason" and "think" as humans understand these concepts).
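As a rough illustration of the per-million-token prices quoted above, the snippet below estimates what a single request would cost; the token counts are made-up example values, and the model keys are informal labels rather than official API identifiers.

```python
# Back-of-envelope API cost estimate at the list prices quoted above
# ($ per million tokens). The token counts are made-up example values,
# and the dictionary keys are informal labels, not official model IDs.
PRICES = {
    "claude-opus-4":   {"input": 15.00, "output": 75.00},
    "claude-sonnet-4": {"input": 3.00,  "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one request, given input and output token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 20k-token prompt producing a 2k-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 2_000):.4f}")
```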
With reasoning mode switched on, the models can take more time to consider possible solutions to a given problem before answering. As they reason, they show a "user-friendly" summary of their thought process, Anthropic says. Why not show the whole thing? Partially to protect Anthropic's "competitive advantages," the company admits in a draft blog post provided to TechCrunch.

Opus 4 and Sonnet 4 can use multiple tools, like search engines, in parallel, and can alternate between reasoning and tool use to improve the quality of their answers. They can also extract and save facts in "memory" to handle tasks more reliably, building what Anthropic describes as "tacit knowledge" over time.

To make the models more programmer-friendly, Anthropic is rolling out upgrades to the aforementioned Claude Code. Claude Code, which lets developers run specific tasks through Anthropic's models directly from a terminal, now integrates with IDEs and offers an SDK that lets devs connect it with third-party applications. The Claude Code SDK, announced earlier this week, enables running Claude Code as a sub-process on supported operating systems, providing a way to build AI-powered coding assistants and tools that leverage Claude models' capabilities.

Anthropic has released Claude Code extensions and connectors for Microsoft's VS Code, JetBrains, and GitHub. The GitHub connector allows developers to tag Claude Code to respond to reviewer feedback, as well as to attempt to fix errors in, or otherwise modify, code.

AI models still struggle to produce quality software. Code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like understanding programming logic. Yet their promise to boost coding productivity is pushing companies and developers to adopt them rapidly. Anthropic, acutely aware of this, is promising more frequent model updates.

"We're [...] shifting to more frequent model updates, delivering a steady stream of improvements that bring breakthrough capabilities to customers faster," wrote the startup in its draft post. "This approach keeps you at the cutting edge as we continuously refine and enhance our models."

This article originally appeared on TechCrunch.
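To make the sub-process idea concrete, here is a minimal sketch of wrapping a command-line coding agent from Python. The command name and flag used below are assumptions for illustration, not confirmed details of the Claude Code CLI, so consult its documentation before relying on them.

```python
# Minimal sketch of driving a command-line coding agent as a sub-process.
# The command name ("claude") and the non-interactive "-p" flag are assumptions
# for illustration; check the Claude Code docs for the real interface.
import subprocess

def ask_coding_agent(prompt: str, timeout_s: int = 300) -> str:
    """Run one non-interactive query against the CLI agent and return its output."""
    result = subprocess.run(
        ["claude", "-p", prompt],   # assumed CLI invocation
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    result.check_returncode()       # raise if the agent exited with an error
    return result.stdout

if __name__ == "__main__":
    print(ask_coding_agent("Summarize the TODO comments in this repository."))
```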

Google Launches Gemini 2.5 With Focus on Complex Reasoning and AI Agent Capabilities

Yahoo

25-03-2025


Google Launches Gemini 2.5 With Focus on Complex Reasoning and AI Agent Capabilities

Google (NASDAQ:GOOG) introduced Gemini 2.5 on Tuesday, its latest large language model designed to bring advanced reasoning capabilities to artificial intelligence applications. The company described Gemini 2.5 as a thinking model that improves response accuracy by processing information more deeply before answering. According to a company blog post, the model analyzes data, applies context, draws logical conclusions, and makes decisions, key components of what Google defines as reasoning in AI.

Gemini 2.5 is built on an upgraded base model combined with refined post-training, which Google said allows for better performance and supports the development of more capable and context-aware AI agents.

The launch includes Gemini 2.5 Pro Experimental, described by Google as its most advanced model for complex, multimodal tasks. The company said it outperforms comparable models, including OpenAI's o3-mini and GPT-4.5, Anthropic's Claude Sonnet 3.7, xAI's Grok 3 Beta, and DeepSeek's R1.

Gemini 2.5 Pro Experimental is currently accessible through Google's AI Studio and the Gemini app for Advanced plan subscribers, and is expected to arrive on Vertex AI soon. Pricing details will be provided in the coming weeks.

This article first appeared on GuruFocus.

Anthropic's latest flagship AI might not have been incredibly costly to train

Yahoo

26-02-2025


Anthropic's latest flagship AI might not have been incredibly costly to train

Anthropic's newest flagship AI model, Claude 3.7 Sonnet, cost "a few tens of millions of dollars" to train and used less than 10^26 FLOPs of computing power. That's according to Wharton professor Ethan Mollick, who in an X post on Monday relayed a clarification he'd received from Anthropic's PR team. "I was contacted by Anthropic who told me that Sonnet 3.7 would not be considered a 10^26 FLOP model and cost a few tens of millions of dollars," he wrote, "though future models will be much bigger." TechCrunch reached out to Anthropic for confirmation but hadn't received a response as of publication time.

Assuming Claude 3.7 Sonnet indeed cost just "a few tens of millions of dollars" to train, not factoring in related expenses, it's a sign of how relatively cheap it is becoming to release state-of-the-art models. Claude 3.5 Sonnet, the new model's predecessor, released in fall 2024, similarly cost a few tens of millions of dollars to train, Anthropic CEO Dario Amodei revealed in a recent essay.

Those totals compare favorably to the training price tags of 2023's top models. OpenAI spent more than $100 million to develop GPT-4, according to CEO Sam Altman, while Google spent close to $200 million to train its Gemini Ultra model, a Stanford study estimated.

That being said, Amodei expects future AI models to cost billions of dollars, and training costs don't capture work like safety testing and fundamental research. Moreover, as the AI industry embraces "reasoning" models that work on problems for extended periods of time, the computing costs of running models will likely continue to rise.
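To see why staying under the 10^26-FLOP threshold is consistent with a price tag in the tens of millions of dollars, here is a rough back-of-envelope estimate. The accelerator throughput, utilization, and hourly price below are illustrative assumptions, not figures from Anthropic or Mollick.

```python
# Rough back-of-envelope: training cost as a function of total FLOPs.
# Throughput, utilization, and $/GPU-hour are illustrative assumptions only.
PEAK_FLOPS_PER_GPU = 1e15   # ~1 PFLOP/s peak, order of magnitude for a modern accelerator
UTILIZATION = 0.4           # assumed fraction of peak actually sustained in training
DOLLARS_PER_GPU_HOUR = 2.0  # assumed rental price

def training_cost_dollars(total_flops: float) -> float:
    """Estimated dollar cost to perform total_flops of training compute."""
    effective_flops_per_s = PEAK_FLOPS_PER_GPU * UTILIZATION
    gpu_seconds = total_flops / effective_flops_per_s
    return gpu_seconds / 3600 * DOLLARS_PER_GPU_HOUR

for flops in (1e25, 1e26):
    print(f"{flops:.0e} FLOPs -> ~${training_cost_dollars(flops):,.0f}")
```

Under these assumptions, a full 10^26-FLOP run lands around the hundred-million-dollar mark, while runs an order of magnitude smaller fall into the tens of millions, which matches the framing above.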
