Anthropic looks to beat GPT-5 and Grok 4 with this one major upgrade

Tom's Guide · 3 days ago
GPT-5 might be the big talking point in AI right now, but Anthropic's Claude is looking for ways to fight back and compete in a crowded market. The company's latest move is to expand how much text you can feed it: the context window, meaning the amount of text the model can consider at once, has been raised to 1 million tokens. The feature, available exclusively to enterprise customers, is partly aimed at bringing developers over to Anthropic's tools.
That is, as it sounds, absolutely massive. It translates to roughly 750,000 words, about five times Claude's previous limit and more than double what GPT-5 offers right now. However, the new limit will only be made available through Anthropic's cloud partners, including Amazon Bedrock and Google Cloud's Vertex AI, which means it will reach only a small subset of Anthropic users.
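For developers who do get access, using the expanded window looks much like any other Claude request, just with far more text packed into the prompt. The snippet below is a minimal sketch, not taken from the article, of sending a very large document to a Claude model through Amazon Bedrock's Converse API with boto3; the model ID, region, and whether the long-context option is enabled for a given account are assumptions to verify against current documentation.

import boto3

# Bedrock runtime client; the region here is an assumption for illustration.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A very large prompt, e.g. an entire codebase dumped to a single text file.
with open("large_codebase_dump.txt", "r", encoding="utf-8") as f:
    big_document = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"text": big_document},
                {"text": "Summarize the overall architecture of this codebase."},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 1024},
)

print(response["output"]["message"]["content"][0]["text"])

A prompt approaching the full 1-million-token window is still billed for every input token, so developers would likely chunk or cache anything they plan to reuse.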
Coding assistants are an area where Anthropic has seen substantial growth in recent years, building one of the more successful business-focused AI offerings and selling it to partners including Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor.
However, while Anthropic has a solid grip on this market right now, the competition is heating up, even with these longer context windows. Both Grok 4 and GPT-5 claim some of the best coding capabilities of any AI tool available today.
With the rollout of GPT-5, OpenAI, which has frequently been the first choice for everyday users, could steal away business. OpenAI has largely been a consumer-focused brand, whereas Anthropic has made much of its money on the enterprise side, but OpenAI CEO Sam Altman has shown interest in that market too.
To keep pace with both Grok and ChatGPT, Anthropic recently announced Claude Opus 4.1, which brought improvements to the model's coding capabilities.
Right now, the advancement in context length does give Anthropic a major advantage. However, it isn't an entirely unique feature: Google's Gemini 2.5 Pro offers a 2-million-token context window, and Meta's Llama 4 Scout goes up to a whopping 10 million tokens.
Standing out in this market is challenging. The larger context window is a meaningful improvement, but it is unlikely to be enough on its own to shift the competitive landscape, especially since some research suggests that models often struggle to make full use of extremely long prompts even when a larger window is available. Either way, Anthropic is looking for ways to stay competitive.

Related Articles

Anthropic's Claude AI now has the ability to end 'distressing' conversations

Engadget · 7 minutes ago

Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking community. The company announced in a post on its website that the Claude Opus 4 and 4.1 models now have the power to end a conversation with users. According to Anthropic, this feature will only be used in "rare, extreme cases of persistently harmful or abusive user interactions." To clarify, Anthropic said those two Claude models could exit harmful conversations, like "requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror."

With Claude Opus 4 and 4.1, these models will only end a conversation "as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted," according to Anthropic. However, Anthropic claims most users won't experience Claude cutting a conversation short, even when talking about highly controversial topics, since this feature will be reserved for "extreme edge cases."

Anthropic's example of Claude ending a conversation (Anthropic)

In the scenarios where Claude ends a chat, users can no longer send any new messages in that conversation, but can start a new one immediately. Anthropic added that if a conversation is ended, it won't affect other chats, and users can even go back and edit or retry previous messages to steer towards a different conversational route.

For Anthropic, this move is part of its research program that studies the idea of AI welfare. While the idea of anthropomorphizing AI models remains an ongoing debate, the company said the ability to exit a "potentially distressing interaction" was a low-cost way to manage risks for AI welfare. Anthropic is still experimenting with this feature and encourages its users to provide feedback when they encounter such a scenario.

OpenAI is at a classic strategy crossroads involving its ‘moat'—which Warren Buffett believes can make or break a business

Yahoo · 4 hours ago

It's an epochal moment as history's latest general-purpose technology, AI, forms itself into an industry. Much depends on these early days, especially the fate of the industry's leader by a mile, OpenAI. In terms of the last general-purpose technology, the internet, will it become a colossus like Google or be forgotten like AltaVista? No one can know, but here's how to think about it.

OpenAI's domination of the industry is striking. As the creator of ChatGPT, it recently attracted 78% of daily unique visitors to core model websites, with six competitors splitting up the rest, according to a recent 40-page report from J.P. Morgan. Even with that vast lead, the report shows, OpenAI is expanding its margin over its much smaller competitors, including even Gemini, which is part of Google and its giant parent, Alphabet (2024 revenue: $350 billion).

The great question now is whether OpenAI can possibly maintain its wide lead (history would say no) or at least continue as the industry leader. The answer depends heavily on OpenAI's moat, a Warren Buffett term for any factor that protects the company and cannot be easily breached; think of Coca-Cola's brand or BNSF Railroad's economies of scale, to mention two of Buffett's successful investments.

On that count the J.P. Morgan analysts are not optimistic. They acknowledge that while OpenAI has led the industry in innovating its models, that strategy is 'an increasingly fragile moat.' Example: the company's most recent model, GPT-5, included multiple advances yet underwhelmed many users. As competitors inevitably catch up, the analysts conclude, 'Model commoditization is an increasingly likely outcome.' With innovations suffering short lives, OpenAI must now become 'a more product-focused, diversified organization that can operate at scale while retaining its position' at the top of the industry, skills the company has yet to demonstrate. Bottom line: OpenAI can maintain its leading rank in the industry, but it won't be easy, and betting on it could be risky.

Yet a different view suggests OpenAI is much closer to creating a sustainable moat. It comes from Robert Siegel, a management lecturer at Stanford's Graduate School of Business who is also a venture capitalist and former executive at various companies, many in technology. He argues that OpenAI is already well along the road to achieving a valuable attribute, stickiness: the longer customers use something, the less likely they are to switch to a competitor. In OpenAI's case, 'people will only move to Perplexity or Gemini or other solutions if they get a better result,' he says. Yet that becomes unlikely because AI learns; the more you use a particular AI engine, the more it learns about you and what you want. 'If you keep putting questions into ChatGPT, which learns your behaviors better, and you like it, there's no reason to leave as long as it's competitive.'

Now combine that logic with OpenAI's behavior. 'It seems like their strategy is to be ubiquitous,' Siegel says, putting ChatGPT in front of as many people as possible so the software can start learning about them before any competitor can get there first. Most famously, OpenAI released ChatGPT 3.5 to the public in 2022 for free, attracting a million users in five days and 100 million in two months. In addition, the company raised much investment early in the game, having been founded in 2015.

Thus, Siegel says, OpenAI can 'continue to run hard and use capital as a moat so they can do all the things they need to do to be everywhere.' But Siegel, the J.P. Morgan analysts, and everyone else know plenty can always go wrong. An obvious threat to OpenAI and most of its competitors is an open-source model such as China's DeepSeek, which appears to perform well at significantly lower costs. The venture capital that has poured into OpenAI could dry up as hundreds of other AI startups compete for financing. J.P. Morgan and Siegel agree that OpenAI's complex, unconventional governance structure must be reformed; though a recently proposed structure has not been officially disclosed, it is reportedly topped by a nonprofit, which might worry profit-seeking investors.

As for moats, OpenAI is obviously in the best position to build or strengthen one. But looking into the era of AI, the whole concept of the corporate moat may become meaningless. How long will it be, if it hasn't happened already, before a competitor asks its own AI engine, 'How do we defeat OpenAI's moat?'

Silicon Valley talent keeps getting recycled, so this CEO uses a ‘moneyball' approach for uncovering hidden AI geniuses in the new era

Yahoo · 4 hours ago

The AI talent war among major tech companies is escalating, with firms like Meta offering extravagant $100 million signing bonuses to attract top researchers from competitors like OpenAI. HelloSky has emerged to diversify the recruitment pool, using AI-driven data to map candidates' real-world impact and uncover hidden talent beyond traditional Silicon Valley networks.

As AI becomes more ubiquitous, the need for top-tier talent at tech firms becomes even more important, and it's starting a war among Big Tech, which is simultaneously churning through layoffs and poaching people from each other with eye-popping pay packages. Meta, for example, is dishing out $100 million signing bonuses to woo top OpenAI researchers. Others are scrambling to retain staff with massive bonuses and noncompete agreements. With such a seemingly small pool of researchers with the savvy to usher in new waves of AI developments, it's no wonder salaries have gotten so high.

That's why one tech executive says companies will need to stop 'recycling' candidates from the same old Silicon Valley and Big Tech talent pools to make innovation happen. 'There's different biases and filters about people's pedigree or where they came from. But if you could truly map all of that and just give credit for some people that maybe went through alternate pathways [then you can] truly stack rank,' Alex Bates, founder and CEO of AI executive recruiting platform HelloSky, told Fortune. (In April, HelloSky announced the close of a $5.5 million oversubscribed seed round from investors like Caldwell Partners, Karmel Capital, True, and Hunt Scanlon Ventures, as well as prominent angel investors from Google and Cisco Systems.)

That's why Bates developed HelloSky, which consolidates candidate, company, talent, investor, and assessment data into a single GenAI-powered platform to help companies find candidates they might not have otherwise. Many tech companies pull from previous job descriptions and resume submissions to poach top talent, explained Bates, who also authored Augmented Mind, a book about the relationship between humans and AI. Meta CEO Mark Zuckerberg even reportedly maintains a literal list of all the top talent he wants to poach for his Superintelligence Labs and has been heavily involved in his own company's recruiting strategies.

But the AI talent wars will make it more difficult than ever to fill seats with experienced candidates. Even OpenAI CEO Sam Altman recently lamented how few candidates AI-focused companies have to pull from. 'The bet, the hope is they know how to discover the remaining ideas to get to superintelligence—that there are going to be a handful of algorithmic ideas and, you know, medium-sized handful of people who can figure them out,' Altman recently told CNBC.

The 'moneyball' for finding top talent

Bates refers to his platform as 'moneyball' for unearthing top talent: essentially a 'complete map' of real domain experts who may not be well-networked in Silicon Valley. Using AI, HelloSky can tag different candidates, map connections, and find people who may not have much of a social media or job board presence but have the necessary experience to succeed in high-level jobs. The platform scours not just resumes, but actual code contributions, peer-reviewed research, and even trending open-source projects, prioritizing measurable impact over flashy degrees.

That way, companies can find candidates who have demonstrated outsized results in small, scrappy teams or other niche communities, similar to how the Oakland A's Billy Beane joined forces with Ivy League grad Peter Brand to reinvent traditional baseball scouting, as depicted in the book and movie Moneyball. It's a 'big unlock for everything from hiring people, partnering, acquiring whatever, just everyone interested in this space,' Bates said. 'There's a lot of hidden talent globally.'

HelloSky can also sense when certain candidates 'embellish' their experience on job platforms, or fill in the gaps for people whose online presence is sparse. 'Maybe they said they had a billion-dollar IPO, but [really] they left two years before the IPO. We can surface that,' Bates said. 'But also we can give credit to people that maybe didn't brag sufficiently.' This helps companies find their 'diamond in the rough,' he added.

Bates also predicts search firms and internal recruiters will start pushing assessments on candidates more often to ensure they're the right fit for the job. 'If you can really target well and not waste so much time talking to the wrong people, then you can go much deeper into these next-gen behavioral assessment frameworks,' he said. 'I think that'll be the wave of the future.'
