OpenAI banned from using Claude AI just before GPT-5 launch: Here's why
Anthropic spokesperson Christopher Nulty, while confirming the development in a statement to Wired, said, 'Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5.'
'Unfortunately, this is a direct violation of our terms of service,' he added.
Meanwhile, Nulty stated that Anthropic will 'continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.'
OpenAI was reportedly plugging Claude into its own internal tools via Anthropic's developer APIs, rather than the chat interface, which allowed the ChatGPT maker to run tests evaluating the rival model's capabilities on tasks like coding and creative writing. OpenAI was also able to check how Claude responded to safety-related prompts in categories such as CSAM, self-harm and defamation.
The results from these tests reportedly helped OpenAI compare the behavior of its own models under similar conditions and make adjustments as needed.
Anthropic's commercial terms of service state that customers are not allowed to use its AI models to 'build a competing product or service, including to train competing AI models or resell the Service.'
OpenAI's Chief Communications Officer Hannah Wong, in a statement to Wired, said, 'It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them.'
Notably, this isn't the first time that Anthropic has cut off a competitor's access to its AI models. Earlier this year, the company restricted AI coding startup Windsurf's direct access to its models after reports emerged that the startup was being acquired by OpenAI.
Anthropic's Chief Science Officer Jared Kaplan had then talked about revoking Windsurf's access to Claude, saying, 'I think it would be odd for us to be selling Claude to OpenAI.'

Related Articles


India.com
Elon Musk's latest shocker to Google and OpenAI with Text-to-Video feature on Grok, but with a twist; only THESE users will have access
New Delhi: Elon Musk has added a new text-to-video feature, named Imagine, to Grok AI. The feature generates videos from text commands, positioning it as a direct rival to Google's Veo3 and OpenAI's Sora. As the name suggests, it is designed to turn a user's imagination, expressed as a prompt, into video.

How did Musk share this information? Musk announced the feature from his X handle, saying in his post that users will be able to use Imagine by updating their X app, though they will initially be placed on a waitlist. The feature runs on Grok's latest large language model. 'Update your app and request to be on the waitlist for @Grok Imagine' — Elon Musk (@elonmusk) August 2, 2025

What is the controversy over Imagine? Like Veo3 and Sora, Imagine generates videos from text prompts, but Grok's version also includes a new 'spicy' mode, in which users can create six-second video clips. This mode has drawn controversy, with many users arguing the feature could be used to promote obscene content.

Who will get access to Imagine? Beta access to Imagine is currently limited to X Premium users, and the company will select which of them get it, with heavy users of Grok AI given beta access first. Separately, a Valentine mode has also been added to Grok AI, likewise in beta for premium users. Valentine mode offers an imaginary character that users can interact with and confide in, working like a digital companion.


NDTV
Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'
Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their own language. Currently, AI does its thinking in English, allowing developers to track what the technology is doing, but a point could come where humans no longer understand what an AI is planning, according to Mr Hinton.

"Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month. "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking." Mr Hinton added that AI has already demonstrated it can think terrible thoughts, and that it is not unthinkable the machines could eventually think in ways that humans cannot track or interpret.

Warning about AI

Mr Hinton laid the foundations for the machine learning that powers today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," Mr Hinton said at the time. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

Mr Hinton has been a strong advocate of government regulation of the technology, especially given its unprecedented pace of development. His warning also comes against the backdrop of repeated instances of AI chatbots hallucinating. In April, OpenAI's internal tests revealed that its o3 and o4-mini models were hallucinating, or making things up, far more frequently than even non-reasoning models such as GPT-4o. The company said it did not know why this was happening. In a technical report, OpenAI said "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.


Time of India
Anthropic CEO throws shade at Mark Zuckerberg's billion-dollar AI talent hunt with dartboard dig: 'You can't buy purpose with a paycheck'
In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled."

His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. 'Some [employees] wouldn't even talk to Meta,' he said, asserting that their culture and mission are more attractive than any compensation package Meta can offer.

Meta has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board. "I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable colleagues. Despite this stance, Meta has managed to lure away at least one former Anthropic engineer, Joel Pobar, but Amodei suggests the broader impact on his company has been limited.

The AI Hiring Wars: A Battle for Brains

Meta's latest AI moonshot, the Superintelligence Lab, has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight. Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 million. Zuckerberg is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprises.

Buying Purpose? Not Quite, Says Amodei

While Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI. The sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured teams.

In the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher rivals that of a professional athlete. Yet for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent; it's maintaining a sense of purpose amid the frenzy.