
Anthropic unveils custom AI models for U.S. national security customers
Anthropic says that it has released a new set of AI models tailored for U.S. national security customers.
The new models, a custom set of 'Claude Gov' models, were 'built based on direct feedback from our government customers to address real-world operational needs,' Anthropic writes in a blog post. Compared with the company's consumer- and enterprise-focused models, the Claude Gov models are designed for government operations such as strategic planning, operational support, and intelligence analysis.
'[These] models are already deployed by agencies at the highest level of U.S. national security, and access to these models is limited to those who operate in such classified environments,' writes Anthropic in its post. '[They] underwent the same rigorous safety testing as all of our Claude models.'
Anthropic has increasingly engaged U.S. government customers as it looks for dependable new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic's major partner and investor Amazon, to sell Anthropic's AI to defense customers.
Anthropic says its new custom Claude Gov models better handle classified material, 'refuse less' when engaging with classified information, and have a greater understanding of documents within intelligence and defense contexts. The models also have 'enhanced proficiency' in languages and dialects critical to national security operations, as well as 'improved understanding and interpretation of complex cybersecurity data for intelligence analysis,' the company says.
Anthropic isn't the only top AI lab going after defense contracts.
OpenAI is seeking to establish a closer relationship with the U.S. Defense Department, and Meta recently revealed that it's making its Llama models available to defense partners. Google is refining a version of its Gemini AI capable of working within classified environments. Meanwhile, Cohere, which primarily builds AI products for businesses, is also collaborating with Palantir to deploy its AI models, TechCrunch exclusively reported early last December.

Related Articles
Yahoo, 2 hours ago
Palantir (PLTR) Expands AI Footprint With Fedrigoni Supply Chain Deal
Palantir (PLTR) is bringing its AI software into the world of specialty paper. The company said Tuesday it's partnering with Fedrigoni, an Italian firm known for packaging materials and premium labels, to help streamline global operations using Palantir's tech. What started as a pilot on stock optimization is now growing into a broader digital overhaul.

Fedrigoni operates in 28 countries and wants to use AI to better forecast demand, reduce waste, and move faster in changing markets. Palantir executives say it's a sign their AI tools can work far beyond Silicon Valley. "This is a great example of AI driving real impact in a traditional sector," said Francois Bohuon, who leads Palantir's business in Europe.

This article first appeared on GuruFocus.


Bloomberg, 2 hours ago
Amazon Security Chief Urges Hands-Off Approach to AI Regulation
Amazon.com Inc.'s chief security officer joined officials at other major tech companies in calling for fewer regulations on artificial intelligence, warning that government involvement in this rapidly emerging area risks narrowing the scope of the company's work. 'The tension with regulation of any kind is that it tends to retard progress,' Steve Schmidt, who heads security for both Amazon and AWS, told Bloomberg News on Tuesday. 'So the way we tend to focus on standards is to let the industry figure out what the right standards are, and that will be driven by our customers.'

Miami Herald, 3 hours ago
Anthropic shows the limits of AI as it scraps blog experiment
One of the market's fastest-growing artificial intelligence (AI) startups has terminated one of its initiatives shortly after launching it.

Since the launch of ChatGPT in 2022, users have marveled at the chatbot's writing ability, often asking it to break down technical topics. Students quickly seized on these new tools and began using them to write their academic papers, launching an ongoing debate that still has no clear resolution.

As the technology sector's new growth market has evolved, Anthropic has emerged as one of its leading startups, known primarily for its Claude AI models, a popular choice in the tech community. But a recent revelation from the company raises questions about the future of AI as a tool for writing. Early in June 2025, Anthropic launched an initiative intended to demonstrate its models' writing abilities, only to abruptly shut it down one week later.

Many people outside the tech community likely missed that Anthropic kicked off June 2025 with the launch of Claude Explains, a blog with posts written by its AI models. It featured pieces that instructed readers on how to "Simplify complex codebases with Claude" and similar highly technical topics tied to AI.

After only a week of operation, though, Anthropic opted to scrap the AI blog, quietly removing it from its website over the weekend of June 8. Users who visit its former address are now redirected to the company's home page, which does not mention the former blog.

According to TechCrunch, while AI models did the writing, human editors, described as "subject matter experts and editorial teams," still oversaw its posts for accuracy. A spokesperson for the company is quoted as saying, "[The blog] is an early example of how teams can use AI to augment their work and provide greater value to their users. Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish."

However, the entire project appears to have ended as quickly as it began, suggesting that Anthropic decided the results the blog produced simply did not justify the resources required to maintain it. One expert has laid out some reasons why the company likely opted for this course of action. In an article on LinkedIn, AI 4 Writers owner Thomas Testi discussed the early demise of Claude Explains, highlighting a lack of transparency as a factor that worked against it.

Despite Anthropic's attempts to frame Claude Explains as a resource for AI enthusiasts, Testi also notes that many social media users saw it as a "thinly veiled attempt to automate content marketing," an accurate characterization, in his view.

Shortly before the launch of Claude Explains, Anthropic CEO Dario Amodei caught many people's attention when he predicted that AI would lead to a "bloodbath" for white-collar workers, particularly those in early career stages. However, based on his team's decision to terminate its AI blog, it seems that Anthropic doesn't have the technology to completely replace technical writers and bloggers, at least not yet.

Additionally, despite its initial framing as a blog written by AI, Claude Explains' content still had oversight from human editors. There is also the possibility that Anthropic opted against its AI blogging initiative because of the tendency of AI models to make false claims, a risk highlighted in a recent report from the MIT Sloan School of Management. The fact that Anthropic has not addressed its decision to terminate the blog suggests it wants to move on quickly rather than issue a statement.