Latest news with #ClaudeGov


CNBC
7 days ago
- Business
- CNBC
Anthropic is giving Claude to the U.S. government for $1 as AI companies try to win key agencies
Anthropic on Tuesday announced it will offer Claude for Enterprise and Claude for Government to all three branches of the U.S. government for $1 per agency for a year. The announcement comes as major artificial intelligence companies have been pushing to strengthen their ties to policymakers and regulators. Earlier this month, Anthropic's competitor OpenAI announced it will give its ChatGPT Enterprise product to U.S. federal agencies for $1 through the next year. Anthropic said it partnered with the U.S. General Services Administration to bring its technology to participating agencies in the Executive, Legislative and Judicial branches. The company is also offering technical support to help agencies implement its AI. "America's AI leadership requires that our government institutions have access to the most capable, secure AI tools available," Anthropic CEO Dario Amodei said in a statement. In June, Anthropic released a set of Claude Gov models that were built exclusively for U.S. national security customers. The following month, the U.S. Department of Defense announced contract awards of up to $200 million for AI development at Anthropic, Google, OpenAI and Elon Musk's xAI. That same day, xAI announced Grok for Government, a suite of products that makes the company's models available to U.S. government customers. OpenAI is planning to open its first office in Washington, D.C., early next year, and it launched a new offering called OpenAI for Government in June.


Hindustan Times
30-07-2025
- Business
- Hindustan Times
How spy agencies are experimenting with the newest AI models
ON THE SAME day as Donald Trump's inauguration as president, DeepSeek, a Chinese company, released a world-class large language model (LLM). It was a wake-up call, observed Mr Trump. Mark Warner, vice-chair of the Senate Intelligence Committee, says that America's intelligence community (IC), a group of 18 agencies and organisations, was 'caught off guard'. Last year the Biden administration grew concerned that Chinese spies and soldiers might leap ahead in the adoption of artificial intelligence (AI). It ordered its own intelligence agencies, the Pentagon and the Department of Energy (which builds nuclear weapons), to experiment more aggressively with cutting-edge models and work more closely with 'frontier' AI labs—principally Anthropic, Google DeepMind and OpenAI. On July 14th the Pentagon awarded contracts worth up to $200m each to Anthropic, Google and OpenAI, as well as to Elon Musk's xAI—whose chatbot recently (and briefly) self-identified as Hitler after an update went awry—to experiment with 'agentic' models. These can act on behalf of their users by breaking down complex tasks into steps and exercising control over other devices, such as cars or computers. The frontier labs are busy in the spy world as well as the military one. Much of the early adoption has been in the area of LLM chatbots crunching top-secret data. In January Microsoft said that 26 of its cloud-computing products had been authorised for use in spy agencies. In June Anthropic said it had launched Claude Gov, which had been 'already deployed by agencies at the highest level of US national security'. The models are now widely used in every American intelligence agency, alongside those from competing labs. AI firms typically fine-tune their models to suit the spooks. Claude, Anthropic's public-facing model, might reject documents with classified markings as part of its general safety features; Claude Gov is tweaked to avoid this.
It also has 'enhanced proficiency' in the languages and dialects that government users might need. The models typically run on secure servers disconnected from the public internet. A new breed of agentic models is now being built inside the agencies. The same process is under way in Europe. 'In generative AI we have tried to be very, very fast followers of the frontier models,' says a British source. 'Everyone in UKIC [the UK intelligence community] has access to top-secret [LLM] capability.' Mistral, a French firm, and Europe's only real AI champion, has a partnership with AMIAD, France's military-AI agency. Mistral's Saba model is trained on data from the Middle East and South Asia, making it particularly proficient in Arabic and smaller regional languages, such as Tamil. In January +972 Magazine reported that the Israeli armed forces' use of GPT-4, then OpenAI's most advanced LLM, increased 20-fold after the start of the Gaza war. Despite all this, progress has been slow, says Katrina Mulligan, a former defence and intelligence official who leads OpenAI's partnerships in this area. 'Adoption of AI in the national-security space probably isn't where we want it to be yet.' The NSA, America's signals-intelligence agency, which has worked on earlier forms of AI, such as voice-recognition, for decades, is a pocket of excellence, says an insider. But many agencies still want to build their own 'wrappers' around the labs' chatbots, a process that often leaves them far behind the latest public models. 'The transformational piece is not just using it as a chatbot,' says Tarun Chhabra, who led technology policy for Joe Biden's National Security Council and is now the head of national-security policy at Anthropic. 'The transformational piece is: once you start using it, then how do I re-engineer the way I do the mission?'
A game of AI spy
Sceptics believe that these hopes are inflated.
Richard Carter of the Alan Turing Institute, Britain's national institute for AI, argues that what intelligence services in America and Britain really want is for the labs to significantly reduce 'hallucinations' in existing LLMs. British agencies use a technique called 'retrieval augmented generation', in which one algorithm searches for reliable information and feeds it to an LLM, to minimise hallucinations, says the unnamed British source. 'What you need in the IC is consistency, reliability, transparency and explainability,' Dr Carter warns. Instead, labs are focusing on more advanced agentic models. Mistral, for example, is thought to have shown would-be clients a demonstration in which each stream of information, such as satellite images or voice intercepts, is paired with one AI agent, speeding up decision-making. Alternatively, imagine an AI agent tasked with identifying, researching and then contacting hundreds of Iranian nuclear scientists to encourage them to defect. 'We haven't thought enough about how agents might be used in a war-fighting context,' adds Mr Chhabra. The problem with agentic models, warns Dr Carter, is that they recursively generate their own prompts in response to a task, making them more unpredictable and increasing the risk of compounding errors. OpenAI's most recent agentic model, ChatGPT agent, hallucinates in around 8% of answers, a higher rate than the company's earlier o3 model, according to an evaluation published by the firm. Some AI labs see such concerns as bureaucratic rigidity, but it is simply a healthy conservatism, says Dr Carter. 'What you have, particularly in the GCHQ,' he says, referring to the NSA's British counterpart, 'is an incredibly talented engineering workforce that are naturally quite sceptical about new technology.' This also relates to a wider debate about where the future of AI lies. 
Dr Carter is among those who argue that the architecture of today's general-purpose LLMs is not designed for the sort of cause-effect reasoning that gives them a solid grasp on the world. In his view, the priority for intelligence agencies should be to push for new types of reasoning models. Others warn that China might be racing ahead. 'There still remains a huge gap in our understanding as to how and how far China has moved to use DeepSeek' for military and intelligence gaps, says Philip Reiner of the Institute for Security and Technology, a think-tank in Silicon Valley. 'They probably don't have similar guardrails like we have on the models themselves and so they're possibly going to be able to get more powerful insights, faster,' he says. On July 23rd, the Trump administration ordered the Pentagon and intelligence agencies to regularly assess how quickly America's national-security agencies are adopting AI relative to competitors such as China, and to 'establish an approach for continuous adaptation'. Almost everyone agrees on this. Senator Warner argues that American spooks have been doing a 'crappy job' tracking China's progress. 'The acquisition of technology [and] penetration of Chinese tech companies is still quite low.' The biggest risk, says Ms Mulligan, is not that America rushes into the technology before understanding the risks. 'It's that DoD and the IC keep doing things the way they've always done them. What keeps me up at night is the real possibility that we could win the race to AGI [artificial general intelligence]...and lose the race on adoption.'
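The 'retrieval augmented generation' technique the Economist piece describes, where one algorithm finds reliable information and feeds it to an LLM to curb hallucinations, can be sketched in miniature. Everything below (the toy corpus, the word-overlap relevance score, the prompt format) is illustrative only, not any agency's or vendor's actual pipeline.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve trusted
# passages first, then hand ONLY those passages to the model as context,
# so its answer is grounded in vetted sources rather than free recall.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by shared words with the query (a toy relevance score)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the prompt an LLM would receive: vetted context, then the question."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

# Toy corpus drawn from claims in the article above.
corpus = [
    "Mistral's Saba model is trained on data from the Middle East and South Asia.",
    "Claude Gov runs on secure servers disconnected from the public internet.",
    "The Pentagon awarded contracts worth up to $200m each to four AI labs.",
]
prompt = build_grounded_prompt("Which model is trained on South Asia data?", corpus)
```

A production system would replace the word-overlap scorer with vector-embedding search over a classified document store, but the shape is the same: the retrieval step, not the model's memory, decides what evidence the LLM may cite.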


Business Insider
16-07-2025
- Business
- Business Insider
Pentagon Awards $800M in AI Contracts: How Google, Microsoft, and Palantir Could Gain
The U.S. Department of Defense has awarded contracts worth up to $800 million to four major AI players: xAI, OpenAI, Google (GOOG), and Anthropic. Each agreement has a ceiling of $200 million and focuses on developing 'agentic' AI systems that can interpret data, make decisions, and operate autonomously in secure, classified environments. These contracts are part of the Pentagon's 'commercial-first' strategy to speed up the adoption of scalable, secure AI across military operations. Applications range from logistics and cybersecurity to battlefield planning. The initiative also reflects growing concerns about staying competitive with global rivals, such as China, in defense technology.
What's the Plan, Stan?
Google is contributing through its cloud division. Google Cloud holds an Impact Level 6 (IL6) clearance, the highest level of security authorization for handling classified government data. That clearance makes it a strong candidate to support sensitive AI workloads. While the company has not specified which products are involved, its secure cloud infrastructure, such as Tensor Processing Units and the Agentspace stack, could play a key role. Although $200 million is minor compared to Google's $2 trillion market cap, defense partnerships may boost long-term positioning in the enterprise and government cloud space. Microsoft (MSFT), while not a direct contract recipient, is closely tied to the effort. OpenAI's tools are deployed using Microsoft Azure, which already meets Department of Defense security standards. OpenAI's new 'OpenAI for Government' initiative includes models for cyber defense, logistics, and secure communications.
As OpenAI's exclusive infrastructure partner, Microsoft could benefit as Azure becomes more embedded in government AI workflows. Another major player, Palantir Technologies (PLTR), may also benefit indirectly. Anthropic's Claude Gov models are designed for use in secure, classified networks. Palantir, known for its data integration and analytics platforms across U.S. defense agencies, could act as a distribution and deployment partner. While there is no formal link between Anthropic and Palantir in this contract, existing collaborations suggest that integration is possible. xAI, founded by Elon Musk, introduced 'Grok for Government' and secured a spot on the General Services Administration (GSA) schedule. That placement allows federal agencies to purchase its tools more easily. While xAI remains private, and its chatbot has drawn scrutiny for erratic responses, inclusion in this award round signals an entry point into national security work.
The Takeaway for Investors
These contracts are more about strategic positioning than immediate revenue. AI companies with government-grade security, scalable infrastructure, and integration potential may gain long-term advantages as defense spending on AI accelerates.


India Today
12-06-2025
- Business
- India Today
Microsoft is making a special AI Copilot for the US military
Microsoft is developing a special version of its Copilot AI assistant tailored for the US military, with availability expected by summer 2025. In a blog post written for its government customers, Microsoft confirmed that Copilot for the Department of Defense (DoD) is currently under development. 'For DoD environments, Microsoft 365 Copilot is expected to become available no earlier than summer 2025,' the company wrote. 'Work is ongoing to ensure the offering meets the necessary security and compliance standards.' Copilot is Microsoft's primary generative AI platform and is already integrated into tools like Word, PowerPoint and Excel for general users. A military-grade version, however, requires stronger safeguards and has to meet stringent compliance rules set for high-security environments. Microsoft also stated in a March update that it is working to bring Copilot to GCC High, its cloud platform for US government clients. 'We are planning on a general availability (GA) release this calendar year,' the company said. Microsoft's Chief Commercial Officer Judson Althoff reportedly also told employees recently that a customer with more than one million Microsoft 365 licenses is adopting Copilot. While the customer was not named, the Defence Department, with over 2.8 million military and civilian employees, fits the description. The development of a defence-specific Copilot underscores how AI is becoming a vital part of US government infrastructure. On July 4, the General Services Administration (GSA) is expected to launch a platform designed to help US government agencies access powerful AI tools from companies like OpenAI, Google, Anthropic, and eventually Amazon Web Services. According to a report by 404 Media, the project includes a chatbot assistant, a model-agnostic API, and a console to monitor AI usage across federal departments.
'We want to start implementing more AI at the agency level and be an example for how other agencies can start leveraging AI,' Thomas Shedd, head of the GSA's Technology Transformation Services, reportedly told his team. One of the more innovative features is the use of analytics to track how government teams are using AI. This data could help highlight success stories and identify areas where more training is needed. The growing focus on AI in defence isn't limited to Microsoft and the GSA. AI company Anthropic recently announced its own line of custom AI models for the US government, branded 'Claude Gov'. These tools are already in use by top national security agencies and are designed to assist with tasks like intelligence analysis, cybersecurity, and threat detection. 'Access to these models is limited to those who operate in classified environments,' Anthropic stated. The Claude Gov models are built with enhanced capabilities, including the ability to handle sensitive data and understand defence-specific language. Meta is also deepening its ties with the defence sector. The Mark Zuckerberg-owned company is partnering with Anduril, a defence startup founded by Oculus creator Palmer Luckey, to develop virtual and augmented reality headsets for US service members. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad,' said Meta CEO Mark Zuckerberg.


India Today
06-06-2025
- Business
- India Today
Anthropic working on building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies. Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments. 'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.' Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments, and the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.' Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT that was built exclusively for the US government. ChatGPT Gov tools run within Microsoft's Azure cloud, giving agencies full control over how it's deployed and managed.
The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage. Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech including helmets and smart glasses that give soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces. 'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.' Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.