Latest news with #ClaudeGov


India Today
2 days ago
- Business
- India Today
Microsoft is making a special AI Copilot for the US military
Microsoft is developing a special version of its Copilot AI assistant tailored for the US military, with availability expected no earlier than summer 2025. In a blog post written for its government customers, Microsoft confirmed that Copilot for the Department of Defense (DoD) is currently under development. 'For DoD environments, Microsoft 365 Copilot is expected to become available no earlier than summer 2025,' the company wrote. 'Work is ongoing to ensure the offering meets the necessary security and compliance standards.'

Copilot is Microsoft's primary generative AI platform and is already integrated into tools like Word, PowerPoint and Excel for general users. A military-grade version, however, requires stronger safeguards and has to meet the stringent compliance rules set for high-security environments. Microsoft also stated in a March update that it is working to bring Copilot to GCC High, its cloud platform for US government clients. 'We are planning on a general availability (GA) release this calendar year,' the company said. Microsoft's Chief Commercial Officer Judson Althoff reportedly also told employees recently that a customer with more than one million Microsoft 365 licenses is adopting Copilot. While the customer was not named, the Defence Department, with over 2.8 million military and civilian employees, fits the description.

The development of a defence-specific Copilot underscores how AI is becoming a vital part of US government infrastructure. On July 4, the General Services Administration (GSA) is expected to launch a platform designed to help US government agencies access powerful AI tools from companies like OpenAI, Google, Anthropic, and eventually Amazon Web Services. According to a report by 404 Media, the project includes a chatbot assistant, a model-agnostic API, and a console to monitor AI usage across federal departments.
'We want to start implementing more AI at the agency level and be an example for how other agencies can start leveraging AI,' Thomas Shedd, head of the GSA's Technology Transformation Services, reportedly told his team. One of the more innovative features is the use of analytics to track how government teams are using AI. This data could help highlight success stories and identify areas where more training is needed.

The growing focus on AI in defence isn't limited to Microsoft and the GSA. AI company Anthropic recently announced its own line of custom AI models for the US government, branded 'Claude Gov'. These tools are already in use by top national security agencies and are designed to assist with tasks like intelligence analysis, cybersecurity, and threat detection. 'Access to these models is limited to those who operate in classified environments,' Anthropic stated. The Claude Gov models are built with enhanced capabilities, including the ability to handle sensitive data and understand defence-specific language and documents.

Meta is also deepening its ties with the defence sector. The Mark Zuckerberg-owned company is partnering with Anduril, a defence startup founded by Oculus creator Palmer Luckey, to develop virtual and augmented reality headsets for US service members. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad,' said Meta CEO Mark Zuckerberg.


India Today
06-06-2025
- Business
- India Today
Anthropic working on building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies. Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support.

According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments. 'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments, and the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov tools run within Microsoft's Azure cloud, giving agencies full control over how they are deployed and managed.
The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces.

'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.


The Verge
05-06-2025
- Business
- The Verge
Anthropic launches new Claude service for military and intelligence use
Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information. The company said the models it's announcing 'are already deployed by agencies at the highest level of U.S. national security,' and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.

Claude Gov models are designed specifically to handle government needs, like threat assessment and intelligence analysis, per Anthropic's blog post. And although the company said they 'underwent the same rigorous safety testing as all of our Claude models,' the models have certain specifications for national security work. For example, they 'refuse less when engaging with classified information' that's fed into them, something consumer-facing Claude is trained to flag and avoid. Claude Gov's models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security.

Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There's been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there's also been an industry-wide controversy over large tech companies like Microsoft, Google and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.
Anthropic's usage policy specifically dictates that any user must 'Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,' including using Anthropic's products or services to 'produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.' At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are 'carefully calibrated to enable beneficial uses by carefully selected government agencies.' Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to 'tailor use restrictions to the mission and legal authorities of a government entity,' although it will aim to 'balance enabling beneficial uses of our products and services with mitigating potential harms.'

Claude Gov is Anthropic's answer to ChatGPT Gov, OpenAI's product for U.S. government agencies, which it launched in January. It's also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape. When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir's FedStart program, a SaaS offering for companies that want to deploy federal government-facing software. Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S.
military planning. And since then, it's expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.
Yahoo
05-06-2025
- Business
- Yahoo
Anthropic Unveils Claude Gov for US Security Clients
Anthropic recently unveiled Claude Gov, a new set of AI models tailored just for U.S. national security agencies. With backing from Amazon (NASDAQ:AMZN) and Google (NASDAQ:GOOG), these models are already in use at agencies with top-security clearances, and only those with the right credentials can access them. Built with direct input from defense and intelligence teams, Claude Gov goes beyond standard Claude models by handling classified materials more smoothly (fewer automatic refusals) and understanding sensitive documents in context. It's also been optimized for critical languages and dialects, plus it can tackle complex cybersecurity data for real-time threat analysis. While Anthropic hasn't shared contract details, winning government business could provide steady revenue and set it apart from bigger AI rivals. If you're following AI stocks or industry moves, keep an eye out for any announcements about new agency deals or feature upgrades, especially since Anthropic just rolled out Opus 4 and Sonnet 4 for coding and advanced reasoning.

But there's more on Anthropic's plate: Reddit (NYSE:RDDT) filed a lawsuit in California this week, accusing Anthropic of using Reddit user data to train Claude without a license or permission. Reddit says it tried to negotiate a licensing agreement, but when talks stalled, Anthropic's bots allegedly kept hitting Reddit servers over 100,000 times. This lawsuit raises questions about Anthropic's data practices and could invite closer legal scrutiny, no small thing now that it's working on classified government projects. Keep your ears open for how this lawsuit unfolds, because its outcome could impact Anthropic's reputation and future partnerships. This article first appeared on GuruFocus.