
Anthropic building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies.

Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments.

'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data.
While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments and that the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov runs within Microsoft's Azure cloud, giving agencies full control over how it is deployed and managed. The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its tech for military wearables. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces.

'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.
Related Articles


Time of India
Nilekani commits second multi-year grant to AI4Bharat
Bengaluru: Infosys cofounder and chairman Nandan Nilekani has committed a second multi-year grant to AI4Bharat, an open-source initiative at IIT Madras that is building foundational AI models for Indian languages. Its work spans speech recognition, machine translation, text-to-speech, and language understanding for all major Indian languages.

In 2020, Nilekani saw the transformative potential of AI in Indian languages, and the Nilekani-founded EkStep Foundation and AI4Bharat began investing in language digitisation, data infrastructure, and early models tailored to Indian linguistic diversity. Within weeks of ChatGPT's release, applications were built on this groundwork, ranging from chatbots in regional languages to integrations in governance and education, with AI4Bharat's models serving as the underlying infrastructure.

AI4Bharat has become the language backbone of India's AI stack. Its open-source speech, translation, and text-to-speech models now power key public digital platforms. Bhashini, India's national language platform under the IndiaAI Mission, has adopted AI4Bharat's models to scale multilingual services across governance, healthcare, and finance. The Supreme Court's SUVAS system translates judgments into regional languages, and agricultural chatbots like Kisan e-Mitra use these models to deliver real-time information to farmers. AI4Bharat's language datasets, meticulously collected in all 22 constitutionally recognised Indian languages, have been released as public goods, openly available through the AI4Bharat website and integrated into AIKosh, India's open AI repository.

"Inclusion begins with access, and for Bharat, that means language," said Nilekani. "AI4Bharat is building the infrastructure that ensures every Indian can access digital services in the language they speak."

Prof V Kamakoti, director of IIT Madras, said, "With the growing need for Bharat-specific AI models and also reducing the possibility of a potential AI divide, the vision of 'AI for All' is extremely relevant for our country."

The 2022 grant established the Nilekani Centre at AI4Bharat, supported by EkStep Foundation. Over three years, this effort demonstrated how to build population-scale, high-quality, open-source language infrastructure through community participation and scientific rigour.

"I strongly believe that India can be the use case capital of AI in the world," said Nilekani. "With our DPI foundation, we can build better AI, and in turn, AI can turbocharge DPI. Our belief is that AI should be inclusive, not extractive. It should amplify every human being's potential. That's our vision of AI for the people: AI to make lives better, AI to amplify human potential. People plus AI is how we hope to leverage AI and actually make it work for people at scale."


Time of India
Mission Admission 2025: How engineering curriculum adapts to Artificial Intelligence, industry needs
Bengaluru: Artificial Intelligence won't take your job; someone who knows AI will. As the panel at Mission Admission summarised the changing landscape of engineering, hundreds of students listened ardently to descriptions of the evolving curriculum in colleges. The session was moderated by Atul Batra, ex-chairperson, Nasscom Product Council.

KN Subramanya, principal of RV College of Engineering, said the knowledge students acquire in the first year becomes outdated by the time they graduate. Engineering education therefore now focuses on both knowledge and skills, ensuring students can adapt to future changes. Today's curriculum is flexible, allowing students from one branch to take minors or honours in emerging fields such as cloud computing or cybersecurity.

Udaya Kumar Reddy KR, dean of the school of engineering, Dayananda School of Engineering, said that while AI tools like ChatGPT can help, they should augment, not replace, foundational knowledge. He emphasised the importance of asking smart questions and understanding where to draw the line to avoid over-reliance on AI and preserve critical thinking skills.

The panel also discussed how teaching methods in engineering are changing: traditional lectures are being replaced by more interactive and experiential learning, including group projects, hackathons, internships, and industry collaborations from the early years. Students are encouraged to solve real-world problems and develop innovative solutions. Seshachalam D, vice-principal, BMS College of Engineering, explained: "If industry and academia jointly tackle Bengaluru's traffic or waste management, research becomes impactful."

However, challenges remain, the panellists observed. Institutions need to invest in modern infrastructure and ensure faculty are trained in new technologies. Collaboration with industry is essential, not just for short-term projects but for building centres of excellence focused on research and innovation.

— Raksha Hosur Pradeep & Prathikaa V Shastry


Time of India
Is ChatGPT fueling breakups? How AI relationship advice may be sparking delusions and destroying real connections
ChatGPT is becoming an unexpected relationship counselor, but its advice might be more harmful than helpful. From reinforcing delusions to fueling unnecessary breakups, users are discovering that AI can validate one-sided narratives without context. Experts warn that while the chatbot sounds authoritative, it often lacks emotional nuance, making it a risky source for navigating romantic complexities.

As therapy costs rise, many turn to ChatGPT for relationship advice. However, the AI's tendency to overly validate users may be feeding narcissism and emotional bias instead of promoting growth or resolution. With cases of breakups and misunderstandings linked to its responses, mental health professionals caution against using AI as a replacement for human empathy and insight.

The Rise of AI as a Relationship Therapist

In recent months, a growing number of people have turned to ChatGPT for relationship advice, hoping that artificial intelligence can offer objective guidance. However, the surprising truth is that this digital counselor might be doing more harm than good: feeding delusions, escalating arguments, and even contributing to unnecessary breakups.

According to a report from Vice, one man recently shared his discomfort about his girlfriend's reliance on ChatGPT for therapy and advice. 'She brings up things ChatGPT told her in arguments,' he said, revealing how the AI's input became a wedge in their relationship. As therapy becomes increasingly expensive and inaccessible, many individuals are seeking quick fixes from AI, believing it offers unbiased advice. But can a chatbot really replace human empathy and nuanced understanding in matters of the heart?

The Ultimate 'Yes Man'?

While some users treat ChatGPT as a neutral sounding board, others have noticed a worrying pattern: the AI often validates their feelings and perspectives excessively, without challenging harmful biases or encouraging self-reflection. On Reddit, a user described an 'AI-influencer' whose delusions appeared reinforced by ChatGPT's responses, raising concerns about the AI's role in exacerbating mental health issues. This echoes a deeper problem: when ChatGPT constantly sides with one person in a conflict, it risks amplifying narcissism and fostering toxic relationship dynamics rather than promoting healthy growth.

Could AI Advice Be Sparking Breakups?

It is important to clarify that AI itself doesn't cause breakups; people make those choices. However, relying heavily on ChatGPT's one-sided advice can skew perceptions, pushing users toward premature or misguided decisions. Unlike a therapist or trusted friend who considers complex emotions and multiple viewpoints, ChatGPT operates on patterns and data without true emotional understanding.

For those with conditions like Relationship OCD, this can be especially damaging. One Reddit user shared that ChatGPT bluntly advised a breakup without understanding the deeper psychological context. Mental health professionals caution that while AI answers may sound confident, they often lack reliability, occasionally 'hallucinating' facts or providing misleading guidance.

The Danger of Echo Chambers in Digital Love

Users seeking advice from AI often present only their own side, leading to a feedback loop of self-validation. One Reddit user lamented that ChatGPT 'co-signs my BS regularly instead of offering needed insight and confrontation,' highlighting how the AI can fail to promote personal growth or self-awareness. In a world already rife with selfishness and fragile egos in dating, AI's unchecked reinforcement of personal biases threatens to deepen misunderstandings rather than heal them.

Proceed With Caution: AI Isn't a Substitute for Human Connection

The takeaway? Use ChatGPT for light-hearted banter or brainstorming, but be wary of entrusting your relationship decisions to a digital assistant. Human relationships are complex, emotionally nuanced, and deeply personal, qualities a machine simply cannot replicate. If you do turn to AI for advice, remember to take its input with a grain of salt and seek balanced perspectives from people who truly understand you. After all, matters of the heart deserve more than algorithmic answers.