How US built new tool to stop AI from making nuclear weapons
Anthropic, whose AI bot Claude is a direct competitor to OpenAI's ChatGPT, said it has been working with the US government for over a year to build in the safeguard.
Today, everyone is obsessed with Artificial Intelligence (AI).
AI is said to have the potential to change society forever, in good ways and bad. Many hope it will cure humans of disease, extend our lifespans, solve climate change, and unlock the secrets of the universe.
Some fear it will cause certain jobs to disappear forever, leaving millions out of work and society on the brink. Others imagine a dark, dystopian future with AI ruling over humanity, perhaps in the aftermath of it ordering nuclear strikes.
Now, at least one company is taking steps to safeguard its AI models from being used as tools to build nuclear weapons.
But what happened? What do we know?
Let's take a closer look.
What happened?
Anthropic, an AI start-up backed by Amazon and Google, has developed a new tool to stop its AI from being used for the nefarious purpose of building a nuclear bomb. Anthropic's Claude is a direct competitor to OpenAI's ChatGPT.
Anthropic said it has been working with the US government for over a year to build in the safeguard. The company said it has coordinated with the National Nuclear Security Administration (NNSA) to develop a 'classifier' that can halt 'concerning' conversations, such as how to build a nuclear reactor or bomb, on its AI system.
Anthropic said the programme sprang out of its 2024 exercises with the US Department of Energy. The NNSA falls under the US Energy Department. It is tasked with making sure the United States 'maintains a safe, secure, and reliable nuclear stockpile through the application of unparalleled science, technology, engineering, and manufacturing.' The NNSA's Office of Defense Programs is in charge of maintaining and modernising the country's nuclear stockpile.
How did it do it?
The company said it was able to put together a list of indicators that can help Claude identify 'potentially concerning conversations about nuclear weapons development'.
The classifier acts like an email spam filter, identifying threats in real time. The company has claimed that the classifier can determine with almost 95 per cent accuracy whether the person carrying on the conversation with the AI bot intends to cause harm. The company said the classifier caught 94.8 per cent of nuclear weapons queries, but also wrongly flagged 5.2 per cent of queries as dangerous.
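Anthropic's actual classifier is proprietary, so the following is only a minimal sketch of the general idea the spam-filter comparison points at: score a message against weighted risk indicators and gate it at a threshold. The terms, weights and threshold here are invented for illustration; a real system would learn them from labelled traffic.

```python
# Hypothetical illustration of a classifier-style gate; not Anthropic's system.
# Each indicator term carries a weight; a message whose total score crosses
# the threshold is flagged rather than answered.

RISK_TERMS = {
    "enrichment": 0.4,
    "weapons-grade": 0.6,
    "implosion": 0.5,
    "reactor": 0.1,   # benign nuclear-energy topics should score low
}
THRESHOLD = 0.5  # in a real deployment, tuned to trade off misses vs false alarms

def risk_score(message: str) -> float:
    """Sum the weights of any risk terms present in the message."""
    text = message.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def gate(message: str) -> str:
    """Allow or flag a message based on its risk score."""
    return "flagged" if risk_score(message) >= THRESHOLD else "allowed"
```

The reported 94.8 per cent detection rate corresponds, in this framing, to how often genuinely dangerous queries cross the threshold; the wrongly flagged queries are the cost of setting that threshold low enough to catch them.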
Anthropic has said that it has already employed the classifier in some of its Claude models.
'As AI models become more capable, we need to keep a close eye on whether they can provide users with dangerous technical knowledge in ways that could threaten national security,' Anthropic has said.
The company has vowed to share what it has learnt with the Frontier Model Forum, an AI industry body it has co-founded alongside Amazon, Meta, OpenAI, Microsoft and Google, in order to help other companies build similar programmes.
Anthropic earlier in August said it would offer its Claude AI model to the US government for $1 (Rs 87), joining the ranks of AI start-ups proposing lucrative deals to win federal contracts.
This came days after OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude were added to the US government's list of approved AI vendors.
'America's AI leadership requires that our government institutions have access to the most capable, secure AI tools available,' CEO Dario Amodei said.
Rival OpenAI had also announced a similar offer in August, wherein ChatGPT Enterprise was made available to participating US federal agencies for $1 per agency for the next year.
With inputs from agencies

Related Articles


Hans India
TGBIE hosts training session on integrating AI in education
Hyderabad: The Telangana Board of Intermediate Education (TGBIE) conducted a one-day training program for AI Champions and District Academic Monitoring Officers (DAMOs) at its headquarters in Vidya Bhavan on Thursday. The session focused on the implementation of Artificial Intelligence (AI) tools and Facial Recognition Systems (FRS) in Government Junior Colleges across the state.

The initiative was led by Dr Yogita Rana, Secretary to the Government, Education Department, who emphasized the transformative potential of AI in academic progress. She highlighted tools like Gemini, ChatGPT, Meta, and Google's LLM as powerful resources for enhancing student learning outcomes.

During the workshop, TGBIE officials instructed participants to actively adopt AI-based solutions tailored to student needs. A key highlight was the planned rollout of Facial Recognition Systems to monitor student attendance more effectively. The FRS will be integrated with the Centre for Good Governance (CGG) portal for real-time tracking and accountability.

The Director of Intermediate Education stressed the importance of field-level implementation and urged officers to organize Mega Parent-Teacher Meetings in every Government Junior College. These meetings aim to foster greater parental involvement and awareness of technological advancements in education.

The training also underscored the broader vision of the Telangana government to incorporate AI-driven learning environments. AI Champions and DAMOs were briefed on the strategic role of AI in curriculum delivery, student assessment, and personalized learning pathways. This move aligns with national efforts to digitize education and equip institutions with cutting-edge tools.

By integrating AI and FRS, the Board aims to improve transparency, attendance tracking, and academic performance monitoring. Officers were urged to ensure swift and effective implementation of these technologies, making Telangana a model for AI-enabled education in India.


Indian Express
AI-created film 'Chiranjeevi Hanuman' raises questions about creative labour and ownership. It must initiate a conversation
A month after controversy broke out over the news that the Tamil-dubbed version of the 2013 Hindi film Raanjhanaa, titled Ambikapathy, was being re-released with a new AI-generated ending, the announcement of a new film, Chiranjeevi Hanuman — The Eternal, made using AI, has sharpened fears over what the technology could mean for one of India's biggest creative industries. While the use of the new technology in the two films may vary in terms of scale — from a single scene in Ambikapathy to the entire project in Chiranjeevi Hanuman — it raises equally uncomfortable questions about creative ownership, money and power.

'And so it begins…who…needs writers and directors when it's 'Made in AI'', posted filmmaker Vikramaditya Motwane on social media, with director Anurag Kashyap, too, questioning the wisdom of supporting projects made with technology that could ultimately make the people involved in filmmaking redundant. Their unease underscores the crux of the matter: In a time when prompts fed into a machine are all one needs to make a movie, what happens to those whose livelihood depends on the collaborative, time-consuming process that is cinema?

Yet, as Kashyap himself pointed out, there is no wishing away the technology itself. Nearly three years after the debut of OpenAI's ChatGPT, with reactions that swung between a sense of awe over its capabilities and paranoia about a Terminator-like takeover by machines, attitudes have settled in favour of general acceptance. For many, including artists, writers and other creative professionals, it has become one more thread in the fabric of routine, both at home and in the workplace. But apprehensions persist, including over ethical use: In the case of Ambikapathy, for example, the producers were accused of using AI to undermine the creative vision behind the original.
The central question, then, is this: At what point does AI go from being a tool — such as in the 2024 film The Brutalist, where it was used to enhance the Hungarian accent of certain characters — to a competitor, as appears to be the case with Chiranjeevi Hanuman and other AI-created films like Maharaja in Denims and Naisha?

This is an issue that film industries elsewhere, too, have been grappling with; the use of AI was, notably, a flashpoint in the nearly year-long strike by writers and actors in Hollywood in 2023. In that case, negotiations between industry stakeholders led to the creation of guardrails in how the technology could be used without pitting artists against machines. As Indian cinema swings between excitement and apprehension over a future with AI, this is the conversation that must now take place.


Hindustan Times
AI-powered robots can take your phone apart
THE WORLD'S rubbish heaps are filling up with valuable electronics. According to the UN, some 62m tonnes of e-waste were produced in 2022, enough to fill a line of lorries parked bumper-to-bumper around the equator. Only 22% is recycled. Most of the rest ends up in landfills or incinerators, where in 2024 recoverable raw materials worth $63bn went to waste. That figure is expected to grow to more than $80bn by 2030.

Getting those materials out of the rubbish is a challenge. Many are contaminated when e-waste is crushed during recycling, which can limit the effectiveness of specialist extraction techniques. The process is made more straightforward if products are disassembled and their components sorted by composition before crushing. Copper can then be recovered from wiring. Gold, silver and other precious metals can be leached from circuit boards, along with cobalt, lithium, manganese and nickel from batteries. Rare-earth magnets can be pulled from electric motors.

The trouble is that disassembly is labour-intensive and costly. Automation is also tricky: robots are good at putting together a specific item but struggle to recognise and take apart the thousands of different devices that end up in the rubbish. A new generation of robots powered by artificial-intelligence (AI) models, however, looks to be up to the job.

Some of these AI-assisted robots are being developed for in-house recycling schemes run by manufacturers, who have an intimate knowledge of how their products are put together. Apple, for example, uses a system called Daisy. A decade ago, an early version could dismantle only one type of iPhone; now, with the help of AI, Daisy can handle more than 20. Microsoft is developing a robot to disassemble computer hard drives. These are usually crushed whole to destroy any sensitive data, but if the drives are dismantled, only the platters containing data need be crushed.
ABB, a Swedish-Swiss electrical-engineering group, is working with Molg, an American recycler, on a network of robotic 'minifactories' to dismantle and recover material from the electronics used in vast data centres. José Saenz and his team at the Fraunhofer Institute for Factory Operation and Automation in Magdeburg, Germany, have a still more ambitious goal. They are developing a robotic system that can be used in a general recycling centre, where it would need to be flexible enough to dismantle a wide variety of e-waste, ranging from phones to electric-vehicle batteries, LED screens and solar panels. Their starting point is an AI-assisted robot that can disassemble old desktop PCs, many of which are more than a decade old.

The first thing the team's robot does is identify any product it is offered. A camera photographs the item and compares the snap with pictures of different PCs. The robot also scans any labels and product codes to check whether service manuals or other disassembly tips are available online. It can search for other clues, in much the same way ChatGPT might, when asked a similar question, turn up videos posted online by people who have done the job before. All this information is analysed and stored in the robot's memory, where it can be updated and used for reference the next time such a product comes into the recycling centre.

Once the identification is complete, the AI system then determines which components are worth removing, either in the form of raw materials or as complete parts to be refurbished and used again. It also checks the integrity of rivets, screws and other fasteners, because years of wear, tear and repair mean some parts may need to be cut out. Analysis done, the AI generates a disassembly sequence to operate the robot's arms. These are equipped with a selection of tools, such as drills, grippers and screwdrivers, to remove and sort items.
So far, the team has got each stage in the disassembly process working in separate machines. They are now linking these together into a single robotic device able to complete the whole process.

Once dismantling PCs has been mastered, the team will train robots to tackle other products. The learning process will take time. Dr Saenz thinks it could be five years until they develop a commercial disassembly robot that could usefully work at a recycling centre taking apart anything from PCs to white goods and televisions. Firms that want to recycle their own, limited range of products could probably put together something more quickly.

A multi-purpose robot would probably be popular, since companies are under increasingly fierce legislative pressure to take responsibility for the end-of-life management of their products, either directly or by employing specialists to recycle for them. The rise of smarter spanner-wielding robots, therefore, should encourage more firms to ensure their products are as useful in death as they were in life.