
James Cameron warns of 'Terminator-style apocalypse' if AI weaponised
Speaking to Rolling Stone to promote Ghosts of Hiroshima, bestselling author Charles Pellegrino's account of the first atomic bombing, which Cameron intends to adapt for the big screen, the film-maker behind three of the four highest-grossing films of all time (Titanic and the first two Avatar films) said that although he relies on AI professionally, he remains concerned about what might happen if it were leveraged with nihilistic intent.
'I do think there's still a danger of a Terminator-style apocalypse where you put AI together with weapons systems, even up to the level of nuclear weapon systems, nuclear defence counterstrike, all that stuff,' Cameron said. 'Because the theatre of operations is so rapid, the decision windows are so fast, it would take a super-intelligence to be able to process it, and maybe we'll be smart and keep a human in the loop.
'But humans are fallible, and there have been a lot of mistakes made that have put us right on the brink of international incidents that could have led to nuclear war. So I don't know.'
He added: 'I feel like we're at this cusp in human development where you've got the three existential threats: climate and our overall degradation of the natural world, nuclear weapons, and super-intelligence. They're all sort of manifesting and peaking at the same time. Maybe the super-intelligence is the answer.'
Cameron's original 1984 Terminator film starring Arnold Schwarzenegger is set in a world in which humanity is ruled by an artificially intelligent defence network called Skynet.
Cameron's films, Avatar in particular, are actively engaged with AI in their execution, and the director has been positive about how the technology could help reduce production costs. Last September, he joined the board of directors of Stability AI and earlier this year said the future of blockbuster film-making hinges on being able to 'cut the cost of [VFX] in half'.
He clarified that he hoped such cost-cutting would come not from laying off staff but from speeding up production.
However, Cameron has also expressed scepticism about the capacity of AI to replace screenwriters. In 2023, he said: 'I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said – about the life that they've had, about love, about lying, about fear, about mortality – and just put it all together into a word salad and then regurgitate it … I don't believe that's ever going to have something that's going to move an audience. You have to be human to write that.'
Related Articles


The Guardian
an hour ago
Staff at UK's top AI institute complain to watchdog about its internal culture
Staff at the UK's leading artificial intelligence institute have raised concerns about the organisation's governance and internal culture in a whistleblowing complaint to the charity watchdog. The Alan Turing Institute (ATI), a registered charity with substantial state funding, is under government pressure to overhaul its strategic focus and leadership after an intervention last month from the technology secretary, Peter Kyle.

In a complaint to the Charity Commission, a group of current ATI staff raise eight points of concern and say the institute is in danger of collapse due to government threats over its funding. The complaint alleges that the board of trustees, chaired by the former Amazon UK boss Doug Gurr, has failed to fulfil core legal duties such as providing strategic direction and ensuring accountability, with staff alleging a letter of no confidence was delivered last year and not acted upon.

A spokesperson for ATI said the Charity Commission had not been in touch with the institute about any complaints that may have been sent to the organisation. They added that a whistleblower complaint had been filed last year to the government's UK Research and Innovation body, which funds ATI, and a subsequent independent investigation found no concerns.

The complaint comes after ATI, which is undergoing a restructuring, notified about 50 staff – or approximately 10% of its workforce – that they were at risk of redundancy. It claims ATI's funding is at risk, citing 'privately raised concerns' from unnamed industry partners, while warning that Kyle has made clear that future government support is contingent on improved delivery and leadership change.

In a letter to Gurr this month, Kyle called for a switch in focus to defence and national security at ATI, as well as leadership changes. While the letter stated ATI should 'continue to receive the funding needed to implement reforms', it said its 'longer-term funding arrangement' could be reviewed next year.
The complaint claims there has been no internal or external accountability for how ATI funds have been used. It alleges there is an internal culture of 'fear, exclusion, and defensiveness'. It also alleges the board has not provided adequate oversight of a series of senior leadership departures under the chief executive, Jean Innes, nor of senior leadership appointments, and that ATI's credibility with 'staff, funders, partners, and the wider public has been significantly undermined', as shown by the letter of no confidence and Kyle's intervention.

The Guardian has also learned that ATI is shutting projects related to online safety, tackling the housing crisis and reducing health inequality as part of its restructuring, which is resulting in the closure or mothballing of multiple strands of research. The restructuring has triggered internal upheaval at ATI, with more than 90 staff sending a letter to the board last year warning that cost cuts were putting the organisation's reputation at risk.

Among the projects slated for closure are work on developing AI systems to detect online harms, producing AI tools that can help policymakers tackle issues such as inequality and affordability in the housing market, and measuring the impact on health inequality of major policy decisions such as lockdowns. Other projects expected to close include an AI-based analysis of how the government and media interact. A project looking at social bias in AI outcomes will also be dropped. Projects being paused include a study into how AI might affect human rights and democracy, as well as research into creating a global approach to AI ethics.
A spokesperson for ATI said: 'We're shaping a new phase for the Turing, and this requires substantial organisational change to ensure we deliver on the promise and unique role of the UK's national institute for data science and AI. As we move forward, we're focused on delivering real-world impact across society's biggest challenges, including responding to the national need to double down on our work in defence, national security and sovereign capabilities.'

A Charity Commission spokesperson said the organisation could not confirm or deny whether it had received a complaint, in order to protect the identity of any whistleblowers.




Reuters
3 hours ago
Chinese state media says Nvidia H20 chips not safe for China
BEIJING, Aug 10 (Reuters) - Nvidia's (NVDA.O) H20 chips pose security concerns for China, a social media account affiliated with China's state media said on Sunday, after Beijing raised concerns over backdoor access in those chips.

The H20 chips are also not technologically advanced or environmentally friendly, the account, Yuyuan Tantian, which is affiliated with state broadcaster CCTV, said in an article published on WeChat. "When a type of chip is neither environmentally friendly, nor advanced, nor safe, as consumers, we certainly have the option not to buy it," the article concluded. Nvidia did not immediately respond to a request for comment.

H20 artificial intelligence chips were developed by Nvidia for the Chinese market after the U.S. imposed export restrictions on advanced AI chips in late 2023. The administration of U.S. President Donald Trump banned their sales in April amid escalating trade tensions with China, but reversed the ban in July.

China's cyberspace watchdog said on July 31 that it had summoned Nvidia to a meeting, asking the U.S. chipmaker to explain whether its H20 chips had any backdoor security risks - a hidden method of bypassing normal authentication or security controls. Nvidia later said its products had no "backdoors" that would allow remote access or control. In its article, Yuyuan Tantian said Nvidia chips could achieve functions including "remote shutdown" through a hardware "backdoor."

Yuyuan Tantian's comment followed criticism of Nvidia by People's Daily, another Chinese state media outlet. In a commentary earlier this month, People's Daily said Nvidia must produce "convincing security proofs" to eliminate Chinese users' worries over security risks in its chips and regain market trust.