James Cameron's Chilling Warning About AI: "There's Danger Of Terminator-Style Apocalypse"

NDTV | 2 days ago
Hollywood director James Cameron has warned that integrating artificial intelligence (AI) with global weapons systems could recreate the dystopian future shown in his Terminator franchise. Cameron, who is working on a script for Terminator 7, has previously suggested that it is getting harder for him to write science fiction because modern technology keeps eclipsing any fictional world he might create.
"I do think there's still a danger of a Terminator-style apocalypse where you put AI together with weapons systems, even up to the level of nuclear weapon systems, nuclear defence counterstrike, all that stuff," Cameron said in an interview with Rolling Stone.
"Because the theatre of operations is so rapid, the decision windows are so fast, it would take a super-intelligence to be able to process it, and maybe we'll be smart and keep a human in the loop. But humans are fallible, and there have been a lot of mistakes made that have put us right on the brink of international incidents that could have led to nuclear war. So I don't know."
Cameron also warned that three existential threats are peaking at the same time, posing a major challenge to humanity.
"I feel like we're at this cusp in human development where you've got the three existential threats: climate and our overall degradation of the natural world, nuclear weapons, and super-intelligence. They're all sort of manifesting and peaking at the same time. Maybe the super-intelligence is the answer."
Notably, Cameron's 1984 film The Terminator, starring Arnold Schwarzenegger, depicts a future in which an AI defence network called Skynet has turned on humanity.
'It gets more scary'
Cameron is not the only one to sound the alarm about AI. Geoffrey Hinton, regarded by many as the 'godfather of AI', recently stated that the technology could soon develop its own language, making it impossible for humans to track what the machines are thinking.
"Now it gets more scary if they develop their own internal languages for talking to each other," said Mr Hinton.
"I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking."
Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that the machines could eventually think in ways that humans cannot track or interpret.

Related Articles

Deepfake: The new face of financial fraud

Deccan Herald | 8 minutes ago

Artificial intelligence has emerged as a powerful force shaping modern life. Powered by machine learning algorithms and predictive analytics, AI offers wide-ranging benefits to businesses and individuals alike. While AI holds promise due to the speed, accuracy and scale at which it can accomplish tasks, its flip side is the growing sophistication and proliferation of fraud. One of the most concerning developments is the rise of deepfake-driven financial fraud. Pi-Labs' report, Digital Deception Epidemic: 2024 Report on Deepfake Fraud's Toll on India, estimates that deepfake fraud could result in losses of Rs 70,000 crore in India.

Deepfakes are a product of deep learning technology applied to the creation of synthetic media. Deep learning uses powerful, multilayered neural networks to analyse vast amounts of data. This capability is harnessed to create highly realistic images, audio and videos. According to a global survey by online security firm McAfee, 70% of people cannot confidently tell the difference between a real voice and a cloned one. Such hyper-realistic videos or voice recordings carry serious risks, including crippling financial losses, erosion of public trust and intensified regulatory scrutiny.

Deepfake fraud relies on impersonation, leveraging AI's remarkable ability to mimic human voices and create realistic videos. For example, fraudsters can produce deepfake videos of a senior executive authorising a transaction or clone the voice of a loved one asking for money.

Financial institutions have long relied on traditional KYC processes to onboard customers. The realism achieved by deepfake fraudsters has challenged KYC checks that rely on facial recognition. Phone verification is equally vulnerable, as it takes merely 15 seconds of a person's voice to create a deepfake. In response, financial institutions have adopted video verification to prevent deepfake-based KYC fraud. Yet this too is under threat: deepfake technology has evolved to simulate blinking, subtle head movements and even micro-expressions. In fact, deepfake use is currently led by video (46%) and images (32%), followed by audio (22%).

The immediate threat that financial institutions and individuals face is that of financial losses. Deepfake fraud impacted businesses across industries globally in 2024, resulting in an average financial loss of almost $450,000, according to a report by an identity verification solutions company. For businesses, especially financial institutions, the adverse consequences go far beyond immediate financial losses: damage to reputation and loss of customer trust are difficult to quantify, particularly because of their long-term impact. A KPMG survey in India showed that 72% of organisations consider reputational damage the severest impact of fraud. With fraudsters using increasingly advanced technologies to find new ways to attack, governments and regulatory bodies are forging more stringent regulations.

Recognising deepfakes requires a powerful, multi-faceted approach, far more advanced than traditional security measures. Advanced AI and machine learning algorithms can play an important role in detecting anomalies in facial features, verifying digital footprints, identifying irregular head movements, facial expressions and changes in voice timbre, and detecting lip-sync errors that may escape the human eye. Biometric inputs may also be analysed in real time for signs of synthetic media.

Blockchain technology offers a robust solution for identity verification by creating immutable and verifiable digital identities. By decentralising and encrypting identity data, blockchain can make it harder for fraudsters to create or copy identities.

Training programmes are essential to equip employees with the knowledge to recognise the red flags of deepfake scams. Banking customers must be warned to avoid unknown callers, refrain from instantly reacting to an "emergency", use a code word with loved ones to quickly confirm their identity, and check the source of videos or photos before taking any action.

The rise of deepfake fraud signals a pivot in digital risk management. Financial institutions need to embrace defences based on the latest technologies and foster a culture of vigilance to safeguard not just themselves but also the stability of the economy and the global financial system.

(The writer is the chief of operations and customer success of a financial platform)

UP assembly holds AI training session for MLAs

Hindustan Times | 4 hours ago

Ahead of the monsoon session of the Uttar Pradesh legislative assembly commencing on Monday, a training session on artificial intelligence (AI) was held for MLAs on Sunday. Experts from IIT-Kanpur conducted the session, which was chaired by assembly speaker Satish Mahana and attended by over 200 legislators. (An AI training session for legislators underway at the UP legislative assembly in Lucknow on August 10. HT photo)

'This is the first phase, where we are introducing AI to the legislators. We have also started AI-enabled cameras, which will help in many aspects. For example, one can get to know within seconds everything a member has spoken over a period of five years with the help of AI-enabled cameras,' Mahana said. 'This is not a step of just monitoring; many other aspects of information are also associated with it,' he added.

The session was divided into two parts: the first introduced AI to the MLAs, while the second explained real-life applications of AI.

However, MLAs raised doubts too. 'AI cannot judge your objective. Hence AI would not be able to analyse the data it is taking and the objective with which that data was made. Will AI be able to differentiate genuine data from fabricated data? And what about data safety?' asked Sirathu MLA Pallavi Patel.

Aradhana Mishra, Congress legislature party leader, said, 'More than introducing AI, the legislators need to be told how to connect the dots between their working needs and AI.'

'Introducing AI is a welcome step, but a debate makes it better for the assembly,' said leader of the opposition Mata Prasad Pandey.

Kaimganj MLA Surabhi Singh said, 'Can AI cater to the sentiments of our voters? This is significant, as an MLA has to connect directly with voters.'

'What if the data available from sources such as the internet is incorrect? Should we first focus on correct data?' asked another member in the house. The experts from IIT-Kanpur answered their queries.

Special session to discuss UP's 'Vision 2047': Mahana

The 24-hour special session to be held from August 13 to 14 will include a discussion on UP's 'Vision 2047', assembly speaker Satish Mahana said on Sunday. The session will start at 11 am on August 13. This is the second such session after 2019. The special session will have a primary debate that will be taken before the public and then tabled in the assembly.

'In three-and-a-half years, sessions have been conducted smoothly and have witnessed increased participation of the members during debates and discussions. The fact is, the session had to be adjourned only twice in the past three-and-a-half years. The opposition too played a positive role in this successful conduct,' Mahana added.

Asked about the opposition's demand for more session days for debate, he said, 'The days are not less for debate. The monsoon sessions in the past have been conducted similarly.'

Bhopal doctor loses Rs 10 lakh to scam promising entry into Salman Khan-hosted Bigg Boss; FIR registered

Times of India | 4 hours ago

One of Indian television's most controversial shows, Bigg Boss, is set to return with a new season. While the makers are pulling out all the stops to make it a hit, the reality series is now grabbing headlines for entirely different reasons. A Bhopal-based dermatologist, Dr Abhineet Gupta, was duped of Rs 10 lakh by fraudsters who promised him entry into the popular reality show. He has lodged an FIR at the Oshiwara Police Station in Mumbai, following an earlier complaint filed in Bhopal.

During a recent press conference in Mumbai, the doctor revealed that back in 2022, a man named Karan Singh urged him to try for Bigg Boss, claiming to have good connections with the makers. "He talked about giving one crore rupees, but I said that I do not have that much money. Then he went to Mumbai, and he made me talk to his colleagues on the phone. He talked about giving 60 lakh rupees and told me to pay in cash. He called me to Mumbai and arranged a meeting with Harish Shah, Senior Vice President of Endemol Company."

Next, Karan asked Abhineet for money, and he transferred Rs 10 lakh to the accused. However, when the list of contestants for Bigg Boss season 16 was announced, Abhineet's name was not on it. When he questioned Karan about this, he was told he would enter mid-show as a wild card. But after the season ended, Karan changed his statement, saying that he would make him participate in the next season. This time as well, nothing happened.

"When season 17 also ended, I asked Karan Singh to return the 10 lakh rupees. But he kept making me run around. Finally, I went to the police to lodge a complaint, but there too it was delayed a lot, and after almost two years the FIR was registered with great difficulty," Abhineet said.

The police have registered a case against the accused under Section 420 of the IPC for fraud. Abhineet said he wants everyone to be aware of people like Karan so that they do not end up getting scammed like him.

With inputs from IANS.
