'Terminator' director James Cameron flip-flops on AI, says Hollywood is 'looking at it all wrong'


Fox News · 18-04-2025

James Cameron's stance on artificial intelligence has evolved over the past few years, and he feels Hollywood needs to embrace it in a few different ways.
Cameron joined the board of directors for Stability AI last year, explaining his decision on the "Boz to the Future" podcast last week.
"The goal was to understand the space, to understand what's on the minds of the developers," he said. "What are they targeting? What's their development cycle? How much resources you have to throw at it to create a new model that does a purpose-built thing, and my goal was to try to integrate it into a VFX workflow."
He continued by saying the shift to AI is a necessary one.
"And it's not just hypothetical. We have to. If we want to continue to see the kinds of movies that I've always loved and that I like to make and that I will go to see – 'Dune,' 'Dune: Part Two' or one of my films or big effects-heavy, CG-heavy films – we've got to figure out how to cut the cost of that in half.
"Now that's not about laying off half the staff at a VFX company. That's about doubling their speed to completion on a given shot, so your cadence is faster and your throughput cycle is faster, and artists get to move on and do other cool things and then other cool things, right? That's my sort of vision for that."
Cameron doesn't think films are ultimately "a big target" for companies like OpenAI.
"Their goal is not to make GenAI movies. We're a little wart on their butt," he laughed.
"Movies are just a tiny application, tiny use case, and it's too tiny right now. That's the problem. It's going to be smaller, boutique-type GenAI developer groups that I can get the attention of and say, 'Hey, I've got a problem here, it's called rotoscope,' or it's called this or that. How do I outpaint, upscale, whatever needs to be done in a workflow that already exists that's CG-based? And there are answers."
Rotoscoping is a task involving drawing or tracing live-action footage frame by frame.
Cameron said people don't get into filmmaking and VFX to do "the underwater needlepoint of roto[scoping]. And there are a lot of tasks like that," such as changing pieces frame by frame for motion capture, which he feels AI can make quicker and more cost-effective.
"It's all common sense, right? All you have to do is maybe something like feed in a couple million images of the real world and have a dumb algorithm" figure it out, he said.
Cameron previously denounced the use of AI in 2023, citing his 1984 film "The Terminator" as a warning.
"I think the weaponization of AI is the biggest danger," he told Canadian CTV at the time.
"I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate," Cameron said.
He continued, "You could imagine an AI in a combat theater, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate."
"I warned you guys in 1984, and you didn't listen," Cameron said, referring to the plot of the film and its sequels featuring a sentient AI destroying humanity.
At the time, Cameron said he wasn't concerned about AI replacing creatives, saying, "It's never an issue of who wrote it, it's a question of is it a good story?"
"I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said – about the life that they've had, about love, about lying, about fear, about mortality – and just put it all together into a word salad and then regurgitate it … I don't believe that [will] have something that's going to move an audience," he said.
On the "Boz to the Future" podcast, Cameron downplayed the negative feelings much of the entertainment industry has shared about generative AI.
"A lot of the hesitation in Hollywood and entertainment in general are issues of the source material for the training data and who deserves what and copyright protection and all that sort of thing, and I think people are looking at it all wrong, personally," he said.
He explained that everyone's brains already work essentially like an AI: "You're creating a model as you go through life to process quickly through that model every new situation that comes on. You don't say, 'Wait a second, I'm going to go back.' … It doesn't work that way."
"My point is, as a screenwriter, as a filmmaker, if I exactly copy 'Star Wars,' I'll get sued. Actually, I won't even get that far. Everybody will say it's too much like 'Star Wars,' we're going to get sued. I won't even get the money. As a screenwriter, you have a sort of built-in ethical filter that says, 'I know my sources, I know what I like. I know what I'm emulating. I also know I have to move it far enough away that it's my own independent creation.'"
Cameron agreed that there needs to be management of AI "from a legal perspective," but he thinks the focus should be on what the output is and if it's being monetized versus the input.
"You can't control my input. You can't tell me what to view and what to see and where to go. My input is whatever I choose it to be and whatever has accumulated throughout my life. My output should be judged on whether it's too close or plagiaristic," he said, acknowledging that people are considering "the monetization possibilities of libraries and all that sort of thing."
Some companies are already allowing AI companies access to their back catalogs.
Lionsgate, the studio behind films like "The Hunger Games" and "Twilight," partnered with AI research company Runway last September for the "creation and training of a new AI model, customized to Lionsgate's proprietary portfolio of film and television content," according to a statement from the company at the time.
Cameron admitted he doesn't love the idea of someone using an AI to mimic him.
"I think we should discourage the text prompt that says 'in the style of James Cameron' or 'in the style of Zack Snyder' or 'in the style of.' Make up your own style. That makes me a little bit queasy," he said. "But on the other hand, it's kind of cool to have an 'in the style of' as part of a text prompt."
The trend of generating images in specific styles has taken off on social media, creating controversy over copyright and originality.
Most recently, people were regularly sharing images generated in the style of Japanese animation company Studio Ghibli, further fueling the debate.
But Cameron didn't see an issue.
"How is Studio Ghibli harmed by fan art?" Cameron asked on the podcast, noting fan art has been around "forever."
"You can't stop fan art, you don't want to stop fan art. It's an expression. It's them echoing back something they responded to, it makes them feel participatory in that universe. These things should be encouraged."
However, he drew the line at financial gain.
"If somebody goes out and tries to make money off it in a way that harms Studio Ghibli and is provable, I think that's where you have to draw the line."
He continued, "How is that any different than selling a T-shirt and on the T-shirt is an image of Luke Skywalker? You can't do that. LucasArts can do that. You can't do that. That's piracy. It's going to get shut down. However, if you just wanted to wear one and express your fandom for 'Star Wars,' you're not harming anything."


Related Articles

Meta, OpenAI, and Palantir (PLTR) Executives Join U.S. Army to Improve Military Tech

Business Insider

3 hours ago

Top tech leaders from Meta (META), OpenAI, and Palantir (PLTR) are joining the Army Reserve as lieutenant colonels in a new unit called the 'Executive Innovation Corps,' known as Detachment 201, the Army announced Friday. The unit is part of a push to bring Silicon Valley expertise into the military. Among those being sworn in are Meta CTO Andrew Bosworth, OpenAI Chief Product Officer Kevin Weil, Palantir CTO Shyam Sankar, and Bob McGrew, an advisor at Thinking Machines and former OpenAI Chief Research Officer.

Detachment 201 will let these tech leaders serve part-time as senior advisors to help the Army adopt advanced technologies quickly. The unit is designed to fuse cutting-edge commercial tech with military innovation, supporting projects like the Army Transformation Initiative, which aims to modernize the force with more efficient and scalable solutions. The program also allows executives to serve without leaving their day jobs, which could inspire more tech professionals to contribute in uniform.

This initiative comes as the Army works to replace outdated systems and buy more commercial tech that can serve both military and civilian needs. Indeed, Meta is already partnering with defense firm Anduril on extended reality (XR) tools for soldiers. In addition, OpenAI's ChatGPT could be used to improve military productivity, while Palantir supplies AI-enabled hardware like the TITAN vehicle. The Army didn't say how fast Detachment 201 will expand, but this first wave of new members points to a growing collaboration between the tech industry and the military.

New York passes a bill to prevent AI-fueled disasters

Yahoo

7 hours ago

New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people or more than $1 billion in damages. The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump Administration have prioritized speed and innovation. Safety advocates including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio have championed the RAISE Act. Should it become law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has some of the same provisions and goals as California's controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the bill's co-sponsor, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act so that it doesn't chill innovation among startups or academic researchers, a common criticism of SB 1047. 'The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,' said Senator Gounardes. 'The people that know [AI] the best say that these risks are incredibly likely […] That's alarming.'

The RAISE Act is now headed for New York Governor Kathy Hochul's desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether. If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York's Attorney General to bring civil penalties of up to $30 million.
The RAISE Act aims to narrowly regulate the world's largest companies, whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly more than any AI model available today) and are being made available to New York residents.

While similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms of previous AI safety bills, according to Nathan Calvin, Vice President of State Affairs and General Counsel at Encode, who worked on this bill and on SB 1047. Notably, the RAISE Act does not require AI model developers to include a 'kill switch' on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms.

Nevertheless, Silicon Valley has pushed back significantly on New York's AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising but claimed that the RAISE Act would not limit tech companies' innovation in any way. 'The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead,' said Andreessen Horowitz general partner Anjney Midha in a Friday post on X. Andreessen Horowitz, alongside the startup incubator Y Combinator, was one of the fiercest opponents of SB 1047.

Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not reached an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to 'smaller companies.'
When asked about Anthropic's criticism, state Senator Gounardes told TechCrunch he thought it 'misses the mark,' noting that he designed the bill not to apply to small companies. OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.

Another common criticism of the RAISE Act is that AI model developers simply wouldn't offer their most advanced AI models in the state of New York. A similar criticism was brought against SB 1047, and it's largely what's played out in Europe thanks to the continent's tough regulations on technology. Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore shouldn't push tech companies to stop operating their products in New York. Given that New York has the third-largest GDP in the U.S., pulling out of the state is not something most companies would take lightly. 'I don't want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reasons for them to not make their models available in New York,' said Assemblymember Bores.

Synthetic Identity Document Fraud Surges 300% in the U.S. - Sumsub Warns E-Commerce, Healthtech and Fintech at Risk

Yahoo

8 hours ago

North America sees major spike in synthetic document fraud and deepfake-related attacks, particularly across the e-commerce, edtech, and healthtech sectors.

NEW YORK, June 12, 2025 /PRNewswire/ -- Sumsub, a global leader in verification, today released Q1 2025 identity fraud trends based on internal data, revealing a dramatic rise in AI-enabled fraud across the United States. According to platform data, deepfake fraud has surged by 1100%, while synthetic identity document fraud rose by over 300%, with attackers exploiting generative AI to create fake passports, IDs, and biometric data. Sumsub analyzed millions of verification checks conducted on its platform between January and March 2025 across industries such as fintech, e-commerce, healthtech, and edtech to uncover emerging fraud trends.

One of the most pressing concerns is the rise of synthetic identity document fraud, where criminals use AI tools to generate fake identity documents such as driver's licenses or passports. These documents are often realistic enough to bypass basic KYC checks, posing a significant challenge for businesses. Unlike synthetic identities, which involve the creation of entirely fake personas using a mix of real and fabricated data, synthetic identity documents refer specifically to falsified documents or images generated by AI. Fraudsters then use these AI-generated visuals to open accounts, conduct illicit transactions, or bypass compliance processes, making detection increasingly difficult without advanced verification tools.

North America-Specific Insights (Q1 2025):
  • Synthetic identity document fraud spiked by 311% in North America compared to Q1 2024, making it the region's most alarming growth vector.
  • Deepfake fraud jumped by 1100%, a clear signal that generative AI is being used to bypass facial recognition and biometric checks.
High fraud activity was recorded in e-commerce, healthtech, and edtech, industries that have seen accelerated digitization post-pandemic. The U.S. in particular saw a sharp increase in fintech fraud attempts, underscoring the need for real-time, multi-layered fraud prevention solutions.

Key Global Findings (Q1 2025):
  • Synthetic identity document fraud is rising across all regions, fueled by widespread access to GenAI tools that can generate highly realistic fake IDs.
  • Top markets for synthetic identity document fraud: Ethiopia (2.17%), Pakistan (2.08%), Nigeria (1.52%). Other notable markets: Hong Kong (0.99%), Indonesia (0.84%), Turkey (0.80%).
  • Deepfake-related attacks are growing globally, particularly to bypass biometric systems, now one of the fastest-evolving fraud vectors.
  • Healthtech fraud attempts rose by 384%, signaling its emergence as a high-value target alongside fintech and e-commerce.
  • Regional deepfake fraud surges: Canada: 3,400%; Hong Kong: 1,900%; Singapore: 1,500%; Mainland China: 1,183%; Germany: 1,100%; United Kingdom: 900%; United States: 700%.

"The pace at which fraud tactics are evolving is staggering," said Andrew Sever, CEO of Sumsub. "As generative AI becomes more accessible, so does the ability to generate synthetic identity documents and deepfakes at scale. What we're seeing is a broader trend, in which Fraud-as-a-Service is becoming a reality, where malicious actors can easily access sophisticated tools to carry out attacks. Businesses can no longer rely on outdated verification tools. It's imperative they adopt an intelligent, adaptive approach to stay ahead."

To combat this new wave of fraud, Sumsub continues to invest in advanced AI-powered fraud detection, document authenticity analysis, and biometric defense tools to protect businesses and users worldwide. The company urges organizations to stay proactive by integrating multi-layered verification and continuous monitoring into their onboarding and transaction workflows.
About Sumsub: Sumsub is a full-cycle verification and ongoing monitoring platform that secures the whole user journey. With Sumsub's customizable KYC, KYB, Transaction Monitoring, Fraud Prevention and Travel Rule solutions, you can orchestrate your verification process, welcome more customers worldwide, meet compliance requirements, reduce costs, and protect your business. Sumsub has over 4,000 clients across the fintech, crypto, transportation, trading, edtech, e-commerce and gaming industries, including Duolingo, Bitpanda, Wirex, Avis, Bybit, Vodafone, Kaizen Gaming, and TransferGo.

Methodology Note: Unless otherwise specified, all figures referenced are based on Sumsub's internal statistics, drawn from millions of identity verification checks conducted globally between Q1 2024 and Q1 2025. These insights reflect real-world patterns across multiple industries and geographies, offering a representative snapshot of digital verification trends during this period.

SOURCE Sumsub
