Joseph Gordon-Levitt Argues 'Your Digital Self Should Belong To You' At UN Internet Forum

Yahoo · 11 hours ago

Addressing this week's UN Internet Governance Forum, Joseph Gordon-Levitt said he wanted to focus on one basic principle.
'Your digital self should belong to you. That the data that humans produce — our writings and our voices and the connections that we make, our ideas — should belong to us. And that any economic value that's generated from this data should be shared with the humans that produce it,' he said in a short speech.
A little shot of hope and optimism today from the United Nations. I got to speak about AI at their annual Internet Governance Forum (my first time at a UN thing 🤩) and it really does feel good to see so many people from so many places around the world convening in good faith to… pic.twitter.com/qqae5SLeDk
— Joseph Gordon-Levitt (@hitRECordJoe) June 26, 2025
Of course, that very concept is being mulled right now in Hollywood and being litigated between content producers and tech giants. Just yesterday, a federal judge ruled that Meta's unauthorized use of copyrighted works from Sarah Silverman and other authors to train generative AI models is a 'fair use,' but warned that the practice may in many circumstances be illegal.
The actual impacts of AI, said Gordon-Levitt in a fireside chat at the UN Forum, could be very broad.
'I think that the impact could mean that creativity as we know it sort of goes away, to be really honest. I hate to put it in grave terms. That's a worst-case scenario, but I think it's on the table. I don't think it has to happen that way. I think if we all figure out how to steer the ship, so to speak, the technology could be a wonderful thing for creativity, and we're sort of at that crossroads right now.'
Watch the actor's fireside chat at the event below.
AI and its impacts are familiar terrain for Gordon-Levitt. Late last year, Deadline broke the news that he was set to direct an untitled AI thriller for T-Street Productions, the production company of Rian Johnson & Ram Bergman, with Anne Hathaway set to star. And for more than a decade, the actor has run a company in the digital space; HitRecord is an online collaborative media platform he founded with his brother Dan.

Related Articles

In Pursuit of Godlike Technology, Mark Zuckerberg Amps Up the A.I. Race

New York Times · 34 minutes ago

In April, Mark Zuckerberg's lofty plans for the future of artificial intelligence crashed into reality. Weeks earlier, the 41-year-old chief executive of Meta had publicly boasted that his company's new A.I. model, which would power the latest chatbots and other cutting-edge experiments, would be a 'beast.' Internally, Mr. Zuckerberg told employees that he wanted it to rival the A.I. systems of competitors like OpenAI and be able to drive features such as voice-powered chatbots, people who spoke with him said.

But at Meta's A.I. conference that month, the new A.I. model did not perform as well as those of rivals. Features like voice interactions were not ready. Many developers, who attended the event with high expectations, left underwhelmed.

Mr. Zuckerberg knew Meta was falling behind in A.I., people close to him said, which was unacceptable. He began strategizing in a WhatsApp group with top executives, including Chris Cox, Meta's head of product, and Andrew Bosworth, the chief technology officer, about what to do.

That kicked off a frenzy of activity that has reverberated across Silicon Valley. Mr. Zuckerberg demoted Meta's vice president in charge of generative A.I. He then invested $14.3 billion in the start-up Scale AI and hired Alexandr Wang, its 28-year-old founder. Meta approached other start-ups, including the A.I. search engine Perplexity, about deals. And Mr. Zuckerberg and his colleagues have embarked on a hiring binge, including reaching out this month to more than 45 A.I. researchers at rival OpenAI alone and offering them compensation packages as high as $100 million each, two people with knowledge of the matter said. Four OpenAI researchers have accepted offers from the company.

The A.I. Frenzy Is Escalating. Again.

New York Times · 35 minutes ago

Silicon Valley's artificial intelligence frenzy has found a new gear. Two and a half years after OpenAI set off the artificial intelligence race with the release of the chatbot ChatGPT, tech companies are accelerating their A.I. spending, pumping hundreds of billions of dollars into their frantic effort to create systems that can mimic or even exceed the abilities of the human brain.

The tech industry's giants are building data centers that can cost more than $100 billion and will consume more electricity than a million American homes. Salaries for A.I. experts are jumping as Meta offers signing bonuses to A.I. researchers that top $100 million. And venture capitalists are dialing up their spending. U.S. investment in A.I. companies rose to $65 billion in the first quarter, up 33 percent from the previous quarter and up 550 percent from the quarter before ChatGPT came out in 2022, according to data from PitchBook, which tracks the industry.

'Everyone is deeply afraid of being left behind,' said Chris V. Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies.

This astonishing spending, critics argue, comes with a huge risk. A.I. is arguably more expensive than anything the tech industry has tried to build, and there is no guarantee it will live up to its potential. But the bigger risk, many executives believe, is not spending enough to keep pace with rivals.

'The thinking from the big C.E.O.s is that they can't afford to be wrong by doing too little, but they can afford to be wrong by doing too much,' said Jordan Jacobs, a partner with the venture capital firm Radical Ventures.

Meta, Anthropic win legal battles over AI 'training.' The copyright war is far from over.

Yahoo · 35 minutes ago

Artificial intelligence developers won marginal legal battles this week when federal judges in California ruled that Anthropic and Meta (META) could "train" large language models (LLMs) on copyrighted books. But the larger war over AI developers' use of protected works is far from over.

Dozens of copyright holders have sued developers, arguing that the developers must pay rights holders before allowing generative AI software to interpret their works for profit. Rights holders also argue that the AI output cannot resemble their original works.

Rob Rosenberg, an intellectual property lawyer with Telluride Legal Strategies, called Tuesday's ruling siding with AI developer Anthropic a "ground-breaking" precedent, but one that should be viewed as an opening salvo. "Judges are just starting to apply copyright law to AI systems," Rosenberg said, with many cases coming down the pike.

In that ruling, California US District Judge William Alsup said that Anthropic legally utilized millions of copyrighted books to train its various LLMs, including its popular chatbot Claude. However, the judge distinguished books that Anthropic paid for from a pirated library of more than 7 million books that it also used to train Claude. As for the stolen materials, the judge said, Anthropic must face the plaintiff authors' claims that it infringed on their copyrights.

In a more limited ruling favoring Meta on Wednesday, California US District Judge Vince Chhabria said that a group of 12 authors who sued the tech giant, including stand-up comedian Sarah Silverman, made "wrong arguments" that prevented him from ruling on infringement. According to the authors, Meta used their copyrighted books to train its large language model Llama.

The rulings are among the first in the country to address emerging and unsettled questions over how far LLMs can go to rely on protected works. "There is no predicting what's going to come out the other end of those cases," said Courtney Lytle Sarnow, an intellectual property partner with CM Law and adjunct professor at Emory University School of Law. Sarnow and other intellectual property experts said they expect the disputes will end up in appeals to the US Supreme Court. "I think it's premature for Anthropic and others like it to be taking victory laps," said Randolph May, president of the Free State Foundation and former chair of the American Bar Association's Administrative Law and Regulatory Practice section.

US copyright law, as defined by the Copyright Act, gives creators of original works an exclusive right to reproductions, distributions, and public performances of their material, according to Sarnow, including some derivative works and sequels to their original creations. Absent a license from the rights holders to use their copyrighted material, all large language models are stealing from authors, she said.

But under US law, a certain level of what would otherwise be deemed stealing is, in fact, an exception permitted under the doctrine of "fair use." That doctrine makes it legal to use the material without a license for commentary and critique, to reference it for news reporting and education, and to transform it into something new and distinct that serves a purpose different from the original form. Both Anthropic and Meta argued that training their LLMs on copyrighted material didn't violate the Copyright Act because the models transformed the original authors' content into something new.
In his ruling, Judge Alsup reasoned that Anthropic's use of books was "exceedingly transformative" and therefore qualified as fair use under the Copyright Act. Rosenberg and Sarnow said it's too soon to tell how courts will ultimately rule on the issue. In cases where a "transformative" use is being used as a defense, LLM defendants need to show that their use of copyrighted material did not disrupt the market for the authors' original works.

Judge Chhabria criticized Alsup's ruling, calling his analysis incomplete for "brushing aside" such market concerns. "Under the fair use doctrine, harm to the market for the copyrighted work is more important than the purpose for which the copies are made," Judge Chhabria said.

Anthropic still faces some other big legal challenges. Reddit sued the company earlier in June. The suit alleges Anthropic intentionally scraped Reddit users' personal data without their consent and then put their data to work training Claude. Anthropic is also defending itself against a suit from music publishers, including Universal Music Group (0VD.F), ABKCO, and Concord, alleging that Anthropic infringed on copyrights for Beyoncé, the Rolling Stones, and other artists as it trained Claude on lyrics to more than 500 songs.

The company faces more peril in the case where a judge determined it must face claims from authors that it infringed on their copyrights by using a pirated library of more than 7 million books. For copyright infringement, willful violations can result in statutory fines up to $150,000 per violation. If Anthropic were found liable for intentionally misusing the 7 million books at issue in its case, the maximum penalties could end up north of $1 billion. Three authors brought the case and have asked the court to let them pursue their claims as a class action. The judge's decision on the class certification request is pending. "The judge did not give Anthropic a free pass," Rosenberg said.
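For rough context only, here is a back-of-envelope sketch of that exposure, assuming the $150,000-per-work statutory cap for willful infringement cited above; the work counts are purely illustrative, and courts rarely award anywhere near the maximum:

    # Illustrative only: theoretical ceiling on statutory damages,
    # assuming the $150,000-per-work cap for willful infringement.
    MAX_STATUTORY_PER_WORK = 150_000  # dollars

    def max_exposure(num_works: int) -> int:
        """Theoretical maximum statutory damages for num_works infringed works."""
        return num_works * MAX_STATUTORY_PER_WORK

    print(f"${max_exposure(7_000):,}")      # $1,050,000,000 -- already past $1 billion
    print(f"${max_exposure(7_000_000):,}")  # $1,050,000,000,000 -- all 7 million books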
