
Latest news with #Vermillio

Investors back start-ups aiding copyright deals to AI groups

Business Mayor

13-05-2025


Investors are backing a crop of start-ups helping the creative industries sell content to artificial intelligence groups, as OpenAI, Meta and Google face scrutiny over the use of copyrighted material to train AI models.

Fledgling groups such as Pip Labs, Vermillio, Created by Humans, ProRata, Narrativ and Human Native are building tools and marketplaces where writers, publishers, music studios and moviemakers can be paid for allowing their content to be used for AI training. These content licensing and data marketplace start-ups have secured $215mn in funding since 2022, according to industry data. Over this time, AI companies have sought media deals to obtain high-quality training data, which can also help them avoid copyright lawsuits or regulatory action.

'The licensing of content that doesn't exist on the open internet is going to be a big business,' said Dan Neely, chief executive and co-founder of Vermillio, which works with major studios and music labels such as Sony Pictures and Sony Music. The start-up, which detects whether AI outputs contain copyrighted content as well as licensing content, projects the AI licensing market to expand from about $10bn in 2025 to $67.5bn by 2030. Sony Music and DNS Capital led Vermillio's latest $16mn funding round in March.

The number of AI licensing deals has risen in the past year, with a record 16 agreed in December 2024, according to data from the Centre for Regulation of the Creative Economy at the University of Glasgow. ChatGPT maker OpenAI and AI search engine Perplexity have each made more than 20 deals with media groups since 2023, particularly with news organisations.

'You need three things to build AI models: talent, compute and data,' said James Smith, chief executive and co-founder of UK-based Human Native.
'[AI companies] have spent millions on the first two. They're just getting around [to] spending millions on the third.'

Andreessen Horowitz raised $80mn for Pip Labs in August. In November, ProRata was valued at $130mn after signing licensing deals with major UK publishers such as The Guardian and the Daily Mail owner DMG Media.

The investment deals come amid global scrutiny over what data is used to train AI models. The UK is weighing relaxing copyright rules for AI training, but tech companies such as OpenAI and Google are facing attempts to force them to pay more for valuable content through lawsuits in the US and new regulations in the EU. Meta this month faced authors in a US court in one of the first big tests of whether AI groups should pay for copyrighted training data scraped from the internet. OpenAI, which has done numerous data-sharing deals, including with the Financial Times, is still facing copyright lawsuits from some media groups, including the New York Times.

Jason Zhao, the co-founder of Pip Labs, which uses blockchain technology to track and license intellectual property, said: 'Instead of trying to spend a ton of time changing the law to fit, what we're trying to do is show that this is just a better solution that both AI companies and IP holders would rather use.'

Stability AI, which is being sued by artists who claim the company used their intellectual property to train its models, is looking into starting its own licensing marketplace, says its chief executive Prem Akkaraju. '[It's] something we're working on, where artists can actually have a marketplace or a portal where they can say, "hey, you could train on this",' said Akkaraju. 'I think it's really smart.'

The nascent AI training data marketplace faces several challenges. The start-ups need to find enough data set providers to create a working business model.
They also need to find data of high enough quality, and make it easily and quickly available. Many online data sets include unwanted content, such as child sexual abuse material or other harmful material, which could expose companies to reputational harm or litigation.

Another obstacle will be convincing artists and creatives that selling their content to train AI models will be beneficial. 'So many of the companies and creators we talk to don't yet have confidence in the technical solutions that are either out there or being developed,' said Gina Neff, professor of responsible AI at Queen Mary University of London. 'It feels like a really bad trade-off to them.'

But Human Native's Smith said: 'We can't have a situation where we decimate industries that we hold dear, like journalism or music. We have to find a way to make this work.'

Exclusive: Sony Music backs AI rights startup Vermillio

Axios

03-03-2025


Vermillio, the Chicago-based AI licensing and protection platform, has raised a $16 million Series A co-led by DNS Capital and Sony Music, executives exclusively tell Axios.

Why it matters: Sony Music's first investment in AI licensing seeks to protect its artists and support them in responsibly using generative AI tools.

How it works: Vermillio's TraceID tool monitors online content for use of intellectual property, as well as name, image and likeness. The platform can automatically send takedown requests and manage payments for licensed content. The company charges $4,000 per month for the software and takes a transaction fee for its licensing tool. Clients include movie studios like Sony Pictures, record labels like Sony Music, talent agencies like WME, as well as individual talent. With Sony Pictures, Vermillio let fans create AI-generated Spider-Verse characters, and it partnered with The Orb and David Gilmour, alongside Sony Music and Legacy Recordings, on AI tools for creating tracks and artwork inspired by "Metallic Spheres In Colour."

Context: CEO Dan Neely has worked in AI for more than 20 years. The serial entrepreneur sold his last startup, Networked Insights, to American Family Insurance in 2017 and founded Vermillio in 2019. He says he was inspired to build the "guardrails for generative internet" after seeing the release of the deepfake creation software DeepFaceLab and rapper Jay-Z's efforts to take down a deepfake of himself.

Flashback: Vermillio previously raised $7.5 million in seed funding from angel investors. Dennis Kooker, president of global digital business at Sony Music Entertainment, says he was introduced to Neely about a year and a half ago and was impressed by his knowledge and the startup's strategy. "The first project we did together was a proof of concept with David Gilmour and The Orb to show and highlight that intellectual property and generative AI can work hand in hand," Kooker says.
"Training the right way, ethically and principally, can be accomplished."

Zoom out: Some companies like Sony Music are pursuing legal action in cases where generative AI impacts the core of IP businesses. These companies want to protect and monetize creators and content along with nearly every other aspect of their businesses. Sony Music, along with Universal Music Group and Warner Records, sued AI startups Suno and Udio for copyright infringement. But content companies also want to embrace these technologies. Artists can use the tech for their own content creation and for fan engagement.

What's next: Neely says Vermillio plans to expand to sports and work with major sports leagues this year. It's also releasing a free version of the product that shows whether someone is at high or low risk of AI copyright infringement.

Scarlett Johansson Is Right: Tech Platforms Shouldn't Profit From Her Likeness In Unauthorized AI

Yahoo

19-02-2025


Editor's note: Dan Neely is a serial entrepreneur who was recently named to the Time 100 list of the most influential people in AI. He is CEO of Vermillio, a company that uses AI tools to identify and eliminate unauthorized content for clients that include public figures, performers, creators and IP owners. He advocates for greater IP protection and monetization standards around AI.

Scarlett Johansson has stepped up once again with a strong statement on unauthorized AI. This time, she called out the use of her likeness in an AI-generated video addressing Kanye West's latest hate speech. She's right: it's not only about the message itself or the misuse of her own image, but about an unchecked technology rapidly reshaping our reality with little to no guardrails in place. Time is of the essence to get national AI policies right and establish industrywide standards. Leading talent, like Johansson, have important voices and critical leverage to counterbalance the influence of the biggest tech platforms.

Social media platforms are profit-driven ventures designed to maximize engagement. This means algorithms prioritize virality over veracity. The most interesting content is often about the most interesting people, which includes the world's most famous celebrities. AI-generated content featuring talent, whether it's deepfake videos, fabricated audio, or AI-written misinformation, can spread like wildfire, good or bad. This particular video was all over the internet: Facebook, TikTok, YouTube, Instagram, and more. There were no mechanisms in place at the platforms to stop it from spreading.
While the platforms each theoretically have the ability to act, history tells us they won't unless compelled to do so. We can't expect them to police themselves.

From IP rights to fair compensation, the entertainment industry has long fought to protect talent from exploitation at the hands of tech platforms. They've generally reacted, rather than led. In the new era of AI, these battles take on an entirely new urgency. Unauthorized AI-generated content isn't just an inconvenience; it's an existential threat to personal identity, professional livelihoods (of both talent and IP holders), and even democratic institutions. History has shown that platforms will leverage the world's leading talent and most beloved IP without sharing a fair piece of the growing pie. If we don't act now, we risk ceding control of reality itself to technology that can fabricate and spread convincing falsehoods at scale.

As the CEO of an AI licensing and protection platform, of course I believe that this remarkable technology isn't inherently bad. With proper guardrails, AI regulated by AI offers a powerful tool to talent and creators. Instead of having her voice used without her consent, what if Scarlett Johansson wanted to license it for an educational tool for children or for a cause of her choice? Consider a podcaster who wants to license their AI-generated likeness to create promotional material for lucrative compensation that fuels their creativity. These are positive applications of AI that could empower talent rather than exploit them. But this only works if real technology guardrails, industrywide standards, and the proper legislation are all in place.

The leverage of leading talent, like Johansson, is critical. Regulation cannot be left to the discretion of tech CEOs, whose incentives are not always aligned with Hollywood's or the general public's. Hollywood, Washington and the tech industry are all circling AI.
Hollywood's leading voices need to fight for a spot at the table when it comes to passing legislation that regulates the spread of AI-generated content. All parties need to come together to establish clear and enforceable guardrails. That means legislation ensuring consent in AI-generated likeness and voice replication, meaningful data protections, and regulated platform accountability for the spread of unauthorized content.

The recent decision in the Thomson Reuters case shows that courts are beginning to take AI-related copyright issues seriously. Talent must leverage this momentum to ensure their voices are heard and are shaping these decisions. If talent doesn't step up, tech platforms will dominate the conversation, and history has shown that when that happens, creators and the public more broadly are left with the smallest piece of the pie.

Johansson's voice carries weight, but she shouldn't have to fight this battle alone. More people, especially the most recognizable faces who drive so much of the most viral content, need to speak up. The technology exists to monitor and enforce regulations on the biggest platforms, so let's require the platforms to use it. The time to act is now, before tech platforms drown out the voices of talent and we collectively lose our ability to distinguish fact from fiction.
