
Record labels in talks to license music to AI firms Udio, Suno
The major music companies sued Udio and Suno last year, accusing them of copyright infringement. The Recording Industry Association of America, a trade group for record labels, sought as much as $150,000 per work infringed, which could total billions of dollars.
The music companies and the AI startups are talking to see if they can agree on terms rather than continue to fight in court. The negotiations are happening in parallel, creating a race of sorts to see which AI company and label will strike a deal first. The talks are complicated because the labels are pushing for greater control over the use of their work, while Udio and Suno are seeking flexibility to experiment and want deals at a price reasonable for startup companies.
Udio and Suno didn't respond to requests for comment. The three music companies declined to comment.
Udio raised $10 million last year from a group of investors that included venture capital firm Andreessen Horowitz, while Suno raised $125 million in a round that included Lightspeed Venture Partners.
AI companies have battled major media companies over whether they need to pay to train their large language models on copyrighted work. They've argued that the training is covered under fair use, while rights holders say they must be compensated. The New York Times Co. sued OpenAI, which has struck licensing deals with companies such as News Corp., the Associated Press and Vox Media.
The music industry has wrestled with how best to respond to the rise of AI technology. Industry sales have grown over the last decade, but have yet to fully recover from the damage wrought in the early days of the internet, when file sharing sites and piracy decimated sales. Record companies have battled every new technology, from file-sharing to user-generated content to streaming. But streaming services such as Spotify Technology SA have helped revive the industry.
The industry has expressed a desire to protect its copyrights while also leaning into new technology.
'The music community has embraced AI, and we are already partnering and collaborating with responsible developers to build sustainable AI tools centered on human creativity that put artists and songwriters in charge,' Mitch Glazier, chief executive officer of the RIAA, said when it filed the suit. 'But we can only succeed if developers are willing to work together with us.'
With assistance from Rachel Metz.
Related Articles


Forbes
New Study Reveals True AI Capabilities And Job Replacement Risk
The OECD has unveiled groundbreaking AI Capability Indicators that map artificial intelligence progress against human abilities across nine key domains, revealing where AI currently stands and what's coming next.

Imagine trying to navigate the digital transformation of your business using a compass that only points to "somewhere north." That's essentially what we've been doing with AI assessment until now. While tech companies have been throwing around impressive-sounding claims of superhuman performance in narrow tasks, business leaders and policymakers have been left squinting through the hype, trying to figure out what any of it actually means for the real world.

The OECD has just delivered something we've desperately needed: a proper GPS system for AI capabilities. Its new AI Capability Indicators represent the most comprehensive attempt yet to create a standardized framework for understanding what AI can actually do compared with human abilities. Think of it as moving from vague headlines about an "AI breakthrough" to a detailed performance review that actually tells you something useful about real-world capabilities.

Unlike the typical parade of cherry-picked benchmarks that dominate tech headlines, the OECD's approach cuts through the marketing noise. It has developed nine distinct capability scales that map AI progress against fundamental human abilities: Language, Social Interaction, Problem Solving, Creativity, Metacognition and Critical Thinking, Knowledge and Memory, Vision, Manipulation, and Robotic Intelligence. Each scale runs from Level 1 (basic, solved problems) to Level 5 (full human equivalence), with clear descriptions of what AI systems can actually accomplish at each stage.

What makes this particularly helpful is how it sidesteps the technical jargon that usually makes AI assessment reports about as accessible as quantum physics textbooks. Instead of drowning in discussions of transformer architectures or neural network parameters, you get straightforward descriptions, such as whether an AI system can "adapt teaching methods to meet students' varying needs" or "handle objects of diverse shapes and materials in cluttered environments."

The methodology behind these indicators is equally impressive. More than 50 experts across computer science and psychology spent five years developing the framework, combining rigorous academic research with practical, real-world applications.

Here's where things get interesting, and perhaps a bit sobering for those caught up in the AGI hype cycle. The assessment reveals that current AI systems cluster around Levels 2 and 3 across most capabilities. We're not at the finish line; we're not even close to it.

Large language models like ChatGPT score at Level 3 for language capabilities, meaning they can understand and generate semantic meaning with sophisticated knowledge, but they still struggle with analytical reasoning and have a persistent habit of confidently stating complete nonsense. It's like having a brilliant conversationalist who occasionally insists that gravity flows upward.

In social interaction, even the most advanced systems barely reach Level 2. They can combine simple movements to express emotions and learn from interactions, but they're essentially sophisticated actors with no real understanding of the social dynamics they're performing.

The vision capabilities tell an equally nuanced story. While AI can handle variations in lighting and target objects, performing multiple subtasks with known data variations (Level 3), it's still leagues away from the adaptable, learning-oriented visual intelligence that characterizes the higher levels.

For business leaders, this framework offers something genuinely valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises that an AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels the system actually achieves, and in which specific domains. With standardized benchmarks in place, the gap between current AI capabilities and the requirements of specific business tasks becomes much clearer.

Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about how much human oversight will still be necessary. It's the difference between using AI as a sophisticated tool and expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.

The framework also reveals opportunities that might not be immediately obvious. Areas where AI performs at Level 3 or higher represent genuine automation potential, while Level 2 capabilities suggest powerful augmentation opportunities. Smart businesses will use this intelligence to identify the low-hanging fruit while preparing for the longer-term implications of advancing capabilities.

Perhaps nowhere are the implications more immediate and profound than in education. The report's analysis of teaching capabilities reveals why educators feel simultaneously excited and terrified about AI's expanding role in classrooms. Many core teaching tasks require capabilities at Levels 4 and 5, particularly adapting instruction to individual student needs or managing the complex social dynamics that make learning environments work.

This creates a fascinating paradox worthy of a philosophy textbook: AI might be able to deliver standardized instruction more efficiently than humans, but the most transformational aspects of teaching (the inspiration, emotional connection, and creative problem-solving that actually change lives) remain firmly in human territory.

The implications suggest we're heading toward a hybrid model that could fundamentally reshape education: AI handles routine instructional delivery, assessment, and administrative tasks, while humans focus on motivation, emotional support, creative problem-solving, and the kind of inspirational mentoring that turns students into lifelong learners. This isn't displacement; it's specialization at a scale we've never seen before.

The OECD's systematic approach provides something invaluable for strategic planning: a clear picture of which breakthrough capabilities we should be monitoring. A jump from Level 3 to Level 4 across multiple domains would represent a genuine inflection point, particularly in areas like creative problem-solving and social intelligence.

What's especially revealing is how the framework illuminates the interconnectedness of different capabilities. True robotic intelligence, for instance, requires simultaneous advances across multiple domains: you can't have Level 5 robotic intelligence without corresponding progress in vision, manipulation, social interaction, and problem-solving. The framework also highlights capability areas where progress might stall or slow dramatically; social interaction and creativity appear to have particularly steep curves between current performance and human-level capability.

What the OECD has created is essentially a report card system for the AI age. Instead of being swept along by breathless predictions about artificial general intelligence arriving next week, we now have a framework for systematically tracking progress and understanding its real-world implications.

For businesses, this means more informed decisions about where to invest in AI capabilities and where to double down on human talent development. For policymakers, it provides a foundation for regulation and workforce planning grounded in evidence rather than science fiction. For educators, it offers a roadmap for preparing students for a world where human and artificial intelligence must work together effectively.

The OECD framework doesn't predict exactly when AI will achieve human-level performance across all domains; that's still anyone's guess. Instead, it provides a common language for discussing AI capabilities and a systematic way to track progress that everyone, from CEOs to school principals, can understand and use. In a field notorious for moving fast and breaking things, a reliable measurement system might be just what is needed.
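The gap analysis described above — comparing the capability levels a system has reached against the levels a task requires — can be sketched in a few lines of code. This is a hypothetical illustration, not an OECD tool: the nine domain names and the Level 1–5 scale come from the article, but the specific scores and task requirements below are invented for demonstration.

```python
# Hypothetical sketch of an OECD-style capability gap check.
# Domain names and the 1-5 scale follow the article; the example
# scores and requirements are invented, not official assessments.

CAPABILITY_DOMAINS = [
    "Language", "Social Interaction", "Problem Solving", "Creativity",
    "Metacognition and Critical Thinking", "Knowledge and Memory",
    "Vision", "Manipulation", "Robotic Intelligence",
]

def capability_gaps(system_levels, task_requirements):
    """Return the domains where a system falls short of a task's
    required level, mapped to the size of the shortfall."""
    return {
        domain: task_requirements[domain] - system_levels.get(domain, 1)
        for domain in task_requirements
        if system_levels.get(domain, 1) < task_requirements[domain]
    }

# Invented profile of a current chatbot vs. an invented profile of
# what a customer-service role might demand.
chatbot = {"Language": 3, "Social Interaction": 2, "Problem Solving": 2}
customer_service = {"Language": 3, "Social Interaction": 3, "Problem Solving": 3}

print(capability_gaps(chatbot, customer_service))
# prints {'Social Interaction': 1, 'Problem Solving': 1}
```

An empty result would suggest automation potential; nonzero gaps, as here, point to augmentation with human oversight rather than replacement.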


Business Insider
Quanta Services price target raised to $414 from $364 at Goldman Sachs
Goldman Sachs raised its price target on Quanta Services (PWR) to $414 from $364 and kept a Buy rating on the shares. The firm is positive on the stock and its trading multiple, believing that over the longer term, the substantial volume of transmission and distribution spending will continue to support shares even in periods of AI-headline-driven softness, the analyst tells investors in a research note. Goldman adds, however, that investors now have a relatively better understanding of the fundamental drivers of estimates and recognize that under 10% of revenue comes from data centers.
Yahoo
Austin city manager removes item on automatic license plate readers from council agenda
The Brief: An item regarding the automated license plate reader program was removed from the Austin City Council agenda. The city manager cited concerns expressed by residents during Tuesday's work session. Testimony focused on the city's vendor, Flock, which works with ICE and uses AI.

AUSTIN, Texas - Austin's city manager has removed an item concerning the city's automated license plate reader (ALPR) program from Thursday's city council agenda.

What we know: T.C. Broadnax said in a statement Tuesday night that staff will be withdrawing Item 67, a proposed extension of the city's ALPR program, from Thursday's agenda. Broadnax cited concerns expressed by Austin residents during the council's work session on Tuesday as the reason behind his decision.

What they're saying: "Given concerns expressed today, I have decided to withdraw this item from the agenda at this time to provide more opportunities to address council members' questions and do our due diligence to alleviate concerns prior to bringing this item back to City Council for consideration," Broadnax said in his message to the Mayor and Council.

Local perspective: A press release from the office of council member Mike Siegel says that dozens of residents showed up to the work session to testify about the program. "The speakers overwhelmingly testified against the use of ALPRs, citing concerns about personal privacy, threats to immigrant families, threats to political dissidents, and more. Much of the testimony focused on the City's ALPR vendor, Flock, which works closely with Immigration and Customs Enforcement (ICE) and also uses artificial intelligence (AI) to develop profiles of vehicles based on camera footage and other sources," said the release.

What's next: Broadnax's decision effectively means Austin's ALPR program will end on June 30. A press conference is scheduled for Wednesday, June 4 at 11:30 a.m., where immigration, reproductive rights, and data privacy advocates will join Mayor Pro Tem Vanessa Fuentes, Council member Zo Qadri, and Siegel. They are expected to speak against the use of ALPRs and mass surveillance tools in Austin.

The Source: Information in this report comes from a release from Austin City Council member Mike Siegel's office.