Artists revolt against Spotify over CEO's investment in AI warfare


Euronews • 30-07-2025
The prolific Australian psych-rock group King Gizzard & the Lizard Wizard is the latest band to cut ties with Spotify in protest at CEO Daniel Ek's deepening involvement with the arms industry, specifically his investment in a controversial AI-driven military tech firm.
Ek co-founded the investment firm Prima Materia, which has invested heavily in Helsing, a German company developing AI for use in warfare, including drone technology.
The Financial Times recently reported that Prima Materia led a €600 million funding round for Helsing and had previously backed the company before Russia's 2022 invasion of Ukraine.
The news has sparked strong backlash from musicians who say they no longer want to be associated with a platform whose profits are being funnelled into weapons development.
King Gizzard & the Lizard Wizard, known for hits like 'Work This Time' and 'Robot Stop', have removed nearly all of their music from Spotify, leaving up only a few releases bound by existing licensing deals. They announced the decision on Instagram, stating their new demos were available 'everywhere except Spotify', adding 'f*** Spotify.'
Other artists have taken similar action. American indie group Deerhoof posted a statement saying they don't want their 'music killing people' and described Spotify as a 'data-mining scam'. Experimental rock group Xiu Xiu also criticised the platform, calling it a 'garbage hole armageddon portal', and urged fans to cancel their Spotify subscriptions.
These protests add to a growing list of controversies and concerns surrounding the streaming platform. Spotify recently came under fire after allowing an AI-generated band called Velvet Sundown, which has managed to rack up millions of streams, to appear on its platform with a 'verified artist' badge.
Euronews Culture's very own music aficionado David Mouriquand described it as "a prime example of autocratic tech bros seeking to reduce human creation to algorithms designed to eradicate art."
He added: "When artists are expressing real, legitimate concerns over the ubiquity of AI in a tech-dominated world and the use of their content in the training of AI tools, the stunt comes off as tone-deaf. Worse, morally shameless."
And while Spotify announced in its Loud & Clear 2024 report that it paid over $10 billion (€9.2 billion) to the music industry in 2024 alone, critics argue that most of those payouts go to just a small percentage of top artists and labels, and that the platform still underpays and exploits the vast majority of musicians.
Icelandic musician Björk put it most bluntly: 'Spotify is probably the worst thing that has happened to musicians.'

Related Articles

Grok, is that Gaza? AI image checks mislocate news photographs

France 24 • 2 hours ago


But when social media users asked Grok where it came from, X boss Elon Musk's artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago. The AI bot's false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when internet users are increasingly turning to AI to verify images, the furore shows the risks of trusting tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018. In fact, the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025. Before the war, sparked by Hamas's October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Challenged on its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources." The chatbot eventually issued a response that recognised the error -- but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

Radical right bias

Grok's mistakes illustrate the limits of AI tools, whose functions are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technological ethics. "We don't know exactly why they give this or that reply, nor how they prioritise their sources," said Diesbach, author of a book on AI tools, "Hello ChatGPT". Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.

In the researcher's view, Grok, made by Musk's xAI start-up, shows "highly pronounced biases which are highly aligned with the ideology" of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo's origin takes it out of its proper role, said Diesbach. "Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine'." AI does not necessarily seek accuracy -- "that's not the goal," the expert said.

Another AFP photograph of a starving Gazan child by al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016. That error led internet users to accuse the French newspaper Libération, which had published the photo, of manipulation.

'Friendly pathological liar'

An AI's bias is linked to the data it is fed and what happens during fine-tuning -- the so-called alignment phase -- which determines what the model will rate as a good or bad answer. "Just because you explain to it that the answer's wrong doesn't mean it will then give a different one," Diesbach said. "Its training data has not changed and neither has its alignment."

Grok is not alone in wrongly identifying images. When AFP asked Mistral AI's Le Chat -- which is in part trained on AFP's articles under an agreement between the French start-up and the news agency -- the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts. "They are not made to tell the truth," but to "generate content, whether true or false", he said.

US government gets a year of ChatGPT Enterprise for $1

France 24 • 3 hours ago


Federal workers in the executive branch will have access to ChatGPT Enterprise in a partnership with the US General Services Administration, according to the pioneering San Francisco-based artificial intelligence (AI) company. "By giving government employees access to powerful, secure AI tools, we can help them solve problems for more people, faster," OpenAI said in a blog post announcing the alliance.

ChatGPT Enterprise does not use business data to train or improve OpenAI models, and the same rule will apply to federal use, according to the company.

Earlier this year, OpenAI announced an initiative focused on bringing advanced AI tools to US government workers. The news came with word that the US Department of Defense awarded OpenAI a $200 million contract to put generative AI to work for the military. OpenAI planned to show how cutting-edge AI can improve administrative operations, such as how service members get health care, and also has cyber defense applications, the startup said in a post.

OpenAI has also launched an initiative to help countries build their own AI infrastructure, with the US government a partner in projects. The tech firm's move to put its technology at the heart of national AI platforms around the world comes as it faces competition from Chinese rival DeepSeek. DeepSeek's success in delivering powerful AI models at a lower cost has rattled Silicon Valley and multiplied calls for US big tech to protect its dominance of the emerging technology.

The OpenAI for Countries initiative was launched in June under the auspices of a drive -- dubbed "Stargate" -- announced by US President Donald Trump to invest up to $500 billion in AI infrastructure in the United States. OpenAI, in "coordination" with the US government, will help countries build data centers and provide customized versions of ChatGPT, according to the tech firm.

Meta removes 6.8 million WhatsApp accounts linked to criminal scammers

Euronews • 6 hours ago


WhatsApp has taken down 6.8 million accounts linked to 'criminal scam centres' targeting people online around the world, its parent company Meta said. The account deletions, which Meta said took place over the first six months of the year, come as part of wider company efforts to crack down on scams.

In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as ongoing tests of alerts prompting users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world. Too-good-to-be-true offers and unsolicited messages attempt to steal consumers' information or money, with scams filling our phones, social media, and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centres, which often stem from forced-labour operations run by organised crime, and warned that such efforts often target people on many platforms at once in attempts to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, Meta said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps, as well as TikTok, Telegram, and AI-generated messages made using ChatGPT, to offer payments for fake likes, enlist people into a pyramid scheme, or lure others into cryptocurrency investments. Meta linked these scams to a criminal scam centre in Cambodia and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.
