Latest news with #AIgenerated


The Guardian
a day ago
- Business
- The Guardian
AI, bot farms and innocent indie victims: how music streaming became a hotbed of fraud and fakery
There is a battle gripping the music business today around the manipulation of streaming services – and innocent indie artists are the collateral damage. Fraudsters are flooding Spotify, Apple Music and the rest with AI-generated tracks, trying to hoover up the royalties generated by people listening to them. These tracks are cheap, quick and easy to make, with Deezer estimating in April that over 20,000 fully AI-created tracks – 18% of new tracks – were being ingested into its platform daily, almost double the number in January.

The fraudsters often then use bots, AI or humans to endlessly listen to these fake songs and generate revenue, while others are exploiting upload services to get fake songs put on real artists' pages and siphon off royalties that way. Spotify fines the worst offenders and says it puts 'significant engineering resources and research into detecting, mitigating, and removing artificial streaming activity', while Apple Music claims 'less than 1% of all streams are manipulated' on its service. That may sound encouraging, but in a streaming business worth $20.4bn globally (according to the IFPI), it's likely that hundreds of millions of dollars are being skimmed off annually by rogue operators.

Part of the problem is that while the barriers to entry for musicians have been dramatically lowered – uploading songs to streaming services is much easier than manufacturing CDs or vinyl – the barriers to entry for fraudsters are lower too. So the industry has declared war, but hair-trigger automated detection systems mean that artists are seeing their music taken offline even when they've done nothing wrong. Darren Owen, chief operating officer of music distribution company Fuga, suggests streaming fraud 'started to blow up across the industry' around 2021. Grappling with it now makes up 50% of his workload.
Using AI and machine learning, Fuga gives streaming patterns a 'severity score', separating out 'non-human listening patterns' to spot fraud. 'You're not going to listen to the same song at the same time across multiple devices,' Owen says, noting that countries like India, Vietnam, Thailand and parts of eastern Europe are hotbeds of click-farm activity, using low-paid workers. 'It's become clear that organised criminals are involved in it as well.'

It is not just stream-boosting services – such as one in Germany that was taken offline after a court injunction, and others in Canada and Brazil – that are being targeted by record industry trade bodies for offering artificially inflated streams. Universal Music Group (UMG), the biggest record company in the world, has been accused by Drake of conspiring to increase the play count of Kendrick Lamar's diss track Not Like Us, an allegation UMG denies.

The Guardian has spoken to multiple artists who found themselves at the sharp end of this war on manipulation, where unexpected spikes in streams get taken as proof of guilt. Darren Hemmings is managing director of music marketing company Motive Unknown and a musician himself. His distributor recently informed him that a track on one EP, having jumped from 'a few plays a day' to more than 1,000, was guilty of manipulation. 'I wouldn't blame them for drawing that conclusion,' he says, but 'it's very judge, jury, executioner'. He did not manipulate the streams and could not identify the root cause – other than the track simply becoming popular with actual listeners.

Northern Irish rock band Final Thirteen had some of their music taken off streaming services after a spike in the tens of thousands of streams. They suspect this came after a play on Radio 1, but their distributor automatically concluded the streams were manipulated. 'It's really hard for any artist to prove that they didn't [manipulate streams], but it's even harder for Spotify to prove that they did,' says their drummer, Doobes.
'[They] take it down and that's it.' Adam J Morgan, who makes music as indie act Naked & Baked, had a track get over 10,000 streams in a week, possibly from use in a TikTok video, but it was zapped by his distributor RouteNote as suspicious. 'I hadn't done anything wrong and they didn't provide any evidence,' he says, believing it was down to an overly anxious RouteNote. 'I spent that weekend trying to work out what was going on, but Spotify said my music hadn't been flagged at all.' RouteNote did not respond to a request for an interview.

Takedowns can cause musicians inconvenience, derail marketing and cost them money. Matthew Whiteside, artistic director of experimental classical event The Night With… (and head of the TNW Music label) had three different albums taken down amid claims of artificial streaming. He tracked it back: TNW Music tracks were being added to manipulated playlists. 'It made no sense [why they were added] based on the genre.' His distributor said he could resubmit the albums, at $40 per album, but with no guarantee they would not be removed again. 'Streaming in general is geared against the smaller and the niche,' he says. 'If we get 1,000 streams a month on an album, I'd be very happy.' As such, paying to resubmit an album is beyond their release budget.

Deezer claims it was the first streaming service to implement fraud detection systems. 'We look at a lot of indicators that help our algorithm decide if a user is fraudulent or not,' says Thibault Roucou, the company's royalties and reporting director. 'When we ask for a takedown, we look manually at what's happening and we're very confident that it is extreme manipulation.' Unfortunately, systems elsewhere for taking down tracks often presume guilt, and the appeals process is so arduous that small acts, already struggling, just give up.
Pop singer Levina, who represented Germany at Eurovision in 2017, saw her music taken off streaming services without warning – it was flagged because she unintentionally had the same name as another artist. 'With streaming services, it's almost impossible to [appeal] through them,' she sighs. 'You fill out a form but it leaves you quite powerless.' She is also chair of the artist council within trade body Featured Artists Coalition, and they are finalising 'minimum standards for what distributors should be doing'. She proposes a traffic light warning system that allows acts time to present their defence or take action to address the problems. Streaming services and distributors now accept this battle is about containment rather than total elimination. Owen, however, says the latest iteration is not fraudsters manipulating the streams of a few tracks by large amounts, but rather boosting multiple tracks a small amount to fly under detection radars. For Hemmings, this could result in a two-tier streaming economy, with smaller acts abandoning the main streaming platforms, where earnings are derisory anyway, to focus on a service such as Bandcamp. 'This could provoke a conclusion among large swathes of the independent music community that they're just better off focusing on other ways to make money.'


Forbes
2 days ago
- Business
- Forbes
B&J Isn't Selling 'Free Palestine' Ice Cream, But Will End Israel Deal
A widely circulated image of a "Free Palestine" tub of Ben & Jerry's ice cream is fake, and was made using ChatGPT. The picture shows a carton of ice cream with two smiling children wearing kaffiyehs, standing next to a plate of fish. Along with other faked images of pro-Palestine Ben & Jerry's packaging, including one labelled "From the River to Sea", it has been doing the rounds on social media - and attracting both approval and outrage. However, the image was in fact created and posted by pro-Palestinian Instagram user @iampocoloco. "I just had ChatGPT create this because I thought it'd look sweet," he wrote. Fact-checking organization PolitiFact has run the image through programs that identify whether images have been generated with artificial intelligence: WasItAI concluded that it was created with AI, and Hive Moderation put the chance it was AI-generated at 98.2%. Meanwhile, fact-checker NewsGuard has checked through Ben & Jerry's menu of flavors and social media accounts, without finding any flavors referring to Palestine or Israel. However, this didn't stop internet users sharing the image. One, pro-Israel X user @VividProwess, asked "Is this real? Wow. Shame on Ben & Jerry's", and received 484,000 views and 10,000 likes in one day, also posting "Ben & Jerry's openly supporting terrorism now." Meanwhile, TikTok user shewritestales wrote: "The poets and novelists in Gaza - from beneath the fire, the rubble and the ruins - extend their deepest thanks to you. Ben & Jerry's, the human company, we are proud of your support and the image you've raised that cries out on our behalf." Ben & Jerry's ice cream is sold in Israel by a completely independent firm, Ben & Jerry's Israel, which took over full local ownership of the brand in 2022.
However, said Ben & Jerry's in a statement last week, "We have been working to change this, and so we have informed our licensee that we will not renew the license agreement when it expires at the end of next year." It added: "We're a values-led company with a long history of advocating for human rights, and economic and social justice. We believe it is inconsistent with our values for our product to be present within an internationally recognized illegal occupation." Ben & Jerry's co-founders Ben Cohen and Jerry Greenfield have previously spoken out against the conflict and criticized the humanitarian conditions in Gaza. Last year, the firm sued its parent company, Unilever, claiming that Unilever had attempted to silence it. The company said it does plan to stay in Israel through a different, unspecified arrangement.


Forbes
28-05-2025
- Entertainment
- Forbes
‘Emotional Support Kangaroo' Video Goes Viral—But It's Completely Fake
The viral "emotional support kangaroo" posted by Infinite Unreality Infinite Unreality/Instagram You might have seen an unusual sight on your social media timeline lately—a clip of a woman trying to board a plane with her 'emotional support kangaroo,' arguing passionately with a stewardess. The clip ends with the camera zooming into the kangaroo's face, sparking questions. Who gave it an airplane ticket? Why isn't it eating the ticket? Are emotional support kangaroos a thing now? Of course, the video is fake—in fact, it's completely AI-generated. The kangaroo clip was originally posted on Instagram by a visual effects artist known as 'Infinite Unreality.' This account regularly posts surreal AI-generated videos, most of them featuring Infinite Unreality's logo, the infinity symbol, which can also be spotted in the kangaroo clip. If there is a niche for AI-generated art, perhaps it makes sense to lean into the uncanny and unsettling, as this plays to the strengths of the technology. In the case of the emotional support kangaroo, the video proved just weird enough to gain attention, but not weird enough to be immediately flagged as AI. After being posted on Instagram, the clip was reposted by DramaAlert to X (Twitter), where it was viewed more than 58 million times, according to X's metrics. Many internet users assumed the clip was real, and it's not hard to see why. After all, the internet is constantly collecting the weirdest moments of the real world, and people are especially weird when it comes to their pets. Several users turned the close-up of the cute, confused-looking kangaroo into a meme, seemingly without noticing that the animal was fully AI-generated. Without inspecting too closely, the video seems like just another strange crumb of content on the timeline. If one ignores the fact that the kangaroo is holding a little plane ticket, it's not a particularly uncanny clip. 
The fact that videos on X are automatically muted helped launch this particular clip into virality, as the sound quickly exposes the artifice—the two women in the clip are speaking in a nonsensical language. This appears to be intentional on the part of the video's creator, as today's video-generating tools are capable of generating decent audio that roughly matches the visual output. What's really striking about the AI-generated emotional support kangaroo is how easily it slips into the ambient noise of the internet, just another one of those silly stories we all constantly see on our timelines. Previously, I wrote about how Google's new video-generating tool, Veo 3, had attained photorealism, measured by the ability to generate a convincing clip of Will Smith eating spaghetti. It used to be fairly easy to spot AI-generated content, and if you look carefully, many of the clues are still there. The more Veo 3 clips one encounters, the easier it is to see the uncanny movements and oddities. Overall, however, the clues have greatly diminished, and in some of the clips, they just aren't there at all. A boundary has been crossed: it is now possible to generate a short video clip that is indistinguishable from real footage. Just like how the viral, AI-generated Balenciaga Pope ushered in a new era of AI-generated imagery that now swamps the internet, the emotional support kangaroo is the first viral video created by AI. It's hard to predict where things go from here—issues such as disinformation, non-consensual pornography and scams are almost certainly going to get much worse—but there's no way to tell how social media will be altered by the ability to create convincing footage from nothing. The internet is surely going to become stranger, even more detached from reality than it already is—the emotional support kangaroo is just the beginning.


The Verge
20-05-2025
- Entertainment
- The Verge
Chicago Sun-Times publishes made-up books and fake experts in AI debacle
The May 18th issue of the Chicago Sun-Times features dozens of pages of recommended summer activities: new trends, outdoor activities, and books to read. But some of the recommendations point to fake, AI-generated books, and other articles quote and cite people that don't appear to exist. Alongside actual books like Call Me By Your Name by André Aciman, a summer reading list features fake titles by real authors. Min Jin Lee is a real, lauded novelist — but 'Nightshade Market,' 'a riveting tale set in Seoul's underground economy,' isn't one of her works. Rebecca Makkai, a Chicago local, is credited for a fake book called 'Boiling Point' that the article claims is about a climate scientist whose teenage daughter turns on her. In a post on Bluesky, the Sun-Times said it was 'looking into how this made it into print,' noting that it wasn't editorial content and wasn't created or approved by the newsroom. Victor Lim, senior director of audience development, added in an email to The Verge that 'it is unacceptable for any content we provide to our readers to be inaccurate,' saying more information will be provided soon. It's not clear if the content is sponsored — the cover page for the section bears the Sun-Times logo and simply calls it 'Your guide to the best of summer.' The book list appears without a byline, but a writer named Marco Buscaglia is credited for other pieces in the summer guide. Buscaglia's byline appears on a story about hammock culture in the US that quotes several experts and publications, some of whom do not appear to be real. It references a 2023 Outside magazine article by Brianna Madia, a real author and blogger, that I was unable to find. The piece also cites an 'outdoor industry market analysis' by Eagles Nest Outfitters that I was unable to find online. Also quoted is 'Dr. Jennifer Campos, professor of leisure studies at the University of Colorado,' who does not appear to exist. 
Buscaglia did not immediately respond to a request for comment but admitted to 404 Media that he uses AI 'for background at times' and always checks the material. 'This time, I did not and I can't believe I missed it because it's so obvious. No excuses,' he told 404. 'On me 100 percent and I'm completely embarrassed.' Another uncredited article titled 'Summer food trends' features similar seemingly nonexistent experts, including a 'Dr. Catherine Furst, food anthropologist at Cornell University.' Padma Lakshmi is also attributed in the piece for a quote she doesn't appear to have said. News outlets have repeatedly run AI-generated content next to their actual journalism, often blaming the issue on third-party content creators. High-profile incidents of AI-generated content at Gannett and Sports Illustrated raised questions about the editorial process, and in both cases, a third-party marketing firm was behind the AI sludge. Newsrooms' defense is typically that they had nothing to do with the content — but the appearance of AI-generated work alongside real reporting and writing by human staffers damages trust all the same.

News.com.au
12-05-2025
- News.com.au
School IT worker in AI-generated child abuse material shock
A former Sydney high school IT worker created AI-generated child abuse material, which was unwittingly discovered by police when they began an investigation into his efforts to steal the personal data of colleagues. A court on Monday heard the case was believed to be a first for the state after Aaron Pennesi, 29, was convicted and sentenced for possessing child abuse material images which had been produced using artificial intelligence. Sydney's Downing Centre Local Court heard details of the perverse terms Pennesi used to prompt the program to create the images. The search terms he used are too graphic to publish. The court was told that Pennesi reported he did not have a sexual interest in children, that there was no evidence he had shared the images with anyone, and that he claimed he was motivated by testing the limits of the software. Pennesi's house of cards came crashing down early last year, when he came into the crosshairs of the NSW Police Cybercrime Squad after it was alerted to malware on the computers at Forest High School, where he worked as an IT administrator. The court was told that the software captured data from users and downloaded it in zip files to his computer. He says he deleted the material and never sought to use it for nefarious purposes; when police searched his computer, the information was found in the recycling bin on his hard drive. Material found on his computer included school employees' identification information. 'There was no evidence he did anything with it but the fact he captured it is a significant breach of the person's privacy,' Magistrate James Gibson said in his sentencing remarks on Monday afternoon. When police searched his computer, they found 54 child abuse material images. The material was not connected to any child at the school. He pleaded guilty to one count each of possessing child abuse material and unauthorised modification of data to cause impairment.
The court heard he had been suspended without pay by the Department of Education and that it was inevitable he would be sacked; he is now stacking shelves in a supermarket. His legal team argued his offending was related to his diagnoses of ADHD and autism spectrum disorder, which were now being treated. Mr Gibson said that while the child abuse material did not depict real victims, a stern message needed to be sent to the community. 'Child abuse material is abhorrent with a real child … and in my view artificially generated is equally abhorrent to the community and the message needs to be sent that will not be tolerated,' Mr Gibson said. Pennesi was facing a maximum of 10 years in jail.