
YouTube's AI Slop Is a Win for Alphabet. But What About Us?
Other AI-generated videos are starting to dominate the platform in much the same way they've proliferated across Facebook, Pinterest and Instagram. Several of YouTube's most popular channels now feature AI-generated content heavily.
I'd originally thought this would be a problem for YouTube as it grappled with what looked like a new form of spam, but the general lack of complaint from advertisers coupled with the gangbusters growth of AI content, and even appreciative comments from viewers, changed my view. It seems the public is happy to gorge on slop, and that's not a problem for Alphabet Inc.'s most valuable asset after Google Search. Quite the opposite.
Earlier this month, YouTube — which could surpass The Walt Disney Co. this year as the world's largest media company by revenue — updated its policies to strike a balance between letting AI-generated videos flourish on its platform and keeping it free of spam.
The new rules cut ad revenue from low-effort, repetitive content. Think of the many such channels, often run by the same person uploading dozens of videos a day. Their creators might use AI tools like ElevenLabs to create a synthetic voice that reads out a script, scraped from Reddit, over a slideshow of stock images. Some of these videos get hundreds of thousands of views.
The video platform's overall approach, however, is that AI-generated content is fine, so long as it's original, provides value to viewers and includes some human input. For now, it seems to be measuring that on a case-by-case basis, which is as good an approach as any with new tech. YouTube is also no stranger to fighting spam.
Indeed, the policy update seems to have put advertisers at ease, even as 92% of creators on the site use generative AI tools, according to the company. Advertisers have a tacit understanding that more AI on YouTube means more content, and more revenue. It helps that the industry has years of experience trying to monitor icky content — from racism to conspiracy theories — shown next to their brands online. They've learned it's a years-long game of Whac-A-Mole.
YouTube clearly wants AI content to thrive. Sister company Google has said that later this summer, it will bring its video-generation tool Veo 3 to YouTube Shorts, making it even easier to create lifelike AI videos of Storm Trooper vloggers or biblical characters as influencers. The company says AI will 'unlock creativity' for its creators.
But unlocking new forms of profit is more straightforward for Alphabet than it is for creators. Take Ahmet Yiğit, the Istanbul-based creator behind the viral pilot-baby video. Though his channel has racked up hundreds of millions of views, he's only received an estimated $2,600 for his most viral post, with the bulk of his audience coming from countries like India, where ad rates are low.
Yiğit says he spends hours on a single scene and juggles a dozen tools, suggesting that even this new generation of AI creators could end up working harder for less, while Alphabet reaps ad revenue from their output. As long as the content machine runs, it doesn't matter whether AI videos are quick and easy or grueling to make — only that they drive views and ads.
That's why YouTube is leaning harder into welcoming slop than policing it. While the company does require creators to say if their videos contain AI, the resulting disclaimer is listed in a small-text description that viewers must click through to read, making it tough to spot.(1) That does little to address the growing confusion around what's real and what's synthetic as more YouTubers race to capitalize on AI content.
The risk is that as slop floods our feeds and juices YouTube's recommendation algorithms, it'll drown out more thoughtful, human-made work. The earliest big YouTube hits were slices of life like the infamous 'Charlie Bit My Finger.' What happens when the next wave of viral hits has no bearing on reality, instead offering bizarre, dreamlike sequences of babies dressed as Storm Troopers, or Donald Trump beating up bullies in an alleyway?
Perhaps they will both reflect and deepen our sense of disconnection from real life. AI content might turn out to be a boon for YouTube, but it offers an unsettling future for the rest of us.
(1) That probably won't change for some time. Guidelines from the US Federal Trade Commission require YouTube videos to disclose if they include a paid promotion, but there's no similar legal obligation to disclose content that's AI generated.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of 'Supremacy: AI, ChatGPT and the Race That Will Change the World.'
More stories like this are available on bloomberg.com/opinion

Related Articles


Time of India
Australia bans YouTube for children under 16, joining TikTok, Instagram and Snapchat in age limits
Australia is going to ban children under 16 from creating YouTube accounts starting December, expanding its world-first social media restrictions. The ban, which already applies to TikTok, Snapchat, Instagram, Facebook and X (formerly Twitter), now includes YouTube following recommendations from the eSafety Commissioner. Authorities argue that YouTube, despite being considered primarily a video platform, exposes children to harmful content and risks similar to traditional social media platforms. Under the new rule, minors can still watch YouTube videos without an account but will lose features like personalised recommendations, posting content, and commenting. The move is aimed at protecting young Australians from online harm, cyberbullying and mental health issues linked to excessive social media use, setting an example for stricter global digital safety standards.

Australian government acts to safeguard children under 16 from online harm

As reported by Euronews, Australia's Prime Minister Anthony Albanese emphasised that the government is prioritizing children's safety and mental well-being in the digital era. He stated: 'We know that social media is doing social harm. My government and this parliament are ready to take action to protect young Australians.' This move follows mounting evidence that unrestricted access to online platforms can lead to issues such as cyberbullying, exposure to explicit content, online grooming, and excessive screen time that impacts mental health.

YouTube's widespread use among children prompts safety concerns

For years, YouTube was treated differently from other social platforms because it primarily serves as a video streaming service rather than a social networking site. However, its widespread use among children and the presence of harmful content prompted authorities to reconsider. According to the eSafety Commissioner, three out of four Australian children aged 10–15 regularly use YouTube, making it more popular than TikTok and Instagram. Alarmingly, 37% of children who reported exposure to harmful online content said they encountered it on YouTube. The commissioner concluded that granting YouTube an exemption was inconsistent with the goal of protecting minors, leading to its inclusion in the ban. Children under 16 will still be able to view videos without an account but will lose access to personalised recommendations, commenting, and content creation features.

Public support for age restrictions

A survey conducted last month involving nearly 4,000 Australians revealed that nine in ten people support some form of 'age assurance' for social media platforms. This widespread public backing reflects growing societal concerns about protecting children's mental health, limiting their exposure to inappropriate content, and reducing online exploitation risks. Australia's decision is among the strictest social media regulations in the world, potentially influencing other countries to adopt similar measures. It also raises questions about digital freedoms, parental responsibility, and how tech companies will adapt to stricter compliance standards. While critics argue such bans might limit digital learning and social connectivity, supporters believe strong guardrails are necessary in a digital environment dominated by algorithms designed to maximise engagement, often at the expense of young users' well-being.


Hindustan Times
YouTube is fine with the F-word for monetisation, as long as it's…
Until now, if you were a YouTube creator whose content included profanity in the first seven seconds, YouTube restricted your ability to fully monetise it: the video carried the yellow dollar icon rather than the full green one. But now Conor Kavanagh, Head of Monetisation Policy Experience at YouTube, has announced on the Creator Insider channel that creators can fully monetise their videos even if they include profanity in the first seven seconds. This opens up new monetisation possibilities for creators.

So why the change now? Kavanagh explains that advertising policies have evolved, and with them advertisers' expectations. Previously, there was an expectation of a gap between the actual profanity and the ad that was going to be displayed. Now advertisers are free to choose the level of profanity they are comfortable having their ads appear alongside.

What counts as profanity? YouTube also explains exactly which words count. Words like 'assh*le' are moderate profanities, while strong profanities include words like 'f*ck.' YouTube says you can now use all of these words within the first seven seconds of a video without affecting monetisation. However, if you use profanity in thumbnails or subtitles, monetisation will remain limited. The video also goes in-depth on what can still result in a limited monetisation status: videos like a compilation of a character's top swear words from a specific TV show, or anything similar in which strong profanities are repeated, remain a violation of YouTube's advertiser-friendly guidelines.


Hindustan Times
No Instagram, no YouTube: Australia announces social media ban for kids under 16 and the internet has mixed feelings
In a world-first move, the Australian government has announced a sweeping social media ban for children under 16, and YouTube is now on the list. 'YouTube is not social media,' claimed the platform in a statement earlier this week. But Australia isn't buying it. After weeks of pushback from tech giants like Meta and Snapchat, who claim that YouTube functions similarly to their own platforms with algorithmic feeds, interactive tools, and comment sections, YouTube joins a growing list of popular social media platforms (TikTok, Instagram, Facebook, X, Snapchat) which have all been banned for users under 16, beginning December.

Australia announces social media ban for kids under 16

Prime Minister Anthony Albanese also released a statement saying, 'Protecting kids online means taking on some tough problems, so we're banning social media accounts for under-16s. The way these platforms are built can harm children while they're still finding their own way.' He also mentioned the names of three young teens who had lost their lives in social media-related incidents: Ollie, Liv and Tilly.

Everything to know

The ban, set to roll out later this year, requires tech companies to deactivate existing underage accounts, prevent new ones from being created, and fix any workarounds or risk fines of up to A$50 million (US$32.5 million). Teenagers will still be able to view YouTube content, but they won't be able to interact, comment, or post without an account. Online gaming, messaging apps, education, and health-related tools are exempted from the legislation, as officials say they carry 'fewer social media harms.' Globally, the move is gaining traction. Norway has already announced a similar restriction, and the UK says it's 'strongly considering' following suit. But the internet? It's not impressed.
'Ah yes do more to void parents of taking responsibility for what their children see and are influenced by…' one person posted on X, reflecting the growing concern that governments are overstepping. Another fumed, 'This makes me mad... how is the most kid-friendly platform being banned? First UK, now Australia? Why are leaders more worried about what we learn online rather than making the country bearable?' Someone else raged, 'What in the actual f*ck is going on in Western governments atm.' Other reactions focused on overreach: 'Yes, the biggest problem in Australia is under-16s watching game streams on YouTube.' One more read, 'Governments should not mess with teenagers or children. Parents are responsible about what kind of content they watch.' Another claimed, 'It's sad to see that Australia is joining the censorship bandwagon too.' One thing's for sure — this isn't just about Australia anymore. This is a global referendum on who controls kids' access to the digital world: parents, platforms, or governments. What do you think about this?