
YouTube rolls out AI search results for Premium users: Will it impact views, engagement?
Google is bringing AI-generated search results to YouTube as part of its broader effort to reinvent the traditional search experience by integrating generative AI across its entire ecosystem.
The AI-generated search results on the video-sharing platform will appear at the top of the results page. They will feature multiple YouTube videos along with an AI-generated summary of each one. Users can tap on a video's thumbnail to begin playing it directly from the search results.
The AI-generated summary accompanying each video will highlight the information most relevant to the user's search query. However, the AI-powered search experience is currently limited to YouTube Premium subscribers. It is also opt-in, meaning Premium subscribers have to manually enable it by visiting YouTube's experimental page.
The move signals Google's shift towards generative AI-based search and discovery, with AI-summarised answers replacing traditional links. Like AI Overviews in Google Search, the feature is designed to appear above organic search results, part of the company's strategy to get more of its users engaging with its AI systems.
'In the coming days, our conversational AI tool will be expanding to some non-Premium users in the US. Premium members already love it for getting more info, recommendations, and even quizzing themselves on key concepts in academic videos,' YouTube said in a blog post published on June 26.
While only YouTube Premium subscribers can currently choose to see AI-generated search results on the platform, it is likely that Google will expand access to all users in the future.
With AI-generated summaries of videos shown upfront, YouTube users might be less inclined to open videos and watch them on the platform. The feature could also dent engagement, as fewer users might comment, subscribe, and generally interact with content creators.
Something similar is already happening in web search. Multiple studies have shown that people are increasingly looking for information by putting questions to chatbots like ChatGPT or Gemini instead of typing queries into traditional search engines. This defection away from traditional search towards generative AI has negative consequences, especially for publishers and websites that rely on search traffic to generate revenue.
A recent study by content licensing platform TollBit found that news sites and blogs receive 96 per cent less referral traffic from generative AI-driven search engines than from traditional Google Search. When asked about publishers seeing a dip in traffic coming from Search, Elizabeth Reid, the head of Google Search, previously told indianexpress.com, 'We see that the clicks to web pages when AI Overviews exist are of higher quality. People spend more time on these pages and engage more. They are expressing higher satisfaction with the responses when we show the AI Overviews.'
Even though the video is just one tap away, the AI-generated summary in YouTube search results will likely give users the gist of its most relevant parts. That could make it harder for YouTube channels to grow and earn revenue. In addition, YouTube is bringing its Veo 3 AI video generation model to YouTube Shorts in the coming months, according to CEO Neal Mohan.
The AI model, capable of generating cinematic-level visuals complete with sound and dialogue, was reportedly trained on subsets of the 20-billion-video library uploaded to YouTube.
