AI tools can help 'fuel creativity', YouTube executive says
AI tools can help 'fuel creativity' by removing some of the 'drudgery' from work, a YouTube executive has said.
Steve McLendon, a podcast expert and group product lead at YouTube, was speaking as the Google-owned video platform announced it had reached one billion monthly users for podcast content for the first time.
Some have raised concerns about the possible impact of generative AI tools on the workforce, with fears that AI could replace humans in carrying out administrative tasks in the years to come.
But Mr McLendon said he believed such tools would in fact help workers, particularly those in creative roles, by freeing them from admin tasks to focus on 'the things they want to do'.
'I think as it related to podcasts and creators – really creators across YouTube – I think a lot of these AI products really are tools that will help fuel creativity,' he told the PA news agency.
'If you think of the creation process, there's a lot of drudgery in that process, and certainly from my team's perspective, we're trying to think about ways to help creators be more creative and have more time to do the things that they want to do, as opposed to some of the drudgery work.
'And that's where I think that AI tooling is actually going to unlock a tremendous amount of value for creators, so really excited to see where that goes.'
Last year, Google made headlines when it added an audio feature to its AI-powered research tool, NotebookLM, which can turn large documents, such as reports, into AI-generated audio content that sounds similar to a podcast.
Mr McLendon said the tool tended to use the same two voices and adopt a similar tone no matter the topic – suggesting it was unlikely to rival human podcasters – but added that the technology was something that 'people can use in their personal lives around productivity'.
'So, I have a long article, or a 50-page document, I don't have time to read it. Maybe I want to listen to a summary of it and be able to engage with it that way,' he said.
On YouTube's podcast milestone, he said it highlighted the rise in popularity of podcasts as a broadcast medium in recent years, but also echoed how television had previously revolutionised broadcasting into people's homes.
'I'm not sure that people really think of how big and prevalent podcasting is – certainly, they don't think about how big and prevalent podcasting is on YouTube,' he said.
'It speaks to how podcasts have really connected with audiences all around the world.
'Broadcasting, I would say, has evolved.
'I think video has been an accelerant to podcast engagement and audience building in particular – podcasts are oftentimes really intimate – you have a relationship with the person you listen to in your ear every day, and being able to see that person I actually think really deepens that relationship.
'It's funny, I also think that television served that purpose in people's homes for a long time – televisions were like radios in people's homes – and if you think of YouTube as evolving what television is, it's unsurprising that it's also evolving what radio is, particularly in the home.'

Related Articles


CNBC
Nvidia CEO says this is the decade of robotics and autonomous vehicles
Autonomous vehicles and robotics are going to take off in a big way in the years ahead, according to Nvidia CEO Jensen Huang. "This is going to be the decade of AV [autonomous vehicles], robotics, autonomous machines," Huang told CNBC's Arjun Kharpal Thursday at the Viva Tech conference in Paris.

Nvidia plays a significant role in the rollout of driverless vehicles, as the U.S. chipmaking giant sells both hardware and software solutions for AVs. Self-driving cars are being spotted more frequently in the U.S., where Google-owned Waymo is operating robotaxi services in parts of San Francisco, Phoenix, and Los Angeles. Meanwhile, a number of Chinese companies, including Baidu, are also running their own robotaxi fleets.

Europe, on the other hand, is yet to see significant AV adoption, primarily because the regulations are not yet clear enough for self-driving technology companies to get their services off the ground. However, the technology is beginning to gain more traction. In the U.K., legislation called the Automated Vehicles Act has been passed into law, paving the way for self-driving vehicles to arrive on roads by 2026. Uber on Tuesday announced a partnership with British self-driving car technology firm Wayve to launch trials of fully autonomous rides in the U.K., starting in spring 2026.


WIRED
Vibe Coding Is Coming for Engineering Jobs
Engineering was once the most stable and lucrative job in tech. Then AI learned to code.

On a 5K screen in Kirkland, Washington, four terminals blur with activity as artificial intelligence generates thousands of lines of code. Steve Yegge, a veteran software engineer who previously worked at Google and AWS, sits back to watch. 'This one is running some tests, that one is coming up with a plan. I am now coding on four different projects at once, although really I'm just burning tokens,' Yegge says, referring to the cost of generating chunks of text with a large language model (LLM).

Learning to code has long been seen as the ticket to a lucrative, secure career in tech. Now, the release of advanced coding models from firms like OpenAI, Anthropic, and Google threatens to upend that notion entirely. X and Bluesky are brimming with talk of companies downsizing their developer teams—or even eliminating them altogether.

When ChatGPT debuted in late 2022, AI models were capable of autocompleting small portions of code—a helpful, if modest, step forward that served to speed up software development. As models advanced and gained 'agentic' skills that allow them to use software programs, manipulate files, and access online services, engineers and non-engineers alike started using the tools to build entire apps and websites. Andrej Karpathy, a prominent AI researcher, coined the term 'vibe coding' in February to describe the process of developing software by prompting an AI model with text (a minimal sketch of that workflow appears at the end of this article).

The rapid progress has led to speculation—and even panic—among developers, who fear that most development work could soon be automated away, in what would amount to a job apocalypse for engineers. 'We are not far from a world—I think we'll be there in three to six months—where AI is writing 90 percent of the code,' Dario Amodei, CEO of Anthropic, said at a Council on Foreign Relations event in March. 'And then in 12 months, we may be in a world where AI is writing essentially all of the code,' he added.

But many experts warn that even the best models have a way to go before they can reliably automate a lot of coding work. While future advancements might unleash AI that can code just as well as a human, until then relying too much on AI could result in a glut of buggy and hackable code, as well as a shortage of developers with the knowledge and skills needed to write good software.

David Autor, an economist at MIT who studies how AI affects employment, says it's possible that software development work will be automated—similar to how transcription and translation jobs are quickly being replaced by AI. He notes, however, that advanced software engineering is much more complex and will be harder to automate than routine coding. Autor adds that the picture may be complicated by the 'elasticity' of demand for software engineering—the extent to which the market might accommodate additional engineering jobs. 'If demand for software were like demand for colonoscopies, no improvement in speed or reduction in costs would create a mad rush for the proctologist's office,' Autor says. 'But if demand for software is like demand for taxi services, then we may see an Uber effect on coding: more people writing more code at lower prices, and lower wages.'

Yegge's experience shows that perspectives are evolving. A prolific blogger as well as coder, Yegge was previously doubtful that AI would help produce much code.
Today, he has been vibe-pilled, writing a book called Vibe Coding with another experienced developer, Gene Kim, that lays out the potential and the pitfalls of the approach. Yegge became convinced that AI would revolutionize software development last December, and he has led a push to develop AI coding tools at his company, Sourcegraph. 'This is how all programming will be conducted by the end of this year,' Yegge predicts. 'And if you're not doing it, you're just walking in a race.'

The Vibe-Coding Divide

Today, coding message boards are full of examples of mobile apps, commercial websites, and even multiplayer games all apparently vibe-coded into being. Experienced coders, like Yegge, can give AI tools instructions and then watch AI bring complex ideas to life. Several AI-coding startups, including Cursor and Windsurf, have ridden a wave of interest in the approach. (OpenAI is widely rumored to be in talks to acquire Windsurf.)

At the same time, the obvious limitations of generative AI, including the way models confabulate and become confused, have led many seasoned programmers to see AI-assisted coding—and especially gung-ho, no-hands vibe coding—as a potentially dangerous new fad. Martin Casado, a computer scientist and general partner at Andreessen Horowitz who sits on the board of Cursor, says the idea that AI will replace human coders is overstated. 'AI is great at doing dazzling things, but not good at doing specific things,' he says. Still, Casado has been stunned by the pace of recent progress. 'I had no idea it would get this good this quick,' he says. 'This is the most dramatic shift in the art of computer science since assembly was supplanted by higher-level languages.'

Ken Thompson, vice president of engineering at Anaconda, a company that provides open source code for software development, says AI adoption tends to follow a generational divide, with younger developers diving in and older ones showing more caution. For all the hype, he says many developers still do not trust AI tools because their output is unpredictable and will vary from one day to the next, even when given the same prompt. 'The nondeterministic nature of AI is too risky, too dangerous,' he explains.

Both Casado and Thompson see the vibe-coding shift as less about replacement than abstraction, mimicking the way that new languages like Python build on top of lower-level languages like C, making it easier and faster to write code. New languages have typically broadened the appeal of programming and increased the number of practitioners. AI could similarly increase the number of people capable of producing working code.

Bad Vibes

Paradoxically, the vibe-coding boom suggests that a solid grasp of coding remains as important as ever. Those dabbling in the field often report running into problems, including introducing unforeseen security issues, creating features that only simulate real functionality, accidentally running up high bills using AI tools, and ending up with broken code and no idea how to fix it. 'AI [tools] will do everything for you—including fuck up,' Yegge says. 'You need to watch them carefully, like toddlers.'

The fact that AI can produce results that range from remarkably impressive to shockingly problematic may explain why developers seem so divided about the technology. WIRED surveyed programmers in March to ask how they felt about AI coding, and found that the proportion who were enthusiastic about AI tools (36 percent) was mirrored by the proportion who felt skeptical (38 percent).
'Undoubtedly AI will change the way code is produced,' says Daniel Jackson, a computer scientist at MIT who is currently exploring how to integrate AI into large-scale software development. 'But it wouldn't surprise me if we were in for disappointment—that the hype will pass.'

Jackson cautions that AI models are fundamentally different from the compilers that turn code written in a high-level language into a lower-level language that is more efficient for machines to use, because they don't always follow instructions. Sometimes an AI model may take an instruction and execute it better than the developer would—other times it might do the task much worse.

Jackson adds that vibe coding falls down when anyone is building serious software. 'There are almost no applications in which "mostly works" is good enough,' he says. 'As soon as you care about a piece of software, you care that it works right.' Many software projects are complex, and changes to one section of code can cause problems elsewhere in the system. Experienced programmers are good at understanding the bigger picture, Jackson says, but 'large language models can't reason their way around those kinds of dependencies.'

Jackson believes that software development might evolve with more modular codebases and fewer dependencies to accommodate AI blind spots. He expects that AI may replace some developers but will also force many more to rethink their approach and focus more on project design. Too much reliance on AI may be 'a bit of an impending disaster,' Jackson adds, because 'not only will we have masses of broken code, full of security vulnerabilities, but we'll have a new generation of programmers incapable of dealing with those vulnerabilities.'

Learn to Code

Even firms that have already integrated coding tools into their software development process say the technology remains far too unreliable for wider use. Christine Yen, CEO at Honeycomb, a company that provides technology for monitoring the performance of large software systems, says that projects that are simple or formulaic, like building component libraries, are more amenable to using AI. Even so, she says the developers at her company who use AI in their work have only increased their productivity by about 50 percent. Yen adds that for anything requiring good judgment, where performance is important, or where the resulting code touches sensitive systems or data, 'AI just frankly isn't good enough yet to be additive.'

'The hard part about building software systems isn't just writing a lot of code,' she says. 'Engineers are still going to be necessary, at least today, for owning that curation, judgment, guidance and direction.'

Others suggest that a shift in the workforce is coming. 'We are not seeing less demand for developers,' says Liad Elidan, CEO of Milestone, a company that helps firms measure the impact of generative AI projects. 'We are seeing less demand for average or low-performing developers.'

'If I'm building a product, I could have needed 50 engineers and now maybe I only need 20 or 30,' says Naveen Rao, VP of AI at Databricks, a company that helps large businesses build their own AI systems. 'That is absolutely real.' Rao says, however, that learning to code should remain a valuable skill for some time. 'It's like saying, "Don't teach your kid to learn math,"' he says. Understanding how to get the most out of computers is likely to remain extremely valuable, he adds.

Yegge and Kim, the veteran coders, believe that most developers can adapt to the coming wave.
In their book on vibe coding, the pair recommend new strategies for software development, including modular codebases, constant testing, and plenty of experimentation. Yegge says that using AI to write software is evolving into its own—slightly risky—art form. 'It's about how to do this without destroying your hard disk and draining your bank account,' he says.
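For readers who haven't tried it, the 'vibe coding' workflow described above is mechanically simple: send a natural-language prompt to a code-generating model, take the code it returns, and iterate. The sketch below is purely illustrative; the endpoint and the request and response shapes are hypothetical placeholders rather than any specific vendor's API, but it captures the overall shape of the loop:

```typescript
// Illustrative sketch of a "vibe coding" loop: natural-language prompt in, generated code out.
// MODEL_ENDPOINT and the { code } response shape are hypothetical placeholders,
// not any real provider's API.
import { writeFile } from "node:fs/promises";

const MODEL_ENDPOINT = "https://example.com/v1/generate"; // hypothetical endpoint

async function vibeCode(prompt: string, outFile: string): Promise<void> {
  // Ask the model for code in plain English.
  const res = await fetch(MODEL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Model request failed: ${res.status}`);

  // Assume the service returns { code: string }; real APIs differ.
  const { code } = (await res.json()) as { code: string };

  // Save the generated code. In practice you review and test it before running it,
  // which is exactly the caution the developers quoted in this piece urge.
  await writeFile(outFile, code, "utf8");
  console.log(`Wrote ${outFile}; review it before running.`);
}

vibeCode("Build a small CLI that prints a packing checklist for a week-long trip", "app.ts")
  .catch(console.error);
```

The point is only that the developer's input is prose and the output is code; whether the result is trustworthy is the open question the engineers quoted above disagree about.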


Android Authority
I tried Google's secret, open source, offline AI app to see if it's better than Gemini
Google has pumped out so many AI products in recent years that I'd need my fingers, toes, and the digits of several other people to keep count. Its current public-facing headliner is Gemini, which also doubles as its virtual assistant on its myriad products. But if you're willing to lift its development rock to peek at the creepy crawlies beneath, you'll find AI Edge Gallery. Hidden away on GitHub — where few Google-made products have ever resided — AI Edge Gallery gives early adopters a taste of fully downloaded AI models that can be run entirely on the phone. To uncover why Google banished this app beyond the Play Store and what it can actually do, I took up my Pixel and downloaded it. Here's what I discovered.

What is Google AI Edge Gallery?

First, let me explain what AI Edge Gallery is. The app allows users to download and run large language models (LLMs) on Android phones for offline use. Once downloaded, the LLMs don't require an internet connection to crunch queries, which makes AI Edge Gallery, in theory, very handy in isolated situations. At present, the app offers four LLMs ranging in size and skill. It also splits these up into three suggested uses: Ask Image, Prompt Lab, and AI Chat.

AI Edge Gallery allows users to download LLMs that can be used for offline prompt processing right on your phone.

These categories are largely self-descriptive, but they do explain what to expect from AI Edge Gallery. You can use these models to ask questions about images, engage in simple chats as you would with Gemini or ChatGPT, and use prompts for 'single-turn use cases.'

Installation and setup are a pain, but the app is slick and smooth

There's a good reason why AI Edge Gallery isn't on the Play Store. The setup is an absolute pain, even if the app is slick and feels like a Google-made product. Once you grab the app off GitHub and install it, you'll need to install the individual models you wish to try. Before this, however, you'll need to create a Hugging Face account — the site that hosts the models — and acknowledge several user agreements: one in the AI Edge Gallery app itself, another on Hugging Face, and finally Google's own Gemma Access Request form. After all of this, you'll need to tap back several times to return to the AI Edge Gallery app, where the model download will begin. There were several times I issued a loud sigh during this process, and I wouldn't blame you if you'd rather clean all your shoes instead. Nevertheless, I persisted.

The setup process, from downloading the app to using the model of your choice, is padded by several user acknowledgements.

To whet my appetite, I leapt onto the Gemma-3n-E4B-it-int4 train (I'll refer to it simply as 'Gemma' from here on). At 4.4GB, it's the largest model available in the gallery and is available across all three categories. In theory, the largest model should offer all I need to accomplish any offline chatbot goal I could have. For the most part, its offline capabilities were impressive.

An offline travel planner, science teacher, and sous chef

To test this model's capabilities, and therefore the usefulness of AI Edge Gallery, I wanted to use several prompts that I'd normally run by ChatGPT and Gemini — products that have access to the internet. For my first trick, I asked Gemma about a theoretical trip to Spain. I used the prompt: 'I'm traveling to Spain in a few weeks. What are some items I should consider packing, and which sights should I see?' I wanted to test its capabilities as an offline travel companion. After several seconds of pondering, Gemma leapt into action and completed the answer three minutes later. That's pretty tardy, but considering it ran entirely offline and left my Pixel 8 pretty warm, I was impressed.

Processing times are long, but considering the LLM is running entirely offline on my Pixel 8, it's admirable.

I was even more impressed when scrolling through the answer. Considering that I didn't specify how long I'd be spending in Spain, where I'd be heading, or when I'd be leaving, Gemma offered plenty of sights to see, exact quantities of garments I should pack, and additional travel tips.

To test whether it can connect to the internet if required, I asked it, 'What are the biggest news stories of the day?' It gave me an answer from October 26, 2023, presumably the limit of its global knowledge. This isn't a problem, but remember that this model is better suited to timeless queries.

OK, back to general questions. I wanted to see how proficient the model is at explaining established theories, so I asked it to 'Explain the theory of relativity and provide an ELI5 example.' Again, it took a day and an age, but eventually it produced a deep review of Einstein's theory.

Don't expect the models to replace services like Perplexity that can readily access information on the internet.

It also offered a detailed explainer about the source of rattles coming from a car's engine bay, recipes for making vanilla ice cream, facts about the tallest mountains in the world, and an explanation of soccer's offside rule. All answers were accurate.

How good is the app at creating things?

Within the Prompt Lab section, you can use a model to rewrite tone, summarize text, and generate code snippets. The latter use case is pretty cool! As a complete coding noob, I asked Gemma to 'Create code that responds with "hello" when I input "Good day."' It promptly offered a line of JavaScript that did just that (a minimal illustrative sketch of that kind of snippet appears at the end of this section). There are seven languages to pick from, too. Notably, the response includes guidance on integrating the code into various scenarios, like a website, making it an excellent educational or verification tool.

The app can also summarize blocks of text, and it's not too shabby at that, either. I crammed the introduction of Wikipedia's Theory of Relativity article into the prompt box, and Gemma confidently broke the content down into five bullet points. The response was swift enough that I'd consider using AI Edge Gallery to break down longer PDFs and studies rather than ChatGPT, especially for documents I don't want to share. There are various answer options, too, including bullet points, briefer paragraphs, and more.

What about tone rewriting? I'm unsure when I'd use this feature in my life; I'd rather opt for chat apps and Gmail's built-in tone tweaker. Nevertheless, I gave Gemma the same snippet used above, selecting the Enthusiastic tone option. You can see the results in the screenshots above.

It's important to remember that the model you use will dictate AI Edge Gallery's answers, capabilities, and processing speed. The app offers plenty of flexibility in this regard. You can download all four models and use them interchangeably, or you can use the largest model (as I have) and call it a day. You can even snag the smallest model and enjoy quicker operation, albeit with more limited smarts. The choice is yours.
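As promised above, here is a minimal sketch of the kind of snippet Gemma might have produced for the 'Good day' request. The article doesn't reproduce the model's actual output, so this is purely illustrative, and it's written in TypeScript here rather than the JavaScript the model chose:

```typescript
// Purely illustrative: a tiny function of the kind the prompt asked for.
// It replies "hello" when the input is "Good day"; anything else gets a fallback.
function reply(input: string): string {
  return input.trim().toLowerCase() === "good day"
    ? "hello"
    : "I only understand 'Good day'.";
}

console.log(reply("Good day")); // -> hello
console.log(reply("Hi there")); // -> I only understand 'Good day'.
```

Trivial as it is, a snippet like this is easy to check just by reading it, which is why the code-generation use case works well as the educational or verification tool described above.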
Identifying tomatoes but misplacing monuments

What about image queries? The app makes it super easy to select an image from my albums, or capture a new photo, and ask a question about it. For my test, I picked a shot of some tomatoes we grew over the spring and asked Gemma, 'How do I grow these?' Impressively, the model accurately identified them as grape tomatoes and offered a complete breakdown of their preferred habitat and conditions, details on how to start them from seed (including specifics like thinning and soil mix), and suggestions for planting outdoors. This response took over four minutes, but it was a brilliant, detailed answer!

I then queried its knowledge of local landmarks to see how it handled more nuanced images. I picked an image of Franschhoek's NG Kerk, the oldest church in one of the prettiest towns in South Africa. I didn't expect it to know it, and, well, it didn't. It answered with: 'This is St. Mary's Church in Stellenbosch.' It picked a nearby town, but that's a red cross. Perhaps it would know the more distinct Huguenot Monument in Franschhoek? Nope. That's in Rome, the model decided.

Clearly, Gemma struggles with recognizing buildings but has little issue with tomatoes. It seems you'll get mixed success here based on the prevalence and familiarity of objects within an image. This still makes it pretty useful in some cases. I'll have to test this a little more in a future feature.

I've activated your flashlight (just kidding!)

Finally, I want to discuss where the models in AI Edge Gallery and an actual virtual assistant like Gemini differ. The latter has near-complete control of my Pixel 8 and lets me play specific playlists on Spotify, open YouTube channels, search the internet, or trigger my flashlight with a simple prompt. However, this isn't possible with AI Edge Gallery. Although asking Gemma to 'Switch on my flashlight' is recognized and accepted as a prompt, and the model gleefully replies 'Okay! I've activated your flashlight,' it adds that it cannot actually do this because it's a 'text-based AI.' It understands what I want accomplished, but its net doesn't reach that far.

AI Edge Gallery cannot replace Gemini, at least not as a virtual assistant.

To be fair, I didn't expect this app to have that level of control over my device, but I had to test it regardless. If you were hoping to replace Assistant or Gemini with an offline product like AI Edge Gallery, you'll be sorely disappointed. It's also worth noting that AI Edge Gallery and its models cannot generate images from prompts or address queries about files other than images. Hopefully, these features will come in future iterations of the app.

There's a reason Gemini is Google's consumer-facing AI product

So, is AI Edge Gallery worth a try? Without a doubt, yes. As someone who loves the idea of fully offline LLMs that only connect to the internet when available or required, the models here genuinely excite me, and the app makes it possible to test them without too much trouble. I'm sure that query crunching would be far quicker and more efficient on a faster smartphone, too; I feel my Pixel 8 was the bottleneck here.

The app itself looks great and functions adequately for the most part, but it still requires some polish here and there. Leave it open in the background, and you'll regularly get 'not responding' dialogs popping up and multiple crashes when it returns to focus.
It also has several annoying UX issues: swiping left or right across the screen will clear your last prompt, and you'll have to start all over again. It's remarkably easy to do this by accident.

AI Edge Gallery makes private offline processing possible, but there's a reason it's not on the Play Store.

Nevertheless, I'm still impressed by the app's image identification smarts. As someone who regularly uses Circle to Search to identify plants, animals, and landmarks, AI Edge Gallery could be handy if I'm stuck in the wilderness without a connection and with an unidentified bird. You may not consider an offline AI tool necessary, but processing data on your phone does have privacy and security benefits. If you have a flagship Android phone, I'd recommend picking up AI Edge Gallery, perhaps not as a replacement for Gemini, but as a glimpse into the distant future where much of Gemini's smarts could be available locally.