NotebookLM Was Already My Favorite AI Tool, but New Features Keep Making It Even Better

CNET | 5 hours ago

NotebookLM has always been a fun idea -- it's kind of a mini-LLM for all of your personal documents, or really any documents you want to feed it. After taking another look recently, I can say it's definitely more than a diversion. It's become my favorite AI tool ever and something I use nearly every day.
Powered by Google's Gemini AI, NotebookLM breaks down complex subjects into an easy-to-understand format and helps you brainstorm new ideas. There's now a mobile app for iOS and Android, and new features were just announced during Google I/O earlier this month. It keeps getting better without feeling like it's becoming overstuffed with features just for the sake of it.
NotebookLM isn't just Google Keep stuffed with AI, nor is it just a chatbot that can take notes. It's both and neither. Instead of asking Gemini a question and getting an answer pulled from the ether of the internet, NotebookLM searches only through the sources you provide it. It's a dead simple concept that feels like one of the most practical uses of AI, making it the perfect study buddy for classes or work. And Google didn't stop there.
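Google hasn't published how NotebookLM is built, but the behavior described above -- answering only from the material you supply -- matches a grounding pattern you can approximate yourself with the public Gemini API. Below is a minimal sketch of that idea; the model name, prompt wording and answer_from_sources helper are illustrative assumptions, not NotebookLM's actual implementation.

```python
# Minimal sketch of source-grounded Q&A in the spirit of NotebookLM.
# NotebookLM's real pipeline isn't public; this just approximates the
# "answer only from my sources" behavior with the public Gemini API.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Google AI Studio key
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

def answer_from_sources(question: str, sources: list[str]) -> str:
    # Pack the user-provided sources into the prompt and instruct the
    # model to refuse anything they don't support.
    corpus = "\n\n---\n\n".join(sources)
    prompt = (
        "Answer the question using ONLY the sources below. If the "
        "sources don't contain the answer, say you don't know.\n\n"
        f"SOURCES:\n{corpus}\n\nQUESTION: {question}"
    )
    return model.generate_content(prompt).text

notes = ["Lecture 3: photosynthesis converts light energy into chemical "
         "energy stored in glucose, releasing oxygen as a byproduct."]
print(answer_from_sources("What does photosynthesis produce?", notes))
```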
Now it can do so much more, and it rewards poking around to see what it can do for you. Features like its impressive Audio Overviews have since trickled down into Gemini itself, bringing them to a much wider set of Google's products.
Below, I'll cover some of NotebookLM's most interesting features (including the newly announced ones) and how it became one of my favorite AI tools to use.
For more, check out Google's smart glasses plans with Android XR.
What is NotebookLM?
NotebookLM is a Gemini-powered note-taking and research assistant tool that can be used in a multitude of ways. It all starts with the sources you feed it, whether they're webpage URLs, YouTube videos or audio clips, allowing you to pull multiple sources together into a cohesive package and bring some organization to your scattered thoughts or notes.
The most obvious use case for NotebookLM is school or work. Think of it -- you've kept up with countless classes, typed notes for every one and perhaps even recorded some lectures. Sifting through everything individually can eventually get you to some semblance of understanding, but what if you could get it all to work together?
Once you've uploaded your sources, Gemini will get to work creating an overall summary of the material. From there, you can ask Gemini questions about specific topics, and the relevant information from your sources will be presented in an easy-to-understand format. That alone may be enough for people just looking to get the most out of their notes, but it's really just scratching the surface.
Available for desktop and mobile
NotebookLM's three-panel layout
NotebookLM/Screenshot by Blake Stimac
NotebookLM has been available on the desktop for a while now and is broken into a three-panel layout consisting of Source, Chat and Studio panels. Both the Source and Studio panels are collapsible, so you can have a full-screen chat experience if you prefer.
While the Source and Chat panels are pretty self-explanatory, the Studio panel is where the magic happens (though some of its outputs can also be generated directly from the Chat panel). This is where you can get the most out of your NotebookLM experience.
The NotebookLM app is like having a data alchemist in your pocket
The mobile app for Android and iOS launched the day before Google I/O 2025.
Blake Stimac/CNET
Those familiar with the desktop experience will feel right at home with the new mobile apps for iOS and Android. The streamlined app allows you to switch between the Source, Chat and Studio panels via a menu at the bottom. When you go to the view that shows all of your notebooks, you'll see tabs for Recent, Shared, Title and Downloaded.
While not everything is on the app yet, it's likely just a matter of time before it matches the web version's full functionality.
Audio Overviews
If you didn't hear about NotebookLM when it was first announced, you likely did when Audio Overviews were released for it. Once you have at least one source uploaded, you can then opt to generate an Audio Overview, which will provide a "deep dive" on the source material. These overviews are created by none other than Gemini, and it's not just a quick summary of your material in audio format -- it's a full-blown podcast with two "hosts" that break down complex topics into easy-to-understand pieces of information. They're incredibly effective, too, often asking each other questions to dismantle certain topics.
The default length of an Audio Overview varies depending on how much material there is to go over and the complexity of the topic -- though I'm sure there are other factors at play. In my testing, a very short piece of text produced a five-minute audio clip, whereas two lengthier, denser Google Docs produced an 18-minute Overview.
If you want a little more control over the length of your Audio Overview, you're in luck. As announced in a blog post during Google I/O earlier this month, users now have three options to choose from: shorter, default and longer. This is perfect whether you want a short, dense podcast-like experience or want to get into the nitty-gritty of a subject with a longer Audio Overview.
You can interact with your AI podcasters
It gets even better. Last December, NotebookLM got a new design and new ways to interact with Audio Overviews. The customize button allows you to guide the conversation so that key points are covered. Type in your directive and then generate your Audio Overview.
Now, if you want to make this feature even more interactive, you can choose Interactive mode, which is still in beta, to join the conversation. The clip will play, and if you have a question about something that's said, you can click the join button. Once you do, the hosts will pause, acknowledge your presence and ask you to chime in with thoughts or questions -- and you'll get a reply.
I wanted to try something a little different, so I threw in the lyrics of a song as the only source, and the AI podcast duo began to dismantle the motivations and emotions behind the words. I used the join feature to point out a detail in the lyrics they didn't touch on, and the two began to dissect what my suggestion meant in the context of the writing. They then began linking the theme to other portions of the text. It was impressive to watch: They handled the emotional weight of the song so well, and tactfully at that.
Mind Maps
Generating a Mind Map is just one of several powerful features from NotebookLM
Google/Screenshot by Blake Stimac
I'd heard interesting things about NotebookLM's Mind Map feature, but I wanted to go in blind when I tried it out, so I did a separate test. I took roughly 1,500 words of Homer's Odyssey and made that my only source. I then clicked the Mind Map button, and within seconds, an interactive and categorical breakdown of the text was displayed for me to poke around in.
Many of the broken-down sections had subsections for deeper dives, some of which were dedicated to single lines for dissection. Clicking on a category or end-point of the map will open the chat with a prefilled prompt.
I chose to dive into the line, "now without remedy," and once clicked, the chat portion of NotebookLM reopened with the prefilled prompt, "Discuss what these sources say about Now without remedy, in the larger context of [the subsection] Alternative (worse)." The full line was displayed, including who said it, what it was in response to and any motivations (or other references) for why the line was said in the text.
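To make that behavior concrete: a Mind Map is essentially a tree of topics, and clicking a node appears to template a chat prompt from the node's own label plus its parent category. Here's a small, hypothetical sketch of that structure -- the MapNode class is invented for illustration; only the prompt template mirrors what NotebookLM actually prefilled.

```python
# Illustrative sketch of how a Mind Map click could expand into a chat
# prompt like the one quoted above. The data structure is hypothetical;
# only the prompt template mirrors what NotebookLM displayed.
from dataclasses import dataclass, field

@dataclass
class MapNode:
    label: str
    children: list["MapNode"] = field(default_factory=list)
    parent: "MapNode | None" = None

    def add(self, label: str) -> "MapNode":
        child = MapNode(label, parent=self)
        self.children.append(child)
        return child

def prefilled_prompt(node: MapNode) -> str:
    # Combine the clicked node's phrase with its parent category,
    # matching the prompt format NotebookLM prefilled.
    context = node.parent.label if node.parent else "the sources overall"
    return (f"Discuss what these sources say about {node.label}, "
            f"in the larger context of {context}.")

root = MapNode("Odyssey excerpt")
alt = root.add("Alternative (worse)")
line = alt.add("now without remedy")
print(prefilled_prompt(line))
# -> Discuss what these sources say about now without remedy,
#    in the larger context of Alternative (worse).
```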
Study guides and more
If everything Audio Overviews and Mind Maps can do already sounds like the perfect study buddy, NotebookLM has a few other features that solidify its place.
Study guides
After you've uploaded a source, you can create a quick study guide based on the material. It automatically produces a document with a quiz, potential essay questions, a glossary of key terms and, at the bottom, the answers to the quiz. And if you want, you can even convert the study guide into a source for your notebook.
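The study guide's structure is regular enough that you could approximate it with a single grounded prompt, much like the earlier sketch. This is an assumption about how such a document could be generated, not how NotebookLM does it; only the section list mirrors what the feature outputs.

```python
# Hypothetical sketch: generating a study-guide-style document from
# sources with one structured prompt. Reuses the google-generativeai
# setup from the earlier example; not NotebookLM's implementation.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

def study_guide(sources: list[str]) -> str:
    corpus = "\n\n---\n\n".join(sources)
    prompt = (
        "Using ONLY the sources below, write a study guide with these "
        "sections, in order: 1) a short quiz, 2) potential essay "
        "questions, 3) a glossary of key terms, 4) an answer key for "
        "the quiz at the bottom.\n\nSOURCES:\n" + corpus
    )
    return model.generate_content(prompt).text
```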
FAQs
Whether you're using it for school or want to create an FAQ page for your website, the FAQ button generates a series of potentially common questions based on your sources.
Timeline
If you're looking for a play-by-play sort of timeline, it's built right in. Creating a timeline for the Odyssey excerpt broke the main events into a bulleted list and ordered them based on the times mentioned in the material. If an event takes place at an unspecified time, it appears at the top of the timeline, labeled as such. A cast of characters is also generated below the timeline for reference.
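That ordering rule -- undated events first, then everything else in chronological order -- is simple enough to show in a few lines. The event list below is invented for illustration; this is a sketch of the sorting behavior described above, not NotebookLM's code.

```python
# Sketch of the timeline ordering described above: events with an
# unspecified time sort to the top, the rest chronologically.
events = [
    {"event": "Council of the gods", "order": 1},
    {"event": "A dream visits the suitors", "order": None},  # time unspecified
    {"event": "Telemachus sails at dusk", "order": 2},
]

# None sorts first because (False, ...) < (True, ...).
timeline = sorted(events, key=lambda e: (e["order"] is not None, e["order"] or 0))
for e in timeline:
    when = "Unspecified time" if e["order"] is None else f"Event {e['order']}"
    print(f"{when}: {e['event']}")
```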
Briefing document
The briefing document is just what it sounds like, giving you a quick snapshot of the key themes and important events to get someone up to speed. This will include specific quotes from the source and their location. A summary of the material is also created at the bottom of the document.
How NotebookLM really 'sold' me
I already really liked NotebookLM's concept and execution during its 1.0 days, and revisiting the new features only strengthened my appreciation for it. My testing was mostly for fun and to see how this tool can flex, but using it when I "needed" it helped me really get an idea of how powerful it can be, even for simple things.
During a product briefing, I did my typical note-taking: open a Google Doc, start typing fragmented thoughts on key points and hope I could decipher what I meant when I referred back to them later. I knew I would also receive an official press release, so I wasn't (too) worried, but I wanted to put NotebookLM to the test in a real-world situation where I was using it for real -- not just tinkering, where nearly anything seems impressive as long as it does what you tell it to.
I decided to create a new notebook and make my crude notes (which looked like a series of bad haikus at first glance) the only source, just to see what came out on the other end. Not only did NotebookLM fill in the blanks, but the overall summary read almost as well as the press release I received the following day. I was impressed. It felt like alchemy -- NotebookLM took some fairly unintelligible language and didn't just turn it into something passable, but rather, a pretty impressive description.
Funny enough, I've since become a more thorough note-taker, but I'm relieved to know I have something that can save the day if I need it to.
Video Overviews are on the way
Another feature that was announced during Google I/O was Video Overviews, and it's exactly what it sounds like. There's currently no time frame outside of "coming soon" from the blog post, but it should be a good way to get a more visual experience from your notebooks.
We'd previously heard that Video Overviews might be on the way, thanks to some sleuthing from Testing Catalog. That article also mentioned that the ability to make your notebooks publicly available and to view an Editor's Picks list of notebooks would eventually make their way to NotebookLM. The Editor's Picks feature has yet to rear its head, but you can indeed now share notebooks directly or make them publicly available for anyone to access.
If you need more from NotebookLM, consider upgrading
Most people may never need to pay for NotebookLM, as the free version is robust enough. But if you're using it for work and need to add more sources or share your notebook with multiple people, NotebookLM Plus is worth considering. It gives you more of everything while adding more customization, additional privacy and security features, and analytics. It's worth noting that NotebookLM Plus is also packaged in with Google's new AI subscriptions.
For more, don't miss how Google is going all-in on AI video with Flow and Veo 3.
