
Why NotebookLM needs to be the next app you download on your phone
One of my daily struggles is organizing ideas cohesively in a single place. My sister faces the same problem; she works on research projects at the intersection of machine learning and dental science. My youngest sibling is an educator, and she has more teaching-material folders on her desktop than I have the patience to count.
For all of us, collecting scattered learning or research material, sources, and nuggets of notes, then making sense of them as a whole, is a chore. After trying my fair share of organization tools and productivity shortcuts, I finally landed on Google's NotebookLM last year. Yes, it leans heavily on AI. And no, it won't burden you with made-up facts and AI hallucinations.
Unlike a chatbot such as Gemini or ChatGPT, NotebookLM can work solely with your own materials. Then it does more. A lot, actually. It can turn your haphazard materials into well-structured documents, create a mind map, and even generate a podcast from them. You can even interrupt the two hosts while they discuss your written ideas, as if it were a two-person news panel.
Until now, NotebookLM had remained exclusive to the web, which made it cumbersome to access from a phone. On the eve of Google I/O, the app finally landed on mobile. And even though it still has a few gaps to fill, it can already do a lot more than the average note-taking app. Far more, to be fair.
Getting started with NotebookLM
The mobile app is fairly basic. Call it barebones, or an intentional move to keep things simple. You start by creating a notebook, which lets you add the source materials. A source can be a PDF stored on your phone, a YouTube video, a web article, or even text copied from your clipboard.
Once the notebook has been created, the app processes all the sources and is ready to answer your questions. These can be hyperspecific or broad queries. For example, I uploaded about half a dozen research papers and market analysis reports discussing the impact of tariffs on the graphite supply and the knock-on effect on the global EV industry.
My broad requests usually involve turning all the source material into a short article for a quick overview. However, NotebookLM can also handle needle-in-the-haystack queries, with proper citations. For example, when I asked which country would be hit the worst, it gave me an accurate answer along with additional context.
The best part? It links to a specific section in the source material (which opens as a pop-up window) so that you can verify whether the AI pulled up the right information. In my tests, the knowledge extraction has mostly been on point, unless you are dealing with artsy material such as poetry, where metaphors can occasionally throw off its comprehension.
You can add more sources to a notebook, and the AI will accordingly summarize and tweak its responses based on the fresh learning material. Finally, in the bottom bar, you have the Studio section for podcasts, but more on that later.
A few misses, with an easy fix
Now, the mobile app is missing a fair few features that are available on the web version. For example, you can't add your own notes to a notebook or convert them into a source. The workaround is to save your note locally as a PDF and then import it into the NotebookLM app.
One of the most intriguing features of NotebookLM is creating mind maps, but that is also missing from the mobile app. Likewise, you can't customize or adjust the length of podcasts in the app. Finally, the options for generating study guides, briefing docs, FAQs, and timelines are a no-show as well.
Thankfully, you can do it all in a mobile browser. Once you've created an FAQ or briefing doc there, simply add it as a source with a single tap, and you can access it in the mobile app. The only exception is mind maps, because they are saved as PNG files, a format that currently can't be uploaded to the mobile app. Image uploads are presently reserved for Gemini, though I expect the capability to land on NotebookLM soon.
Podcasts are the real winner
One of the standout features of NotebookLM is native podcast generation. You can simply upload all your source URLs, PDF files, and notes, and have the onboard AI generate a two-person podcast for you. These podcasts have made the process of learning and revision a tad more immersive, especially for a person like me, who stares at text throughout the day.
I recently had a discussion with my colleagues about prep work for an interview. It has happened quite often that, despite prepping in advance, I forget one or two key talking points. This time around, I went through the crowdsourced questions as an interactive podcast, and it left a more lasting impression than a regular list of bullet points would have.
But there is more to these podcasts. You can even interrupt the hosts and ask them relevant questions about the topic being discussed. That is an underrated perk for two reasons, especially in the age of AI. First, you know where the audio clips are getting their material from.
Second, you're not burdened with the trust conundrum of talking to an AI that has a habit of confidently spewing garbage, like recommending glue in a pizza recipe.
Look, there is no denying that the internet as we know it has quickly become a dumping ground for AI slop. Google is partly to blame for it. Features like AI Overviews and AI Search Mode still struggle with summarization and get basic facts wrong from time to time.
Likewise, YouTube and other social media platforms are increasingly bombarded with AI-generated clips full of unverified claims and downright misleading information. The likes of Spotify and Amazon have also loosened their stance on AI content. In a nutshell, the fact-checking burden is yours to bear.
The podcasts generated by NotebookLM avoid that dilemma. What you hear from the hosts is purely what you supplied them in the first place: peer-reviewed research papers, YouTube videos from credible sources, articles, or your own musings (grammatical errors and all).
Now, the scientific community is divided on whether listening is definitively better than seeing for absorbing knowledge. A linguistics expert and educator told me that a hybrid learning approach is better. Since it engages more of our senses, the learning process is slightly more immersive and less drab.
Of course, one can't discount the power of creative persuasion when it comes to learning complex topics. To that end, being able to stop the podcast host and ask them in-depth questions really comes in handy. And when they are answered specifically in the context of the supplied material, instead of an AI summarizing it opaquely from the web, you are confident that the answers are reliable.
This should be on your phone
NotebookLM is what you would call the future of note-taking. It's an app that essentially turns your notes (and all the material you have collected) into an interactive format. A format where you can have a back-and-forth chat with an answering machine that has ingested all the knowledge you have supplied.
It goes a step further and turns it all into a podcast, where even the most technical papers become a fun two-person audio conversation. You get the flexibility of converting all your reading material into a variety of formats, ready for sharing or personal reading. I like mine as an FAQ, and I get that done with a single tap.
The only miss right now is the state of the app itself, and I'm not sure why that's the case. Technically, NotebookLM's mobile browser version is all you need, but it still adds a bit of friction. Yes, you can create a web app shortcut just as easily and get around those limitations.
Google says NotebookLM has found a lot of traction among students, and I can certainly vouch for that in my household. But I reckon that if your phone is ready for all that on-device Gemini Nano pizzazz, you should have access to it all. Based on what I saw at Google I/O this year, that could happen soon.
