
You Can't Trust Everything Generative AI Tells You. Here's What to Do About It
Marietta is a little city of about 13,000 people on the Ohio River, with fun boutique shops, interesting local history museums and beautiful views of the surrounding Appalachian foothills. It's been a year or so since I visited, so I'm probably due to go again. I asked ChatGPT for a restaurant recommendation -- the city's best Thai place. The chatbot obliged: Thai Taste. The problem? That restaurant is in Marietta, Georgia. The city in Ohio doesn't have a Thai restaurant.
The "what's the best Thai restaurant in this small town" question was an offhand example in a conversation I had with Katy Pearce, associate professor at the University of Washington and a faculty member of the UW Center for an Informed Public. As far as examples go, it's pretty minor. There are other fine restaurants in Marietta, Ohio, and a Thai restaurant down the road in Parkersburg, West Virginia. But the problem it demonstrates could be serious: When you're using an AI chatbot as a search engine, you might get an incredibly unhelpful answer hidden beneath layers of confidence. Like golden retrievers, Pearce said, chatbots "really want to please you."
Large language models (LLMs) like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini are becoming go-to sources for finding information about the world. They're displacing traditional search engines as the first, and often only, place many people go when they have a question. Technology companies are rapidly injecting generative AI into search engine results, summaries in news feeds and other places we get information. That makes it increasingly important that we know how to tell when it's giving us good information and when it isn't -- and that we treat everything from an AI with the right level of skepticism.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
In some cases, using AI instead of a search engine can be helpful. Pearce said she'll often go to an LLM for low-stakes questions for which there's probably enough information on the internet and in the model's training data to ensure a decent answer. Restaurant recommendations or basic home improvement tips, for example, tend to be reliable.
Other situations are more fraught. If you rely on a chatbot for information about your health, your finances, news or politics, those hallucinations could have serious consequences.
"It gives you this authoritative, confident answer, and that is the danger in these things," said Alex Mahadevan, director of the MediaWise media literacy program at the Poynter Institute. "It's not always right."
Here are some things to look out for and some tips on how to fact-check what you see when you ask generative AI.
Why you can't always trust AI
To understand why chatbots get things wrong or make things up, it helps to know a bit about how they work. LLMs are trained on massive amounts of data and attempt to calculate the probability of what comes next based on what you've typed. They are "prediction machines" that produce outputs based on the probability that they'll answer your queries, Mahadevan said. They'll attempt to produce the answer with the best chance of being correct, statistically speaking, rather than checking to see what the actual correct answer is.
(Screenshot by Jon Reed/CNET: This answer is off by a few states.)
When I asked for a Thai restaurant in one town called Marietta, the LLM couldn't find one that met my exact criteria -- but it found one in another town with the same name nearly 600 miles to the south. The probability of that being the answer I wanted might be higher in the LLM's internal calculations than the probability that I asked for something that doesn't exist.
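To make that concrete, here's a toy sketch in Python. The numbers are invented for illustration -- no real model exposes its reasoning this simply -- but the logic of always picking the statistically likeliest answer is the point:

# Toy illustration of an LLM as a "prediction machine." It always picks
# the most probable answer; no step checks the answer against reality.
# These probabilities are invented for illustration.
next_answer_probs = {
    "Thai Taste": 0.62,       # a real restaurant -- in Marietta, Georgia
    "Bangkok Garden": 0.31,   # plausible-sounding filler
    "There isn't one": 0.07,  # admitting ignorance rarely scores highest
}

best_guess = max(next_answer_probs, key=next_answer_probs.get)
print(best_guess)  # "Thai Taste" -- statistically likely, factually wrong here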
Hence Pearce's golden retriever problem. Like the family-favorite dog breed, the AI tool is following its training: trying to do its best to make you happy.
When an LLM doesn't find the exact best answer for your question, it'll give you one that sounds correct but isn't. Or it could make something up out of thin air that appears plausible. That's particularly treacherous. Some of these errors, called hallucinations, are pretty obvious. You might pretty easily catch that you shouldn't put glue on your pizza to make the cheese stick. Others are more subtle, like when a model creates fake citations and attributes them to real authors.
"If the tool doesn't have the information, it creates something new," Pearce said. "And that something new could be entirely wrong."
The data these models were trained on also matters when it comes to accuracy. Many of these large systems were trained on nearly the whole internet, which includes accurate, fact-based information alongside random things someone said on a message board.
Humans make stuff up too. But a human can more reliably tell you where they got their information, for example, or how they did their calculations. An LLM, even if it's citing its work, might not be able to provide an accurate paper trail.
So how do you know what to trust?
Know the stakes
Sometimes the accuracy of what you get from a chatbot or other gen AI tool barely matters. Other times, it means everything. Understanding the stakes -- what happens if you act on untrustworthy information -- is vital to making good decisions, Pearce said.
"When people are using generative AI tools for getting information that they hope is based in fact-based reality, the first thing that people need to think about is: What are the stakes?" she said.
Getting suggestions for a music playlist? Low stakes. If a chatbot tries to make up an Elton John song, that's not worth stressing over.
But the stakes are higher when it comes to your health care, financial decisions or ability to get accurate news and information about the world.
Remember the types of data the models were trained on. For health questions, the training data may have included some medical journals, but it probably also included unscientific social media posts and message board threads.
Always fact-check any information that could lead to you making a big decision -- think of things that could affect your money or your life. The information behind those decisions requires more scrutiny, and gen AI's tendency to mix things up or make things up should raise doubts before you act.
"If it's something that you need to be fact-based, you need to absolutely triple-check it," Pearce said.
Gen AI changes how you verify information online
Poynter's MediaWise program was teaching media literacy well before ChatGPT burst on the scene at the end of 2022. Before generative AI, the key was to judge the source, Mahadevan said. If you saw something on Facebook or in a Google search result that claimed a celebrity or politician had died, you could check the background of the source providing the information to see if it was reliable. A major news organization is reporting it? Probably reliable. The only source is your cousin's neighbor's ex-husband? Maybe not.
Even though that advice is straightforward, plenty of people ignore or don't understand it. "People have always had difficulty evaluating information online," Mahadevan said.
For AI, that advice no longer works as well as it did. A chatbot's answers may be completely shorn of context. You ask it a question and it returns an answer, and your instinct may be to trust the AI model as the expert. This is different from a traditional Google search or social media post, in which the source is at least somewhat prominently displayed. Some chatbots will give you sources, but often you will just get an answer without sourcing.
"With [chatbots], we don't really know who's behind the information," Mahadevan said.
Instead, you may have to search elsewhere to find the ultimate sources of information. Generative AI, like social media, is not a source of information but a medium for distributing that information. Just as the who behind a social media post matters, so does the ultimate source of any information you get from a chatbot.
Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts
How to get the truth (or something closer to it) from AI
The main way to ensure you're getting good information on the internet remains the same as it always has: check with multiple trusted sources.
But when working with gen AI, here are some ways to improve the quality of what you receive:
Use a tool that provides citations
Many chatbots and other gen AI models today will provide citations in their responses to you, although you might need to ask for them or turn on a setting. This feature also shows up in other AI use cases, like Google's AI Overviews in search results.
The presence of citations is itself a good sign -- a response with them may be more reliable than one without -- but the model might be making up sources or misconstruing what a source says. Especially if the stakes are high, you probably want to click on the links and make sure the summary you've been provided is accurate.
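If you want to be systematic about checking those links, a few lines of code can at least flag citations that don't resolve. This is a rough sketch using Python's requests library; a live page still doesn't prove the summary is faithful -- you have to read the source -- but a dead or invented link is an immediate red flag:

# Check that each URL a chatbot cited actually resolves.
# Requires: pip install requests
import requests

cited_urls = [
    "https://example.com/some-cited-article",  # replace with the chatbot's links
]

for url in cited_urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        print(status, url)
    except requests.RequestException as exc:
        print("FAILED", url, exc)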
Ask how confident the AI is
You can get a better answer from a chatbot by writing your prompt carefully. One trick is to ask for confidence levels. For example: "Tell me when the first iPhone came out. Tell me how confident you are about your response."
You'll still need to bring skepticism to your evaluation of the answer. If the chatbot tells you the first iPhone came out in 1066, you'll probably have doubts even if its confidence is 100%.
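The same trick works if you're querying a model programmatically. Here's a minimal sketch using the OpenAI Python SDK -- the model name is illustrative, and the web interface of any chatbot works just as well:

# A minimal sketch of asking for a confidence level alongside an answer.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Tell me when the first iPhone came out. "
            "Tell me how confident you are about your response."
        ),
    }],
)
print(response.choices[0].message.content)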
Mahadevan suggests understanding the distance between your chat window and the source of the information: "You have to treat it as if you're being told it secondhand," he said.
Don't just ask a quick question
Adding phrases like, "tell me your confidence level," "provide links to your sources," "offer alternative viewpoints," "only derive information from authoritative sources" and "closely evaluate your information" can add up when you're crafting your prompt. Pearce said good prompts are long -- a paragraph or more to ask a question.
You can also ask it to play a certain role to get answers in the right tone or from the perspective you're seeking.
"When you're prompting it, adding all these caveats is really important," she said.
Bring your own data
LLMs may struggle most when pulling information from their training data or from their own searches for information on the internet. But if you provide the documents, they can perform better.
Mahadevan and Pearce both said they've had success with generative AI tools summarizing or extracting insights from large documents or data sets.
Pearce said she was shopping for a vehicle and provided ChatGPT with all the information she wanted it to consider -- PDFs of listings, Carfax reports, etc. -- and asked it to focus just on those potential vehicles. It offered a deep, detailed analysis of the vehicles. What it didn't do was recommend random other vehicles it found on the internet -- which Pearce wanted to avoid.
"I had to give all that information to it," she said. "If I had just said look on whatever used car dealership's website, it wouldn't have been so rich."
Use AI as a starting point
Mahadevan compared generative AI today to what people thought of Wikipedia's reliability decades ago. Many people were skeptical that Wikipedia would be accurate because it was freely editable. But one advantage the free encyclopedia has is that its sources are easy to find. You might start from a Wikipedia page and end up reading the articles or documents that the entry cites, getting a more complete picture of what you were looking for. Mahadevan calls this "reading upstream."
"The core thing I always try to teach in media literacy is how to be an active consumer of information," he said. While gen AI can cut you off from those primary sources, it doesn't have to. You just have to keep digging a little more.