CNET
9 hours ago
- Health
- CNET
The Scientific Reason Why ChatGPT Leads You Down Rabbit Holes
That chatbot may only be telling you what you want to believe. Whether you're using a traditional search engine like Google or a conversational tool like OpenAI's ChatGPT, you tend to use terms that reflect your biases and perceptions, according to a new study published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even if your intent is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy having exactly two cups of joe first thing in the morning, you may search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe a tea purist), you might search for "is coffee bad for you?" instead. The researchers found that the framing of a question could skew the results -- I'd mostly get answers that show the benefits of coffee, while you'd get the opposite.

"When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and lead author of the study, told me.

These concerns about getting information that favors our own preconceptions are nothing new. Long before the internet, you'd learn about the world from a newspaper that might carry a particular slant. But the abundance of AI chatbots, and the confident and customized results they so freely give you, makes it easier to fall down a rabbit hole and harder to realize you're in it. There's never been a more important time to think deeply about how you get information online. The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants who were asked to conduct searches on certain preselected topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT and custom-designed search engines and AI chatbots.
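The framing effect the study describes can be sketched with a toy keyword search. This is purely illustrative -- the corpus, the scoring, and the two-term relevance cutoff are all invented here, not taken from the study -- but it shows how a "relevant results only" matcher hands each framing of the coffee question its own belief-confirming slice of the documents:

```python
# Toy "corpus" of snippets about coffee. Invented for illustration;
# none of this comes from the study itself.
CORPUS = [
    "coffee health benefits antioxidants focus",
    "coffee healthy moderate consumption heart",
    "coffee bad anxiety sleep disruption",
    "coffee bad caffeine dependence jitters",
]

def search(query: str, corpus=CORPUS):
    """Rank documents by word overlap with the query, keeping only
    documents that share at least two terms -- a crude stand-in for a
    narrowly 'relevant' result list."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score >= 2]

# Each framing of the same underlying question retrieves a different,
# belief-confirming subset of the corpus.
print(search("is coffee healthy"))
print(search("is coffee bad for you"))
```

The point of the sketch is that neither result list is wrong -- each is maximally relevant to its query -- which is exactly why narrow relevance alone can quietly reinforce whatever framing the searcher started with.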
The researchers' results showed that what they called the "narrow search effect" was a function of both how people asked questions and how the tech platforms responded. People have a habit, in essence, of asking the wrong questions (or asking them in the wrong way). They tended to use search terms or AI prompts that reflected what they already thought, and search engines and chatbots, designed to provide narrow, extremely relevant answers, delivered exactly that. "The answers end up basically just confirming what they believe in the first place," Leung said.

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

The researchers also checked to see whether participants changed their beliefs after conducting a search. When served a narrow selection of answers that largely confirmed their beliefs, participants were unlikely to change them significantly. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader array of answers, participants were more likely to change their minds.

Leung said platforms could give users the option of a broader, less tailored search, which could prove helpful when someone is trying to find a wider variety of sources. "Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I do think there is a lot of value in providing very focused and very narrow search results in certain situations."

3 ways to ask the right questions

If you want a broader array of answers to your questions, there are some things you can do, Leung said.

Be precise: Think specifically about what exactly it is you're trying to learn. Leung used the example of deciding whether to invest in a particular company's stock. Asking if it's a good stock or a bad stock to buy will likely skew your results -- more positive news if you ask if it's good, more negative news if you ask if it's bad.
Instead, try a single, more neutral search term. Or try both framings and compare the results of each.

Get other views: Especially with an AI chatbot, you can ask for a broad range of perspectives directly in the prompt. If you want to know whether you should keep drinking two cups of coffee a day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found they got more variety in the results. "We asked ChatGPT to provide different perspectives to answer the query from the participants and to provide as much evidence to back up those claims as possible," Leung said.

At some point, stop asking: Follow-up questions didn't work quite as well, Leung said. If those questions aren't getting broader answers, you may get the opposite effect -- even more narrow, affirming results. In many cases, people who asked lots of follow-up questions just "fell deeper down into the rabbit hole," she said.
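The "get other views" tip amounts to rewriting a yes/no question into a neutral, multi-perspective prompt before sending it to a chatbot. Here is a minimal sketch of that rewrite as a helper function; the function name and exact wording are my own, and only the underlying strategy (neutral framing plus an explicit request for several perspectives and their evidence) comes from the researchers:

```python
def balanced_prompt(topic: str, n_perspectives: int = 3) -> str:
    """Compose a neutrally framed chatbot prompt that asks for several
    viewpoints with supporting evidence, instead of an 'is X good?' or
    'is X bad?' framing that invites one-sided answers."""
    return (
        f"Give me {n_perspectives} different perspectives on {topic}. "
        "For each perspective, summarize the strongest evidence behind it, "
        "and note where the evidence is weak or contested."
    )

# Neutral topic phrase in, balanced prompt out -- paste the result into
# whichever chatbot you use.
print(balanced_prompt("the health effects of drinking coffee every day"))
```

The design choice mirrors the study's intervention: the bias correction happens in the wording of the query itself, before any search engine or model sees it.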

