WATCH: Huge manta ray swims under paddleboarder at Honeymoon Island


Yahoo | 18-05-2025

DUNEDIN, Fla. (WFLA) — Local drone operator John Yanchoris captured a rare sight Saturday evening: a huge manta ray swimming underneath a paddleboarder.
John said a marine biologist helped him identify it as a reef manta ray, one of the largest ray species in the world and an uncommon sight in Florida waters.
This one was spotted off Honeymoon Island at around 8 p.m. Saturday.
The reef manta is surpassed in size only by the giant oceanic manta ray, whose wingspan can reach almost 30 feet.
Reef mantas are mostly found in the tropical waters of the Indo-Pacific Ocean, usually in shallower water than their giant oceanic counterparts.
Their wingspans can reach up to 15 feet, and the one John captured appears to be almost the length of the paddleboard, which is typically 10 to 12 feet long.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

The Scientific Reason Why ChatGPT Leads You Down Rabbit Holes

CNET | an hour ago

That chatbot is only telling you what you want to believe, according to a new study. Whether you're using a traditional search engine like Google or a conversational tool like OpenAI's ChatGPT, you tend to use terms that reflect your biases and perceptions, according to the study, published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even if your intent is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy having exactly two cups of joe first thing in the morning, you may search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe a tea purist), you might search for "is coffee bad for you?" instead. The researchers found that the framing of questions could skew the results -- I'd mostly get answers that show the benefits of coffee, while you'd get the opposite.

"When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and lead author of the study, told me.

The abundance of AI chatbots, and the confident and customized results they so freely give you, makes it easier to fall down a rabbit hole and harder to realize you're in it. There's never been a more important time to think deeply about how you get information online. The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants who were asked to conduct searches on certain preselected topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT and custom-designed search engines and AI chatbots.
The researchers' results showed that what they called the "narrow search effect" was a function of both how people asked questions and how the tech platforms responded. People have a habit, in essence, of asking the wrong questions (or asking questions in the wrong way). They tended to use search terms or AI prompts that demonstrated what they already thought, and search engines and chatbots, designed to provide narrow, extremely relevant answers, delivered exactly that. "The answers end up basically just confirming what they believe in the first place," Leung said.

Read more: AI Essentials: 29 Ways to Make Gen AI Work for You, According to Our Experts

The researchers also checked to see if participants changed their beliefs after conducting a search. When served a narrow selection of answers that largely confirmed their beliefs, they were unlikely to see significant changes. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader array of answers, they were more likely to change their minds.

Leung said platforms could provide users with the option of a broader, less tailored search, which could prove helpful in situations where the user is trying to find a wider variety of sources. "Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I do think there is a lot of value in providing very focused and very narrow search results in certain situations."

3 ways to ask the right questions

If you want a broader array of answers to your questions, there are some things you can do, Leung said.

Be precise: Think specifically about what exactly it is you're trying to learn. Leung used the example of deciding whether to invest in a particular company's stock. Asking if it's a good stock or a bad stock to buy will likely skew your results -- more positive news if you ask if it's good, more negative news if you ask if it's bad.
Instead, try a single, more neutral search term. Or ask both terms and evaluate the results of each.

Get other views: Especially with an AI chatbot, you can ask for a broad range of perspectives directly in the prompt. If you want to know if you should keep drinking two cups of coffee a day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found they got more variety in results. "We asked ChatGPT to provide different perspectives to answer the query from the participants and to provide as much evidence to back up those claims as possible," Leung said.

At some point, stop asking: Follow-up questions didn't work quite as well, Leung said. If those questions aren't getting broader answers, you may get the opposite effect -- even more narrow, affirming results. In many cases, people who asked lots of follow-up questions just "fell deeper down into the rabbit hole," she said.

ASCO 2025: Key Highlights in Endometrial and Related Cancers

Medscape | an hour ago

Ursula A. Matulonis, MD, shares highlights from several key studies showcasing promising developments in the treatment of endometrial cancer. A phase 2 trial led by Dr Panagiotis Konstantinopoulos evaluated a combination of letrozole, abemaciclib, and metformin in patients with estrogen receptor (ER)-positive recurrent disease, reporting promising results. In the DUO-E trial, a post hoc circulating tumor DNA (ctDNA) analysis demonstrated that durvalumab-based therapy reduced ctDNA levels, particularly in mismatch repair-deficient tumors, with further reductions observed when olaparib was added during maintenance. Additional studies underscored therapeutic innovation: a phase 2 trial of benmelstobart, an anti-programmed death-ligand 1 (PD-L1) antibody, with or without anlotinib in combination with chemotherapy showed high response rates, and HB0025, a bispecific antibody targeting PD-L1 and VEGF, also achieved encouraging preliminary results in first-line treatment.

Beyond endometrial cancer, notable progress was reported across other gynecologic malignancies. In cervical cancer, the phase 3 KEYNOTE-A18 trial confirmed the benefit of pembrolizumab plus concurrent chemoradiotherapy, significantly improving both progression-free and overall survival. In vulvar cancer, a phase 2 trial combining pembrolizumab, cisplatin, and radiation therapy showed encouraging antitumor activity. In another phase 2 trial, the HER2-directed antibody-drug conjugate SHR-A1811 demonstrated promising response rates in ovarian, endometrial, and cervical cancers. In ovarian cancer, the FIRST/ENGOT-OV44 trial did not meet its secondary endpoint with the combination of dostarlimab and niraparib maintenance over niraparib alone. However, the ROSELLA trial showed improved overall survival with the combination of relacorilant (a glucocorticoid receptor antagonist) and nab-paclitaxel in platinum-resistant disease.

Getting Good Results From AI and Search Engines Means Asking the Right Questions

CNET | an hour ago

The way you search online or ask an AI chatbot for information can influence the results you get, even if you aren't trying to find information that reinforces your own beliefs, according to a new study. People tend to use terms, whether in a traditional search engine like Google or a conversational tool like OpenAI's ChatGPT, that reflect their existing biases and perceptions, according to the study, published this spring in the Proceedings of the National Academy of Sciences. More importantly, search engines and chatbots often provide results that reinforce those beliefs, even if the intent is to learn more about the topic.

For example, imagine you're trying to learn about the health effects of drinking coffee every day. If you, like me, enjoy having a couple of cups of joe first thing in the morning, you may search for something like "is coffee healthy?" or "health benefits of coffee." If you're already skeptical (maybe a tea purist), you might search for "is coffee bad for you?" instead. The researchers found that the framing of questions could skew the results -- I'd mostly get answers that show the benefits of coffee, while you'd get the opposite.

"When people look up information, whether it's Google or ChatGPT, they actually use search terms that reflect what they already believe," Eugina Leung, an assistant professor at Tulane University and lead author of the study, told me.

These concerns about how we get information that favors our own preconceptions are nothing new. Long before the internet, you'd learn about the world from a newspaper that might carry a particular slant. But the prevalence of search engines and social media makes it easier to fall down a rabbit hole and harder to realize you're in it.
With AI chatbots and AI-powered search telling you with confidence what you should know, and sometimes making it up or not telling you where the information comes from, there's never been a more important time to think deeply about how you get information online. The question is: How do you get the best answers?

Asking the wrong questions

The researchers conducted 21 studies with nearly 10,000 participants who were asked to perform searches on certain preselected topics, including the health effects of caffeine, gas prices, crime rates, COVID-19 and nuclear energy. The search engines and tools used included Google, ChatGPT and custom-designed search engines and AI chatbots.

The researchers' results showed that what they called the "narrow search effect" was a function of both how people asked questions and how the tech platforms responded. People have a habit, in essence, of asking the wrong questions (or asking questions in the wrong way). They tended to use search terms or AI prompts that demonstrated what they already thought, and search engines and chatbots, designed to provide narrow, extremely relevant answers, delivered exactly that. "The answers end up basically just confirming what they believe in the first place," Leung said.

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

The researchers also checked to see if participants changed their beliefs after conducting a search. When served a narrow selection of answers that largely confirmed their beliefs, they were unlikely to see significant changes. But when the researchers provided a custom-built search engine and chatbot designed to offer a broader array of answers, they were more likely to change their minds.

Leung said platforms could provide people with the option of a broader search, which could prove helpful in situations where the user is trying to find a wider variety of sources.
"Our research is not trying to suggest that search engines or algorithms should always broaden their search results," she said. "I do think there is a lot of value in providing very focused and very narrow search results in certain situations."

How to ask the right questions

If you want a broader array of answers to your questions, there are some things you can do, Leung said.

First, think specifically about what exactly it is you're trying to learn. She used the example of deciding whether to invest in a particular company's stock. Asking if it's a good stock or a bad stock to buy will likely skew your results -- more positive news if you ask if it's good, more negative news if you ask if it's bad. Instead, try a single, more neutral search term. Or ask both terms and evaluate the results of each.

Especially with an AI chatbot, you can ask for a broad range of perspectives directly in the prompt. If you want to know if you should keep drinking two cups of coffee a day, ask the chatbot for a variety of opinions and the evidence behind them. The researchers tried this in one of their experiments and found they got more variety in results. "We asked ChatGPT to provide different perspectives to answer the query from the participants and to provide as much evidence to back up those claims as possible," Leung said.

Asking follow-up questions didn't work quite as well, Leung said. If those questions aren't getting broader answers, you may get the opposite effect -- even more narrow, affirming results. In many cases, people who asked lots of follow-up questions just "fell deeper down into the rabbit hole," she said.
