AI Chatbots Are Putting Clueless Hikers in Danger, Search and Rescue Groups Warn

Yahoo, 20 May 2025

Two hikers trying to tackle Unnecessary Mountain near Vancouver, British Columbia, had to call in a rescue team after they stumbled into snow. The pair were only wearing flat-soled sneakers, unaware that the higher altitudes of a mountain range only some 15 degrees of latitude south of the Arctic Circle might still be snowy in the spring.
"We ended up going up there with boots for them," Brent Calkin, leader of the Lions Bay Search and Rescue team, told the Vancouver Sun. "We asked them their boot size and brought up boots and ski poles."
It turns out that to plan their ill-fated expedition, the hikers heedlessly followed the advice given to them by Google Maps and the AI chatbot ChatGPT.
Now, Calkin and his rescue team are warning that maybe you shouldn't rely on dodgy apps and AI chatbots — a piece of technology known for lying and being wrong all the time — to plan a grueling excursion through the wilderness.
"With the amount of information available online, it's really easy for people to get in way over their heads, very quickly," Calkin told the Vancouver Sun.
Across the pond, a recent report from Mountain Rescue England and Wales blamed social media and bad navigation apps for a historic surge in rescue teams being called out, the newspaper noted.
Stephen Hui, author of the book "105 Hikes," echoed that warning and cautioned that getting reliable information is one of the biggest challenges presented by AI chatbots and apps. With AI in particular, Hui told the Vancouver Sun, it's not always easy to tell if it's giving you outdated information from an obscure source or if it's pulling from a reliable one.
From his testing of ChatGPT, Hui wasn't too impressed. Sure, it can give you "decent directions" on the popular trails, he said, but it struggles with the obscure ones.
Most of all, AI chatbots struggle with giving you relevant real-time information.
"Time of year is a big deal in [British Columbia]," Hui told the Vancouver Sun. "The most sought-after view is the mountain top, but that's really only accessible to hikers from July to October. In winter, people may still be seeking those views and not realize that there's going to be snow."
When Calkin tested ChatGPT, he found that a "good input" made a big difference in terms of the quality of the answers he got. Of course, the type of person asking a chatbot for hiking advice probably won't know the right questions to ask.
Instead of an AI chatbot, Calkin suggested, you might try asking a human being with experience in the area you're looking at, the kind of person you can find on indispensable founts of wisdom like Reddit forums and Facebook groups.
"Someone might tell you there's a storm coming in this week," Calkin told the Vancouver Sun. "Or I was just up there Wednesday and it looks good. Or you're out of your mind, don't take your six-year-old on that trail."
More on AI: Elon Musk's AI Just Went There



Related Articles

OpenAI to continue working with Scale AI after Meta deal

Yahoo

an hour ago


PARIS (Reuters) - OpenAI plans to continue working with Scale AI after rival Meta on Friday agreed to take a 49% stake in the artificial intelligence startup for $14.8 billion, OpenAI's CFO Sarah Friar told the VivaTech conference in Paris. Scale AI provides vast amounts of labelled or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. "We don't want to ice the ecosystem because acquisitions are going to happen," she said. "And if we ice each other out, I think we're actually going to slow the pace of innovation."

The Newspaper That Hired ChatGPT

Yahoo

an hour ago


The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

For more than 20 years, print media has been a bit of a punching bag for digital-technology companies. Craigslist killed the paid classifieds, free websites led people to think newspapers and magazines were committing robbery when they charged for subscriptions, and the smartphone and social media turned reading full-length articles into a chore. Now generative AI is in the mix—and many publishers, desperate to avoid being left behind once more, are rushing to harness the technology themselves. Several major publications, including The Atlantic, have entered into corporate partnerships with OpenAI and other AI firms. Any number of experiments have ensued—publishers have used the software to help translate work into different languages, draft headlines, and write summaries or even articles.

But perhaps no publication has gone further than the Italian newspaper Il Foglio. For one month, beginning in late March, Il Foglio printed a daily insert consisting of four pages of AI-written articles and headlines. Each day, Il Foglio's top editor, Claudio Cerasa, asked ChatGPT Pro to write articles on various topics—Italian politics, J. D. Vance, AI itself. Two humans reviewed the outputs for mistakes, sometimes deciding to leave in minor errors as evidence of AI's fallibility and, at other times, asking ChatGPT to rewrite an article. The insert, titled Il Foglio AI, was almost immediately covered by newspapers around the world. 'It's impossible to hide AI,' Cerasa told me recently. 'And you have to understand that it's like the wind; you have to manage it.'

Now the paper—which circulates about 29,000 copies each day, in addition to serving its online readership—plans to embrace AI-written content permanently, issuing a weekly AI section and, on occasion, using ChatGPT to write articles for the standard paper. (These articles will always be labeled.) Cerasa has already used the technology to generate fictional debates, such as an imagined conversation between a conservative and a progressive cardinal on selecting a new pope; a review of the columnist Beppe Severgnini's latest book, accompanied by Severgnini's AI-written retort; the chatbot's advice on what to do if you suspect you're falling in love with a chatbot ('Do not fall in love with me'); and an interview with Cerasa himself, conducted by ChatGPT.

Il Foglio's AI work is full-fledged and transparently so: natural and artificial articles, clearly divided. Meanwhile, other publications provide limited, or sometimes no, insight into their usage of the technology, and some have even mixed AI and human writing without disclosure. As if to demonstrate how easily the commingling of AI and journalism can go sideways, just days after Cerasa and I first spoke, at least two major regional American papers published a spread of more than 50 pages titled 'Heat Index,' which was riddled with errors and fabrications; a freelancer who'd contributed to the project admitted to using ChatGPT to generate at least some portions of the text, resulting in made-up book titles and expert sources who didn't actually exist. The result was an embarrassing example of what can result when the technology is used to cut corners.

[Read: At least two newspapers syndicated AI garbage]

With so many obvious pitfalls to using AI, I wanted to speak with Cerasa to understand more about his experiment. Over Zoom, he painted an unsettling, if optimistic, portrait of his experience with AI in journalism. Sure, the technology is flawed.
It's prone to fabrications; his staff has caught plenty of them, and has been taken to task for publishing some of those errors. But when used correctly, it writes well—at times more naturally, Cerasa told me, than even his human staff. Still, there are limits. 'Anyone who tries to use artificial intelligence to replace human intelligence ends up failing,' he told me when I asked about the 'Heat Index' disaster. 'AI is meant to integrate, not replace.' The technology can benefit journalism, he said, 'only if it's treated like a new colleague—one that needs to be looked after.'

The problem, perhaps, stems from using AI to substitute rather than augment. In journalism, 'anyone who thinks AI is a way to save money is getting it wrong,' Cerasa said. But economic anxiety has become the norm for the field. A new robot colleague could mean one, or three, or 10 fewer human ones. What, if anything, can the rest of the media learn from Il Foglio's approach? Our conversation has been edited for length and clarity.

Matteo Wong: In your first experiment with AI, you hid AI-written articles in your paper for a month and asked readers if they could detect them. How did that go? What did you learn?

Claudio Cerasa: A year ago, for one month, every day we put in our newspaper an article written with AI, and we asked our readers to guess which article was AI-generated, offering the prize of a one-year subscription and a bottle of champagne. The experiment helped us create better prompts for the AI to write an article, and helped us humans write better articles as well. Sometimes an article written by people was seen as an article written by AI: for instance, when an article is written with numbered points—first, second, third. So we changed something in how we write too.

Wong: Did anybody win?

Cerasa: Yes, we offered a lot of subscriptions and champagne. More than that, we realized we needed to speak about AI not just in our newspaper, but all over the world. We created this thing that is important not only because it is journalism with AI, but because it combines the oldest way to do information, the newspaper, and the newest, artificial intelligence.

Wong: How did your experience of using ChatGPT change when you moved from that original experiment to a daily imprint entirely written with AI?

Cerasa: The biggest thing that has changed is our prompt. At the beginning, my prompt was very long, because I had to explain a lot of things: You have to write an article with this style, with this number of words, with these ideas. Now, after a lot of use of ChatGPT, it knows better what I want to do. When you start to use, in a transparent way, artificial intelligence, you have a personal assistant: a new person that works in the newspaper. It's like having another brain. It's a new way to do journalism.

Wong: What are the tasks and topics you've found that ChatGPT is good at and for which you'd want to use it? And conversely, where are the areas where it falls short?

Cerasa: In general, it is good at three things: research, summarizing long documents, and, in some cases, writing. I'm sure in the future, and maybe in the present, many editors will try to think of ways AI can erase journalists. That could be possible, because if you are not a journalist with enough creativity, enough reporting, enough ideas, maybe you are worse than a machine. But in that case, the problem is not the machine. The technology can also recall and synthesize far more information than a human can. The first article we put in the normal newspaper written with AI was about the discovery of a key ingredient for life on a distant planet. We asked the AI to write a piece on great authors of the past and how they imagined the day scientists would make such a discovery. A normal person would not be able to remember all these things.

Wong: And what can't the AI do?

Cerasa: AI cannot find the news; it cannot develop sources or interview the prime minister. AI also doesn't have interesting ideas about the world—that's where natural intelligence comes in. AI is not able to draw connections in the same way as intelligent human journalists. I don't think an AI would be able to come up with and fully produce a newspaper generated by AI.

Wong: You mentioned before that there may be some articles or tasks at a newspaper that AI can already write or perform better than humans, but if so, the problem is an insufficiently skilled person. Don't you think young journalists have to build up those skills over time? I started at The Atlantic as an assistant editor, not a writer, and my primary job was fact-checking. Doesn't AI threaten the talent pipeline, and thus the media ecosystem more broadly?

Cerasa: It's a bit terrifying, because we've come to understand how many creative things AI can do. For our children to use AI to write something in school, to do their homework, is really terrifying. But AI isn't going away—you have to educate people to use it in the correct way, and without hiding it. In our newspaper, there is no fear about AI, because our newspaper is very particular and written in a special way. We know, in a snobby way, that our skills are unique, so we are not scared. But I'm sure that a lot of newspapers could be scared, because normal articles written about the things that happened the day before, with the agency news—that kind of article, and also that kind of journalism, might be the past.

Article originally published at The Atlantic

Meta AI searches made public - but do all its users realise?

Yahoo

2 hours ago


How would you feel if your internet search history was put online for others to see? That may be happening to some users of Meta AI without them realising, as people's prompts to the artificial intelligence tool - and the results - are posted on a public feed.

One internet safety expert said it was "a huge user experience and security problem" as some posts are easily traceable, through usernames and profile pictures, to social media accounts. This means some people may be unwittingly telling the world about things they may not want others to know they have searched for - such as asking the AI to generate scantily-clad characters or help them cheat on tests.

Meta has been contacted for comment.

It is not clear if the users know that their searches are being posted into a public feed on the Meta AI app and website, though the process is not automatic. If people choose to share a post, a message pops up which says: "Prompts you post are public and visible to everyone... Avoid sharing personal or sensitive information."

The BBC found several examples of people uploading photos of school or university test questions, and asking Meta AI for answers. One of the chats is titled "Generative AI tackles math problems with ease". There were also searches for women and anthropomorphic animal characters wearing very little clothing. One search, which could be traced back to a person's Instagram account because of their username and profile picture, asked Meta AI to generate an image of an animated character lying outside wearing only underwear.

Meanwhile, tech news outlet TechCrunch has reported examples of people posting intimate medical questions - such as how to deal with an inner thigh rash.

Meta AI, launched earlier this year, can be accessed through its social media platforms Facebook, Instagram and WhatsApp. It is also available as a standalone product which has a public "Discover" feed. Users can opt to make their searches private in their account settings.
Meta AI is currently available in the UK through a browser, while in the US it can be used through an app. In a press release from April which announced Meta AI, the company said there would be "a Discover feed, a place to share and explore how others are using AI". "You're in control: nothing is shared to your feed unless you choose to post it," it said.

But Rachel Tobac, chief executive of US cyber security company Social Proof Security, posted on X saying: "If a user's expectations about how a tool functions don't match reality, you've got yourself a huge user experience and security problem." She added that people do not expect their AI chatbot interactions to be made public on a feed normally associated with social media. "Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked," she said.
