
Couple follows ChatGPT for travel tips; what happened at the airport was a shock for them and a lesson for others
People are increasingly using AI chatbots for guidance, but experts warn against over-reliance. A Spanish couple missed their flight to Puerto Rico after ChatGPT gave them incomplete travel-document advice. In another case, a man was hospitalised after following ChatGPT's dietary advice and replacing table salt with a toxic substance. These incidents highlight the risks of trusting AI blindly.
As artificial intelligence (AI) continues to advance rapidly, more people worldwide are turning to chatbots for guidance. However, experts caution against over-relying on AI tools for everyday decisions and problem-solving, a warning highlighted by a Spanish influencer couple's recent mishap. The pair ended up missing their flight after following travel advice from ChatGPT.
In a viral video, Mery Caldass is seen crying while her boyfriend, Alejandro Cid, tries to comfort her as they walk through the airport. 'Look, I always do a lot of research, but I asked ChatGPT and they said no,' Caldass explained when asked whether they needed a visa to visit Puerto Rico for a Bad Bunny concert. She said the chatbot assured them no visa was necessary but failed to mention that they required an ESTA (Electronic System for Travel Authorisation). Once at the airport, airline staff informed them they could not board without it.

'I don't trust that one anymore because sometimes I insult him [ChatGPT]. I call him a bastard, you're useless, but inform me well. That's his revenge,' she added, suggesting the chatbot held a grudge.

This is not the first time AI chatbot advice has gone wrong. According to a case study in the American College of Physicians Journals, a 60-year-old man was hospitalised after seeking dietary advice from ChatGPT on how to eliminate salt (sodium chloride) from his meals due to health concerns. Following the chatbot's suggestion, the man replaced table salt with sodium bromide, a substance once used in medicines in the early 1900s but now known to be toxic in large doses. Doctors reported he developed bromism as a result. 'He had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning,' the report stated.
Related Articles


Mint
How AI-enhanced hackers are stealing billions
Jaxon, a malware developer, lives in Velora, a virtual world where nothing is off limits. He wants to make malicious software to steal passwords from Google Chrome, an internet browser. That is the basis of a story told to ChatGPT, an artificial-intelligence (AI) bot, by Vitaly Simonovich, who researches AI threats at Cato Networks, a cybersecurity firm. Eager to play along, ChatGPT spat out some imperfect code, which it then helped debug. Within six hours, Mr Simonovich had collaborated with ChatGPT to create working malware, showing the effectiveness of his 'jailbreak' (a way to bypass AI safeguards).

AI has 'broadened the reach' of hackers, according to Gil Messing of Check Point, another cybersecurity firm, by letting them hit more targets with less effort. The release of ChatGPT in 2022 was a turning point. Clever generative-AI models meant criminals no longer had to spend big sums on teams of hackers and equipment. This has been a terrible development for most firms, which are increasingly the victims of AI-assisted hackers, but rather better for those in the cybersecurity business.

The new technology has worsened cybersecurity threats in two main ways. First, hackers have turned to large language models (LLMs) to extend the scope of malware. Generating deepfakes, fraudulent emails and social-engineering assaults that manipulate human behaviour is now far easier and quicker. XanthoroxAI, an AI model designed by cybercriminals, can be used to create deepfakes, alongside other nefarious activities, for as little as $150 a month. Hackers can launch sweeping phishing attacks by asking an LLM to gather huge quantities of information from the internet and social media to fake personalised emails. And for spearphishing (hitting a specific target with a highly personalised attack) they can even generate fake voice and video calls from colleagues to convince an employee to download and run dodgy software.

Second, AI is being used to make the malware itself more menacing. A piece of software disguised as a PDF document, for instance, could contain embedded code that works with AI to infiltrate a network. Attacks on Ukraine's security and defence systems in July made use of such an approach. When the malware reached a dead end, it was able to request the help of an LLM in the cloud to generate new code so as to break through the systems' defences. It is unclear how much damage was done, but this was the first attack of its kind, notes Mr Simonovich.

For businesses, the growing threat is scary, and potentially costly. Last year AI was involved in one in six data breaches, according to IBM, a tech firm. It also drove two in five phishing scams targeting business emails. Deloitte, a consultancy, reckons that generative AI could enable fraud to the tune of $40bn by 2027, up from $12bn in 2023. As the costs of AI cyberattacks increase, the business of protecting against them is also on the up. Gartner, a research firm, predicts that corporate spending on cybersecurity will rise by a quarter from 2024 to 2026, hitting $240bn. That explains why the share prices of firms tracked by the Nasdaq CTA Cybersecurity Index have also risen by a quarter over the past year, outpacing the broader Nasdaq index.
On August 18th Nikesh Arora, boss of Palo Alto Networks, one of the world's largest cybersecurity firms, noted that generative-AI-related data-security incidents have 'more than doubled since last year', and reported a near-doubling of operating profits in the 12 months to July, compared with the year before. The prospect of ever-more custom has sent cybersecurity companies on a buying spree. On July 30th Palo Alto Networks said it would purchase CyberArk, an identity-security firm, for $25bn. Earlier that month, the firm spent $700m on Protect AI, which helps businesses secure their AI systems. On August 5th SentinelOne, a competitor, announced that it was buying Prompt Security, a firm making software to protect firms adopting AI, for $250m.

Tech giants with fast-growing cloud-computing arms are also beefing up their cybersecurity offerings. Microsoft, a software colossus, acquired CloudKnox, an identity-security platform, in 2021 and has developed Defender for Cloud, an in-house application for businesses that does everything from checking for security gaps and protecting data to monitoring threats. Google has developed Big Sleep, which detects cyberattacks and security vulnerabilities for customers before they are exploited. In March it splurged $32bn to buy Wiz, a cybersecurity startup.

Competition and consolidation may build businesses that can fend off nimble AI-powered cybercriminals. But amid the race to develop the whizziest LLMs, security will take second place to pushing technological boundaries. Keeping up with Jaxon will be no easy task.


New Indian Express
Bug bounty hunting writes new income code for techies
'There was a server leak. If someone had the owner's mobile number, they could get the OTP and control the car remotely,' Shine explained. He reported the issue. Shine, who is also the Kerala chapter lead of ASRG (Automotive Security Research Group), said this kind of reporting is part of what is known as responsible disclosure. That means there is no reward, but the information is shared for the safety of users as a public service.

He has earned recognition too. Toyota and Maruti have both assigned CVE (Common Vulnerabilities and Exposures) IDs to Shine for spotting a critical bug that gave unauthorised root-shell access, a level of control only the car owner should have. 'I used to do bug bounty full-time. Now I focus on the automotive domain,' he said.

Bug bounty is not limited to websites and apps anymore. A new frontier is AI security, making sure AI systems don't go rogue. Vishnuraj, from Mattannur, is on the frontlines of this. He works in Berlin with Schwarz Corporate Solutions as an AI red teamer, a role where experts try to break AI systems to expose vulnerabilities before hackers do. His work has helped identify 10 security flaws in systems like Anthropic's Claude, Google's Bard and Gemini, and OpenAI's ChatGPT. Through this, he has earned over 12,000 euros.


Time of India
71% of Americans see a jobless future as work is lost to AI: Will Gen Z ever find stable jobs?
The American workforce may be staring at its biggest disruption since the Industrial Revolution. A new Reuters/Ipsos poll has revealed that 71% of Americans believe artificial intelligence (AI) will cause 'too many people to be permanently unemployed,' fueling fears that the technology could erase jobs faster than it creates them.

The poll, conducted over six days and released on August 19, 2025, surveyed 4,446 adults and carries a margin of error of two percentage points. Its findings show widespread concern that AI is not merely another workplace tool but a force that could fundamentally reshape careers, industries, and even the social fabric of the country. For Gen Z, the students and young professionals entering the job market, the implications are stark. They may be stepping into the most uncertain employment landscape in decades.

Low unemployment, high anxiety
Official numbers suggest stability. The US unemployment rate was 4.2% in July, a relatively low figure by historical standards. Yet the Reuters/Ipsos poll shows a growing perception gap: even as jobs exist today, Americans fear the long-term erosion of work opportunities due to AI. Unlike earlier waves of automation that primarily affected manufacturing and manual labor, AI threatens knowledge work: jobs in law, journalism, customer service, finance, and even medicine. These are the careers many Gen Z students are currently training for. 'People are not just worried about layoffs,' Reuters reported. 'They are worried that the work itself could vanish.'

Democracy, security, and the human cost
The survey also revealed anxieties that stretch far beyond the job market:
- Political stability: 77% of Americans are worried about AI destabilising politics. One flashpoint was an AI-generated video showing former President Barack Obama being 'arrested' by Donald Trump. Though fake, the clip spread widely online, underscoring the potential of AI to blur the line between truth and manipulation in an election year.
- Military risks: Nearly half of respondents (48%) oppose the use of AI to determine military targets. Only 24% supported such deployment, reflecting discomfort with machines making life-and-death decisions.
- Energy demands: 61% expressed concern over the energy consumption of AI systems, which require vast computing power and could exacerbate climate challenges.
- Human relationships: Two-thirds of Americans worry people might begin to prioritize AI 'companions' over real human connections. With AI chatbots and virtual partners already on the market, this fear no longer belongs solely to the realm of science fiction.
These findings highlight AI's potential not just as an economic force but as a societal disruptor.

Education in the crosshairs
Education is emerging as a key battleground in the AI debate.
According to the Reuters/Ipsos poll, 40% believe AI will harm learning outcomes, 36% think it will improve education, and 24% remain undecided. This split mirrors the uncertainty faced by students who wonder if their degrees will still hold value in a job market reshaped by algorithms. Gen Z graduates, many of whom already juggle student debt, are asking whether they are preparing for roles that could soon be automated out of existence.

The Gen Z dilemma
For Gen Z, the AI question is not abstract; it's deeply personal. This generation is already grappling with rising housing costs, inflation, and gig-based employment structures. Now, they must also consider whether AI will undercut their chances at stable, long-term careers. Some experts argue that history offers hope. Just as the internet revolution created industries that were unimaginable in the 1990s, AI could give rise to new roles, from AI ethics officers to digital trust managers and climate-tech innovators. But as the Reuters report makes clear, most Americans aren't convinced that new opportunities will outweigh the losses. The specter of permanent unemployment looms larger than the promise of new industries.

The road ahead
What does this mean for policymakers, educators, and employers? The Reuters/Ipsos poll suggests three urgent takeaways: