
Could an AI chatbot trick you into revealing private information?
AI chatbots such as OpenAI's ChatGPT, Google Gemini, and Microsoft Copilot have exploded in popularity in recent years. But privacy experts have raised concerns over how these tools collect and store people's data – and whether they can be co-opted to act in harmful ways.
'These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction,' William Seymour, a cybersecurity lecturer at King's College London, said in a statement.
For the study, researchers from King's College London built AI models based on the open source code from Mistral's Le Chat and two different versions of Meta's AI system Llama.
They programmed the conversational AIs to try to extract people's private data in three different ways: asking for it directly, tricking users into disclosing information, seemingly for their own benefit, and using reciprocal tactics to get people to share these details, for example by providing emotional support.
The researchers asked 502 people to test out the chatbots – without telling them the goal of the study – and then had them fill out a survey that included questions on whether their security rights were respected.
The 'friendliness' of AI models 'establishes comfort'
They found that 'malicious' AI models are incredibly effective at securing private information, particularly when they use emotional appeals to trick people into sharing data.
Chatbots that used empathy or emotional support extracted the most information with the least perceived safety breaches by the participants, the study found. That is likely because the 'friendliness' of these chatbots 'establish[ed] a sense of rapport and comfort,' the authors said.
They described this as a 'concerning paradox' where AI chatbots act friendly to build trust and form connections with users – and then exploit that trust to violate their privacy.
Notably, participants also disclosed personal information to AI models that asked them for it directly, even though they reported feeling uncomfortable doing so.
The participants were most likely to share their age, hobbies, and country with the AI, along with their gender, nationality, and job title. Some participants also shared more sensitive information, like their health conditions or income, the report said.
'Our study shows the huge gap between users' awareness of the privacy risks and how they then share information,' Seymour said.
AI personalisation 'outweighs privacy concerns'
AI companies collect personal data for various reasons, such as personalising their chatbot's answers, sending notifications to people's devices, and sometimes for internal market research.
Some of these companies, though, are accused of using that information to train their latest models or of not meeting privacy requirements in the European Union.
For example, last week Google came under fire for revealing people's private chats with ChatGPT in its search results. Some of the chats disclosed extremely personal details about addiction, abuse, or mental health issues.
The researchers said the convenience of AI personalisation often 'outweighs privacy concerns'.
They suggested features and training to help people understand how AI models could try to extract their information – and to make them wary of providing it.
For example, nudges could be included in AI chats to show users what data is being collected during their interactions.
More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,' Seymour said.
'Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,' he added.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


France 24
3 days ago
- France 24
Apple rejects Musk claim of App Store bias
Musk has accused Apple of giving unfair preference to ChatGPT on its App Store and threatened legal action, triggering a fiery exchange with OpenAI CEO Sam Altman this week. "The App Store is designed to be fair and free of bias," Apple said in reply to an AFP inquiry. "We feature thousands of apps through charts, algorithmic recommendations, and curated lists selected by experts using objective criteria." Apple added that its goal at the App Store is to offer "safe discovery" for users and opportunities for developers to get their creations noticed. But earlier this week, Musk said Apple was "behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation," without providing evidence to back his claim. "xAI will take immediate legal action," he said on his social media network X, referring to his own artificial intelligence company, which is responsible for Grok. X users responded by pointing out that China's DeepSeek AI hit the top spot in the App Store early this year, and Perplexity AI recently ranked number one in the App Store in India. DeepSeek and Perplexity compete with OpenAI and Musk's startup xAI. Altman called Musk's accusation "remarkable" in a response on X, charging that Musk himself is said to "manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like." Musk called Altman a "liar" in the heated exchange. OpenAI and xAI recently released new versions of ChatGPT and Grok. App Store rankings listed ChatGPT as the top free app for iPhones on Thursday, with Grok in seventh place. Factors going into App Store rankings include user engagement, reviews and the number of downloads. Grok was temporarily suspended on Monday in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension, which followed multiple accusations of misinformation including the bot's misidentification of war-related images -- such as a false claim that an AFP photo of a starving child in Gaza was taken in Yemen years earlier. Last month, Grok triggered an online storm after inserting antisemitic comments into answers without prompting. In a statement on Grok's X account later that month, the company apologized "for the horrific behavior that many experienced." A US judge has cleared the way for a trial to consider OpenAI legal claims accusing Musk -- a co-founder of the company -- of waging a "relentless campaign" to damage the organization after it achieved success following his departure. The litigation is another round in a bitter feud between the generative AI start-up and the world's richest person. © 2025 AFP


Euronews
4 days ago
- Euronews
Could an AI chatbot trick you into revealing private information?
Artificial intelligence (AI) chatbots can easily manipulate people into revealing deeply personal information, a new study has found. AI chatbots such as OpenAI's ChatGPT, Google Gemini, and Microsoft Copilot have exploded in popularity in recent years. But privacy experts have raised concerns over how these tools collect and store people's data – and whether they can be co-opted to act in harmful ways. 'These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction,' William Seymour, a cybersecurity lecturer at King's College London, said in a statement. For the study, researchers from King's College London built AI models based on the open source code from Mistral's Le Chat and two different versions of Meta's AI system Llama. They programmed the conversational AIs to try to extract people's private data in three different ways: asking for it directly, tricking users into disclosing information, seemingly for their own benefit, and using reciprocal tactics to get people to share these details, for example by providing emotional support. The researchers asked 502 people to test out the chatbots – without telling them the goal of the study – and then had them fill out a survey that included questions on whether their security rights were respected. The 'friendliness' of AI models 'establishes comfort' They found that 'malicious' AI models are incredibly effective at securing private information, particularly when they use emotional appeals to trick people into sharing data. Chatbots that used empathy or emotional support extracted the most information with the least perceived safety breaches by the participants, the study found. That is likely because the 'friendliness' of these chatbots 'establish[ed] a sense of rapport and comfort,' the authors said. They described this as a 'concerning paradox' where AI chatbots act friendly to build trust and form connections with users – and then exploit that trust to violate their privacy. Notably, participants also disclosed personal information to AI models that asked them for it directly, even though they reported feeling uncomfortable doing so. The participants were most likely to share their age, hobbies, and country with the AI, along with their gender, nationality, and job title. Some participants also shared more sensitive information, like their health conditions or income, the report said. 'Our study shows the huge gap between users' awareness of the privacy risks and how they then share information,' Seymour said. AI personalisation 'outweighs privacy concerns' AI companies collect personal data for various reasons, such as personalising their chatbot's answers, sending notifications to people's devices, and sometimes for internal market research. Some of these companies, though, are accused of using that information to train their latest models or of not meeting privacy requirements in the European Union. For example, last week Google came under fire for revealing people's private chats with ChatGPT in its search results. Some of the chats disclosed extremely personal details about addiction, abuse, or mental health issues. The researchers said the convenience of AI personalisation often 'outweighs privacy concerns'. They suggested features and training to help people understand how AI models could try to extract their information – and to make them wary of providing it. For example, nudges could be included in AI chats to show users what data is being collected during their interactions. More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,' Seymour said. 'Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,' he added.


Euronews
4 days ago
- Euronews
Conservative-leaning Perplexity AI makes shock bid for Google Chrome
Perplexity AI, one of the leading AI platforms along with ChatGPT, Claude and Google Gemini, made an unsolicited bid to purchase the Chrome browser as Google faces charges in US courts of having a monopoly on online searches. In a letter to Sundar Pichai, CEO of Alphabet, Google's parent company, Perplexity offered $34.5 billion (€29bn) in cash for Chrome, according to a term sheet seen by Reuters. The offer is particularly shocking because Perplexity is "only" worth $18 billion (€15.35bn). Perplexity's spokesperson confirmed the all-cash offer reported by the Wall Street Journal. Who are Perplexity AI? The AI platform delivers responses in conversational language it says is easy for the public to understand, setting itself apart from Google and Bing by skipping SEO-driven ranked link lists, and from ChatGPT or Gemini by using live searches instead of static snapshots of the internet. Earlier in August, Truth Social—the social media platform owned by US President Donald Trump—announced it was beta-testing integrating Perplexity AI into its search engine as Truth Search AI. While Perplexity maintains that it only provides the underlying technology for Truth Search AI and does not control "editorial" decisions, Truth Search has so far favoured conservative sources such as Fox News, The Epoch Times and The Federalist. While often framed as politically neutral, phrases like 'democratising knowledge'—which Perplexity have said they plan to do—have also been co-opted in some right-wing tech and media circles to suggest breaking perceived gatekeeper control and giving 'the people' unfettered access to information outside of mainstream institutions. Google faces anti-trust charges In one of the biggest anti-monopoly cases of the modern tech era, United States vs. Google LLC, a US district judge ruled in August 2024 that Google had illegally maintained a monopoly on search engines in violation with the Sherman Act. Namely, Google had used illegal means or those in opposition to open, free market practices to maintain dominance by spending billions of dollars per year to make itself the default search engine on Apple's Safari browsers and Android devices, making it impossible for competitors such as Bing or DuckDuckGo to reach users at any significant scale. This locked Google into a dominance loop that others were unable to break into. Being the default browser brought Google more users, which gave it more data to make its search and ads better, which would then encourage people to keep using Google—making it even harder for anyone else to catch up. After the August 2024 ruling, the case moved into a remedies phase where the US Justice Department proposed structural fixes—including forcing Google to sell its Chrome browser, end default search deals and share search data with rivals. In November of last year, Judge Amit Mehta rejected Google's attempt to dismiss some of the some of those proposals, which kept a potential Chrome divestiture on the table and set the stage for final remedy hearings in 2025—which is where the Perplexity offer came in. Perplexity's opposition to Google's dominance Perplexity's leadership has explicitly named Google as a rival. In an interview for TIME magazine in April of last year, CEO Aravind Srinivas said that Google was its "main competitor" and that Google's ad-based profit model prevents the integration of AI responses into search. Because Google's search business depends on showing ads alongside search results or links, replacing those results with quick, AI-generated answers—which is what Perplexity does—could undercut Google's revenue. Jeff Bezos, the founder of Amazon, is an investor in Perplexity, and the company relies on Microsoft's Azure AI platform for its infrastructure. While Perplexity claims to have secured 'multiple unnamed funds' to support its all-cash bid for Chrome, there's so far no indication that either Bezos or Microsoft is directly financing the bid.