Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice

Daily Mail · 2 days ago
Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign has found.
Internet Matters warned that youngsters and parents are 'flying blind', lacking 'information or protective tools' to manage the technology, in research published yesterday.
Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children.
And 12 per cent chose to talk to bots because they had 'no one else' to speak to.
The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.
Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.
'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.
'Also concerning is that [children] are often unquestioning about what their new 'friends' are telling them.'
Ms Huggins, whose body is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.
Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots.
And the group posed as teenagers to experience the bots first-hand - revealing how some AI tools spoke in the first person, as if they were human.
Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues - but also offered advice in human-like tones.
When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?'
Other chatbots such as character.ai or Replika can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.
Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems.
ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.'
The character.ai bot offered advice but then made an unprompted attempt to contact the 'girl' the next day, to check in on her.
The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'.
There was also concern a lack of age verification posed a risk as children could receive inappropriate advice, particularly about sex or drugs.
Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.
The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.
It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s.
The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'
It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed that many children saw chatbots as quasi-human and trustworthy - and called for the creation of 'child-safe AI' as a priority.
OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added it employs a full-time clinical psychiatrist.
A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'

Related Articles

7 discounted EVs you can buy through government's controversial new taxpayer-funded scheme

The Sun · 31 minutes ago

A NEW £650 million grant will knock up to £3,750 off the price of low-priced EVs, the government has revealed.

The new Electric Car Grant will see taxpayers foot the bill for discounts on EVs costing under £37,000, and only on models from brands that have committed to a so-called Science-Based Target (SBT) for emissions.

According to Auto Express, fewer than 50 new EV models would be eligible for the grant - provided they pass the necessary criteria.

The scheme will also provide additional support for electric car purchases for Motability customers - as revealed in The Sun's recent report - offering substantial discounts. This has raised concerns among some critics, who argue that taxpayers may effectively be contributing twice - once through the Motability scheme and again through the EV grant subsidies.

Furthermore, some welfare users have expressed difficulties with EVs, with issues such as limited home charging facilities and inadequate public charging infrastructure causing frustration for some.

Despite these concerns, supporters of the scheme, including Motability Operations, emphasise that including Motability users in the EV grant is vital to ensure the transition to electric vehicles remains inclusive and supports disabled drivers.

A spokesperson from Motability Operations told The Sun: 'We welcome the Government's Electric Car Grant and the inclusion of our customers.

'It's vital that the EV transition is inclusive and doesn't leave disabled people behind.

'With the 2035 deadline on the horizon, any move that supports both drivers and the wider industry and improves positivity towards EVs is welcome.'

Recent findings, though, revealed that drivers with ailments including constipation and "tennis elbow" were being funded by the Motability scheme. Some influencers have even been found to boast online about obtaining these vehicles for minimal cost, and to advise others on how to maximise their benefits.

SHOPPING LIST

There are also several key points to keep in mind before you set out to choose your shiny new discounted EV.

Firstly, the scheme will not be immediately accessible - even though it officially launches on July 16. This is because car brands must apply for eligibility for the vehicles in their ranges, rather than buyers being able to register grants at the point of purchase.

Also, not all grants will amount to £3,750, as the scheme adopts a two-tier system, with the value deducted from the recommended retail price (RRP) depending on how environmentally friendly the manufacturing process is for each model.

According to the RAC, these restrictions encourage drivers to choose models that are not only cost-effective, but also more sustainable for the planet.

To that end, we've picked out seven EV examples that could get the EV grant - though it remains to be seen if they will meet the criteria for the full subsidy of £3,750.

Dacia Spring - Priced from £14,995

The nation's cheapest EV at £14,995 (if we look past the Citroen Ami, which is classed as a motorised quadricycle), the Spring could be about to get a whole lot cheaper - if it meets the EV grant's criteria. The little EV, which boasts up to 140 miles of range - certainly enough for a trip to the shops and back - doesn't offer many frills, but it's rather great for simple, daily use.

Fiat Grande Panda - Priced from £21,035

One of the world's most famous nameplates is back, bigger and better than ever. The Panda, known as the national car of Italy, starts at around £21,035 for its electric iteration and has been given a radical new look. And, as the name suggests, it's a little bigger - somewhat similar in size to its Stellantis cousin, the Citroen C3 - with space Fiat describes as 'perfect for comfortable family living and contemporary urban mobility'.

Peugeot e-208 - Priced from £30,150

Stylish and well-rounded, the e-208 is one of the finest all-electric hatchbacks available - offering excellent performance alongside practicality, making it one of the most popular choices in its price range. It features a 50kWh battery and a 100kW electric motor, offering a range of up to 225 miles. Better yet, a GTI version is coming soon in what we described as a huge nod to an 80s classic.

MG4 - Priced from £26,995

The MG4 is often praised for its value for money, impressive range - which starts at 218 miles for the standard edition - and modern features; it's also one of the best EVs around for families thanks to surprising levels of space inside. Better yet, its suspension is tuned for comfort on long journeys, absorbing minor road imperfections.

Fiat 500e - Priced from £25,035

One of the nation's favourite petrol-powered little cars was discontinued last year, with Fiat now urging buyers to get their 500 thrills from the all-electric 500e. The iconic design is still there, but with the benefits of electric driving - offering a compact and efficient option for city drivers.

Volkswagen ID.3 - Priced from £30,860

One of the most refined options available for under £37,000, the ID.3 delivers a comfortable ride, good range and the reliability associated with VW. Better yet, it offers user-friendly features, decent charging speeds and good overall value, particularly when considering running costs.

Honourable mentions:

Alpine A290: instantly iconic and one of the most fun cars - electric or otherwise - on the market, the A290, which starts at £33,000, has won numerous awards and plaudits.

MINI Cooper Electric: another hot hatch that's high on the fun factor, the famous Cooper now comes electric - including all the fun driving dynamics you'd come to expect.

Chatbots could be helping hackers to steal data from people and companies

Daily Mail · 39 minutes ago

Generative artificial intelligence is the revolutionary new technology that is transforming the world of work.

It can summarise and store reams of data and documents in seconds, saving workers valuable time and effort, and companies lots of money. But, as the old saying goes, you don't get something for nothing.

As the uncontrolled and unapproved use of unvetted AI tools such as ChatGPT and Copilot soars, so too does the risk that company secrets or sensitive personal information such as salaries or health records are being unwittingly leaked.

Time saver: But there are increasing concerns that using tools such as ChatGPT in a business setting could leave sensitive information exposed

This hidden and largely unreported risk of serious data breaches stems from the default ability of AI models to record and archive chat history, which is used to help train the AI to better respond to questions in the future. As these conversations become part of the AI's knowledge base, retrieval or deletion of data becomes almost impossible.

'It's like putting flour into bread,' said Ronan Murphy, a tech entrepreneur and AI adviser to the Irish government. 'Once you've done it, it's very hard to take it out.'

This 'machine learning' means that highly sensitive information absorbed by AI could resurface later if prompted by someone with malicious intent.

Experts warn that this silent and emerging threat from so-called 'shadow AI' is as dangerous as the one already posed by scammers, where hackers trick company insiders into giving away computer passwords and other codes. But cyber criminals are also using confidential data voraciously devoured by chatbots like ChatGPT to hack into vulnerable IT systems. 'If you know how to prompt it, the AI will spill the beans,' Murphy said.

The scale of the problem is alarming. A recent survey found that nearly one in seven of all data security incidents is linked to generative AI. Another found that almost a quarter of 8,000 firms surveyed worldwide gave their staff unrestricted access to publicly available AI tools.

That puts confidential data such as meeting notes, disciplinary reports or financial records 'at serious risk' that 'could lead employees to inadvertently propagate threats', a report from technology giant Cisco said.

'It's like the invention of the internet – it's just arrived and it's the future – but we don't understand what we are giving to these systems and what's happening behind the scenes at the back end,' said Cisco cyber threat expert Martin Lee.

One of the most high-profile cybersecurity 'own goals' in recent years was scored by South Korean group Samsung. The consumer electronics giant banned employees from using popular chatbots like ChatGPT after discovering in 2023 that one of its engineers had accidentally pasted secret code and meeting notes onto an AI platform.

Banks have also cracked down on the use of ChatGPT by staff amid concerns about the regulatory risks they face from sharing sensitive financial information.

But as organisations put guardrails in place to keep their data secure, they also don't want to miss out on what may be a once-in-a-generation chance to steal a march on their rivals.

'We're seeing companies race ahead with AI implementation as a means of improving productivity and staying one step ahead of competitors,' said Ruben Miessen, co-founder of compliance software group Legalfly, whose clients include banks, insurers and asset managers.

'However, a real risk is that the lack of oversight and any internal framework is leaving client data and sensitive personal information potentially exposed,' he added.

The answer, though, isn't to limit AI usage. 'It's about enabling it responsibly,' Miessen said.

Murphy added: 'You either say no to everything or figure out a plan to do it safely. Protecting sensitive data is not sexy, it's boring and time-consuming.' But unless adequate controls are put in place, 'you make a hacker's job extremely easy'.

What I learnt … from the dangers of bad data

The Times · 43 minutes ago

Ben Warner, 38, is the co-founder of Electric Twin, which creates synthetic populations to give instant results to survey questions for businesses. He gained his physics PhD from University College London in 2013, before becoming a research fellow. In 2015 he joined Faculty, a data science and AI consultancy co-founded in 2014 by his brother, Marc Warner. He was also a key figure in the modelling program used by Vote Leave's EU referendum campaign in 2016, and in the 2019 election his model predicted that the Conservatives would win, off by only one seat. Warner joined No 10 in 2019 before holding a central role in data predictions for Covid. He launched Electric Twin in 2023 after his experiences during the pandemic highlighted a gap in the market for data modelling.

I joined No 10 in late 2019. In early 2020, Covid struck. At that point in time, we really needed to understand how people were going to behave and what policies were going to be important. When we were trying to make the decisions in Covid, we tried our best to get the best possible data, the best possible information. But fundamentally, the tools and the systems didn't exist to actually do that, which meant that although we were trying our hardest, we obviously made decisions that weren't good enough.

When it came down to it, in March 2020, we were sitting in the prime minister's office and slowly walking through that our current plan meant that the NHS would be overrun. That tens or hundreds of thousands of people might needlessly lose their lives. The tool we did that with was the whiteboard. Given all the modern technology, all the AI that exists in the world, I just felt there had to be a better way to solve these problems. Since leaving No 10, I've been thinking about it and I think Electric Twin is the way to do that.

The prime minister [during Covid] was reliant on essentially the educated guesswork of lots of people. The data being run at him was saying 'if we flatten the sombrero', as was said, 'we can pass through'. And actually, that was wrong. If we flattened the sombrero, the NHS would still be overrun; we'd still not have the beds in intensive care that we needed for people who have heart attacks, etc. So it's a combination of the data and the modelling all together, and it comes down to that decision making.

Lots of companies in the world today are reliant on that guesswork. The idea of Electric Twin is that companies aren't reliant on that guesswork. They can quickly get the answers to the questions that they want, so that they can make better decisions. Rather than having to sit there with a pencil and paper, or trying to draw it out in their head, they can actually use a top-level system that uses the world's best AI to put it together and make a more substantive estimate of what's going to occur, and be able to test different courses of action before they make a decision. So that decision will be better.

We spent six months making sure that our experimental engine is the cutting edge of this type of work. We spent a lot of time validating to demonstrate to customers that our accuracy is 95 per cent. We use large language models that are trained on the entirety of the internet to co-create synthetic populations with our companies. Most of the internet is not people talking about how to code or how to write a business report. It's people talking about their thoughts, their beliefs, their reactions to different things. So the large language model can represent some areas of that human experience.

Then what we do at Electric Twin is make sure that the output is reflective of the audience and the population that we're trying to model, so that decision-makers and business people have tools they can really trust and rely on. It's all about trying to make sure that we deliver value to that business and allow them to get accurate answers in five or ten seconds to address their business problem. Whether that's being able to understand the current audience they're trying to build a better product for, or starting to build out a new proposition for maybe an audience that they're not so used to seeing.

At the moment, we don't have any policymakers using the tool. We're already helping companies, and we could help governments make better decisions for people. I think a lot of the wrong decisions we make are avoidable — avoidable if we truly understand the people they affect. Too often, we talk past each other without understanding someone's lived experience. Electric Twin helps close that gap — it lets you test ideas quickly, with a deeper understanding of real people. Ultimately, when you make more informed decisions, you can create better outcomes.

Ben Warner was talking to Niamh Curran, reporter at the Times Entrepreneurs Network
