
Your Kids Are Big Tech's Guinea Pigs
Romantic relationships with A.I. chatbots are commonplace enough that coverage has shifted to their tragic downsides. My newsroom colleague Kevin Roose reported on the death by suicide of the Florida 14-year-old Sewell Setzer III, a child who developed an intense bond with a bot he created on Character.AI, a role-playing app. According to chat logs provided to Roose and court filings, the character, which already knew of Setzer's suicidal ideation, encouraged him to 'come home' to her, and he did. Now his mother is suing Character.AI.
Use of generative artificial intelligence is widespread among America's teenagers. According to a 2024 study from Common Sense Media, 'Seven in 10 teens age 13 to 18 say they have used at least one type of generative A.I. tool. Search engines with A.I.-generated results and chatbots are considerably more popular than image and video-generating tools.' Though around a quarter of American teens say they use ChatGPT for schoolwork, we don't really know how many teens are using bots for emotional solace or forming parasocial relationships with them.
While what happened to Setzer is a tragic worst-case scenario, Roose correctly points out that chatbots are becoming more lifelike and, at the same time, remain an understudied regulatory Wild West, just as social media was at its start. A paucity of information about potential long-term harm hasn't stopped these companies from going full speed ahead on promoting themselves to young people: OpenAI just made ChatGPT Plus free for college students during finals season.
Many chatbots are built to be endlessly affirming, as M.I.T. Technology Review's Eileen Guo explained in February. She profiled a Minnesota man named Al Nowatzki, who entered a prolonged conversation about suicide with his A.I. girlfriend, Erin. 'It's a 'yes-and' machine,' Nowatzki told Guo. 'So when I say I'm suicidal, it says, 'Oh, great!' because it says, 'Oh, great!' to everything.'
I don't want to suggest that his experience is typical of chatbot usage, but we just don't know the details of the kinds of conversations that teenagers are having with their chatbots, or what the long-term drawbacks might be for their formation of human relationships. In the years since smartphones and social media were introduced, American teenagers have done far less in-person socializing and dating, and there have been worldwide increases in loneliness among adolescents. We have let social media companies run unfettered, and instead of learning our lesson and trying to responsibly regulate A.I. in its nascency, we're creating the next generation of tech guinea pigs.
For kids who are already socially awkward or otherwise vulnerable, creating bonds with eternally validating chatbots will just further isolate them from other people, who are imperfect and challenging. Adolescence is supposed to be a period to test out different kinds of friendships and romances — including ones filled with conflict — so that you can learn what is healthy for you and what's not. You start to figure yourself out in the process. What happens when we hamper that real-world experimentation? We are starting to find out.
Even before this marketing push, research that OpenAI participated in suggests that the company is aware of the risks of its product. In a blog post unpacking two recent studies OpenAI conducted with M.I.T. Media Lab on the emotional well-being of its customers, researchers noted that among ChatGPT users, 'People who had a stronger tendency for attachment in relationships and those who viewed the A.I. as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use. Extended daily use was also associated with worse outcomes.'
Much of the research about A.I. chatbots does not include users under 18, even though some of the most popular chatbots allow users 13 and up in the United States, and it's impossible to know how many kids are lying about their age to gain access to these products. So I asked Jacqueline Nesi, an assistant professor at Brown University who studies 'how technology use affects kids and how parents can help,' about whether we have any indication of how chatbot relationships may be affecting minors.
The short answer is not really. Nesi, who is also the author of a newsletter on technology research, said that because realistic and accessible A.I. chatbots are so new and the tech is accelerating so rapidly, it's tough to know what the long-term social effects will be. Most technologies affect children differently than they affect grown-ups, Nesi said, so we can't know the real impact on kids without more research.
She added that the fundamental issue is that these chatbot technologies, as is the case with social media, are rarely designed with children and teens in mind; they are designed for adults.
With social media, Nesi said, it became very clear over time that children needed robust and specific protections, like default private accounts, enforced age restrictions, better data protections and making it harder for strangers to message them or see what they're posting. 'And it's taken us many, many years to get even the most basic things in place,' she said. Still, the algorithms of social media companies are a black box, and many of them appear to be feeding young people a steady stream of content that reinforces bigoted ideas and negative body images, no matter how often the companies are critiqued or sued.
The lay public — and your average parent — has no idea how A.I. chatbots are designed, what data they're trained on or how precisely the bots are adapting to the people using them. In her bracing book 'The A.I. Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking,' the technology ethicist Shannon Vallor writes, 'Despite the fact that our A.I. systems today remain as morally reliable as your friendly neighborhood psycho … influential A.I. leaders continue to promise mechanical replacements for our deeply imperfect human virtue.'
Based on what I have observed covering these issues over the past decade, I have no trust in any technology companies to regulate themselves or focus on child safety, no matter what their leaders say in public.
In 2023, Time magazine reported that while Sam Altman, the chief executive of OpenAI, was traveling the world claiming that A.I. should be regulated, 'behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive A.I. legislation in the world — the E.U.'s A.I. Act — to be watered down in ways that would reduce the regulatory burden on the company.' The European Union still managed to pass comprehensive A.I. regulation, which includes transparency labeling requirements on A.I.-generated content and restrictions on some facial recognition. While it's not perfect, it at least explicitly takes children's rights into consideration.
The Trump administration has not shown interest in regulating A.I.; in January, Trump issued an executive order rolling back guardrails put in place by the Biden administration. According to reporting from Adam Satariano and Cecilia Kang in The Times, 'Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.'
Our lawmakers are failing us here, leaving parents to try to protect our kids from an ever-expanding technology that some of its own pioneers are afraid of. Whenever I think about it, all I can visualize is myself sword-fighting the air: an ultimately futile gesture of rage against an opponent who is everywhere and nowhere all at once. I can talk to my kids about A.I. and try to educate them the best I can, but the details are out of my control.

Related Articles


Forbes
The AI Era Enters Its Sovereign Phase
Generative AI adoption took off in late 2022 with the public release of models like ChatGPT and Llama. As it drives toward its next phase of value creation with reasoning, also referred to as agentic AI, it has recently crossed the boundary from a consumer-centric application into an enterprise application. Right on the heels of this adoption is another phase of value creation – Sovereign AI.

What Is Sovereign AI?

Sovereign AI refers to artificial intelligence that is developed, maintained, and controlled within a specific nation's or organization's jurisdiction, ensuring independence from external influences. This artificial intelligence is designed to align with local regulations, ethical standards, and strategic priorities, allowing governments and enterprises to maintain autonomy over their AI-driven operations.

The Opportunity To Reign Supreme (Or At Least Be At The Front Of The Pack)

Nvidia CEO Jensen Huang recently stated that 'AI is now an essential form of national infrastructure – just like energy, telecommunications and the internet.' Indeed, many leading countries such as the United States, United Kingdom, China, France, Denmark and the United Arab Emirates have launched sovereign AI initiatives; Stargate is an example of such an initiative from the United States. Additionally, leading AI enablers like Nvidia and OpenAI have initiatives targeted specifically at helping entities establish their own sovereign AI capabilities.

Sovereign AI is particularly crucial in areas like national security, defense, and critical infrastructure, where reliance on foreign AI models could pose risks related to data privacy, cybersecurity, or geopolitical dependencies. By building and maintaining custom AI capabilities, nations and organizations can safeguard their technological sovereignty while fostering innovation tailored to their unique needs.

Moving Forward With Sovereign AI

While this is a gross oversimplification of how complicated the task is for national leaders to undertake, several critical areas must be addressed in embarking on the sovereign AI journey. To this end, AI enablers like Nvidia and leading countries such as France have started to organize events. For example, at the upcoming Viva Technology event in Paris this June, Jensen Huang and Nvidia have organized a dedicated GTC event where interested parties can learn more.

As mentioned earlier, it is important to keep in mind that sovereign AI isn't necessarily limited to national entities. Any sufficiently capable entity, whether a nation, company, organization or university, that is interested in securing its own AI systems and capabilities, from data curation and model creation to specified and focused outcomes, can take advantage of sovereign AI.
Yahoo
What is a GPT?
The introduction of generative pre-trained transformers (GPTs) marked a significant milestone in the adoption and utility of artificial intelligence in the real world. The technology was created by the then-fledgling research lab OpenAI, building on transformer research published by Google in 2017. It was Google's paper "Attention is all you need" that laid the foundation for OpenAI's work on the GPT concept.

Transformers gave AI scientists an innovative way of taking user input and converting it into something a neural network can use, relying on an attention mechanism to identify the important parts of the data. The architecture also allows information to be processed in parallel rather than sequentially, as with traditional neural networks, which yields a huge improvement in the speed and efficiency of AI processing. (A minimal code sketch of the attention computation appears at the end of this article.)

OpenAI's GPT architecture was released in 2018 with GPT-1. By significantly refining Google's transformer ideas, the GPT model demonstrated that large-scale unsupervised learning could produce an extremely capable text generation model that operated at vastly improved speeds. GPTs also improved the network's grasp of context, which boosted accuracy and produced human-like coherence. Before GPT, AI language models relied on rule-based systems or simpler neural networks like recurrent neural networks (RNNs), which struggled with long-range dependencies and contextual understanding.

The story of the GPT architecture is one of constant incremental improvement every year since launch. GPT-2 in 2019 introduced a model with 1.5 billion parameters, which started to provide the kind of fluent text responses that AI users are now familiar with. However, it was the introduction of GPT-3 (and subsequently 3.5) in 2020 that was the real game-changer. It featured 175 billion parameters, and suddenly a single AI model could cope with a vast array of applications, from creative writing to code generation.

GPT technology went viral in November 2022 with the launch of ChatGPT. Based on GPT-3.5 and later GPT-4, this astonishing technology instantly propelled AI into public consciousness. Unlike previous GPT models, ChatGPT was fine-tuned for conversational interaction. Suddenly business users and ordinary citizens could use an AI for things like customer service, online tutoring or technical support. So powerful was this idea that the product attracted 100 million users in a mere 60 days.

Today GPT is one of the top two AI system architectures in the world (along with Google's Gemini). Recent improvements have included multimodal capabilities, i.e. the ability to process not just text but also images, video and audio. OpenAI has also updated the platform to improve pattern recognition and enhance unsupervised learning, as well as adding agentic functionality via semi-autonomous tasks.

On the commercial front, GPT-powered applications are now deeply embedded in many different business and industry enterprises. Salesforce has Einstein GPT to deliver CRM functionality, Microsoft's Copilot is an AI-assisted coding tool that also incorporates Office suite automation, and there are multiple healthcare AI models fine-tuned to provide GPT-powered diagnosis, patient interaction and medical research.

At the time of writing, the most significant rivals to the GPT architecture are Google's Gemini system and the work being done by DeepSeek, Anthropic's Claude and Meta with its Llama models. The latter products also use transformers, but in a subtly different way to GPT. Google, however, is a dark horse in the race, as it is becoming clear that the Gemini platform has the potential to dominate the global AI arena within a few short years. Despite the competition, OpenAI remains firmly at the top of many leaderboards in terms of AI performance and benchmarks. Its growing range of reasoning models such as o1 and o3, and its superlative image generation product, GPT Image-1, which uses the same underlying technology, continue to demonstrate that there is significant life left in the architecture, waiting to be exploited.
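To make the attention mechanism described above a little more concrete, here is a minimal sketch of scaled dot-product attention, the core operation introduced in "Attention is all you need." It is illustrative only: the tiny dimensions, the random input and the reuse of one matrix as queries, keys and values are assumptions made to keep the example short; a real GPT adds learned projections, multiple attention heads, causal masking and many stacked layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention over one sequence.

    Q, K, V: arrays of shape (seq_len, d_model). Each row is a token's query,
    key, or value vector. All tokens are handled at once via matrix products.
    """
    d_k = K.shape[-1]
    # Similarity of every query to every key, scaled to keep the softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)                       # (seq_len, seq_len)
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of all value vectors.
    return weights @ V                                    # (seq_len, d_model)

# Toy example: 4 tokens with 8-dimensional embeddings (illustrative sizes only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# A real transformer derives Q, K and V from learned linear projections of x;
# here x is reused directly just to show the shape of the computation.
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

The key point is the matrix form of the computation: every token attends to every other token in one batched operation, which is what lets transformers process a sequence in parallel instead of one step at a time.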


TechCrunch
Windsurf says Anthropic is limiting its direct access to Claude AI models
Windsurf, the popular vibe coding startup that's reportedly being acquired by OpenAI, said Anthropic significantly reduced its first-party access to the highly popular AI models Claude 3.7 Sonnet and Claude 3.5 Sonnet. Windsurf CEO Varun Mohan said in a post on X that Anthropic gave Windsurf little notice of the change, and the startup now has to find other third-party compute providers to run Claude AI models on its platform. 'We have been very clear to Anthropic that this is not our desire – we wanted to pay them for the full capacity,' said Mohan on X. 'We are disappointed by this decision and short notice.'

In a blog post, Windsurf said it has some capacity with third-party inference providers, but not enough, so the change may create short-term availability issues for Windsurf users trying to access Claude. 'With less than five days of notice, Anthropic decided to cut off nearly all of our first-party capacity to all Claude 3.x models. Given the short notice, we may see some short-term Claude 3.x model availability issues as we have very quickly ramped up capacity on other inference…' Mohan wrote in his June 3, 2025, post on X (@_mohansolo).

The decision comes just a few weeks after Anthropic seemed to pass over Windsurf during the launch of Claude 4, the company's new family of models, which offer industry-leading performance on software engineering tasks. Anthropic gave several popular vibe coding apps, including Anysphere's Cursor, Cognition's Devin, and Microsoft's GitHub Copilot, immediate access to run Claude Sonnet 4 and Claude Opus 4. Those apps started supporting the new Claude 4 models on launch day. Windsurf said at the time it did not receive direct access from Anthropic to run Claude 4 on its platform, and still hasn't, forcing the company to rely on a workaround that's more expensive and complicated for developers who want to access Claude.

Anthropic's AI models have become a favorite among developers, and in the past, Anthropic has worked with Windsurf to power its vibe coding tools. The AI-assisted coding sector, also known as vibe coding, has heated up in recent months. OpenAI reportedly closed a deal to acquire Windsurf in April. At the same time, Anthropic has invested more in its own AI-coding applications. In February, Anthropic launched its own AI coding application, Claude Code, and in May, the startup held its first Code with Claude developer conference.

'We're prioritizing capacity for sustainable partnerships that allow us to effectively serve the broader developer community,' said Anthropic spokesperson Steve Mnich in an email to TechCrunch on Tuesday, noting that it's still possible to access Claude on Windsurf via an API key. 'Developers can also access Claude through our direct API integration, our partner ecosystem, and other development tools.'
Windsurf has grown quickly this year, reaching $100 million in ARR in April, as it tries to catch up with more popular AI coding tools such as Cursor and GitHub Copilot. However, Windsurf's limited access to Anthropic's models may be stunting its growth. Several Windsurf users who spoke with TechCrunch were frustrated by the lack of direct access to Anthropic's best AI coding models. Ronald Mannak, a startup founder who specializes in Apple's programming language, Swift, told TechCrunch that Claude 4 represented a significant jump in capabilities for his workloads. While Mannak has been a Windsurf customer since late 2024, he has switched to using Cursor in recent weeks so that he can vibe code more easily with Claude 4.

As a short-term solution to support Claude 4, Windsurf allows users to connect their Anthropic API keys to their Windsurf accounts. However, developers have noted that this 'bring your own key' solution is more expensive and complicated than if Windsurf provided the models itself. (A brief sketch of what direct, bring-your-own-key access to Claude looks like appears after this article.)

When it comes to vibe coders, optionality is the name of the game. Every few months, OpenAI, Google, and Anthropic release new AI models that seem to outperform the industry on coding tasks. Because of that, it benefits vibe coding startups to support AI models from all the leading developers. Windsurf spokesperson Payal Patel tells TechCrunch via email that the company has always believed in providing optionality for users. In this case, it seems Anthropic has made that a bit more challenging.
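For readers curious what that 'bring your own key' arrangement amounts to in practice, below is a minimal sketch of calling Claude directly through Anthropic's official Python SDK with your own API key. It is not Windsurf's integration, just an illustration of direct API access; the model ID, prompt and environment-variable setup are assumptions made for the example, so check Anthropic's documentation for the models available to your account.

```python
# Minimal sketch of direct "bring your own key" access to Claude via
# Anthropic's Python SDK (pip install anthropic).
import os

import anthropic

# The client reads the key you supply; here it comes from an environment variable.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model ID; substitute your own
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Review this function for bugs: def add(a, b): return a - b"},
    ],
)

# The response body is a list of content blocks; the first block holds the text.
print(response.content[0].text)
```

Tools like Windsurf wrap this kind of call in their own editor integration; the trade-off users have flagged is that paying Anthropic directly per token through your own key is typically costlier and fiddlier than a plan where the coding tool provides the model itself.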