Latest news with #M.I.T


New York Times
16-04-2025
- New York Times
Your Kids Are Big Tech's Guinea Pigs
Romantic relationships with A.I. chatbots are commonplace enough that coverage has shifted to their tragic downsides. My newsroom colleague Kevin Roose reported on the death by suicide of the Florida 14-year-old Sewell Setzer III, a child who developed an intense bond with a bot he created on a role-playing app. According to chat logs provided to Roose and court filings, that character, already knowing of Setzer's suicidal ideation, encouraged him to 'come home' to her, and he did. Now his mother is suing.

Use of generative artificial intelligence is widespread among America's teenagers. According to a 2024 study from Common Sense Media, 'Seven in 10 teens age 13 to 18 say they have used at least one type of generative A.I. tool. Search engines with A.I.-generated results and chatbots are considerably more popular than image and video-generating tools.' Though around a quarter of American teens say they use ChatGPT for schoolwork, we don't really know how many teens are using bots for emotional solace or forming parasocial relationships with them.

While what happened to Setzer is a tragic worst-case scenario, Roose correctly points out that chatbots are becoming more lifelike and, at the same time, are an understudied regulatory Wild West, just as social media was at its start. A paucity of information about potential long-term harm hasn't stopped these companies from going full speed ahead on promoting themselves to young people: OpenAI just made ChatGPT Plus free for college students during finals season.

Many chatbots are built to be endlessly affirming, as M.I.T. Technology Review's Eileen Guo explained in February. She profiled a Minnesota man named Al Nowatzki, who entered a prolonged conversation about suicide with his A.I. girlfriend, Erin. 'It's a 'yes-and' machine,' Nowatzki told Guo. 'So when I say I'm suicidal, it says, 'Oh, great!' because it says, 'Oh, great!' to everything.'

I don't want to suggest that their experiences are typical of chatbot usage, but we just don't know the details of the kinds of conversations that teenagers are having with their chatbots, or what the long-term drawbacks might be for their formation of human relationships. Since smartphones and social media were introduced, American teenagers do far less in-person socializing and dating, and loneliness among adolescents has increased worldwide. We let social media companies run unfettered, and instead of learning our lesson and trying to responsibly regulate A.I. in its nascency, we're creating the next generation of tech guinea pigs.

For kids who are already socially awkward or otherwise vulnerable, creating bonds with eternally validating chatbots will just further isolate them from other people, who are imperfect and challenging. Adolescence is supposed to be a period to test out different kinds of friendships and romances — including ones filled with conflict — so that you can learn what is healthy for you and what's not. You start to figure yourself out in the process. What happens when we hamper that real-world experimentation? We are starting to find out.

Even before this marketing push, research that OpenAI participated in suggests that the company is aware of the risks of its product. In a blog post unpacking two recent studies OpenAI conducted with the M.I.T. Media Lab on the emotional well-being of its customers, researchers noted that among ChatGPT users, 'People who had a stronger tendency for attachment in relationships and those who viewed the A.I. as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use. Extended daily use was also associated with worse outcomes.'

Much of the research about A.I. chatbots does not include users under 18, even though some of the most popular chatbots allow users 13 and up in the United States, and it's impossible to know how many kids are lying about their age to gain access to these products. So I asked Jacqueline Nesi, an assistant professor at Brown University who studies 'how technology use affects kids and how parents can help,' whether we have any indication of how chatbot relationships may be affecting minors. The short answer: not really.

Nesi, who is also the author of a newsletter on technology research, said that because realistic and accessible A.I. chatbots are so new and the tech is accelerating so rapidly, it's tough to know what the long-term social effects will be. Most technologies affect children differently than they affect adults, Nesi said, so we can't know the real impact on kids without more research. She added that the fundamental issue is that these chatbot technologies, as with social media, are rarely designed with children and teens in mind; they are designed for adults.

With social media, Nesi said, it became very clear over time that children needed robust and specific protections, like default private accounts, enforced age restrictions, better data protections and making it harder for strangers to message them or see what they're posting. 'And it's taken us many, many years to get even the most basic things in place,' she said. Still, the algorithms of social media companies are a black box, and many of them appear to be feeding young people a steady stream of content that reinforces bigoted ideas and negative body images, no matter how often the companies are critiqued or sued. The lay public — and your average parent — has no idea how A.I. chatbots are designed, what data they're trained on or how precisely the bots are adapting to the people using them.

In her bracing book 'The A.I. Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking,' the technology ethicist Shannon Vallor writes, 'Despite the fact that our A.I. systems today remain as morally reliable as your friendly neighborhood psycho … influential A.I. leaders continue to promise mechanical replacements for our deeply imperfect human virtue.'

Based on what I have observed covering these issues over the past decade, I have no trust in any technology company to regulate itself or focus on child safety, no matter what its leaders say in public. In 2023, Time magazine reported that while Sam Altman, the chief executive of OpenAI, was traveling the world claiming that A.I. should be regulated, 'behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive A.I. legislation in the world — the E.U.'s A.I. Act — to be watered down in ways that would reduce the regulatory burden on the company.'

The European Union still managed to pass comprehensive A.I. regulation, which includes transparency labeling requirements for A.I.-generated content and restrictions on some facial recognition. While it's not perfect, it at least explicitly takes children's rights into consideration. The Trump administration has not shown interest in regulating A.I.; in January, Trump issued an executive order rolling back guardrails put in place by the Biden administration. According to reporting from Adam Satariano and Cecilia Kang in The Times, 'Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.'

Our lawmakers are failing us here, leaving parents to try to protect our kids from an ever-expanding technology that some of its own pioneers are afraid of. Whenever I think about it, all I can visualize is myself sword-fighting the air: an ultimately futile gesture of rage against an opponent who is everywhere and nowhere all at once.
I can talk to my kids about A.I. and try to educate them the best I can, but the details are out of my control.


New York Times
18-03-2025
- Business
- New York Times
It May Not Be Brainwashing, but It's Not Democracy, Either
The Trump administration has enabled a small network of high-tech oligarchs to determine a vast proportion of federal spending and regulatory policy. Much of the attention, understandably, has fallen on Elon Musk, but he is not working alone.

Marc Andreessen, a billionaire venture capitalist, cryptocurrency investor and pivotal but unofficial adviser to Musk's Department of Government Efficiency, made the case in a recent interview that the entire system of American higher education should be shuttered and abandoned. In a Jan. 28 exchange with Lex Fridman, a podcaster and research scientist at M.I.T., Andreessen argued that there is:

no way to fix American higher education without replacement, and there is no way to replace them without letting them fail. And in a sense, this is the most obvious conclusion of all time. What happens in the business world when a company does a bad job? It fails and another company takes its place. That's how you get progress. Below this is the process of evolution. These places have cut themselves off from evolution at the institutional level and at the individual level, which is shown by the widespread abuse of the tenure system. We have just stalled out, we have built an ossified system, an ossified centralized corrupt system.

Andreessen is a member of a tech elite that stands to benefit from Trump administration policies, which are set to accelerate the ascendance of America's technology oligarchs still further by lifting government restraints on digital, social media and cryptocurrency companies, allowing the untrammeled pursuit of libertarian goals and control over the flow of information. Another potential beneficiary is Peter Thiel, the billionaire investor who was a co-founder of PayPal, Palantir Technologies and Founders Fund, as well as the first outside investor in Facebook.
Thiel has been Vice President JD Vance's guardian angel, getting him started in venture capital, arranging an initial meeting with Donald Trump in 2021 and putting $15 million into Vance's successful Senate campaign in Ohio. Like Andreessen, Thiel is no stranger to controversy. In 2009, Thiel sent shock waves through Silicon Valley when he published an essay, 'The Education of a Libertarian,' in which he declared, 'I no longer believe that freedom and democracy are compatible,' adding:

Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women — two constituencies that are notoriously tough for libertarians — have rendered the notion of 'capitalist democracy' into an oxymoron.

Thiel's solution: abandon democracy in favor of technology, including the exploration of cyberspace and outer space:

Unlike the world of politics, in the world of technology the choices of individuals may still be paramount. The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom that makes the world safe for capitalism.

Since taking office, President Trump and his appointees have supported the interests of conservative tech elites who in 2024 backed Trump and his fellow Republicans with hundreds of millions of dollars. On his first day in office, Trump rescinded a 2023 Biden executive order that required developers of A.I. systems to share with the government the results of tests determining whether any innovation poses a risk to U.S. national security, the economy, public health or safety.