Judge rejects AI chatbots' free speech defense following teen's death
The Brief
A judge ruled a wrongful death lawsuit against Character.AI can move forward.
The suit claims a chatbot encouraged a 14-year-old to take his life.
Experts say the case could test AI free speech rights.
TALLAHASSEE, Fla. - A federal judge has rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now.
The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself.
The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The backstory
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
What they're saying
In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she's "not prepared" to hold that the chatbots' output constitutes speech "at this stage."
Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.
"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
"It's a warning to parents that social media and generative AI devices are not always harmless," she said.
The other side
"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.
Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protections and that ruling otherwise could have a "chilling effect" on the AI industry.
If you or a loved one is feeling distressed, call or text the 988 Suicide & Crisis Lifeline for free and confidential emotional support 24 hours a day, 7 days a week.
The Source
The Associated Press contributed to this report. The information in this story comes from a recent federal court ruling, legal filings related to the wrongful death lawsuit, and statements from parties involved, including the plaintiff's legal team, Character.AI, and Google. This story was reported from Los Angeles.
Related Articles

Epoch Times
19 minutes ago
🎧 FBI Names Suspect in Flamethrower Terrorist Attack in Colorado
Here are the stories shaping the day: The FBI has named the suspect in Sunday's terrorist attack in Boulder, Colorado, as 45-year-old Mohamed Sabry Soliman. The suspect allegedly used an incendiary device to target pro-Israel protesters. Ukraine launched a massive drone strike on Sunday, according to the Russian Ministry of Defense. Minnesota Gov. Tim Walz, the former vice presidential candidate, told Democratic Party voters in South Carolina on May 31 that the party needs to revive its identity. Doctors and residents across China continue to report more infections and deaths as the latest wave of COVID-19 continues, suggesting the situation is worse than the Chinese regime is letting on. 🍵 Health: AI friends are not your friends. ☀️ Get clarity and inspiration with The Epoch Times Morning Brief, our flagship newsletter written by U.S. national editor Ivan Pentchoukov.


Forbes
25 minutes ago
Why Companies Using AI Should Increase Prices
AI can be a game changer for businesses, but it can also be an expensive drag on the balance sheet. A new report from billing software provider Chargebee found that the companies that grew most in the last year were the ones that changed their pricing strategies to account for AI. The most successful combined a variety of pricing models: recurring subscriptions, usage-based models, outcome-based models and flat fees. Four in five of the surveyed companies that added AI said they are also changing their pricing.

But how to adjust prices, especially in a time of economic uncertainty, is a challenge. Just over half said customer retention is their top concern, but 40% of businesses that adjusted prices last year reported a disconnect between increases and customer value. Nearly a quarter of companies struggled to explain the benefits of adding AI functions to their services, and technical issues caused further problems. Most SaaS providers have traditionally charged enterprises based on individual licenses, which is far different from a usage fee. Many companies, the study found, are testing a variety of pricing structures to see what works best.

A vignette in the study from Amit Gupta, founder of AI fiction writing tool Sudowrite, laid out an issue with the company's original flat subscription-based pricing model: some users paying $20 a month for the service were consuming enough AI to cost the company $400 a month. Sudowrite started charging by the word, and letting users develop custom tools, but that too could prove unsustainable when users heavily modified those tools. The company now has a credit- and usage-based model, but it is still experimenting.

While decisions on pricing and value calculations often come from the CFO or CEO, input from the CIO—who best understands the cost, usage and value of AI-driven functions—is vital to driving revenue growth through a company's AI transition.
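The Sudowrite anecdote comes down to simple unit economics. A toy sketch illustrates it; only the $20 subscription price and the $400 monthly cost come from the article, while the per-word cost, per-word price, and word counts are hypothetical numbers chosen for illustration:

```python
# Toy sketch of the pricing mismatch described in the Sudowrite anecdote.
# Only the $20 flat fee and the $400 heavy-user cost come from the article;
# the per-word cost/price and word counts below are hypothetical.

FLAT_FEE = 20.00        # flat monthly subscription price, in dollars
COST_PER_WORD = 0.0004  # hypothetical provider cost per generated word

def flat_fee_margin(words: int) -> float:
    """Provider's monthly margin when every user pays the same flat fee."""
    return FLAT_FEE - words * COST_PER_WORD

def usage_margin(words: int, price_per_word: float = 0.0005) -> float:
    """Provider's monthly margin when charging per word (hypothetical rate)."""
    return words * (price_per_word - COST_PER_WORD)

light_user = 10_000      # words generated per month
heavy_user = 1_000_000   # a heavy user like the one in the anecdote

# Flat fee: profitable on light users, a large loss on heavy ones
# (roughly $20 in, $400 out, as in the article).
print(flat_fee_margin(light_user))   # about 16.0
print(flat_fee_margin(heavy_user))   # about -380.0
# Usage-based pricing keeps margin proportional to consumption instead.
print(usage_margin(heavy_user))      # about 100.0
```

The point of the sketch: under a flat fee, margin is a fixed revenue minus an unbounded cost, so heavy users are guaranteed losses; usage-based pricing makes margin scale with consumption, which is why the report's fastest-growing companies moved toward usage and hybrid models.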
CIOs also know which functions are needed for their enterprise, and which price increases may be worth budgeting for. Use of AI and new technology depends on infrastructure as well as service, and updating the technological capacity of a facility can be a huge undertaking. The Kraft Group, which owns businesses including the New England Patriots and their home field Gillette Stadium, recently signed a five-year agreement with infrastructure provider NWN to upgrade the stadium and other playing facilities. I talked to Kraft Group CIO Michael Israel and NWN CEO Jim Sullivan about how they are preparing these mega-facilities for a more connected future. An excerpt from our conversation is later in this newsletter.

All of the things AI can do with computing need power, and a lot of it. This week, Meta made a deal to get the power it needs, signing a 20-year agreement to buy all of the power produced at a Constellation Energy nuclear plant in Clinton, Illinois. The agreement starts in June 2027, after an existing state agreement runs out, and will expand the plant's output. Meta has prioritized finding sources of nuclear power, both by building new plants and by utilizing existing ones, to support its technology going forward. The company announced an RFP for nuclear energy developers in December, and says it has shortlisted potential new nuclear power resources.

This is the second deal a tech company has made with Constellation to redevelop its nuclear plants for AI. In September, Microsoft announced a 20-year deal with the power provider for one of the reactors at its Three Mile Island facility in Pennsylvania (not the one impacted by the 1979 meltdown). Constellation has said it expects to restart the reactor by 2028. Amazon and Google have both been investing in small nuclear reactors, and Google announced an investment in three advanced nuclear energy projects by Elementl Power.
These deals, coupled with four executive orders from President Donald Trump aimed at bolstering nuclear power, seem to be heralding a new nuclear power age in the U.S., writes Forbes senior contributor David Blackmon. In the meantime, tax incentives for renewable energy sources, including solar panels and wind turbines, are in line to be cut in Trump's latest budget. Still, Forbes senior contributor Ken Silverstein writes, renewable energy is touted by many as the fastest and least expensive way to get more power onto the grid—and could also play a huge role in generating the electricity needed for the AI-driven future.

The Trump Administration is reorganizing the AI Safety Institute into a new group: the Center for AI Standards and Innovation. Forbes' Thomas Brewster writes that there are few obvious changes. The AI Safety Institute was created in 2023 as part of President Joe Biden's AI executive order, and operated within the National Institute of Standards & Technology to research risks in AI systems. The revamped Center for AI Standards and Innovation is still part of NIST but is under the purview of Commerce Secretary Howard Lutnick, and positions itself as a bulwark against 'censorship and regulations [that] have been used under the guise of national security. Innovators will no longer be limited by these standards.' But the statement about the revamped organization says the group will also work on research to measure and improve the security of AI systems, evaluating where risks exist. At this juncture, it's difficult to see what the big difference in purpose is—except, of course, that this iteration has a different name and was established by a different president.

Outside of the government sphere, computer scientist Yoshua Bengio, often referred to as the 'godfather of AI,' launched a new nonprofit organization aimed at creating AI systems that prioritize safety, writes Forbes senior contributor Leslie Katz.
LawZero is starting with $30 million in funding and is assembling a team of world-class AI researchers to work on a system called Scientist AI—a non-agentic system that behaves in response to human input and goals. 'Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,' the statement on the organization's website says. 'LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.'

Not all USB-C ports are created equal, and Microsoft intends to do something about it. Ports don't always have the same capabilities, meaning users sometimes plug peripherals into their computers and they don't work. Forbes senior contributor Barry Collins writes that the company will establish minimum capabilities for all USB-C ports on its computers, so that devices work every time they are plugged in. While this will be a welcome change, it is a hardware one—meaning it will make no difference until users upgrade their physical computers.

In April, the Kraft Group—which owns the New England Patriots, MLS' New England Revolution and Gillette Stadium—signed a five-year agreement with tech infrastructure provider NWN to transform the tech framework for the Kraft Group's facilities, including Gillette Stadium and a new training facility for the Patriots. I talked to Kraft Group CIO Michael Israel and NWN CEO Jim Sullivan in April about the challenges of bringing the latest technology infrastructure to a place like Gillette Stadium, and how big facilities with a multitude of uses can plan for the future. This conversation has been edited for length, clarity and continuity.
How do you come up with what you want to accomplish, and how do you figure out what kind of infrastructure is needed to make it happen?

Israel: We have our Monster Jam [this weekend] at the stadium. On Saturday, I'll be walking around the stadium engaging. Gillette is one of the few stadiums in the country that we own and self-operate. It's our security staff, our concession staff. I am walking around watching how our fans engage with us. How are our systems being used? Where are they inefficient? Where are they doing their job? How can we improve that experience? Guests will come up to me and say, 'How do I get to the gate?' 'How do I get to my suite?' 'Where's the nearest bathroom?' When they're asking me these things, that's registering in my mind: they don't have that information today. When you have these types of events, they're new users. You want them to have a positive experience, because that's your lead-in to a potential soccer season ticket holder or future Patriots season ticket holder. But even when you get to the Patriots season ticket holders, what can we do to enhance that experience? It's seeing how our guests experience things, what we're doing right, what we're doing wrong, and not sitting back and saying, 'I'm good. I'm going to go watch the game.' I've been here six years; I think I've seen 15 minutes of one football game. On a football day, I'll generally do about 30,000 steps walking the stadium, watching what's going on.

The other side of the coin is the technology perspective: what do I need to do to ensure that I have connectivity, and what devices are now connecting to the networks that hadn't in the past? The system that waters the field is an IoT system attached to our network. If it's not connecting, it's not watering the field, and we don't know what's going on. We have to allow for connectivity, secure that connectivity, and make sure that connectivity is reliable.
We constantly do surveys after a winter to ask: what got impacted by the winter? Do I have Wi-Fi access points that may be misaligned and need to be looked at? I have FIFA coming in next year. That's like having seven Super Bowls over six weeks here at the stadium. They're going to use my parking lots on the east and west side as fan activation zones. I don't have connectivity there. Working with the NWN team, we have to determine what FIFA needs. How do we light up those areas so that when the fans are there, they have connectivity? Their booths can operate, their systems can operate, the fans can get connectivity. We'll have maybe 50,000 or 60,000 fans in the bowl, but we could have an additional 50,000 people on the campus, and that's not a crowd that we're used to having.

On the NWN side, how do you determine what to do in large facilities like these and how to support current and future needs?

Sullivan: NWN, over the past five years, has grown from $250 million to over $1 billion this year, and really expanded this full end-to-end IT infrastructure. The market is changing really fast with AI adding in. For us, it's working with organizations to start with the end state: what's the vision of what we have to have here, what are we trying to drive? And then, what are the required capabilities? In most of these environments, everything from the application, to the AI, to the infrastructure, to the unified communications, to security has to be assessed as one holistic solution. Then we put in the right required capabilities from the technology, the services, the overall management, and co-management with Mike's team. We've reached the breadth and scale where we're dealing with organizations of hundreds of thousands of people, or states with 50,000 deployments of people and requirements.
We can cover end-to-end, but also have the scale to handle a large project that goes across multiple technology domains and supports a smaller event that all of a sudden surges to hundreds of thousands of people. Everyone is getting used to [the fact] that customer experience needs to be world-class; there are expectations there. Ultimately, the Wi-Fi is going to be fast, strong and secure. And then we're going into these new technologies toward a real beneficial evolution, where it's creating new user experiences and new knowledge, but it's also driving a backend that's going to create a lot more capacity demand on the networks and the infrastructure. You've got to be able to tie it all together.

What advice do you have for CIOs looking to bring more technology to their facilities, thinking about not only what to do today, but what to do in the future?

Israel: Ultimately, it's not just 'if you build it, they will come.' If you're building it, you need to be brainstorming how you're going to use it, and you need to have relationships with all of your stakeholders to understand what's holding them back and what they would like to see. In some cases, they don't know what they don't know, and we have to take the technology out of these discussions and think about how we're going to provide solutions.

Sullivan: We did 5,000 distinct deployments last year. The really successful ones are driven by a positive outcome you're trying to get to. Collaboration and a partnership between the two as a seamless team really drives the most success toward those outcomes.

With the rise of agentic AI, non-human agents will soon have the ability to do significant work tasks, like the more technical jobs of database administrators. That seems like a huge leap, but here's how to start warming to the idea and embracing this kind of AI agent (even if you're a DBA). You may be an introvert, but that doesn't mean you can't be a successful leader.
Here are 10 things you can do to become a thriving manager.

Which Big Ten university just introduced an AI Fluency initiative, embedding AI education into the core of all undergraduate curricula, regardless of major? A. Northwestern University B. Purdue University C. Ohio State University D. Michigan State University


The Verge
27 minutes ago
X changes policy to restrict AI training on posts
X has updated its developer agreement to add a new restriction on using posts on the platform to train AI. The updated policy, spotted earlier by TechCrunch, says developers can't use content from X or its API to 'fine-tune or train a foundation or frontier model.' The policy change could set up X to make AI training deals with third-party companies, similar to the deal Reddit struck with Google. Reddit, which has a similar policy to block AI crawlers, sued Anthropic on Wednesday over claims the company's AI crawlers accessed the site more than 100,000 times since July 2024. Elon Musk's AI company, xAI, acquired X for $33 billion on paper in March. Though X's developer agreement now bars companies from training AI on its content, its privacy policy still states that third-party 'collaborators' can train AI models on the site's data unless users opt out. X also feeds user data into its AI model, Grok, for training.