
Why I'm Suing OpenAI, the Creator of ChatGPT
I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company has itself called for from its 'large language model.'
We are at a pivotal moment. Leaders in AI development—including OpenAI's own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: 'I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there'll be great companies created with serious machine learning.' Yes, he was probably joking—but it's not a joke.
Eight years later, in May 2023, more than 1,000 technology leaders, including Altman himself, signed an open letter comparing AI risks to other existential threats like climate change and pandemics. 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,' the letter, released by the Center for AI Safety, a California nonprofit, says in its entirety.
I'm at the end of my rope. For the past two years, I've tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before the actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. And in the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.
Despite its previous statements, OpenAI has abandoned its key safety commitments, including walking back its 'superalignment' initiative that promised to dedicate 20 percent of computational resources to safety research, and, late last year, reversing its prohibition on military applications. Its critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, 'Over the past years, safety culture and processes have taken a backseat to shiny products.' The company's governance structure was fundamentally altered during a November 2023 leadership crisis, as the reconstituted board removed important safety-focused oversight mechanisms. Most recently, in April, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing 'high risk' and 'critical risk' AI models, 'possibly helping to swing elections or create highly effective propaganda campaigns,' according to Fortune magazine.
In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a 'political question' that should be addressed by Congress and the president. I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.
Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii's professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging.
Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.
My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:
Product liability claims: OpenAI's AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company's deliberate removal of safety measures it previously deemed essential.
Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.
Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.
Public nuisance: OpenAI's deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.
Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals (which Hawaii is part of) establish that technology companies can be held liable for design defects that create foreseeable risks of harm.
I'm not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed: reinstating its previous commitment to allocate 20 percent of resources to alignment and safety research; implementing the safety framework outlined in its own publication 'Planning for AGI and Beyond,' which attempts to create guardrails for dealing with AI as intelligent as, or more intelligent than, its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against misuse for manipulation of democratic processes; and developing protocols to protect Hawaii's unique cultural and natural resources.
These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.
While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.
Many experts believe the development of increasingly capable AI systems will be one of the most significant technological transformations in human history, perhaps in a league with fire, according to Google CEO Sundar Pichai. 'AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,' Pichai said in 2018.
He's right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures.
What is happening now with OpenAI's breakneck AI development and deployment to the public is, to echo technologist Tristan Harris's succinct April 2025 summary, 'insane.' My lawsuit aims to restore just a little bit of sanity.