What to do when an AI lies about you
As AI chatbots spread throughout American life — from personal assistants to romantic partners — one of the biggest puzzles is what to do when they misbehave. They're famous for making things up, or 'hallucinating,' to use the industry term. And when those hallucinations hurt people, it's not clear how they can fight back.
One tool, of course, is existing defamation law — and a new federal lawsuit in Minnesota could start to answer the question of whether a company can be held liable for what an AI does.
The suit was filed by a 200-employee solar panel company called Wolf River Electric, which alleges Google's AI Overview hurt its business by offering made-up, defamatory facts about the company.
Assuming that Wolf River Electric can back up its complaint, this may be a telling test for whether existing law can rein in AI that harms people's reputations — or whether it falls short, and lawmakers need to step in.
'This might be one of the first cases where we actually get to see how the courts are going to really dig down and apply the basic principles of defamation law to AI,' said Ari Cohn, lead counsel for tech policy at the Foundation for Individual Rights and Expression (FIRE).
So, what did the AI get wrong? According to Wolf River Electric's complaint, employees were Googling the company when they discovered problems with the search engine's AI Overview, which summarizes the results for a particular query.
A screenshot in the exhibits appears to show an AI Overview inaccurately stating that the company was being sued by Minnesota Attorney General Keith Ellison for deceptive sales practices, among other legal violations.
The screenshotted overview cites multiple sources, but if you go back and read through them, none of the webpages actually says Wolf River Electric is being sued. Some other solar companies are being sued, and Wolf River Electric is named in one article — but the connection appears to have been made up entirely by the AI.
The screenshotted AI Overview warns at the end of its summary: 'Generative AI is experimental. For legal advice, consult a professional.'
Wolf River Electric did, indeed, consult a professional — and sued Google.
Google generally denied the defamation allegations in a filing last week, though it has yet to fully elaborate on its defense.
In a statement to DFD, Google said, 'The vast majority of our AI Overviews are accurate and helpful but like with any new technology, mistakes can happen. As soon as we found out about the problem, we acted quickly to fix it.'
In its own statement, Wolf River said: 'We've earned our reputation one customer and one project at a time. Now, that decade of hard work is at risk.'
Wolf River's lawsuit adds to a small but growing roster of cases trying to pin some responsibility on companies when AI makes up defamatory information.
Nationally syndicated radio host Mark Walters filed a lawsuit in 2023 against OpenAI, accusing ChatGPT of falsely claiming that he was being sued for embezzlement. A Georgia court dismissed Walters' suit in May, though the ruling isn't binding precedent for the Minnesota case.

Robby Starbuck, an anti-DEI activist with more than 800,000 X followers, sued Meta in April, alleging that its chatbot inaccurately identified him as a participant in the January 6, 2021, insurrection. That case is ongoing.
Wolf River's case is different for a couple of important reasons, say legal experts. One is that it purports to show actual harm from the defamation — lost business from particular clients.
'Wolf River Electric claims they've got the receipts,' UCLA Law professor Eugene Volokh told DFD. 'They've got what a lot of people who've been libeled have a hard time proving.'
More importantly, the company doesn't have as much name recognition as Walters or Starbuck, which gives it a different status under First Amendment law. Well-known people have to meet a higher 'actual malice' standard to prove they've been defamed — whereas if the judge agrees that Wolf River Electric is a private figure, which the company asserted in a statement to DFD, then it would only have to show that Google was negligent.
Assuming the company's argument holds up, the case will steer into uncharted and important territory: What counts as 'reasonable' or 'negligent' in AI design?
Yale Law professor Robert Post, who specializes in the First Amendment, said product liability laws are a helpful analogy for navigating these murky issues. 'If you're marketing a toaster, you're responsible that it doesn't burst into flames,' he told DFD. 'What precautions did you take?'
Because tools like ChatGPT have been in wide use for only a few years, it's hard to tell what the industry standard is for preventing defamatory hallucinations — if AI companies have widely adopted one at all. Without such standards, courts and juries may be left to decide whether an AI developer acted reasonably, making judgments about a whole new kind of product without many helpful signposts to guide them.
AI models are arguably the most complex software in the world, so regardless of how these cases turn out, some regulation may be merited to help juries make sense of what counts as negligent design.
'It's not the kind of issue that you'd want different juries deciding on throughout the country,' said Post. 'You'd want national standards laid out by people who are technically well informed about what makes sense and what doesn't.'
OpenAI builds out its Stargate vision
Ambitions for Stargate are getting pretty lofty — and political.
OpenAI released a policy paper titled 'Ideas to Power Democratic AI' on Tuesday, which puts its 'Stargate' data center initiative at the core of its aims to catalyze U.S. reindustrialization, ward off repressive governments abroad and build education-based communities across the country.
Reading through this aspirational to-do list, you'd be forgiven for forgetting that Stargate was initially conceived as a data center construction project. President Donald Trump announced the initiative in January, as OpenAI entered into a $100 billion deal with SoftBank and Oracle to build the data centers that power AI systems like ChatGPT. The White House pledged to expedite permitting processes to aid the venture.
OpenAI's policy paper brings Stargate to the forefront of the day's buzziest policy debates. The company promotes plans to develop infrastructure based on Stargate abroad, to ensure that global AI ecosystems are built on 'democratic rails' to counteract China's 'autocratic version of AI.' It pushes for data centers to be fixtures of their communities, with 'Stargate Academies' to train high-skilled workforces, and endowments for local governments. The company further trumpets Stargate as an important component in industrial policies like modernizing the energy grid.
Stargate, it seems, is no longer just a construction deal — it's quickly becoming a political platform.
Phone subsidies lead to mass privacy protections
Roughly 1.4 million Californians rely on phones subsidized by the state's Lifeline program, and a leading proposal to expand its services now includes measures to shield all user data from the federal government.
POLITICO's California Decoded team reported on Tuesday that Democratic Assemblymember Avelino Valencia is broadening privacy protections in AB 1303, a bill that would make undocumented immigrants without Social Security numbers eligible for state Lifeline subsidies. The bill's initial privacy measures restricted certain Lifeline data from being shared with the federal government, out of concern that undocumented users would unintentionally disclose their immigration statuses to the Trump administration. The amendment expands those protections to all customers, prohibiting telecom companies from sharing such data with federal officials without a warrant or subpoena.
These privacy-oriented modifications to the bill come as Democrats and immigration advocates raise concerns about the administration wielding government databases to further its deportation efforts. An Associated Press investigation last weekend found that the Centers for Medicare and Medicaid Services shared data on Medicaid enrollees, including immigrants, with the Department of Homeland Security. The Internal Revenue Service and the Department of Housing and Urban Development have also entered data-sharing agreements with Homeland Security for immigration enforcement.