
What to do when an AI lies about you
One tool, of course, is existing defamation law — and a new federal lawsuit in Minnesota could start to answer the question of whether a company can be held liable for what an AI does.
The suit was filed by a 200-employee solar panel company called Wolf River Electric, which alleges Google's AI Overview hurt its business by offering made-up, defamatory claims about the company.
Assuming that Wolf River Electric can back up its complaint, this may be a telling test for whether existing law can rein in AI that harms people's reputations — or whether it falls short, and lawmakers need to step in.
'This might be one of the first cases where we actually get to see how the courts are going to really dig down and apply the basic principles of defamation law to AI,' said Ari Cohn, lead counsel for tech policy at the Foundation for Individual Rights and Expression (FIRE).
So, what did the AI get wrong? According to Wolf River Electric's complaint, employees were Googling the company when they discovered problems with the search engine's AI Overview, which summarizes the results for a particular query.
A screenshot in the exhibits appears to show an AI Overview inaccurately stating that the company was being sued by Minnesota Attorney General Keith Ellison for deceptive sales practices, among other legal violations.
The screenshotted overview cites multiple sources, but if you go back and read through them, none of the webpages actually say Wolf River Electric is being sued. Some other solar companies are being sued, and Wolf River Electric is named in an article — but the connection appears to have been made up entirely by the AI.
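That kind of cross-check can be sketched in a few lines of code. What follows is a minimal, hypothetical illustration in Python, not a description of how Google's system works: it fetches each page a summary cites and tests whether the key terms of the claim even appear together there. The URLs and claim terms are placeholders, and real verification would demand semantic comparison rather than keyword matching.

import urllib.request

# Hypothetical stand-ins: the claim the AI Overview allegedly made,
# reduced to key terms, and the pages it cited.
CLAIM_TERMS = ["Wolf River Electric", "lawsuit"]
CITED_URLS = [
    "https://example.com/ag-sues-solar-companies",  # placeholder URL
    "https://example.com/solar-industry-news",      # placeholder URL
]

def page_mentions_claim(url: str) -> bool:
    """Crude test: do all key terms of the claim co-occur on the page?"""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8", errors="replace").lower()
    return all(term.lower() in text for term in CLAIM_TERMS)

for url in CITED_URLS:
    verdict = "mentions the claim" if page_mentions_claim(url) else "never connects the terms"
    print(f"{url}: {verdict}")

# Keyword co-occurrence is a low bar (a production system would need
# semantic entailment checks), yet according to the complaint, none of
# the cited pages even connects the company to a lawsuit.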
The screenshotted AI Overview warns at the end of its summary: 'Generative AI is experimental. For legal advice, consult a professional.'
Wolf River Electric did, indeed, consult a professional — and sued Google.
Google generally denied the defamation allegations in a filing last week, though it has yet to fully elaborate on its defense.
In a statement to DFD, Google said, 'The vast majority of our AI Overviews are accurate and helpful but like with any new technology, mistakes can happen. As soon as we found out about the problem, we acted quickly to fix it.'
In its own statement, Wolf River said: 'We've earned our reputation one customer and one project at a time. Now, that decade of hard work is at risk.'
Wolf River's lawsuit adds to a small but growing roster of cases trying to pin some responsibility on companies when AI makes up defamatory information.
Nationally syndicated radio host Mark Walters filed a lawsuit in 2023 against OpenAI, accusing ChatGPT of falsely claiming that he was being sued for embezzlement. A Georgia court dismissed Walters' suit in May, though the ruling isn't a binding precedent for the Minnesota case. Robby Starbuck, an anti-DEI activist with more than 800,000 X followers, sued Meta in April, alleging that its chatbot inaccurately identified him as a participant in the January 6, 2021, insurrection. That case is ongoing.
Wolf River's case is different for a couple of important reasons, say legal experts. One is that it purports to show actual harm from the defamation — lost business from particular clients.
'Wolf River Electric claims they've got the receipts,' UCLA Law professor Eugene Volokh told DFD. 'They've got what a lot of people who've been libeled have a hard time proving.'
More importantly, the company doesn't have as much name recognition as Walters or Starbuck, which gives it a different status under First Amendment law. Well-known people have to meet a higher 'actual malice' standard to prove they've been defamed — whereas if the judge agrees that Wolf River Electric is a private figure, which the company asserted in a statement to DFD, then it would only have to show that Google was negligent.
Assuming the company's argument holds up, the case will steer into uncharted and important territory: What counts as 'reasonable' or 'negligent' in AI design?
Yale Law professor Robert Post, who specializes in the First Amendment, said product liability laws are a helpful analogy for navigating these murky issues. 'If you're marketing a toaster, you're responsible that it doesn't burst into flames,' he told DFD. 'What precautions did you take?'
Because tools like ChatGPT have become popular only in the past few years, it's hard to tell what the industry standard is for preventing defamatory hallucinations — if AI companies have widely adopted one at all. Without such standards, courts and juries may be left to decide whether an AI developer acted reasonably, making judgments about a whole new kind of product without many helpful signposts to guide them.
Regardless of how these cases turn out, AI systems are arguably the most complex software in the world, so some regulation may be merited to help juries make sense of what counts as negligent design.
'It's not the kind of issue that you'd want different juries deciding on throughout the country,' said Post. 'You'd want national standards laid out by people who are technically well informed about what makes sense and what doesn't.'
OpenAI builds out its Stargate vision
Ambitions for Stargate are getting pretty lofty — and political.
OpenAI released a policy paper titled 'Ideas to Power Democratic AI' on Tuesday, which puts its 'Stargate' data center initiative at the core of its aims to catalyze U.S. reindustrialization, ward off repressive governments abroad and build education-based communities across the country.
Reading through this aspirational to-do list, you'd be forgiven for forgetting that Stargate was initially conceived as a data center construction project. President Donald Trump announced the initiative in January, as OpenAI entered into a $100 billion deal with SoftBank and Oracle to build the data centers that power AI systems like ChatGPT. The White House pledged to expedite permitting processes to aid the venture.
OpenAI's policy paper brings Stargate to the forefront of the day's buzziest policy debates. The company promotes plans to develop infrastructure based on Stargate abroad, to ensure that global AI ecosystems are built on 'democratic rails' to counteract China's 'autocratic version of AI.' It pushes for data centers to be fixtures of their communities, with 'Stargate Academies' to train high-skilled workforces, and endowments for local governments. The company further trumpets Stargate as an important component in industrial policies like modernizing the energy grid.
Stargate, it seems, is no longer just a construction deal — it's quickly becoming a political platform.
Phone subsidies lead to mass privacy protections
Roughly 1.4 million Californians rely on phones subsidized by the state's Lifeline program, and a leading proposal to expand its services now includes measures to shield all user data from the federal government.
POLITICO's California Decoded team reported on Tuesday that Democratic Assemblymember Avelino Valencia is broadening privacy protections in AB 1303, a bill that would make undocumented immigrants without Social Security numbers eligible for state Lifeline subsidies. The bill's initial privacy measures barred certain Lifeline data from being shared with the federal government, out of concern that undocumented users would unintentionally disclose their immigration status to the Trump administration. The amendment expands those protections to all customers, prohibiting telecom companies from sharing such data with federal officials without a warrant or subpoena.
These privacy-oriented modifications to the bill come as Democrats and immigration advocates raise concerns about the administration wielding government databases to further its deportation efforts. An Associated Press investigation last weekend found that the Centers for Medicare and Medicaid Services shared data on Medicaid enrollees, including immigrants, with the Department of Homeland Security. The Internal Revenue Service and the Department of Housing and Urban Development have also entered data-sharing agreements with Homeland Security for immigration enforcement.
Stay in touch with the whole team: Aaron Mak (amak@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); and Daniella Cheslow (dcheslow@politico.com).