State AGs fill the AI regulatory void


Reuters | May 19, 2025

May 19, 2025 - As generative artificial intelligence (AI) technologies rapidly proliferate and permeate society, state attorneys general (AGs) have warned of potential AI misuse and legal violations. And while only California, Colorado, and Utah have enacted laws governing AI, the dearth of AI-specific laws has not prevented states from advising and taking AI-related action under existing law. Indeed, state AGs have indicated that they will utilize privacy, consumer protection, and anti-discrimination laws to regulate AI.
AGs are focused on how AI systems utilize personal identifying information, potentially facilitate fraud using deepfakes, operate relative to company representations, and could perpetuate bias and discrimination in decision-making processes. In January 2024, a bipartisan group of AGs sent a letter to the Federal Communications Commission (FCC) warning of potential fraud where AI is used to imitate human voices in telemarketing campaigns. See Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, Federal Communications Commission, CG Docket No. 23-362 (January 17, 2024).
While California, Colorado, and Utah will ramp up enforcement under their AI laws, businesses must be aware of regulatory risks from other states under traditional laws and should adopt robust AI compliance measures. Since the FCC letter, AGs from California, Massachusetts, Oregon, New Jersey, and Texas have issued AI-specific guidance or taken AI-related enforcement actions that address common themes, despite not having AI-specific laws in their states.
California AG Rob Bonta has been among the most vocal state AGs on the use of AI. Earlier this year, AG Bonta issued two legal advisories warning that companies may be liable under California's Unfair Competition Law and Civil Rights Act if AI tools mislead consumers, deny access to services, or cause other discriminatory outcomes.
In particular, AG Bonta has highlighted the risks associated with using AI in the health care industry and hiring, where replacing trained professionals with opaque and unproven AI systems could result in actionable harm.
Massachusetts AG Joy Campbell was the first to issue formal guidance warning that AI systems could potentially run afoul of existing law. Specifically, companies risk violating the Massachusetts Consumer Protection Act by misrepresenting the reliability of an AI system or falsely advertising its quality, such as touting functionality that does not in fact exist.
AG Campbell also warns of fraud by misrepresenting "audio or video content of a person for the purpose of deceiving another to engage in a business transaction or supply personal information as if to a trusted business partner as in the case of deepfakes, voice cloning, or chatbots." Attorney General Advisory on the Application of the Commonwealth's Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence, Mass. Atty. Gen. (April 16, 2024).
There are also privacy risks under the "Standards for the Protection of Personal Information of Residents of the Commonwealth," requiring AI developers and deployers to take appropriate steps to safeguard personal information used within AI systems. Finally, the advisory warns that Massachusetts' Anti-Discrimination Law may be implicated if AI makes decisions based on "legally protected characteristics," such as race or gender.
Following suit, former Oregon AG Ellen Rosenblum issued AI guidance highlighting concerns with the unpredictability of AI outputs, noting that they can compromise privacy, affect fairness, obscure accountability, and impact trustworthiness. See What You Should Know About How Oregon's Laws May Affect Your Company's Use of Artificial Intelligence, Or. Atty. Gen. (December 24, 2024). The lack of transparency in AI decision-making processes can make it difficult for humans to identify, understand, and correct such decisions. With these risks in mind, the guidance notes that the state will regulate AI under several laws.
The Oregon Unlawful Trade Practices Act prohibits misrepresentations in consumer transactions, including misrepresenting the characteristics, uses, benefits, or qualities of AI products, or using AI to falsely claim nonexistent sponsorships, approvals, or connections (such as artificial celebrity endorsements).
The Oregon Consumer Privacy Act guarantees a consumer's right to control the distribution of their personal data, which is particularly relevant for generative AI systems trained with such data. AI developers should obtain express consumer consent before using data to train AI systems.
Consumers must also be able to opt out of AI profiling when AI is used to make significant decisions, such as those pertaining to housing, education, or lending. The Oregon Consumer Information Protection Act also requires AI developers to safeguard personal information through reasonable cybersecurity measures.
Finally, like Massachusetts, Oregon warns of possible bias and anti-discrimination violations. The Oregon Equality Act prohibits discrimination based on protected classes and bars discrimination resulting from AI use, particularly in housing and public accommodations.
New Jersey AG Matthew Platkin launched a "Civil Rights and Technology Initiative" aimed at addressing discrimination and bias-based harassment risks associated with AI. As part of that effort, he published guidance to explain how the New Jersey Law Against Discrimination (LAD) may apply to "algorithmic discrimination" inherent in some AI systems. Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination, N.J. Atty. Gen. (January 2025).
Through the LAD, New Jersey residents are protected from AI algorithmic discrimination by "covered entities," which include employers, housing providers, places of public accommodation, credit providers, and contractors, among others. Misuse of AI tools may violate the LAD in these areas. For example, in the employment context, automated decision-making tools have been used to determine whether an employee should be retained or terminated. While these tools streamline and simplify processes, their use may result in racial and gender discrimination based on the tools' design, training, or deployment.
Covered entities can violate the LAD even absent an intent to discriminate, such as by simply deploying an AI system developed and controlled by a third party. Specifically, deploying such tools may violate the LAD if their use results in either disparate treatment or disparate impact based on protected characteristics. The law also provides recourse for affected individuals when AI decisions preclude or impede reasonable accommodations.
Texas AG Ken Paxton has taken the lead with action under traditional law enforcement. In September 2024, AG Paxton reached a settlement with health care technology company Pieces Technology (Pieces) under the Texas Deceptive Trade Practices-Consumer Protection Act (DTPA). See State of Texas v. Pieces Technologies Inc., Cause No. DC-24-13476 (2025). This represents the first AG settlement under a state consumer protection act involving generative AI.
Pieces uses AI to assist medical facilities by summarizing, charting, and drafting clinical notes. To measure accuracy, Pieces developed several metrics and benchmarks. The company advertised the accuracy of its AI product on its website, claiming a "critical hallucination rate" of less than .001% and a "severe hallucination rate" of less than 1 per 100,000. AI hallucinations are instances where the output is false or misleading, and Pieces' metrics represent an extremely low incidence.
According to the AG's claims, these representations "may" have violated the DTPA because they were allegedly "false, misleading, or deceptive." Under the settlement terms, Pieces denied any violation of the DTPA.
The settlement requires that, should Pieces advertise the accuracy of AI products using metrics, it must disclose "the meaning or definition of such metric, benchmark, or similar measurement," and "the method, procedure, or any other process used by Pieces to calculate the metric, benchmark, or similar measurement used in … [its] marketing or advertising of its products and services."
Further, the company is prohibited from making false or misleading statements concerning AI products and must disclose any harmful or "potentially harmful" uses or misuses of its products. The settlement does not contain a monetary penalty; however, Pieces is required to demonstrate its compliance with the settlement upon the state's request for an indefinite period.
State AGs will scrutinize generative AI development and deployment within consumer protection, privacy, and anti-discrimination statutory frameworks. Companies using AI and conducting business with consumers in multiple states should ensure they are engaging in defensible privacy and cybersecurity practices in accordance with state laws. They need a firm grasp on the AI system's foundational model and capabilities and should perform a thorough risk assessment before employing AI products.
Businesses advertising AI products must be aware of state consumer protection acts and avoid false or misleading claims. The Federal Trade Commission has developed guidance for companies employing AI products and advertising their capabilities. See Keep Your AI Claims in Check, Federal Trade Commission (February 27, 2023).
Companies cannot assume they are insulated from liability simply because they are using AI without discriminatory intent. Several state AGs have signaled that enforcement actions could result from an AI system's disparate impact alone. Therefore, companies must carefully examine AI inputs and outputs and ensure the system is producing fair and unbiased results, particularly if legally protected characteristics such as age, gender, or race are implicated.
Companies considering implementing AI systems must prepare for potential exposure under a patchwork of state laws and should consult competent outside counsel accordingly. To mitigate ever-expanding risk, all levels of decision-makers, including executives, IT staff, and in-house counsel, should be aware of AI risks and capabilities and involved in AI implementation.
