State AGs fill the AI regulatory void

Reuters | May 19, 2025

May 19, 2025 - As generative artificial intelligence (AI) technologies rapidly proliferate and permeate society, state attorneys general (AGs) have warned of potential AI misuse and legal violations. And while only California, Colorado, and Utah have enacted laws governing AI, the dearth of AI-specific laws has not prevented states from advising and taking AI-related action under existing law. Indeed, state AGs have indicated that they will utilize privacy, consumer protection, and anti-discrimination laws to regulate AI.
AGs are focused on how AI systems use personally identifiable information, potentially facilitate fraud through deepfakes, perform relative to company representations, and could perpetuate bias and discrimination in decision-making processes. In January 2024, a bipartisan group of AGs sent a letter to the Federal Communications Commission (FCC) warning of potential fraud where AI is used to imitate human voices in telemarketing campaigns. See Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, Federal Communications Commission, CG Docket No. 23-362 (January 17, 2024).
While California, Colorado, and Utah will ramp up enforcement under their AI laws, businesses must be aware of regulatory risks from other states under traditional laws and ensure robust AI compliance measures. Since the FCC letter, AGs from California, Massachusetts, Oregon, New Jersey, and Texas have issued AI-specific guidance or taken AI-related enforcement actions that address common themes, despite not having AI-specific laws in their states.
California AG Rob Bonta has been among the most vocal state AGs on the use of AI. Earlier this year, AG Bonta issued two legal advisories warning that companies may be liable under California's Unfair Competition Law and Civil Rights Act if AI tools mislead consumers, deny access to services, or cause other discriminatory outcomes.
In particular, AG Bonta has highlighted the risks associated with using AI in the health care industry and hiring, where replacing trained professionals with opaque and unproven AI systems could result in actionable harm.
Massachusetts AG Andrea Joy Campbell was the first to issue formal guidance warning that AI systems could potentially run afoul of existing law. Specifically, companies risk violating the Massachusetts Consumer Protection Act by misrepresenting the reliability of an AI system or falsely advertising its quality, such as touting functionality that does not in fact exist.
AG Campbell also warns of fraud by misrepresenting "audio or video content of a person for the purpose of deceiving another to engage in a business transaction or supply personal information as if to a trusted business partner as in the case of deepfakes, voice cloning, or chatbots." Attorney General Advisory on the Application of the Commonwealth's Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence, Mass. Atty. Gen. (April 16, 2024).
There are also privacy risks under the "Standards for the Protection of Personal Information of Residents of the Commonwealth," requiring AI developers and deployers to take appropriate steps to safeguard personal information used within AI systems. Finally, the advisory warns that Massachusetts' Anti-Discrimination Law may be implicated if AI makes decisions based on "legally protected characteristics," such as race or gender.
Following suit, former Oregon AG Ellen Rosenblum issued AI guidance highlighting concerns with the unpredictability of AI outputs, noting that they can compromise privacy, affect fairness, obscure accountability, and impact trustworthiness. See What You Should Know About How Oregon's Laws May Affect Your Company's Use of Artificial Intelligence, Or. Atty. Gen. (December 24, 2024). The lack of transparency in AI decision-making processes can make it difficult for humans to identify, understand, and correct such decisions. With these risks in mind, the guidance notes that the state will regulate AI under several laws.
The Oregon Unlawful Trade Practices Act prohibits misrepresentations in consumer transactions, including misrepresenting the characteristics, uses, benefits, or qualities of AI products, or using AI to falsely claim nonexistent sponsorships, approvals, or connections (such as artificial celebrity endorsements).
The Oregon Consumer Privacy Act guarantees a consumer's right to control the distribution of their personal data, which is particularly relevant for generative AI systems trained with such data. AI developers should obtain express consumer consent before using data to train AI systems.
Consumers must also be able to opt out of AI profiling when AI is used to make significant decisions, such as those pertaining to housing, education, or lending. The Oregon Consumer Information Protection Act also requires AI developers to safeguard personal information through reasonable cybersecurity measures.
Finally, like Massachusetts, Oregon warns of possible bias and anti-discrimination violations. The Oregon Equality Act prohibits discrimination based on protected classes and bars discrimination resulting from AI use, particularly in housing and public accommodations.
New Jersey AG Matthew Platkin launched a "Civil Rights and Technology Initiative" aimed at addressing discrimination and bias-based harassment risks associated with AI. As part of that effort, he published guidance explaining how the New Jersey Law Against Discrimination (LAD) may apply to "algorithmic discrimination" inherent in some AI systems. Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination, N.J. Atty. Gen. (January 2025).
Through the LAD, New Jersey residents are protected from AI algorithmic discrimination by "covered entities," which include employers, housing providers, places of public accommodation, credit providers, and contractors, among others. Misuse of AI tools may violate the LAD in these areas. For example, in the employment context, automated decision-making tools have been used to determine whether an employee should be retained or terminated. While these tools streamline and simplify processes, their use may result in racial and gender discrimination based on the tools' design, training, or deployment.
Covered entities can violate the LAD even absent an intent to discriminate, such as by simply deploying an AI system developed and controlled by a third party. Specifically, deploying such tools may violate the LAD if their use results in either disparate treatment or disparate impact based on protected characteristics. The law also provides recourse for affected individuals when AI decisions preclude or impede reasonable accommodations.
Texas AG Ken Paxton has taken the lead on enforcement under traditional laws. In September 2024, AG Paxton reached a settlement with health care technology company Pieces Technology (Pieces) under the Texas Deceptive Trade Practices-Consumer Protection Act (DTPA). See State of Texas v. Pieces Technologies Inc., Cause No. DC-24-13476 (2025). It was the first AG settlement under a state consumer protection act involving generative AI.
Pieces uses AI to assist medical facilities by summarizing, charting, and drafting clinical notes. To measure accuracy, Pieces developed several metrics and benchmarks. The company advertised the accuracy of its AI product on its website, claiming a "critical hallucination rate" of less than .001% and a "severe hallucination rate" of less than 1 per 100,000. AI hallucinations are instances where a system's output is false or misleading, and Pieces' metrics represented an extremely low incidence.
According to the AG's claims, these representations "may" have violated the DTPA because they were allegedly "false, misleading, or deceptive." Under the settlement terms, Pieces denied any violation of the DTPA.
The settlement requires that, should Pieces advertise the accuracy of AI products using metrics, it must disclose "the meaning or definition of such metric, benchmark, or similar measurement," and "the method, procedure, or any other process used by Pieces to calculate the metric, benchmark, or similar measurement used in … [its] marketing or advertising of its products and services."
Further, the company is prohibited from making false or misleading statements concerning AI products and must disclose any harmful or "potentially harmful" uses or misuses of its products. The settlement does not contain a monetary penalty; however, Pieces is required to demonstrate its compliance with the settlement upon the state's request for an indefinite period.
State AGs will scrutinize generative AI development and deployment within consumer protection, privacy, and anti-discrimination statutory frameworks. Companies using AI and conducting business with consumers in multiple states should ensure they are engaging in defensible privacy and cybersecurity practices in accordance with state laws. They need a firm grasp on the AI system's foundational model and capabilities and should perform a thorough risk assessment before employing AI products.
Businesses advertising AI products must be aware of state consumer protection acts and avoid false or misleading claims. The Federal Trade Commission has developed guidance for companies employing AI products and advertising their capabilities. See Keep Your AI Claims in Check, Federal Trade Commission (February 27, 2023).
Companies cannot assume they are insulated from liability simply because they are using AI without discriminatory intent. Several state AGs have signaled that enforcement actions could result from an AI system's disparate impact alone. Therefore, companies must carefully examine AI inputs and outputs and ensure the system is producing fair and unbiased results, particularly if legally protected characteristics such as age, gender, or race are implicated.
Companies considering implementing AI systems must prepare for potential exposure under a patchwork of state laws and should consult competent outside counsel accordingly. To mitigate ever-expanding risk, decision-makers at all levels, including executives, IT staff, and in-house counsel, should be aware of AI risks and capabilities and be involved in AI implementation.
