Consultant behind AI-generated robocalls mimicking Biden goes on trial in New Hampshire
CONCORD, N.H. (AP) — A political consultant who sent voters artificial intelligence-generated robocalls mimicking former President Joe Biden last year goes on trial Thursday in New Hampshire, where jurors may be asked to consider not just his guilt or innocence but whether the state actually held its first-in-the-nation presidential primary.
Steven Kramer, who faces decades in prison if convicted of voter suppression and impersonating a candidate, has admitted orchestrating a message sent to thousands of voters two days before the Jan. 23, 2024, primary. The message played an AI-generated voice similar to the Democratic president's that used his phrase 'What a bunch of malarkey' and suggested that voting in the primary would preclude voters from casting ballots in November.
'It's important that you save your vote for the November election,' voters were told. 'Your votes make a difference in November, not this Tuesday.'
Kramer, who owns a firm specializing in get-out-the-vote projects, has said he wasn't trying to influence the outcome of the primary election but rather wanted to send a wake-up call about the potential dangers of AI. To create the recording, he paid a New Orleans magician and self-described 'digital nomad' $150.
'Maybe I'm a villain today, but I think in the end we get a better country and better democracy because of what I've done, deliberately,' Kramer told The Associated Press in February 2024.
Ahead of the trial in Belknap County Superior Court, state prosecutors sought to prevent Kramer from arguing that the primary was a meaningless straw poll because it wasn't sanctioned by the Democratic National Committee. At Biden's request, the DNC dislodged New Hampshire from its traditional early spot in the nominating calendar, but later dropped its threat not to seat the state's national convention delegates. Biden did not put his name on the ballot or campaign there, but won as a write-in.
The state argued that such evidence was irrelevant and would risk confusing jurors, but Judge Elizabeth Leonard denied the motion in March, saying the DNC's actions and Kramer's understanding of them were relevant to his motive and intent in sending the calls. She did grant the prosecution's request that the court accept as fact that the state held its presidential primary election as defined by law on Jan. 23, 2024. Jurors will be informed of that conclusion but won't be required to accept it.
Kramer faces 11 felony charges, each punishable by up to seven years in prison, alleging he attempted to prevent or deter someone from voting based on 'fraudulent, deceptive, misleading or spurious grounds or information.' He also faces 11 misdemeanor charges, each carrying a maximum sentence of a year in jail, accusing him of falsely representing himself as a candidate by his own conduct or that of another person.
Kramer has separately been fined $6 million by the Federal Communications Commission, but it's unclear whether he has paid it, and the FCC did not respond to a request for comment earlier this week.
The agency was developing AI-related rules when Donald Trump won the presidency, but has since shown signs of a possible shift toward loosening regulations. In April, it recommended that a telecom company be added back to an industry consortium just weeks after the agency had proposed fining the company for its role in illegal robocalls impersonating the FCC.
Half of all U.S. states have enacted legislation regulating AI deepfakes in political campaigns, according to the watchdog organization Public Citizen.
But House Republicans in Congress recently added a clause to their party's signature 'big beautiful' tax bill that would ban states and localities from regulating artificial intelligence for a decade, though it faces long odds in the Senate.