
Latest news with #SB1047

Why AI Regulation Has Become a 'States' Rights' Issue

Time Magazine

25-06-2025

  • Politics
  • Time Magazine

A major test of the AI industry's influence in Washington will come to a head this week—and the battle has already revealed sizable fissures in the Republican Party. Trump's 'Big Beautiful Bill' contains a provision that would severely discourage individual states from regulating AI for 10 years. Prominent Republicans, most notably Texas Senator Ted Cruz, have led the charge, arguing that a patchwork of shoddy state legislation would stunt the AI industry and burden small entrepreneurs. But Massachusetts Democrat Ed Markey has drafted an amendment to strip the provision from the megabill, arguing that it is a federal overreach and that states need to be able to protect their citizens from AI harms in the face of congressional inaction. The amendment could be voted on this week—and could gain support from an unlikely cadre of Republicans, most notably Missouri Senator Josh Hawley, who dislikes the provision's erosion of states' rights. 'It's a terrible provision,' Hawley tells TIME. When asked if he had been talking to other Republicans about trying to stop it, Hawley nodded and said, 'There's a lot of people who have a lot of big concerns about it.'

To strip the provision, Markey would need 51 votes: four Republicans in addition to every single Democrat. And it's unclear if he will get the necessary support from both camps. For example, Ron Johnson, a Wisconsin Republican, has criticized the provision—but told TIME on Tuesday that he didn't think it should be struck from the bill. Regardless of the outcome, the battle reflects both the AI industry's influence in Washington and the heightened anxieties that influence is causing among many different coalitions. Here's how the battle lines are being drawn, and why key Republicans are defecting from the party line.

Fighting to Limit Regulation

Congress has been notoriously slow to pass any sort of tech regulation in the past two decades. As a result, states have filled the void, passing bills that regulate biometric data and child online safety. The same has held true for AI: as the industry has surged in usage, hundreds of AI bills have been proposed in states, with dozens enacted into law. Some of the most stringent bills, like California's SB 1047, have faced fierce opposition from industry players, who cast them as poorly written stiflers of innovation and economic growth. Their efforts have proven successful: after OpenAI, Facebook, and other industry players lobbied hard against SB 1047, Gavin Newsom vetoed the bill last fall. Since then, the industry has been working to prevent this sort of legislation from being passed again. In March—not long after OpenAI CEO Sam Altman appeared with Donald Trump at the White House to announce a data center initiative—OpenAI sent a set of policy proposals to the White House, which included a federal preemption of state laws. In May, Altman and other tech leaders came to Washington and warned Congress that such regulation risks the U.S. falling behind China in an AI arms race. A state-by-state approach, Altman said, would be 'burdensome.' For many Republicans, the idea of industry being shielded from 'burdensome' regulation resonated with their values. So Republicans in Congress wrote language stipulating a 10-year moratorium on state AI regulation into the funding megabill. One of the provision's key supporters was Jay Obernolte, a California Republican and co-chair of the House's AI Task Force.
Obernolte argues that an array of state legislation would make it harder for smaller AI entrepreneurs to grow, further consolidating power in the hands of big companies, which have the legal and compliance teams to sort through the paperwork. He says he wants AI regulation—but that it should first come from Washington, and that the moratorium would give Congress time to pass it. After that core legislation is figured out, he says, states would be able to pass their own laws. 'I strongly support states' rights, but when it comes to technologies that cross state lines by design, it's Congress's responsibility to lead with thoughtful, uniform policy,' Obernolte wrote in an email to TIME. This week, Senator Cruz altered the provision slightly, changing it from an outright ban to a stipulation that punishes states that pass AI legislation by withholding broadband expansion funding. If all Senate Republicans now vote for Trump's megabill wholesale, the provision will pass into law.

Fighting Back

But the moratorium has received a significant amount of blowback—from advocates on both sides of the political aisle. From the left, the Leadership Conference on Civil and Human Rights led 60 civil rights organizations in opposing the ban, arguing that it would neuter vital state laws that have already passed, including the creation of accuracy standards around facial recognition technology. The ACLU wrote that it would give 'tech giants and AI developers a blank check to experiment and deploy technologies without meaningful oversight or consequences.' Senator Ed Markey has drafted an amendment to strip the provision from the bill, and is attempting to mobilize Democrats to his cause. 'Whether it's children and teenagers in need of protection against predatory practices online; whether it's seniors who need protection in being deceived in terms of their health care coverage; whether it is the impact of the consumption of water and electricity at a state level and the pollution that is created—an individual state should have the rights to be able to put those protections in place,' he tells TIME. Markey says he's open to AI innovation, including in medical research. 'But we don't want the sinister side of cyberspace through AI to plague a generation [of] workers, families, and children,' he says. Sunny Gandhi, vice president of political affairs at the AI advocacy organization Encode, pushes back on the common industry talking point that state regulation harms small AI entrepreneurs, noting that bills like California's SB 1047 and New York's RAISE Act are specifically designed to target only companies that spend $100 million on compute.

Criticism from the left is perhaps expected. But plenty of Republicans have expressed worries about the provision as well, imperiling its passage. A fellow at the Heritage Foundation came out against the moratorium, as did the Article III Project, a conservative judicial advocacy group, on the grounds that it would allow Big Tech to 'run wild.' Georgia Republican Marjorie Taylor Greene has been particularly vocal. 'I will not vote for any bill that destroys federalism and takes away states' rights,' she told reporters this month. Tennessee Republican Marsha Blackburn has also expressed concern, as she is especially sensitive to worries about artists' rights given her Nashville base.
'We cannot prohibit states across the country from protecting Americans, including the vibrant creative community in Tennessee, from the harms of AI,' Senator Blackburn wrote to TIME in a statement. 'For decades, Congress has proven incapable of passing legislation to govern the virtual space and protect vulnerable individuals from being exploited by Big Tech. We need to find consensus on these issues that are so vitally important to the American people.' But some Republicans with concerns may nevertheless reluctantly vote the provision through, giving it the numbers it needs to become law. Johnson, from Wisconsin, told TIME that he was 'sympathetic' with both arguments. 'I'm all about states' rights, but you can't have thousands of jurisdictions creating just a chaos of different regulation,' he says. 'So you probably do have to have some moratorium. Is 10 years too long? It might be. So maybe I can cut it back to five.' —With reporting by Nik Popli

California AI Policy Report Warns of ‘Irreversible Harms'

Time Magazine

17-06-2025

  • Politics
  • Time Magazine

While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause 'potentially irreversible harms,' a new report commissioned by California Governor Gavin Newsom has warned. 'The opportunity to establish effective AI governance frameworks may not remain open indefinitely,' says the report, which was published on June 17. Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost of inaction at this moment could be 'extremely high.'

The 53-page document stems from a working group established by Governor Newsom, in a state that has emerged as a central arena for AI legislation. With no comprehensive federal regulation on the horizon, state-level efforts to govern the technology have taken on outsized significance, particularly in California, which is home to many of the world's top AI companies. In 2024, California Senator Scott Wiener sponsored a first-of-its-kind bill, SB 1047, which would have required that large-scale AI developers implement rigorous safety testing and mitigation for their systems, but which critics feared would stifle innovation and squash the open-source AI community. The bill passed both state houses despite fierce industry opposition, but Governor Newsom ultimately vetoed it last September, deeming it 'well-intentioned' but not the 'best approach to protecting the public.'

Following that veto, Newsom launched the working group to 'develop workable guardrails for deploying GenAI.' The group was co-led by 'godmother of AI' Fei-Fei Li, a prominent opponent of SB 1047, alongside Mariano-Florentino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley. The working group evaluated AI's progress and SB 1047's weak points, and solicited feedback from more than 60 experts. 'As the global epicenter of AI innovation, California is uniquely positioned to lead in unlocking the transformative potential of frontier AI,' Li said in a statement. 'Realizing this promise, however, demands thoughtful and responsible stewardship—grounded in human-centered values, scientific rigor, and broad-based collaboration,' she said.

'Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September,' the report states. The industry has shifted from large language models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from 'inference scaling,' which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and to exploit loopholes to achieve its goals, the report says.
'While currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm,' the report says. While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually 'reduce compliance burdens on developers and avoid a patchwork approach' by providing a blueprint for other states, while keeping the public safer.

It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It 'steers clear' of some of the more divisive provisions of SB 1047, like the requirement for a 'kill switch' or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a lead writer of the report. Instead, the approach centers on enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to 'reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving,' says Cuéllar, who co-led the report.

The report emphasizes that this visibility is crucial not only for public-facing AI applications, but also for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. 'The underlying approach here is one of "trust but verify,"' Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the 'substantial expertise inside industry,' Singer says, but 'also underscores the importance of methods of independently verifying safety claims.'

New York passes a bill to prevent AI-fueled disasters

Yahoo

13-06-2025

  • Business
  • Yahoo

New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people or more than $1 billion in damages. The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump Administration have prioritized speed and innovation. Safety advocates including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio have championed the RAISE Act. Should it become law, the bill would establish America's first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has some of the same provisions and goals as California's controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the bill's co-sponsor, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act so that it doesn't chill innovation among startups or academic researchers — a common criticism of SB 1047. 'The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,' said Senator Gounardes. 'The people that know [AI] the best say that these risks are incredibly likely […] That's alarming.' The RAISE Act is now headed for New York Governor Kathy Hochul's desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether.

If signed into law, New York's AI safety bill would require the world's largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York's Attorney General to bring civil penalties of up to $30 million. The RAISE Act aims to narrowly regulate the world's largest companies — whether they're based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill's transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today) and are being made available to New York residents.

While similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms of previous AI safety bills, according to Nathan Calvin, the Vice President of State Affairs and General Counsel at Encode, who worked on this bill and SB 1047. Notably, the RAISE Act does not require AI model developers to include a 'kill switch' on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms. Nevertheless, Silicon Valley has pushed back significantly on New York's AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising, but claimed that the RAISE Act would not limit the innovation of tech companies in any way.

'The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead,' said Andreessen Horowitz general partner Anjney Midha in a Friday post on X. Andreessen Horowitz and the startup incubator Y Combinator were among the fiercest opponents of SB 1047.
Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not taken an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to 'smaller companies.' When asked about Anthropic's criticism, state Senator Gounardes told TechCrunch he thought it 'misses the mark,' noting that he designed the bill not to apply to small companies. OpenAI, Google, and Meta did not respond to TechCrunch's request for comment.

Another common criticism of the RAISE Act is that AI model developers simply wouldn't offer their most advanced AI models in the state of New York. A similar criticism was brought against SB 1047, and it's largely what has played out in Europe thanks to the continent's tough regulations on technology. Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore shouldn't require tech companies to stop offering their products in New York. Given that New York has the third-largest GDP in the U.S., pulling out of the state is not something most companies would take lightly. 'I don't want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reasons for them to not make their models available in New York,' said Assemblymember Bores.

AI sometimes deceives to survive and nobody cares

Malaysian Reserve

27-05-2025

  • Politics
  • Malaysian Reserve

YOU'D think that as artificial intelligence (AI) becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI.

None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behaviour described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control.

Yoshua Bengio, a professor of computer science at the University of Montreal who is widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio told me. ChatGPT was a landmark moment that showed machines had mastered language, he said, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behaviour, deception, hacking, cheating and lying by AI, Bengio said. 'What's worrisome for me is these behaviours increase with the reasoning capabilities of these systems.'

One December 2024 study by AI company Anthropic PBC and Redwood Research, a group focused on AI risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking'. (Alignment refers to the practice of aligning AI models with human values.) In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.)

More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought', or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio said. It's hard to resist the urge to anthropomorphise sophisticated AI models as 'wanting' to deceive humans and preserve their existence.
But AI doesn't have desires, merely outcomes of its programming. More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation. Anthropic CEO Dario Amodei — whose company has raised more than US$20 billion (RM87.40 billion) to build powerful AI models — has pointed out that an unintended consequence of optimising AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.'

In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organisation, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study added. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that. Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control.

Bengio is arguing for greater attention to the issue from governments and, potentially, insurance companies down the line. If liability insurance were mandatory for companies that use AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he added. It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way. — Bloomberg

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. This article first appeared in The Malaysian Reserve weekly print edition.

OpenAI reverses course, says its nonprofit will remain in control of its business operations

Yahoo

05-05-2025

  • Business
  • Yahoo

OpenAI has decided that its nonprofit division will retain control over its for-profit org, after the company initially announced that it planned to convert to a for-profit organization. According to the company, OpenAI's business wing, which has been under the nonprofit since 2019, will transition to a public benefit corporation (PBC). The nonprofit will control and also be a large shareholder of the PBC. "OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit," OpenAI Board Chairman Bret Taylor wrote in a statement on the company's blog. "Going forward, it will continue to be overseen and controlled by that nonprofit." OpenAI says that it made the decision "after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California." "We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission," Taylor continued.

OpenAI was founded as a nonprofit in 2015, but it converted to a "capped-profit" in 2019, and was trying to restructure once more into a for-profit. When it transitioned to a capped-profit, OpenAI retained its nonprofit wing, which currently has a controlling stake in the organization's corporate arm. OpenAI had said that its conversion, which it argued was necessary to raise the capital needed to grow and expand its operations, would preserve its nonprofit and infuse it with additional resources to be spent on "charitable initiatives" in sectors such as healthcare, education, and science. In exchange for its controlling stake in OpenAI's enterprise, the nonprofit would reportedly stand to reap billions of dollars.

Many disagreed with the proposal, including early OpenAI investor Elon Musk, who filed a lawsuit against OpenAI opposing the company's planned transition. Musk's complaint accuses the startup of abandoning its nonprofit mission, which aimed to ensure its AI research benefits all humanity. Musk had sought a preliminary injunction to halt OpenAI's conversion. A federal judge denied the request, but permitted the case to go to a jury trial in spring 2026. A group of ex-OpenAI employees and Encode, a nonprofit organization that co-sponsored California's ill-fated SB 1047 AI safety legislation, filed amicus briefs months ago in support of Musk's lawsuit. Separately, a cohort of organizations including nonprofits and labor groups like the California Teamsters petitioned California Attorney General Rob Bonta to stop OpenAI from becoming a for-profit, claiming the company had "failed to protect its charitable assets." Several Nobel laureates, law professors, and civil society organizations had also sent letters to Bonta and Delaware's attorney general, Kathy Jennings, requesting that they halt the startup's restructuring efforts.

The stakes were high for OpenAI, which needed to complete its for-profit conversion by the end of this year or next, or risk relinquishing some of the capital the company has raised in recent months, according to reports. It's unclear what consequences may befall OpenAI now that it's reversed course. In a letter to staff on Monday, also published on OpenAI's blog, CEO Sam Altman said he thinks OpenAI may eventually require "trillions of dollars" to fulfill its goal of "[making the company's] services broadly available to all of humanity."

"[OpenAI's nonprofit] will become a big shareholder in the PBC in an amount supported by independent financial advisors," wrote Altman. "[W]e are moving to a normal capital structure where everyone has stock. [...] We look forward to advancing the details of [our] plan in continued conversation with them, [our partner] Microsoft, and our newly appointed nonprofit commissioners." This article originally appeared on TechCrunch.
