
Latest news with #JayObernolte

Rep. Obernolte (R-CA) discusses L.A. protests and Artificial Intelligence

Yahoo

2 hours ago

  • Politics
  • Yahoo

Rep. Obernolte (R-CA) discusses L.A. protests and Artificial Intelligence

Of all the complexities involving the One Big Beautiful Bill, congressional Republicans are now dealing with a new-age issue: how to handle artificial intelligence. The House version includes language that would essentially put all regulatory control in the hands of the federal government for 10 years. NewsNation's Blake Burman spoke with Rep. Jay Obernolte (R-CA), Chairman of the Subcommittee on Research and Technology, who is pushing for that provision to remain.

Here Are The Republicans Changing Their Minds About Trump's Policy Bill

Forbes

5 days ago

  • Business
  • Forbes

Here Are The Republicans Changing Their Minds About Trump's Policy Bill

Some Republicans said they were unaware of certain provisions in the massive Trump policy bill that passed the House last month and threatened to vote against it when it returns to the lower chamber after revisions in the Senate, throwing its future into doubt.

Rep. Marjorie Taylor Greene, R-Ga., said Tuesday she wouldn't have voted for the bill if she had known it included a provision to block states from regulating artificial intelligence for the next 10 years, writing on X that she is 'adamantly OPPOSED' to the measure and calling it 'a violation of state rights.' Greene said she won't vote for the legislation again if the Senate doesn't remove the moratorium, though proponents of the provision, such as Rep. Jay Obernolte, R-Calif., have said it's designed to motivate Congress to adopt national artificial intelligence regulation, arguing a patchwork of state rules complicates development, according to The Intercept.

Rep. Mike Flood, R-Neb., also said during a town hall meeting in his district last week that he wouldn't have voted for the bill if he had known it included a provision that would make it more burdensome for federal judges to hold people in contempt. 'I'm not going to hide the truth, this provision was unknown to me when I voted for that bill,' Flood said. The provision would require judges to set bonds for parties in federal civil suits seeking a preliminary injunction or temporary restraining order, to cover any costs that might be incurred if the injunction or restraining order were overturned. Advocates of the provision claim it's designed to prevent frivolous lawsuits, while opponents allege it's designed to protect Trump and his allies if they violate court orders, according to The New York Times.

Elon Musk has come out strongly against the bill in recent days. He ripped the legislation in a series of more than a dozen X posts this week, calling it a 'massive, outrageous, pork-filled . . . disgusting abomination.' He also appeared to threaten Republicans who voted for it, suggesting voters would remove them from office in next year's midterms.

Congress has set a goal of having what's formally known as the 'One Big Beautiful Bill Act' on President Donald Trump's desk by July 4, though the bill is expected to undergo significant changes in the upper chamber and be sent back to the House before Trump signs it. Multiple Republican senators, including Sen. Rand Paul, R-Ky., have said it adds too much to the federal deficit, while others, such as Sen. Josh Hawley, R-Mo., have expressed concerns about cuts to Medicaid. Assuming all Democrats vote against the bill, Republicans can afford to lose no more than three votes to pass it.

The House approved the legislation in a 215-214 vote on May 22, with only two Republicans, Reps. Thomas Massie, R-Ky., and Warren Davidson, R-Ohio, voting against it, while three others voted 'present.' Trump personally lobbied Republican holdouts to approve the bill, and House Speaker Mike Johnson, R-La., spearheaded several last-minute changes, including raising the State and Local Tax (SALT) deduction cap and moving up the deadline for Medicaid work requirements.

The legislation would also deliver on some of Trump's campaign promises, including ending taxes on tips and overtime, extending his 2017 tax cuts and providing additional funding for border security.

Further reading:

  • Trump's Signature Policy Agenda Passes House After Last-Minute Revisions Including SALT Cap Increase (Forbes)
  • 'Disgusting Abomination': Musk Turns On Trump—Rips Policy Bill In New Angry Rant (Forbes)
  • Here's Where Medicaid Cuts Stand In Trump's Mega-Bill—Affecting More Than 7 Million Americans (Forbes)

A 10-Year Pause on State AI Laws Is the Smart Move

Yahoo

21-05-2025

  • Business
  • Yahoo

A 10-Year Pause on State AI Laws Is the Smart Move

Congress is currently considering a policy that could define America's technological future: a proposed 10-year moratorium on a broad swath of state-level artificial intelligence (AI) regulations. While the idea of pausing state legislative action might seem radical to some—and has certainly caught proponents of localized AI governance off guard—it is precisely the bold stroke this moment demands. This is not about stifling oversight, but about fostering mutually assured innovation—a framework where a unified, predictable national approach to AI governance becomes the default, ensuring that the transformative power of AI reaches every corner of our nation, especially the people who need it the most.

The concept of a consistent national strategy has garnered support from a diverse chorus of voices, including Colorado Democratic Gov. Jared Polis, Rep. Jay Obernolte (R–CA), and leading AI developers at OpenAI. They recognize that AI's potential is too vast, and its development too critical, to be balkanized into a patchwork of 50 different regulatory schemes.

At Meta's recent Open Source AI Summit, I witnessed firsthand the burgeoning applications of AI that promise to reshape our world for the better. Consider a health care system, like the one at UTHealth Houston, using AI to proactively identify patients likely to miss crucial appointments. By automatically rescheduling these individuals' appointments, the system saved hundreds of thousands of dollars, but more importantly, it ensured continuity of care for potentially vulnerable patients. Consider another innovation: AI tools that meticulously analyze data from colonoscopies, significantly increasing the chances of detecting cancerous or precancerous conditions at their earliest, most treatable stages. Or look at the global efforts of the World Resources Institute, which leverages AI and satellite imagery to track deforestation in near real time, providing invaluable data to combat climate change and inform sustainable land-use policies. These are not abstract academic exercises; they are tangible solutions to pressing human problems, with the potential to drastically improve health care outcomes, facilitate more robust climate forecasts, aid food production, and contribute to more equitable societies.

These green shoots of innovation, however, are incredibly fragile. They require not just brilliant minds and dedicated research, but also a stable and predictable environment in which to grow. A moratorium on disparate state regulations provides precisely this: regulatory certainty. This certainty is a powerful catalyst, unlocking further investment, attracting top-tier talent, and allowing nascent technologies to mature and disseminate across the nation. The alternative is a landscape where only the largest, most well-funded labs can navigate the regulatory maze, while groundbreaking tools from startups and research institutes—tools that could disproportionately benefit individuals in precarious social, economic, or health conditions—wither on the vine. This is the crux of mutually assured innovation: states collectively leaning into a uniform path to governance, preventing a scenario where innovation becomes a luxury of the few rather than a right for all.

A hodgepodge of state regulations, however well-intentioned, will inevitably stymie AI innovation. Labs could be subjected to conflicting, sometimes contradictory, compliance schemes. While behemoths like Google or Microsoft might absorb the legal and operational costs of navigating 50 different sets of rules, smaller labs and university research teams would face a disproportionate burden. They would be forced into a perpetual state of vigilance, constantly monitoring legislative trackers, investing in legal counsel to ensure they remain compliant with new provisions, and diverting precious resources away from research and development.

Advocates for states' rights in AI regulation often dismiss these concerns as inflated. Let's, for a moment, entertain that skepticism and play out a realistic scenario. Imagine just three of the hundreds of AI-related bills currently pending before state legislatures actually pass into law: California's S.B. 813, Rhode Island's S.B. 358, and New York's proposed Responsible AI Safety and Education (RAISE) Act.

California's S.B. 813: The bill establishes a process for the Attorney General (A.G.) to designate a private entity as a Multistakeholder Regulatory Organization (MRO) that certifies AI models and applications based on their risk mitigation plans. MROs must address high-impact risks, including cybersecurity threats; chemical, biological, radiological, and nuclear threats; malign persuasion; and AI model autonomy, with the A.G. establishing minimum requirements and conflict-of-interest rules. The MRO has the authority to decertify non-compliant AI systems and must submit annual reports to the Legislature and the A.G. on risk evaluation and mitigation effectiveness.

Rhode Island's S.B. 358: This bill takes a different tack, seeking to establish "strict liability for AI developers for injuries caused by their AI systems to non-users," according to OneTrust DataGuidance. Liability would apply if the AI's actions would be considered negligent "or an intentional tort if performed by a human," with the AI's conduct being "the factual and proximate cause of the injury" and the injury not being "intended or reasonably foreseeable by the user." It even presumes "the AI had the relevant mental state for torts requiring such," a novel legal concept.

New York's RAISE Act: This act would empower the state's A.G. to regulate "frontier AI models" to prevent "critical harm" (e.g., mass casualties, major economic damage from AI-assisted weaponry, or autonomous AI criminality). It proposes to do so by requiring labs to implement a "written safety and security protocol" based on vague "reasonableness" standards and to avoid deploying models that create an "unreasonable risk." The act also mandates annual third-party audits and relies on an A.G.'s office and judiciary that may lack the specialized expertise for consistent enforcement, potentially penalizing smaller innovators more harshly.

The sheer diversity in these approaches is telling. California might mandate specific risk assessment methodologies and an oversight board. Rhode Island could impose a strict liability regime with novel legal presumptions. New York could demand adherence to ill-defined "reasonableness" standards, enforced by an A.G.'s office with manifold other priorities. Now multiply this complexity by 10, 20, or even 50 states, each with its own definitions of "high-risk AI," "algorithmic bias," and "sufficient transparency," and its own liability and enforcement standards. The result is a compliance nightmare that drains resources and chills innovation.

There are profound questions about whether states possess the institutional capacity—from specialized auditors to technically proficient A.G. offices and judiciaries—to effectively implement and enforce such complex legislation. The challenge of adjudicating novel concepts like strict AI liability, as seen in Rhode Island's bill, or interpreting vague "reasonableness" requirements, as in the New York proposal, further underscores this capacity gap. Creating new, effective regulatory bodies and staffing them with scarce AI expertise is a monumental undertaking, often underestimated by legislative proponents. The risk, as seen in other attempts to regulate emerging tech, is that enforcement becomes delayed, inconsistent, or targets those least able to defend themselves, rather than achieving the intended policy goals.

As some states reap the economic and social benefits of AI adoption under a more permissive or nationally harmonized framework, residents and businesses in heavily regulated states may begin to question the wisdom of their localized approach. The political will to maintain stringent, potentially innovation-stifling regulations could erode as the comparative advantages of AI become clearer elsewhere.

Finally, the rush to regulate at the state level often neglects full consideration of the coverage afforded by existing laws. As detailed in extensive lists by the A.G.s of California and New Jersey, many state consumer protection statutes already address AI harms. Texas' A.G. has already leveraged the state's primary consumer protection statute to shield consumers from such harms. Though some gaps may exist, legislators ought, at a minimum, to conduct a full review of existing laws before adopting new legislation.

No one is arguing for a complete abdication of oversight. However, the far more deleterious outcome is a fractured regulatory landscape that slows the development and dissemination of AI systems poised to benefit the most vulnerable among us. These individuals cannot afford to wait for 50 states to achieve regulatory consensus.

A 10-year moratorium is not a surrender to unchecked technological advancement. It is a strategic pause—an opportunity to develop a coherent national framework for AI governance that promotes safety, ethics, and accountability while simultaneously unleashing the immense innovative potential of this technology. It is a call for mutually assured innovation, ensuring that the benefits of AI are broadly shared and that America leads the world not just in developing AI, but in deploying it for the common good.

Vibrant Emotional Health Applauds the Reintroduction of the 9-8-8 Lifeline Cybersecurity Responsibility Act in Congress

Yahoo

08-04-2025

  • Health
  • Yahoo

Vibrant Emotional Health Applauds the Reintroduction of the 9-8-8 Lifeline Cybersecurity Responsibility Act in Congress

NEW YORK, April 8, 2025 /PRNewswire/ -- Vibrant Emotional Health (Vibrant), one of the nation's premier leaders in mental health and the administrator of the 988 Suicide & Crisis Lifeline (988 Lifeline), applauds Representatives Jay Obernolte (R-CA) and Debbie Dingell (D-MI) and Senators Markwayne Mullin (R-OK) and Alex Padilla (D-CA) for reintroducing the 9-8-8 Lifeline Cybersecurity Responsibility Act. These bipartisan leaders have taken action in Washington, D.C., to improve the security and integrity of the 988 Suicide & Crisis Lifeline.

After a 2022 cyberattack on a vendor for the 988 Lifeline resulted in an outage that blocked call access for those seeking mental health support, legislators in Washington introduced the 9-8-8 Lifeline Cybersecurity Responsibility Act. This landmark bill establishes a reporting structure for cybersecurity vulnerabilities or incidents that impact the 988 Lifeline and requires a study of the 988 Lifeline's cybersecurity risks and vulnerabilities. Due to robust bipartisan support for the 988 Lifeline in Congress, the bill was nearly signed into law in 2024. Vibrant looks forward to working with bipartisan champions in both chambers of Congress to advance this critical bill, which will help improve responses to cybersecurity attacks on the 988 Lifeline.

Cara McNulty, DPA, Chief Executive Officer of Vibrant Emotional Health, thanked the bill sponsors for their tireless advocacy on behalf of the 988 Lifeline. She said, "I am deeply appreciative of Representatives Obernolte and Dingell and Senators Padilla and Mullin for their efforts to protect the 988 Suicide & Crisis Lifeline from cybersecurity attacks. This important legislation will help Vibrant Emotional Health ensure uninterrupted access to the 988 Lifeline for people in crisis who need vital mental health support. We have worked closely with these Congressional champions for years and look forward to building upon prior support for this bill to fortify the 988 Suicide & Crisis Lifeline, and ask that Congress pass the 9-8-8 Lifeline Cybersecurity Responsibility Act."

If you or someone you know is experiencing emotional distress, mental health challenges, or problematic substance use, help is available. Text or call 988, or use the Lifeline's chat service, for 24/7 judgment-free support from skilled, compassionate crisis counselors. For more information about Vibrant Emotional Health and its mission to improve emotional well-being for all, visit Vibrant's website and stay up to date with Vibrant's latest public policy updates.

About Vibrant Emotional Health

Vibrant Emotional Health is a non-profit organization that helps individuals and families achieve emotional well-being. For over 55 years, our groundbreaking solutions have delivered high-quality services and support when, where, and how people need them. We offer confidential emotional support through our state-of-the-art contact center and crisis hotline services, which use leading-edge telephone, text, and web-based technologies, including the 988 Suicide & Crisis Lifeline, Disaster Distress Helpline, and NFL Life Line. Through our community wellness programs, individuals and families obtain the support and skills they need to thrive. Our advocacy and education initiatives promote mental well-being as a social responsibility. We help millions of people live healthier and more vibrant lives yearly. We're advancing access, dignity, and respect for all and revolutionizing the system for good.

Follow Vibrant on X, LinkedIn, Facebook, and Instagram.

Media Contact: Divendra Jaffar, djaffar@

SOURCE Vibrant Emotional Health
