California is trying to regulate its AI giants — again

The Verge · 5 hours ago

Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom's desk — and died there as he vetoed the buzzy piece of legislation.
SB 1047 would have required makers of the largest AI models, particularly those that cost $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren't happy about the veto, and most large tech companies were. But the story didn't end there. Newsom, who had felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers to help propose an alternative plan — one that would support the development and the governance of generative AI in California, along with guardrails for its risks.
On Tuesday, that report was published.
The authors of the 52-page 'California Report on Frontier Policy' said that AI capabilities — including models' chain-of-thought 'reasoning' abilities — have 'rapidly improved' since Newsom's decision to veto SB 1047. Using historical case studies, empirical research, modeling, and simulations, they suggested a new framework that would require more transparency and independent scrutiny of AI models. Their report is appearing against the backdrop of a possible 10-year moratorium on states regulating AI, backed by a Republican Congress and companies like OpenAI.
The report — co-led by Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society — concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine and transportation. Its authors agreed it's important to not stifle innovation and 'ensure regulatory burdens are such that organizations have the resources to comply.'
'Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms'
But reducing risks is still paramount, they wrote: 'Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms.'
The group published a draft version of their report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to 'chemical, biological, radiological, and nuclear (CBRN) weapons risks… has grown.' Leading companies, they added, have self-reported concerning spikes in their models' capabilities in those areas.
The authors have made several changes to the draft report. They now note that California's new AI policy will need to navigate quickly changing 'geopolitical realities.' They added more context about the risks that large AI models pose, and they took a harder line on categorizing companies for regulation, saying a focus purely on how much compute their training required was not the best approach.
AI's training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases. It can be used as an 'initial filter to cheaply screen for entities that may warrant greater scrutiny,' but factors like initial risk evaluations and downstream impact assessment are key.
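To make that tiered idea concrete, here is a minimal, hypothetical sketch (in Python) of how a compute threshold could serve only as a cheap initial screen, with risk evaluations and downstream adoption acting as the deciding factors. The threshold value, score cutoffs, and field names below are illustrative assumptions, not figures or definitions from the report.

```python
# Hypothetical sketch of the report's tiered approach: compute is only an
# initial filter; evaluations and real-world impact decide whether a model
# warrants closer scrutiny. All numbers here are illustrative assumptions.

from dataclasses import dataclass

# Illustrative screening threshold, roughly the scale used in compute-based
# rules like SB 1047; the report argues this alone is not a sufficient basis.
COMPUTE_SCREEN_FLOP = 1e26


@dataclass
class ModelProfile:
    name: str
    training_compute_flop: float   # estimated training compute
    initial_risk_score: float      # 0-1, from pre-deployment evaluations
    downstream_reach: int          # e.g. deployed users or integrations


def warrants_scrutiny(model: ModelProfile) -> bool:
    """Compute acts only as the first, cheap filter; risk and adoption decide."""
    passes_initial_screen = model.training_compute_flop >= COMPUTE_SCREEN_FLOP

    # A model below the compute screen can still qualify if evaluations or
    # wide real-world adoption flag it (the report's downstream-impact point).
    high_risk = model.initial_risk_score >= 0.7
    wide_adoption = model.downstream_reach >= 1_000_000

    return passes_initial_screen or high_risk or wide_adoption


if __name__ == "__main__":
    frontier = ModelProfile("frontier-model", 3e26, 0.4, 50_000)
    small_but_risky = ModelProfile("small-model", 1e24, 0.8, 2_000_000)
    for m in (frontier, small_but_risky):
        print(m.name, "->", "scrutiny" if warrants_scrutiny(m) else "screened out")
```

In this sketch the second model would be screened out by a purely compute-based rule but still flagged once risk scores and adoption are considered, which is the gap the authors say a compute-only definition leaves open.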
That's especially important because the AI industry is still the Wild West when it comes to transparency, with little agreement on best practices and 'systemic opacity in key areas' like data acquisition, safety and security processes, pre-release testing, and potential downstream impact, the authors wrote.
The report calls for whistleblower protections, third-party evaluations with safe harbor for researchers conducting those evaluations, and sharing information directly with the public, to enable transparency that goes beyond what current leading AI companies choose to disclose.
One of the report's lead writers, Scott Singer, told The Verge that AI policy conversations have 'completely shifted on the federal level' since the draft report. He argued that California, however, could help lead a 'harmonization effort' among states for 'commonsense policies that many people across the country support.' That's a contrast to the jumbled patchwork that AI moratorium supporters claim state laws will create.
In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard, requiring leading AI companies 'to publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks.'
'Developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms'
But even steps like that aren't enough, the authors of Tuesday's report wrote, because 'for a nascent and complex technology being developed and adopted at a remarkably swift pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms.'
That's why one of the key tenets of Tuesday's report is the need for third-party risk assessment.
The authors concluded that risk assessments would incentivize companies like OpenAI, Anthropic, Google, Microsoft and others to amp up model safety, while helping paint a clearer picture of their models' risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say.
Not only are 'thousands of individuals… willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams,' but also, groups of third-party evaluators have 'unmatched diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI.'
But if you're allowing third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access — for meaningful assessments, a lot of access. And that's something companies are hesitant to do.
It's not even easy for second-party evaluators to get that level of access. Metr, a company that partners with OpenAI to safety-test its models, wrote in a blog post that it wasn't given as much time to test OpenAI's o3 model as it had been with past models, and that OpenAI didn't give it enough access to data or the models' internal reasoning. Those limitations, Metr wrote, 'prevent us from making robust capability assessments.' OpenAI later said it was exploring ways to share more data with firms like Metr.
Even an API or disclosures of a model's weights may not let third-party evaluators effectively test for risks, the report noted, and companies could use 'suppressive' terms of service to ban or threaten legal action against independent researchers that uncover safety flaws.
Last March, more than 350 AI industry researchers and others signed an open letter calling for a 'safe harbor' for independent AI safety testing, similar to existing protections for third-party cybersecurity testers in other fields. Tuesday's report cites that letter and calls for big changes, as well as reporting options for people harmed by AI systems.
'Even perfectly designed safety policies cannot prevent 100% of substantial, adverse outcomes,' the authors wrote. 'As foundation models are widely adopted, understanding harms that arise in practice is increasingly important.'
