
Latest news with #AIIndustry

Nvidia's CEO Is Bullish on the U.K.: Should You Be Too?

Yahoo, 2 days ago

Like many global markets, the U.K. economy had a volatile start to the year. After trending upward from early January, gaining around 7.4%, the FTSE 100 reversed course in early March, falling about 13% by early April. It has since rebounded roughly 15%, reflecting renewed investor confidence and a brighter outlook for the U.K. economy. The British economy grew more than expected in the first quarter of 2025, driven largely by the services sector, and recent trade agreements with India, the United States and the EU provide a strong tailwind that could support an upgrade to its growth outlook. With rising uncertainty over tariff policies and growing fears of an economic slowdown, investors have also turned their attention away from U.S. assets, further increasing interest in alternative markets such as the U.K.

On Monday, Nvidia CEO Jensen Huang expressed strong admiration for the U.K. economy, describing it, as quoted on CNBC, as being in a "Goldilocks" situation. According to CNBC, Huang pledged to ramp up his multitrillion-dollar semiconductor company's investment in the U.K.'s AI industry. The U.K. has recently been promoting itself as a future global leader in AI, a push underscored by Huang's optimistic comments. Early in the year, Prime Minister Keir Starmer introduced an ambitious strategy to strengthen the U.K.'s AI sector, including plans to ease regulations for new data centers and to boost the nation's computing capacity twentyfold by 2030. Huang's praise for the country's thriving AI ecosystem comes as a major boost for the U.K. Speaking on a panel, as quoted on CNBC, he added that the ability to develop AI supercomputers in the U.K. will spark greater interest from startups. Amid global uncertainty, the British economy has demonstrated notable resilience.
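To see the net effect of those three swings, the percentage moves quoted above can be compounded from a notional starting level of 100. This is an illustrative sketch using only the rounded figures in the text, not actual FTSE 100 closing levels:

```python
# Illustrative only: compound the approximate FTSE 100 moves described
# in the article, starting from a notional index level of 100.
level = 100.0
level *= 1.074   # ~7.4% gain from early January
level *= 0.87    # ~13% decline into early April
level *= 1.15    # ~15% rebound since

print(round(level, 1))  # net level after all three moves
```

The takeaway is that a 13% drop followed by a 15% rebound more than recovers the loss here only because the moves compound on different bases; the index ends slightly above its post-January peak in this rough sketch.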
Further igniting investor interest is British finance minister Rachel Reeves's proposed $2.7 trillion spending plan. Per IMF projections, as quoted on Reuters, the U.K.'s growth is expected to slightly outpace the Eurozone's, though it will trail the United States and Canada. According to Allan Monks, JPMorgan's chief U.K. economist, as quoted on CNBC, a series of positive developments could help lift U.K. economic growth for the entire second quarter.

Investors can increase their portfolio exposure to the U.K. with pure-play ETFs, namely the iShares MSCI United Kingdom ETF (EWU), Franklin FTSE United Kingdom ETF (FLGB), First Trust United Kingdom AlphaDEX Fund (FKU) and iShares MSCI United Kingdom Small-Cap ETF (EWUS). With a one-month average trading volume of about 1.38 million shares, EWU is the most liquid option, offering easier entry and exit while minimizing the risk of significant price swings, which makes it well suited to active trading strategies. EWU has also gathered an asset base of $3.09 billion, the largest of the group. On annual fees, FLGB is the cheapest, charging 0.09%, which makes it more suitable for long-term investing. Performance-wise, EWUS has been the strongest, gaining 8.55% over the past month and 14.62% over the past three months.

This article was originally published on Zacks Investment Research.
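Why a 0.09% expense ratio matters for long-term holders can be shown with a rough fee-drag calculation. This sketch assumes identical gross returns for both funds (which real funds will not deliver), and the 0.50% comparison fee is a hypothetical figure, not any of the listed ETFs' actual charge:

```python
# Rough fee-drag sketch: grow $10,000 for 20 years at a 7% gross annual
# return, net of two different expense ratios. The 0.50% fee is a
# hypothetical comparison point; only the 0.09% figure comes from the text.
def final_value(principal, gross_return, annual_fee, years):
    """Compound growth net of a flat annual fee deducted from returns."""
    return principal * (1 + gross_return - annual_fee) ** years

cheap = final_value(10_000, 0.07, 0.0009, 20)  # FLGB-like 0.09% fee
dear = final_value(10_000, 0.07, 0.0050, 20)   # hypothetical 0.50% fee

print(round(cheap - dear))  # dollars lost to the higher fee over 20 years
```

Even a fraction of a percentage point in fees compounds into a four-figure difference over two decades under these assumptions, which is why expense ratios weigh more heavily in long-term allocations than in short-term trades.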

Mistral releases a pair of AI reasoning models

TechCrunch, 3 days ago

French AI lab Mistral is getting into the reasoning AI model game. On Tuesday morning, Mistral announced Magistral, its first family of reasoning models. Like other reasoning models, such as OpenAI's o3 and Google's Gemini 2.5 Pro, Magistral works through problems step by step for improved consistency and reliability across topics such as math and physics.

Magistral comes in two flavors: Magistral Small and Magistral Medium. Magistral Small is 24 billion parameters in size and is available for download from the AI dev platform Hugging Face under a permissive Apache 2.0 license. (Parameters are the internal components of a model that guide its behavior.) Magistral Medium, a more capable model, is in preview on Mistral's Le Chat chatbot platform and the company's API, as well as on third-party partner clouds.

'[Magistral is] suited for a wide range of enterprise use cases, from structured calculations and programmatic logic to decision trees and rule-based systems,' writes Mistral in a blog post. '[The models are] fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user's language.'

Founded in 2023, Mistral is a frontier model lab building a range of AI-powered services, including the aforementioned Le Chat and mobile apps. It's backed by venture investors like General Catalyst and has raised over €1.1 billion (roughly $1.24 billion) to date. Despite its formidable resources, Mistral has lagged behind other leading AI labs in certain areas, such as developing reasoning models.

Magistral doesn't appear to be an especially competitive release, either, judging by Mistral's own benchmarks. On GPQA Diamond and AIME, tests that evaluate a model's physics, math and science skills, Magistral Medium underperforms Gemini 2.5 Pro and Anthropic's Claude Opus 4. Magistral Medium also fails to surpass Gemini 2.5 Pro on a popular programming benchmark, LiveCodeBench.
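To put the "24 billion parameters" figure in practical terms for anyone considering the Hugging Face download, a back-of-the-envelope estimate of the weight file size at common precisions is instructive. This uses only the standard bytes-per-parameter for each format; real checkpoints carry additional overhead, so treat these as lower bounds:

```python
# Back-of-the-envelope memory footprint for a 24B-parameter model at
# common weight precisions. Actual checkpoint files include metadata and
# other overhead, so these are approximate lower bounds.
params = 24e9

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2),
                              ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:.0f} GiB")
```

At half precision the weights alone come to roughly 45 GiB, which is why a model of this size typically needs a high-memory GPU (or aggressive quantization) to run locally despite being freely downloadable.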
Perhaps that's why Mistral touts Magistral's other strengths in its blog post. Magistral delivers answers at '10x' the speed of competitors in Le Chat, Mistral claims, and supports a wide array of languages, including Italian, Arabic, Russian, and Simplified Chinese.

'Building on our flagship models, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making,' the company writes in its post, 'whether executing risk assessment and modelling with multiple factors, or calculating optimal delivery windows under constraints.'

The release of Magistral comes after Mistral debuted a 'vibe coding' client, Mistral Code. A few weeks prior to that, Mistral launched several coding-focused models and rolled out Le Chat Enterprise, a corporate-focused chatbot service that offers tools like an AI agent builder and integrates Mistral's models with third-party services like Gmail and SharePoint.

What a Proposed Moratorium on State AI Rules Could Mean for You

CNET, May 21, 2025

States couldn't enforce regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment to the federal government's budget bill, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it can become law. The House is expected to vote on the full budget package this week.

AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since ChatGPT exploded onto the scene in late 2022 has led companies to fit the technology into as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards."

Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI.
"We could approach it at the national level. We can approach it at the state level too. I think we need both."

Several states have already started regulating AI

The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things.

These kinds of regulations are already starting to pop up. The biggest focus is not in the US but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.

"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In a House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said.

While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals.
"There isn't really any enforcement yet." A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.

What a moratorium on state AI regulation means

AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering states' ability to act could hurt the privacy and safety of users. "AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement.
"A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact."

A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said. Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.

Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.

Moratorium draws opposition ahead of House vote

House Democrats have said the proposed pause on regulations would hinder states' ability to protect consumers. Rep. Jan Schakowsky called the move "reckless" in a committee hearing on AI regulation Wednesday. "Our job right now is to protect consumers," the Illinois Democrat said. Republicans, meanwhile, contended that state regulations could be too much of a burden on innovation in artificial intelligence. Rep. John Joyce, a Pennsylvania Republican, said in the same hearing that Congress should create a national regulatory framework rather than leaving it to the states.
"We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive."

At the state level, a letter signed by 40 state attorneys general of both parties called for Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.

House Republicans include a 10-year ban on US states regulating AI in 'big, beautiful' bill

Associated Press, May 16, 2025

WASHINGTON (AP) — House Republicans surprised tech industry watchers and outraged state governments when they added a clause to Republicans' signature 'big, beautiful' tax bill that would ban states and localities from regulating artificial intelligence for a decade. The brief but consequential provision, tucked into the House Energy and Commerce Committee's sweeping markup, would be a major boon to the AI industry, which has lobbied for uniform and light-touch regulation as tech firms develop a technology they promise will transform society.

However, while the clause would be far-reaching if enacted, it faces long odds in the U.S. Senate, where procedural rules may doom its inclusion in the GOP legislation. 'I don't know whether it will pass the Byrd Rule,' said Sen. John Cornyn, R-Texas, referring to a provision that requires that all parts of a budget reconciliation bill, like the GOP plan, focus mainly on budgetary matters rather than general policy aims. 'That sounds to me like a policy change. I'm not going to speculate what the parliamentarian is going to do, but I think it is unlikely to make it,' Cornyn said.

Senators in both parties have expressed an interest in artificial intelligence and believe that Congress should take the lead in regulating the technology. But while lawmakers have introduced scores of bills, including some bipartisan efforts, that would impact artificial intelligence, few have seen any meaningful advancement in the deeply divided Congress. An exception is a bipartisan bill expected to be signed into law by President Donald Trump next week that would enact stricter penalties on the distribution of intimate 'revenge porn' images, both real and AI-generated, without a person's consent.

'AI doesn't understand state borders, so it is extraordinarily important for the federal government to be the one that sets interstate commerce. It's in our Constitution. You can't have a patchwork of 50 states,' said Sen. Bernie Moreno, an Ohio Republican. But Moreno said he was unsure if the House's proposed ban could make it through Senate procedure.

The AI provision in the bill states that 'no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.' The language could bar regulations on systems ranging from popular commercial models like ChatGPT to those that help make decisions about who gets hired or finds housing. State regulations on AI's usage in business, research, public utilities, educational settings and government would be banned.

The congressional pushback against state-led AI regulation is part of a broader move, led by the Trump administration, to do away with policies and business approaches that have sought to limit AI's harms and pervasive bias. Half of all U.S. states so far have enacted legislation regulating AI deepfakes in political campaigns, according to a tracker from the watchdog organization Public Citizen. Most of those laws were passed within the last year, as incidents in democratic elections around the globe in 2024 highlighted the threat of lifelike AI audio clips, videos and images used to deceive voters.

California state Sen. Scott Wiener called the Republican proposal 'truly gross' in a social media post. Wiener, a San Francisco Democrat, authored landmark legislation last year that would have created first-in-the-nation safety measures for advanced artificial intelligence models. The bill was vetoed by California Gov. Gavin Newsom, a fellow San Francisco Democrat. 'Congress is incapable of meaningful AI regulation to protect the public. It is, however, quite capable of failing to act while also banning states from acting,' Wiener wrote. A bipartisan group of dozens of state attorneys general also sent a letter to Congress on Friday opposing the bill.
'AI brings real promise, but also real danger, and South Carolina has been doing the hard work to protect our citizens,' said South Carolina Attorney General Alan Wilson, a Republican, in a statement. 'Now, instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That's not leadership, that's federal overreach.'

As the debate unfolds, AI industry leaders are pressing ahead on research while competing with rivals to develop the best, and most widely used, AI systems. They have pushed federal lawmakers for uniform and unintrusive rules on the technology, saying they need to move quickly on the latest models to compete with Chinese firms. Sam Altman, the CEO of ChatGPT maker OpenAI, testified in a Senate hearing last week that a 'patchwork' of AI regulations 'would be quite burdensome and significantly impair our ability to do what we need to do.' 'One federal framework, that is light touch, that we can understand and that lets us move with the speed that this moment calls for seems important and fine,' Altman told Sen. Cynthia Lummis, a Wyoming Republican.

And Sen. Ted Cruz floated the idea of a 10-year 'learning period' for AI at the same hearing, which included three other tech company executives. 'Would you support a 10-year learning period on states issuing comprehensive AI regulation, or some form of federal preemption to create an even playing field for AI developers and employers?' asked the Texas Republican. Altman responded that he was 'not sure what a 10-year learning period means, but I think having one federal approach focused on light touch and an even playing field sounds great to me.' Microsoft's president, Brad Smith, also offered measured support for 'giving the country time' in the way that limited U.S. regulation enabled early internet commerce to flourish.
'There's a lot of details that need to be hammered out, but giving the federal government the ability to lead, especially in the areas around product safety and pre-release reviews and the like, would help this industry grow,' Smith said.

It was a change, at least in tone, for some of the executives. Altman had testified to Congress two years ago on the need for AI regulation, and Smith, five years ago, praised Microsoft's home state of Washington for its 'significant breakthrough' in passing first-in-the-nation guardrails on the use of facial recognition, a form of AI.

Ten GOP senators said they were sympathetic to the idea of creating a national framework for AI. But whether the majority can work with Democrats to find a filibuster-proof solution is unclear. 'I am not opposed to the concept. In fact, interstate commerce would suggest that it is the responsibility of Congress to regulate these types of activities and not the states,' said Sen. Mike Rounds, a South Dakota Republican. 'If we're going to do it state by state, we're going to have a real mess on our hands,' Rounds said.

___

O'Brien reported from Providence, Rhode Island. AP writers Ali Swenson in New York, Jesse Bedayn in Denver, Jeffrey Collins in Columbia, South Carolina, and Trân Nguyễn in Sacramento, California contributed to this report.
