OpenAI's Altman warns EU regulation may hold Europe back


Yahoo, 07-02-2025

OpenAI chief Sam Altman on Friday suggested European regulation could hold back the development of artificial intelligence (AI), while promising the US company would abide by new EU legislation.
At a panel discussion on AI at Berlin's Technical University, Altman was asked directly about the EU's "AI Act", considered the most comprehensive regulatory framework for the emerging technology in the world.
"We will comply with the law and respect the wishes of the European people", Altman said.
"There are benefits to different regulatory regimes," the Open AI chief said, but added that "there are going to be economic impacts that will become societal impacts".
"We want to be able to deploy our products in Europe as quickly as we do in the rest of the world", Altman said.
It was "in Europe's interest to be able to adopt AI and not be behind the rest of the world".
The EU AI Act was passed in March 2024. This week, regulators issued guidance on which types of AI tools will be outlawed as too dangerous.
They include tools that scrape online images to create facial recognition databases or allow police to evaluate criminal risk based solely on biometric data.
The United States is taking steps to loosen AI regulation. President Donald Trump last month rescinded an order from his predecessor Joe Biden establishing oversight measures for companies developing AI models.
On Thursday, OpenAI announced it would allow some European customers to store and process data from conversations with its chatbots within the European Union in order to help "organisations operating in Europe meet local data sovereignty requirements".
Altman said he was bullish about the pace of development of AI, despite some experts saying the chances of developing artificial general intelligence (AGI) that surpasses all human capabilities are being exaggerated.
"I think you should all be very sceptical when people start saying this is about to run out... or we're going to hit this limit," Altman told the event.
"I think we'll get to something in the next couple of years that many people will look at and say: 'I really didn't think computer was going to do that.'"
Next week, Altman will be one of the high-profile guests at an AI summit in Paris billed by France as a "wake-up call" for Europe.
OpenAI raised public awareness of generative AI models in 2022 with the launch of ChatGPT. It is to open its first office in Germany, in Munich, later this year.


Related Articles

Kioxia preps XL-Flash SSD that's 3x faster than any SSD available — 10 million IOPS drive has peer-to-peer GPU connectivity for AI servers
Yahoo

Kioxia aims to change the storage paradigm with a proposed SSD designed to surpass 10 million input/output operations per second (IOPS) in small-block workloads, the company revealed at its Corporate Strategy Meeting earlier this week. That is roughly three times the peak speed of today's fastest drives.

One of the performance bottlenecks in modern AI servers is data transfer between storage and GPUs: data currently moves through the CPU, which significantly increases latency and extends access times. To reach its performance target, Kioxia is designing a new controller tuned specifically to maximize IOPS (beyond 10 million 512-byte IOPS) so that GPUs can access data fast enough to keep their cores fully utilized at all times.

The proposed Kioxia 'AI SSD' is set to use the company's single-level cell (SLC) XL-Flash memory, which boasts read latencies in the range of 3 to 5 microseconds, significantly lower than the 40 to 100 microseconds offered by SSDs based on conventional 3D NAND. Additionally, by storing one bit per cell, SLC offers faster access times and greater endurance, attributes that are crucial for demanding AI workloads.

Current high-end datacenter SSDs typically achieve 2 to 3 million IOPS for both 4K and 512-byte random reads. From a pure bandwidth perspective, 4K blocks make a lot of sense, whereas 512-byte blocks do not. However, large language models (LLMs) and retrieval-augmented generation (RAG) systems typically perform small, random accesses to fetch embeddings, parameters, or knowledge-base entries, so small block sizes such as 512 bytes better represent actual application behavior than 4K or larger blocks. For LLM and RAG workloads, it therefore makes more sense to use 512-byte blocks for latency and to scale out across multiple drives for bandwidth.
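The block-size trade-off above reduces to simple arithmetic. A minimal sketch (the 10-million-IOPS figure is Kioxia's stated target from the article; the function name is ours):

```python
# A drive's implied bandwidth for a given random-read rate:
#   bandwidth = IOPS * block_size
def implied_bandwidth_gbs(iops: float, block_bytes: int) -> float:
    """Bandwidth in GB/s (decimal) implied by an IOPS rate at one block size."""
    return iops * block_bytes / 1e9

# Kioxia's target: 10 million IOPS at 512-byte blocks
print(implied_bandwidth_gbs(10e6, 512))   # 5.12 GB/s -- well within PCIe 5.0 x4
# The same IOPS rate at 4 KiB blocks would need 8x the bandwidth
print(implied_bandwidth_gbs(10e6, 4096))  # 40.96 GB/s -- beyond a PCIe 5.0 x4 link
```

This is why small blocks make the 10M-IOPS target about latency and command throughput rather than raw bandwidth.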
Using smaller blocks could also enable more efficient use of memory semantics for access. Notably, Kioxia has not disclosed which host interface the 'AI SSD' will use, though from a bandwidth perspective it does not appear to require PCIe 6.0.

The drive will also be optimized for peer-to-peer communication between GPU and SSD, bypassing the CPU for higher performance and lower latency. This points to another reason Kioxia (and Nvidia) favor 512-byte blocks: GPUs typically operate internally on cache lines of 32, 64, or 128 bytes, and their memory subsystems are optimized for burst access to many small, independent memory locations to keep all stream processors busy. 512-byte reads therefore align better with GPU designs.

Kioxia's 'AI SSD' is designed to support AI training setups where large language models require fast, repeated access to massive datasets. Kioxia also envisions it in AI inference applications, particularly systems that employ retrieval-augmented generation to enhance generative AI outputs with real-time data (i.e., for reasoning). Low-latency, high-bandwidth storage access is crucial for such machines to ensure both low response times and efficient GPU utilization.

The Kioxia 'AI SSD' is scheduled for release in the second half of 2026.
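The latency figures earlier in the article also bound how much parallelism a host needs to sustain a given IOPS rate. By Little's law, sustained IOPS equals outstanding commands divided by average latency; a rough sketch using the article's latency ranges (the specific queue depths are our arithmetic, not Kioxia's):

```python
# Little's law applied to storage: IOPS = queue_depth / latency,
# so queue_depth = IOPS * latency.
def required_queue_depth(target_iops: float, latency_seconds: float) -> float:
    """Outstanding commands needed to sustain target_iops at a given latency."""
    return target_iops * latency_seconds

# 10M IOPS at XL-Flash-class read latency (~4 microseconds)
print(required_queue_depth(10e6, 4e-6))   # ~40 outstanding commands
# The same target at conventional 3D NAND latency (~80 microseconds)
print(required_queue_depth(10e6, 80e-6))  # ~800 outstanding commands
```

Lower media latency means the controller and host can hit the same IOPS target with an order of magnitude less queued work, which is part of XL-Flash's appeal here.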

SMI CEO claims Nvidia wants SSDs with 100 million IOPS — up to 33X performance uplift could eliminate AI GPU bottlenecks
Yahoo

Now that the AI industry has exceptionally high-performance GPUs with high-bandwidth memory (HBM), one of the bottlenecks AI training and inference systems face is storage performance. To that end, Nvidia is working with partners to build SSDs that can hit random read performance of 100 million input/output operations per second (IOPS) in small-block workloads, according to Silicon Motion CEO Wallace C. Kuo, who spoke with Tom's Hardware in an exclusive interview.

"Right now, they are aiming for 100 million IOPS — which is huge," Kuo told Tom's Hardware.

Modern AI accelerators, such as Nvidia's B200, feature HBM3E memory bandwidth of around 8 TB/s, which significantly exceeds the capabilities of modern storage subsystems in both overall throughput and latency. Modern PCIe 5.0 x4 SSDs top out at around 14.5 GB/s and deliver 2 to 3 million IOPS for both 4K and 512-byte random reads. Although 4K blocks are better suited for bandwidth, AI models typically perform small, random fetches, which makes 512-byte blocks a better fit for their latency-sensitive access patterns.

However, increasing I/O operations per second by 33 times is hard, given the limitations of both SSD controllers and NAND memory. Kioxia is already working on an 'AI SSD' based on its XL-Flash memory designed to surpass 10 million 512-byte IOPS; the company currently plans to release that drive during the second half of next year, possibly to coincide with the rollout of Nvidia's Vera Rubin platform.

One way to reach 100 million IOPS would be to combine multiple such drives. However, the head of SMI believes that achieving 100 million IOPS on a single drive built on conventional NAND, at reasonable cost and power consumption, will be extremely hard, so a new type of memory may be needed.

"I believe they are looking for a media change," said Kuo. "Optane was supposed to be the ideal solution, but it is gone now. Kioxia is trying to bring XL-NAND and improve its performance. SanDisk is trying to introduce High Bandwidth Flash (HBF), but honestly, I don't really believe in it. Right now, everyone is promoting their own technology, but the industry really needs something fundamentally new. Otherwise, it will be very hard to achieve 100 million IOPS and still be cost-effective."

Many companies, including Micron and SanDisk, are developing new types of non-volatile memory. However, when those new memories will be commercially viable is something that even the head of Silicon Motion is not sure about.
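The headline numbers are easy to sanity-check with the figures quoted in the article (the variable names are ours):

```python
# How Nvidia's reported 100M-IOPS target compares to today's drives
current_iops = 3e6    # upper end for high-end PCIe 5.0 datacenter SSDs
target_iops = 100e6   # figure attributed to Nvidia in the interview
print(target_iops / current_iops)  # ~33x -- the uplift in the headline

# Even at the target, 512-byte random reads imply modest raw bandwidth
block_bytes = 512
ssd_bw = target_iops * block_bytes  # 51.2 GB/s
hbm3e_bw = 8e12                     # ~8 TB/s on an Nvidia B200, per the article
print(hbm3e_bw / ssd_bw)            # 156.25 -- HBM still >150x faster in bandwidth
```

In other words, even a 100M-IOPS drive narrows but does not close the gap to HBM, which is why the discussion centres on new memory media rather than incremental controller gains.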

Warren Buffett Has $90 Billion Invested in These 9 Artificial Intelligence (AI) Stocks. Here's the Best of the Bunch.
Yahoo

Berkshire Hathaway owns only two AI stocks, but Buffett's "secret portfolio" owns another seven. The best Buffett AI stock should benefit tremendously as organizations build and deploy AI models in the cloud.

Warren Buffett readily admits that he doesn't understand artificial intelligence (AI). He has also said that he won't invest in businesses he doesn't understand. So does that mean the legendary investor doesn't own any AI stocks? Nope. Buffett has invested roughly $90 billion in nine companies that are heavily focused on AI. Here they are -- and which one is the best of the bunch.

Apple (NASDAQ: AAPL) ranks as the largest holding in Berkshire Hathaway's (NYSE: BRK.A) (NYSE: BRK.B) portfolio. Berkshire's stake in the iPhone maker is valued at close to $59.3 billion. Although Buffett significantly reduced the conglomerate's position in Apple last year, it still makes up 21% of Berkshire's total portfolio. AI has been at the forefront of Apple's development strategy for years, but the company was seemingly left behind in the generative AI race until it launched Apple Intelligence in 2024. So far, though, Apple Intelligence doesn't appear to be igniting the super-cycle of iPhone upgrades that some analysts predicted.

Amazon (NASDAQ: AMZN) is another top AI stock in Berkshire's portfolio, albeit a much smaller one. Berkshire owns around $2.1 billion of the e-commerce and cloud giant's shares. Buffett didn't make the initial decision to buy Amazon; one of Berkshire's other investment managers (either Todd Combs or Ted Weschler) bought the stock. Most AI models run in the cloud, and as the largest cloud services provider in the world, Amazon Web Services (AWS) has been a big winner as organizations scramble to build and deploy AI models there. Amazon is also using AI extensively internally to increase efficiency and provide more services to customers.
Apple and Amazon are the only AI stocks owned directly by Berkshire Hathaway. But I said that Buffett owned nine AI stocks. Where are the other seven? In Buffett's "secret portfolio."

General Reinsurance acquired New England Asset Management (NEAM) in 1995. Three years later, Berkshire Hathaway acquired General Re. NEAM continues to manage investments for insurance companies. Its holdings don't show up in Berkshire Hathaway's regulatory filings, but any stock owned by NEAM is also owned by Buffett.

Apple is the only AI stock in both Berkshire's and NEAM's portfolios. NEAM owns two other so-called "Magnificent Seven" stocks -- Google parent Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) and Microsoft (NASDAQ: MSFT). Like Amazon, both Alphabet and Microsoft are major cloud service providers profiting from the strong AI tailwind.

NEAM also has stakes in a couple of tech pioneers that are investing heavily in AI. IBM (NYSE: IBM) made headlines in the past with the success of its Watson AI technology. Texas Instruments (NASDAQ: TXN) isn't exactly a shining star in the AI world, but the company makes edge AI products (AI deployed on local devices) and is working with Nvidia to develop power management and sensing technologies for data centers.

The stocks of three AI chipmakers are also in NEAM's portfolio. Broadcom (NASDAQ: AVGO) manufactures AI products, including Ethernet switches designed to accelerate AI workloads and custom AI accelerators. NXP Semiconductors (NASDAQ: NXPI) and Qualcomm (NASDAQ: QCOM) sell products that support edge AI.

If you're an income investor, Texas Instruments is probably the best pick among Buffett's nine AI stocks; its forward dividend yield stands at 2.73%. Alphabet is arguably the most attractively valued AI stock in the group once growth prospects are factored in, with a price/earnings-to-growth (PEG) ratio of 1.36. I think the best Buffett AI stock all-around, though, is Amazon.
The company is poised to profit as more organizations move their apps and data to the cloud. It still has significant growth prospects in e-commerce as well. Amazon is also expanding into new markets, including healthcare, autonomous ride-hailing, and satellite internet services.

This article was originally published by The Motley Fool.
