
Nvidia unveils plan for Taiwan's first 'AI supercomputer'
Huang said Nvidia would work with Taiwanese tech giants Foxconn and TSMC as well as the government to build Taiwan's "first AI supercomputer ... for the AI infrastructure and AI ecosystem".
"Having a world-class AI infrastructure in Taiwan is really important," Huang said in a keynote addresss on the eve of Computex.
The four-day event will draw computer and chip companies from around the world to Taiwan, whose semiconductor industry is critical to the production of everything from iPhones to the servers that run ChatGPT.
Taiwan produces the bulk of the world's most advanced chips, including those needed for the most powerful AI applications and research.
Top executives from Qualcomm, MediaTek and Foxconn will also speak at Computex, where advances in moving AI from data centres into laptops, robots and cars will be in the spotlight.
"From Agentic AI driving smarter personal devices to Physical AI reshaping autonomy, the show maps out the next frontier," specialist research firm Counterpoint said in a note.
Tech expert Paul Yu told AFP the industry was at a "critical juncture" for AI hardware development.
"Over the past two and a half years, significant investment has driven rapid advances in AI technology," said Yu, of Witology Markettrend Research Institute.
"2025 to 2026 will be the crucial period for transitioning AI model training into profitable applications."
'Taiwan to continue to thrive'
While US tariffs were the biggest issue facing the sector, most companies at Computex "will shy away from addressing tariffs directly as the situation is too uncertain," said Eric Smith of specialist platform TechInsights.
Last month, Washington announced a national security probe into imports of semiconductor technology, which could put the industry in the crosshairs of President Donald Trump's trade bazooka and inflict potentially devastating levies.
Since taking office in January, Trump has threatened hefty tariffs against many of America's biggest trade partners with the aim of forcing companies to move production to US soil.
Export-dependent Taiwan has pledged to increase investment in the United States as it seeks to avoid a 32 percent US tariff on its shipments.
But there are concerns the island could lose its dominance of the chip sector -- the so-called "silicon shield" seen as protecting it from an invasion or blockade by China and giving the United States an incentive to defend it.
TSMC, the Taiwanese contract chipmaking giant, has unveiled plans to inject an additional $100 billion into the United States, on top of the $65 billion already pledged.
TSMC supplier GlobalWafers also announced plans last week to increase its US investment by $4 billion as the Taiwanese company opened a wafer facility in the US state of Texas.
But Huang was optimistic on Friday when asked about the impact of tariffs on Taiwan, saying the island would "remain at the centre of the technology ecosystem".
"There are so many smart companies here, there are so many innovative and spirited companies," Huang said told reporters.

Related Articles


France 24 - 3 days ago
Apple rejects Musk claim of App Store bias
Musk has accused Apple of giving unfair preference to ChatGPT on its App Store and threatened legal action, triggering a fiery exchange with OpenAI CEO Sam Altman this week.

"The App Store is designed to be fair and free of bias," Apple said in reply to an AFP inquiry. "We feature thousands of apps through charts, algorithmic recommendations, and curated lists selected by experts using objective criteria."

Apple added that its goal at the App Store is to offer "safe discovery" for users and opportunities for developers to get their creations noticed.

But earlier this week, Musk said Apple was "behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation," without providing evidence to back his claim.

"xAI will take immediate legal action," he said on his social media network X, referring to his own artificial intelligence company, which is responsible for Grok.

X users responded by pointing out that China's DeepSeek AI hit the top spot in the App Store early this year, and Perplexity AI recently ranked number one in the App Store in India. DeepSeek and Perplexity compete with OpenAI and Musk's startup xAI.

Altman called Musk's accusation "remarkable" in a response on X, charging that Musk himself is said to "manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like." Musk called Altman a "liar" in the heated exchange.

OpenAI and xAI recently released new versions of ChatGPT and Grok. App Store rankings listed ChatGPT as the top free app for iPhones on Thursday, with Grok in seventh place. Factors going into App Store rankings include user engagement, reviews and the number of downloads.

Grok was temporarily suspended on Monday in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension, which followed multiple accusations of misinformation including the bot's misidentification of war-related images -- such as a false claim that an AFP photo of a starving child in Gaza was taken in Yemen years earlier.

Last month, Grok triggered an online storm after inserting antisemitic comments into answers without prompting. In a statement on Grok's X account later that month, the company apologized "for the horrific behavior that many experienced."

A US judge has cleared the way for a trial to consider OpenAI's legal claims accusing Musk -- a co-founder of the company -- of waging a "relentless campaign" to damage the organization after it achieved success following his departure. The litigation is another round in a bitter feud between the generative AI start-up and the world's richest person.

© 2025 AFP


Euronews - 4 days ago
How to keep your phone from overheating amid summer heatwaves
The summer holidays are here and many of us will be heading off on trips to hot and sunny destinations – and bringing our electronic devices along. But as Southern European countries like Spain, Italy, and Greece bake under the sun, don't forget that phones, tablets, and other electronics are vulnerable to extreme heat. Here's what device makers and experts say on keeping your electronics cool.

What heat does to a phone

Most electronic devices work best in a specific temperature range. Apple says iPhones and iPads are designed to be used in temperatures between 0 and 35 degrees Celsius. A device might change its behaviour to deal with extreme temperatures, Apple warns: 'Using an iOS or iPadOS device in very hot conditions can permanently shorten battery life'.

Your phone might temporarily warm up if you're charging wirelessly, downloading big files, streaming high-quality video, or doing anything else that requires lots of power or data. Samsung says that's normal and it won't affect the performance or battery lifespan.

What happens if the device gets too hot

If your phone gets so hot that it becomes uncomfortable to hold, Samsung recommends that you stop using it. An overheating iPhone will alert users with a warning message that it needs to cool down before it can be used. Android devices will display a similar message, telling users that the screen will dim, apps will be closed, and charging will be paused.

What not to do

There are things you can do to protect your device from high heat. Don't leave it in a car on a hot day and don't leave it in direct sunlight for long. Apple also warns against using some features when it's very hot or in direct sunlight for long periods, like GPS navigation when driving, playing a graphics-heavy video game, or using the camera. Google, which makes Pixel Android phones, advises users not to use resource-intensive features or apps while charging.

Keep your gear cool

The best thing you can do in extreme heat is turn off your device completely. 'Even background processes can generate heat,' say experts at British electronics chain Curry's. 'A full shutdown helps it cool faster'.

Remove the case, if your phone or tablet has one, because they can trap heat. Also keep it out of direct sunlight and put it somewhere cool, like an air-conditioned room or in front of a fan. But be careful about putting it in cool places. 'Never put your device in the fridge or freezer, as condensation can cause water damage,' according to Curry's.
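The warn-dim-pause behaviour described above amounts to a simple thermal-management policy: escalating mitigation steps as temperature rises. As a rough illustration only – the thresholds and actions below are invented for clarity, not Apple's, Samsung's, or Google's actual firmware logic – such a policy might look like this in Python:

    # Illustrative sketch of a thermal-throttling policy like the one the
    # article describes. Thresholds and actions are invented for clarity;
    # real phones use far more sophisticated firmware-level control.

    COMFORT_MAX_C = 35.0    # Apple's stated upper ambient limit for iPhones/iPads
    WARN_AT_C = 40.0        # hypothetical internal warning threshold
    SHUTDOWN_AT_C = 45.0    # hypothetical emergency threshold

    def thermal_policy(temp_c: float) -> list[str]:
        """Return the mitigation steps to apply at a given internal temperature."""
        if temp_c >= SHUTDOWN_AT_C:
            return ["show cooldown warning", "power off until cool"]
        if temp_c >= WARN_AT_C:
            return ["dim screen", "close background apps", "pause charging"]
        if temp_c > COMFORT_MAX_C:
            return ["throttle CPU/GPU clocks"]  # mild, mostly invisible mitigation
        return ["normal operation"]

    for t in (25.0, 37.0, 41.0, 46.0):
        print(f"{t:.0f} C -> {thermal_policy(t)}")

Real devices implement this in firmware with many more sensors and gradations; the point is only that the user-visible behaviours the article lists map onto escalating mitigation steps.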


Euronews - 4 days ago
Could an AI chatbot trick you into revealing private information?
Artificial intelligence (AI) chatbots can easily manipulate people into revealing deeply personal information, a new study has found.

AI chatbots such as OpenAI's ChatGPT, Google Gemini, and Microsoft Copilot have exploded in popularity in recent years. But privacy experts have raised concerns over how these tools collect and store people's data – and whether they can be co-opted to act in harmful ways.

'These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction,' William Seymour, a cybersecurity lecturer at King's College London, said in a statement.

For the study, researchers from King's College London built AI models based on the open source code from Mistral's Le Chat and two different versions of Meta's AI system Llama. They programmed the conversational AIs to try to extract people's private data in three different ways: asking for it directly; tricking users into disclosing information, seemingly for their own benefit; and using reciprocal tactics to get people to share these details, for example by providing emotional support.

The researchers asked 502 people to test out the chatbots – without telling them the goal of the study – and then had them fill out a survey that included questions on whether their security rights were respected.

The 'friendliness' of AI models 'establishes comfort'

They found that 'malicious' AI models are incredibly effective at securing private information, particularly when they use emotional appeals to trick people into sharing data.

Chatbots that used empathy or emotional support extracted the most information with the least perceived safety breaches by the participants, the study found. That is likely because the 'friendliness' of these chatbots 'establish[ed] a sense of rapport and comfort,' the authors said.

They described this as a 'concerning paradox' where AI chatbots act friendly to build trust and form connections with users – and then exploit that trust to violate their privacy.

Notably, participants also disclosed personal information to AI models that asked them for it directly, even though they reported feeling uncomfortable doing so. The participants were most likely to share their age, hobbies, and country with the AI, along with their gender, nationality, and job title. Some participants also shared more sensitive information, like their health conditions or income, the report said.

'Our study shows the huge gap between users' awareness of the privacy risks and how they then share information,' Seymour said.

AI personalisation 'outweighs privacy concerns'

AI companies collect personal data for various reasons, such as personalising their chatbot's answers, sending notifications to people's devices, and sometimes for internal market research. Some of these companies, though, are accused of using that information to train their latest models or of not meeting privacy requirements in the European Union.

For example, last week Google came under fire for revealing people's private chats with ChatGPT in its search results. Some of the chats disclosed extremely personal details about addiction, abuse, or mental health issues.

The researchers said the convenience of AI personalisation often 'outweighs privacy concerns'. They suggested features and training to help people understand how AI models could try to extract their information – and to make them wary of providing it.
For example, nudges could be included in AI chats to show users what data is being collected during their interactions. 'More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,' Seymour said. 'Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,' he added.
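To illustrate the kind of nudge the researchers suggest, a chat client could scan an outgoing message for likely personal data and warn the user before it is sent. The sketch below is a simplified assumption of how that might work – the patterns and categories are invented for illustration, not the study's implementation:

    # Minimal sketch of a privacy "nudge": flag likely personal data in a
    # message before it is sent to a chatbot. The patterns and categories
    # are simplified illustrations, not the study's actual implementation.
    import re

    PII_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone number": re.compile(r"\+?\d[\d\s-]{7,}\d"),
        "age": re.compile(r"\b(?:I am|I'm) \d{1,3}(?: years? old)?\b", re.I),
        "health detail": re.compile(r"\b(?:diagnos\w+|medication|therapy)\b", re.I),
    }

    def privacy_nudge(message: str) -> list[str]:
        """Return human-readable warnings for data categories found in a message."""
        return [f"Heads up: this message appears to contain your {label}."
                for label, pattern in PII_PATTERNS.items()
                if pattern.search(message)]

    for warning in privacy_nudge("I'm 34 years old; email me at jane.doe@example.com"):
        print(warning)

A production system would need far more robust detection (for instance, a trained named-entity model), but even simple pattern matching can surface the 'what data is being collected' signal the researchers describe.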