Tech war: Huawei to unveil tech to cut China's reliance on HBM chips, report says

Huawei Technologies is set to unveil a technological breakthrough that could reduce China's reliance on high-bandwidth memory (HBM) chips for running artificial intelligence reasoning models, according to state-run Securities Times.
The announcement will take place in collaboration with China UnionPay at the 2025 Financial AI Reasoning Application Landing and Development Forum in Shanghai on Tuesday, according to the report on Sunday. The event aims to promote AI reasoning models and applications in the financial sector.
Huawei did not immediately respond to a request for comment on Monday.
If confirmed, the development would represent the latest step by the US-sanctioned tech giant to establish a self-sufficient AI hardware ecosystem in China.
A Huawei Ascend AI chip. Photo: China News Service via Getty Images
The top suppliers of HBM semiconductors, which are often integrated into AI chipsets, are US company Micron Technology and South Korean firms Samsung Electronics and SK Hynix.

Related Articles

Hong Kong chief executive can't treat the environment like a side dish

South China Morning Post

2 minutes ago


Feel strongly about these letters, or any other aspects of the news? Share your views by emailing us your Letter to the Editor at letters@ or filling in this Google form. Submissions should not exceed 400 words, and must include your full name and address, plus a phone number for verification.

Chief Executive John Lee Ka-chiu has been soliciting public views for his fourth policy address while stressing the importance of boosting the economy, innovative development and people's livelihoods. The recent black rainstorms remind us of the urgency of building climate resilience against extreme weather. Lee must give high priority to sustainability in order to strengthen our resilience to financial, environmental and social risks – the three key pillars of sustainable development.

Extreme weather events are getting more intense and frequent worldwide. According to the World Meteorological Organisation, extreme weather caused US$4.3 trillion in economic losses and over two million deaths over 50 years. To minimise economic and social risks, Lee should not treat environmental sustainability like a side dish; it is the main course he needs to cook well for 7.5 million residents.

Simply relying on the city's past appeal as a shopping and dining paradise won't revive our economy. The authorities must work on the unique qualities of our city, and build upon them to enable a vibrant and sustainable future.

GPT-5: Has AI just plateaued?

AllAfrica

32 minutes ago


OpenAI CEO Sam Altman says GPT-5 is PhD-level general intelligence, but that's not clearly the case. Photo: Aflo Co Ltd / Alamy

OpenAI claims that its new flagship model, GPT-5, marks 'a significant step along the path to AGI' – that is, the artificial general intelligence that AI bosses and self-proclaimed experts often claim is around the corner. According to OpenAI's own definition, AGI would be 'a highly autonomous system that outperforms humans at most economically valuable work.'

Setting aside whether this is something humanity should be striving for, OpenAI CEO Sam Altman's arguments for GPT-5 being a 'significant step' in this direction sound remarkably unspectacular. He claims GPT-5 is better at writing computer code than its predecessors. It is said to 'hallucinate' a bit less, and is a bit better at following instructions – especially when they require following multiple steps and using other software. The model is also apparently safer and less 'sycophantic', because it will not deceive the user or provide potentially harmful information just to please them.

Altman does say that 'GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.' Yet it still doesn't have a clue about whether anything it says is accurate, as you can see from its attempt below to draw a map of North America.

Sam Altman: With GPT-5, you'll have a PhD-level expert in any area you need
Me: Draw a map of North America, highlighting countries, states, and capitals
GPT 5: *
Sam Altman forgot to mention that the PhD-level expert used ChatGPT to cheat on all their geography classes…
— Luiza Jarovsky, PhD (@LuizaJarovsky) August 10, 2025

It also cannot learn from its own experience, or achieve more than 42% accuracy on a challenging benchmark like 'Humanity's Last Exam', which contains hard questions on all kinds of scientific (and other) subject matter.
This is slightly below the 44% that Grok 4, the model recently released by Elon Musk's xAI, is said to have achieved.

The main technical innovation behind GPT-5 seems to be the introduction of a 'router'. This decides which model of GPT to delegate to when asked a question, essentially asking itself how much effort to invest in computing its answers (then improving over time by learning from feedback about its previous choices). The options for delegation include the previous leading models of GPT and also a new 'deeper reasoning' model called GPT-5 Thinking.

It's not clear what this new model actually is. OpenAI isn't saying it is underpinned by any new algorithms or trained on any new data (since all available data was pretty much being used already). One might therefore speculate that this model is really just another way of controlling existing models with repeated queries and pushing them to work harder until it produces better results.

It was back in 2017 when researchers at Google found that a new type of AI architecture was capable of capturing tremendously complex patterns within long sequences of words that underpin the structure of human language. By training these so-called large language models (LLMs) on large amounts of text, they could respond to prompts from a user by mapping a sequence of words to their most likely continuation in accordance with the patterns present in the dataset. This approach to mimicking human intelligence became better and better as LLMs were trained on larger and larger amounts of data – leading to systems like ChatGPT.

Ultimately, these models just encode a humongous table of stimuli and responses. A user prompt is the stimulus, and the model might just as well look it up in a table to determine the best response.
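OpenAI has not published how GPT-5's router actually works. Purely to illustrate the general idea the article describes – a dispatcher that picks among models and improves over time from feedback on its previous choices – here is a toy sketch. Everything in it (the model names, the prompt-length 'difficulty' heuristic, the learning rule) is invented for illustration, not OpenAI's method:

```python
import random

class Router:
    """Toy dispatcher: picks which 'model' handles a prompt, then
    learns from feedback which choice pays off for similar prompts."""

    def __init__(self, models):
        self.models = models   # name -> callable taking a prompt
        self.scores = {}       # (bucket, name) -> running quality estimate

    def bucket(self, prompt):
        # Crude difficulty proxy: long prompts vs short ones.
        return "long" if len(prompt.split()) > 20 else "short"

    def route(self, prompt, explore=0.1):
        b = self.bucket(prompt)
        if random.random() < explore:
            # Occasionally try a random model to keep learning.
            name = random.choice(list(self.models))
        else:
            # Otherwise pick the best-scoring model for this bucket.
            name = max(self.models,
                       key=lambda m: self.scores.get((b, m), 0.0))
        return name, self.models[name](prompt)

    def feedback(self, prompt, name, reward, lr=0.3):
        # Nudge the estimate for (bucket, model) toward the reward.
        b = self.bucket(prompt)
        old = self.scores.get((b, name), 0.0)
        self.scores[(b, name)] = old + lr * (reward - old)
```

In use, a caller would route a prompt, score the answer, and feed that score back, so that (for example) long prompts gradually get sent to a slower, 'deeper' model when that model earns better feedback.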
Considering how simple this idea seems, it's astounding that LLMs have eclipsed the capabilities of many other AI systems – if not in terms of accuracy and reliability, certainly in terms of flexibility and usability.

The jury may still be out on whether these systems could ever be capable of true reasoning, or understanding the world in ways similar to ours, or keeping track of their experiences to refine their behaviour correctly – all arguably necessary ingredients of AGI.

In the meantime, an industry of AI software companies has sprung up that focuses on 'taming' general purpose LLMs to be more reliable and predictable for specific use cases. Having studied how to write the most effective prompts, their software might prompt a model multiple times, or use numerous LLMs, adjusting the instructions until it gets the desired result. In some cases, they might 'fine-tune' an LLM with small-scale add-ons to make them more effective. OpenAI's new router is in the same vein, except it's built into GPT-5.

If this move succeeds, the engineers of companies further down the AI supply chain will be needed less and less. GPT-5 would also be cheaper for users than its LLM competitors because it would be more useful without these embellishments.

At the same time, this may well be an admission that we have reached a point where LLMs cannot be improved much further to deliver on the promise of AGI. If so, it will vindicate those scientists and industry experts who have been arguing for a while that it won't be possible to overcome the current limitations in AI without moving beyond LLM architectures.

OpenAI's new emphasis on routing also harks back to the 'meta reasoning' that gained prominence in AI in the 1990s, based on the idea of 'reasoning about reasoning.' Imagine, for example, you were trying to calculate an optimal travel route on a complex map.
Heading off in the right direction is easy, but every time you consider another 100 alternatives for the remainder of the route, you will likely only get an improvement of 5% on your previous best option. At every point of the journey, the question is how much more thinking it's worth doing.

This kind of reasoning is important for dealing with complex tasks by breaking them down into smaller problems that can be solved with more specialised components. This was the predominant paradigm in AI until the focus shifted to general-purpose LLMs.

No more gold rush? Photo: JarTee via The Conversation

It is possible that the release of GPT-5 marks a shift in the evolution of AI which, even if it is not a return to this approach, might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand. Whether that could put us on a path toward AGI is hard to say. But it might create an opportunity to move towards creating AIs we can control using rigorous engineering methods. And it might help us remember that the original vision of AI was not only to replicate human intelligence, but also to better understand it.

Michael Rovatsos is professor of artificial intelligence, University of Edinburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.
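The stop-or-keep-thinking trade-off in the article's travel-route example can be sketched as a simple 'value of computation' loop: keep refining only while one more round of search is expected to improve the answer by more than that round costs. The numbers and the 5%-improvement model below are invented for illustration:

```python
def refine(best_cost, optimum=100.0, rate=0.05):
    """Invented model of diminishing returns: each extra round of
    search trims about 5% off the remaining gap to the optimum."""
    return best_cost - rate * (best_cost - optimum)

def plan(initial_cost=200.0, think_cost=0.5):
    """Meta-reasoning loop: keep 'thinking' only while the predicted
    improvement of one more round exceeds the cost of that round."""
    cost, rounds = initial_cost, 0
    while True:
        predicted = refine(cost)
        if cost - predicted <= think_cost:
            return cost, rounds   # not worth any more thinking
        cost, rounds = predicted, rounds + 1
```

The loop always halts because each round shrinks the remaining gap geometrically, so the marginal improvement eventually drops below the cost of computing it – the same reason a route planner stops considering further alternatives.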

Can Philippines' semiconductor sector weather Trump's 100% tariff storm?

South China Morning Post

32 minutes ago


Philippine economists and the nation's chip industry have sounded the alarm over US President Donald Trump's proposal to impose 100 per cent tariffs on semiconductor exports to America, warning of potentially devastating consequences.

While concerns are mounting, some observers remain optimistic that Manila may weather the storm, pointing to the 'Taco' ('Trump always chickens out') slogan that has gained prominence in recent months – a nod to the mercurial president's reputation for reversing course or calling other countries' bluffs.

Last week, Trump revealed plans to levy hefty tariffs on imported semiconductors, though he dangled exemptions for companies willing to relocate their supply chains to the United States as part of his push to reshore electronics manufacturing.

Exporters in the Philippines are bracing for the 'devastating' consequences of such a move, according to Dan Lachica, president of the Semiconductor and Electronics Industries in the Philippines, speaking to local media.

Some 15 per cent of the Philippines' semiconductor exports currently go to the US, industry insiders say. Photo: Shutterstock

Semiconductors account for as much as 70 per cent of the Philippines' exports, with 15 per cent of these destined for the US, Lachica said.
