Yahoo
07-02-2025
Huawei manages to pull $118 billion in revenue despite strict U.S. sanctions and restrictions
Chinese tech giant Huawei appears to have exceeded expectations in 2024, reporting annual revenue of over 860 billion yuan ($118.27 billion). This represents a 22% year-over-year increase, an impressive result given ongoing U.S. sanctions designed to restrict its access to advanced semiconductor technology and global markets.

A key driver of Huawei's financial success has been the revival of its consumer business, particularly in the smartphone sector. The company made a strong comeback in the Chinese market, largely fueled by the release of smartphones featuring domestically produced chipsets, a major win considering the U.S. export restrictions. According to Canalys, Huawei captured a 16% share of China's smartphone market last year and surpassed Apple in domestic sales.

Beyond mobile devices, Huawei's smart car solutions unit also contributed significantly to revenue growth. The company forged partnerships with Chinese automakers, providing autonomous driving software, in-car operating systems, and advanced chips to power next-generation electric vehicles. Huawei also continued to expand its presence in artificial intelligence (AI) and cloud computing, competing with global tech giants in AI-driven infrastructure and enterprise solutions.

The company recently announced that its Ascend AI GPUs will support DeepSeek's R1 large language model (LLM), marking a significant development in China's AI hardware landscape. This integration will allow the R1 model to run efficiently on Huawei's Ascend platform, reducing reliance on Western technologies. The R1 model was initially trained using over two thousand Nvidia H800 GPUs, but inference tasks can now be performed on Huawei's Ascend 910C chips.
According to AI industry expert Yuchen Jin, the Ascend 910C achieves approximately 60% of the performance of Nvidia's H100 GPU during inference, with potential for further gains through manual optimization. The move is part of a broader trend among Chinese tech companies to adopt domestic AI solutions. Firms such as Moore Threads and Hygon Information Technology have announced support for DeepSeek's models using their AI chips. Huawei, in collaboration with AI infrastructure startup SiliconFlow, is integrating DeepSeek's models into its Ascend cloud service. Major cloud service providers such as Alibaba, Baidu, and Tencent have also made DeepSeek's models available, signaling a significant shift in China's AI industry toward self-reliance and reduced dependence on the West.
Yahoo
05-02-2025
DeepSeek research suggests Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance
Huawei's HiSilicon Ascend 910C is an updated version of the company's Ascend 910 processor for AI training, introduced in 2019. By now, the performance of the Ascend 910 is barely sufficient for cost-efficient training of large AI models. Still, when it comes to inference, it delivers 60% of Nvidia's H100 performance, according to researchers from DeepSeek. While the Ascend 910C is not a performance champion, it can succeed in reducing China's reliance on Nvidia GPUs.

Testing by DeepSeek revealed that the 910C processor exceeded expectations in inference performance, and with manual optimization of CANN kernels, its efficiency could be improved further. DeepSeek's native support for Ascend processors and its PyTorch repository allow for seamless CUDA-to-CANN conversion with minimal effort, making it easier to integrate Huawei's hardware into AI workflows. This suggests that Huawei's AI processors are advancing rapidly despite U.S. government sanctions and the lack of access to TSMC's leading-edge process technologies.

While Huawei and SMIC have managed to roughly match TSMC's 2019–2020-era capabilities and produce a chip that can be considered competitive with Nvidia's A100 and H100 processors, the Ascend 910C is not the best option for AI training, which remains a domain where Nvidia maintains an undisputed lead. AI industry expert Yuchen Jin said that long-term training reliability is a critical weakness of Chinese processors. This challenge stems from the deep integration of Nvidia's hardware and software ecosystem, which has been developed over two decades. While inference performance can be optimized, sustained training workloads require further improvements in Huawei's hardware and software stack.
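The CUDA-to-CANN path described above can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not DeepSeek's actual code: it shows how a PyTorch workload might prefer Huawei's Ascend backend when the `torch_npu` adapter is installed, falling back to CUDA or CPU otherwise. The `torch_npu` import name and the `torch.npu.is_available()` check follow Huawei's published PyTorch adapter; the helper function itself is hypothetical.

```python
def pick_device() -> str:
    """Prefer Huawei Ascend NPU, then Nvidia CUDA GPU, then CPU.

    Sketch only: assumes Huawei's torch_npu adapter registers an "npu"
    device on torch when Ascend hardware and the CANN toolkit are present.
    """
    try:
        import torch
    except ImportError:
        # No PyTorch at all: nothing accelerated to run on.
        return "cpu"
    try:
        import torch_npu  # noqa: F401  # Huawei's PyTorch adapter for Ascend/CANN
        if torch.npu.is_available():
            return "npu"
    except (ImportError, AttributeError):
        pass
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


device = pick_device()  # "npu" only on Ascend hardware with torch_npu installed
print(device)
```

The point of the fallback chain is that model code written as `tensor.to(device)` stays unchanged whether it lands on Ascend, CUDA, or CPU, which is the kind of drop-in portability the CUDA-to-CANN conversion aims for.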
Just like the original Ascend 910, the new Ascend 910C uses chiplet packaging, and its main compute SoC has around 53 billion transistors. While the original compute chiplet of the Ascend 910 was made by TSMC using its N7+ fabrication technology (a 7nm-class node with EUV), the compute chiplet of the Ascend 910C is made by SMIC on its second-generation 7nm-class process technology, known as N+2.

Looking ahead, some experts predict that as AI models converge on Transformer architectures, the importance of Nvidia's software ecosystem may decline. DeepSeek's expertise in hardware and software optimization could also significantly reduce dependence on Nvidia, offering AI companies a more cost-effective alternative, particularly for inference. However, to compete at a global scale, China must overcome the challenge of training stability and further refine its AI computing infrastructure.