Huawei Unveils Fully-Upgraded Xinghe Intelligent Network Offerings to Accelerate Digital and Intelligent Transformation in Asia Pacific
HONG KONG, June 11, 2025 /PRNewswire/ -- At Huawei Network Summit 2025 (Asia Pacific), Huawei unveiled fully-upgraded Xinghe Intelligent Network products and solutions for the Asia Pacific region, and released the Technical Requirements for Financial Data Center Networks with the Network Innovation and Development Alliance (NIDA). All of these announcements will inject new vitality into digital and intelligent development in Asia Pacific.
In his opening speech, Spawn Fan, President of Huawei's Asia Pacific Enterprise Sales Department, pointed out that AI development has far exceeded expectations, with record-breaking growth in AI users and adoption rates. Huawei is responding by leveraging its unique full-stack capabilities and collaborating with partners to help customers in all industries build advanced infrastructure for the AI era.
Mr. Sword Cheng, Vice President of Huawei's Data Communication Product Line, said in his speech that global industries are actively promoting the convergence of AI and network technologies, but that most solutions are still in the early stage of "network management + AI". Only end-to-end AI-powered networks, he argued, are viable in the AI era.
Huawei's Xinghe Intelligent Network Solution delivers such AI-powered networks through innovations across several domains.
Xinghe Intelligent Campus leverages Wi-Fi Channel State Information (CSI) sensing technology to detect intrusions and identify unauthorized cameras in seconds through Wi-Fi signals, ensuring campus space security and privacy protection.
Xinghe Intelligent Fabric provides intelligent assurance for general-purpose computing and intelligent computing scenarios.
Xinghe Intelligent WAN employs an AI latency compensation algorithm to reduce the two-way latency variation to 3 μs, which is a 20-fold improvement over SDH networks. It also draws on AI algorithms to achieve a 95% identification rate of encrypted traffic, helping Internet Service Providers (ISPs) reduce latency by more than 65% and increase revenue by 20%.
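The 20-fold figure implies an SDH baseline that is easy to back out; the SDH value below is inferred from the release's own numbers, not stated directly:

```python
# Two-way latency variation quoted for Xinghe Intelligent WAN.
xinghe_jitter_us = 3

# A "20-fold improvement over SDH" implies this SDH baseline (inferred):
improvement_factor = 20
sdh_jitter_us = xinghe_jitter_us * improvement_factor
print(sdh_jitter_us)  # 60
```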
Xinghe Intelligent Network Security builds on an AI detection engine to achieve a 95% unknown malware identification rate. It also leverages a high-performance graph database to trace up to 100 hops back to the source. All of these help detect threats with high accuracy and prevent virus spread.
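The 100-hop trace-back described above can be illustrated with a minimal sketch; this is a toy backward walk over hypothetical infection edges, not Huawei's graph engine:

```python
def trace_to_source(edges, start, max_hops=100):
    """Walk 'infector -> victim' edges backwards from a compromised
    host until a node with no known upstream infector is reached or
    the hop budget is exhausted."""
    upstream = {}  # victim -> infector
    for infector, victim in edges:
        upstream[victim] = infector
    path = [start]
    node = start
    for _ in range(max_hops):
        if node not in upstream:
            break  # reached the origin of the infection chain
        node = upstream[node]
        path.append(node)
    return path

# Hypothetical lateral-movement chain: patient-zero -> hostA -> hostB -> hostC
edges = [("patient-zero", "hostA"), ("hostA", "hostB"), ("hostB", "hostC")]
print(trace_to_source(edges, "hostC"))
# ['hostC', 'hostB', 'hostA', 'patient-zero']
```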
Huawei also announced a full lineup of all-new Xinghe Intelligent Network products and solutions for Asia Pacific. Examples include the industry's first 5-radio Wi-Fi 7 AP, unique "3S (spanning, stable, and secure)" StarryLink optical modules, the industry's highest-density 400GE full-service router, and differentiated solutions across industries.
Huawei and NIDA jointly released the Technical Requirements for Financial Data Center Networks, which systematically defines key technical indicators of financial data center networks for the first time, serving as an industry-recognized network construction guide for financial institutions.
Looking ahead, Huawei will further deepen AI-powered Xinghe Intelligent Network innovation and collaborate with Asia-Pacific customers across industries to accelerate intelligent transformation, explore new use cases, and stride toward a digital and intelligent future together.
View original content to download multimedia: https://www.prnewswire.com/apac/news-releases/huawei-unveils-fully-upgraded-xinghe-intelligent-network-offerings-to-accelerate-digital-and-intelligent-transformation-in-asia-pacific-302478778.html
SOURCE Huawei
Related Articles
Yahoo
3 minutes ago
Micron to invest $200 billion in US memory facilities
Memory chip maker Micron (MU) announced on Thursday that it will invest an additional $30 billion in the US as it looks to build out its manufacturing and research and development facilities in Idaho and New York. The move brings Micron's total US manufacturing and R&D investments to roughly $200 billion, which the company says will create some 90,000 direct and indirect jobs. Micron is receiving about $6.5 billion in funding from the US CHIPS Act.

The plans call for Micron to build a second memory manufacturing plant at its Boise, Idaho, facility and a massive chip fabrication complex in New York. The company is also updating and expanding its Virginia plant. Micron said it expects the second Idaho plant to help it bring its advanced high-bandwidth memory (HBM) manufacturing to the US; HBM is a key component in AI data centers.

'Micron's investment in advanced memory manufacturing and HBM capabilities in the U.S., with support from the Trump Administration, is an important step forward for the AI ecosystem,' Nvidia (NVDA) CEO Jensen Huang said in a statement. 'Micron's leadership in high-performance memory is invaluable to enabling the next generation of AI breakthroughs that NVIDIA is driving. We're excited to collaborate with Micron as we push the boundaries of what's possible in AI and high-performance computing,' Huang added.

All told, Micron says the investments will allow the company to produce 40% of its DRAM memory in the US. Its initial Idaho plant is expected to begin pumping out the hardware in 2027, and Micron says it is set to begin preparing the ground for its New York facilities later this year.

'This approximately $200 billion investment will reinforce America's technological leadership, create tens of thousands of American jobs across the semiconductor ecosystem and secure a domestic supply of semiconductors—critical to economic and national security,' Micron CEO Sanjay Mehrotra said in a statement.
'We are grateful for the support from President Trump, Secretary Lutnick and our federal, state, and local partners who have been instrumental in advancing domestic semiconductor manufacturing.'

Micron isn't the only company bringing HBM production to the US. South Korea's SK Hynix is also building a new HBM plant in Indiana as part of a $3.8 billion construction project.

The Trump administration, like the Biden administration before it, has made onshoring semiconductor manufacturing a key component of its domestic agenda as the US seeks to wean itself off its dependence on foreign-made chips. Companies ranging from Intel (INTC) and TSMC (TSM) to Samsung and GlobalFoundries (GFS) have recently announced plans to build or upgrade facilities throughout the country, thanks in part to billions of dollars in funding through the CHIPS Act.

Email Daniel Howley at dhowley@ Follow him on X/Twitter at @DanielHowley.


San Francisco Chronicle
7 minutes ago
AI chatbots need more books to learn from. These libraries are opening their stacks
CAMBRIDGE, Mass. (AP) — Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.

Nearly one million books published as early as the 15th century — and in 254 languages — are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.

Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.

'It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright,' said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold 'significant amounts of interesting cultural, historical and language data' that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.

Supported by 'unrestricted gifts' from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.

'We're trying to move some of the power from this current AI moment back to these institutions,' said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. 'Librarians have always been the stewards of data and the stewards of information.'

Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter's handwritten thoughts about cultivating flowers and trees.
The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians. It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems.

'A lot of the data that's been used in AI training has not come from original sources,' said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes 'all the way back to the physical copy that was scanned by the institutions that actually collected those items,' he said.

Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens — units of data, each of which can represent a piece of a word.

Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a fraction of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos. Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from 'shadow libraries' of pirated works.

Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them.
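The "tokens" the article describes can be made concrete with a toy estimate; this is a rough rule of thumb, not any real tokenizer, and the chars-per-token figure is an assumption:

```python
def rough_token_estimate(text, chars_per_token=4):
    """Crude rule of thumb: one token averages about four characters
    of English text (an assumption, not a real tokenizer)."""
    return max(1, len(text) // chars_per_token)

sample = "Everything ever said on the internet was just the start."
print(rough_token_estimate(sample))  # 14

# Relative scale of the corpora mentioned in the article:
harvard_tokens = 242e9  # Institutional Books 1.0
meta_tokens = 30e12     # Meta's reported training-data size
print(f"{harvard_tokens / meta_tokens:.1%}")  # 0.8%
```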
When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services.

'OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning,' Chapel said.

Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th centuries by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.

'We've been very clear that, "Hey, we're a public library,"' Chapel said. 'Our collections are held for public use, and anything we digitized as part of this project will be made public.'

Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The dispute was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.

Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.

How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.
The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.

A book collection steeped in 19th century thought could also be 'immensely critical' for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. 'At a university, you have a lot of pedagogy around what it means to reason,' he said. 'You have a lot of scientific information about how to run processes and how to run analyses.'

At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives.

'When you're dealing with such a large data set, there are some tricky issues around harmful content and language,' said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance about mitigating the risks of using the data, to 'help them make their own informed decisions and use AI responsibly.'


Business Wire
12 minutes ago
Volantis Unveils Photonic Compute Platform for the AI Era; Raises $9M in Seed Round With Alex Wang, Trevor Blackwell, and Others
SAN MATEO, Calif.--(BUSINESS WIRE)-- Volantis, a semiconductor startup building photonically integrated computers for the AI era, today emerged from stealth and announced a $9 million seed round of funding. The round includes backing from Alex Wang (Scale AI) and Trevor Blackwell (Y Combinator). Sam Altman backed Volantis in April 2022, betting on the need for foundational breakthroughs in compute long before today's AI infrastructure limitations came into focus.

Founded in 2022, Volantis is the first company to scale a new class of photonic compute architecture that moves beyond the limitations of silicon photonics. Using a fundamentally different approach—direct laser modulation and wafer-scale integration—Volantis enables ultra-efficient communication across highly connected compute systems. This architecture packs the power of a server rack into a chip-scale package, reducing energy consumption and cost while dramatically increasing compute speed.

'We're not just making an incrementally better AI chip,' said Tapa Ghosh, Volantis' founder and CEO. 'We've solved long-standing challenges that have kept photonics out of computers. Instead of relying on silicon photonics, we've scaled a new class of low-cost, low-power, directly modulated lasers and coupled them into densely parallel optical waveguides—something never done at this scale before. The result is the photonic compute platform the AI era has been waiting for.'

The core bottleneck in AI compute today is the bandwidth and power required to move data between chips. While photonics has long been seen as the answer, silicon photonics—the dominant approach for the past two decades—has consistently failed to scale inside systems. Volantis replaces traditional chip interconnects with energy-efficient optical channels, using parallelism to deliver high aggregate bandwidth with unmatched efficiency.
Volantis' breakthrough comes from integrating directly modulated lasers with on-chip optical waveguides—a feat that enables many slow, low-powered links to work in parallel, similar to the architectural advantage GPUs have in compute. This combination allows Volantis to unlock 15x better performance per dollar while improving stability and reducing power consumption. In just two years, Volantis has built working, patent-pending prototypes, validating that photonics inside the computer outperforms traditional silicon photonics.

Clint Schow, a leading photonics researcher and professor at UCSB, and an advisor to Volantis, added, 'This new approach takes proven, low-cost VCSEL technology and unleashes it at scale to deliver a wafer-scale AI processor. Volantis is not trying to retrofit today's chips; they're building what the next decade of compute will require.'

The Volantis team brings together top optical and hardware engineering talent from companies like Ayar Labs, Lightmatter, Marvell, Rockley, Infinera, Lumentum, Juniper, Skorpios, and others. Roy Meade, co-founder and CTO of Volantis, was the first employee and former VP of Engineering at Ayar Labs, one of the most prominent silicon photonics companies globally; he also led HBM (High Bandwidth Memory) development at Micron.

Volantis will use the funding to continue refining its chip architecture, grow its world-class engineering team, and begin early customer engagements.

About Volantis

Volantis is a semiconductor startup building photonically integrated computers for the AI era. Founded in 2022 and backed by leading innovators in AI and computing, Volantis is the first to scale photonics inside computer chips, enabling a new class of chips that are radically faster, less expensive, and more energy efficient than traditional architectures. Volantis holds multiple pending patents on its architecture and is headquartered in San Mateo, CA.
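The release's core claim, that many slow optical lanes running in parallel can beat one fast serial link in aggregate bandwidth, can be sketched with simple arithmetic; all figures below are hypothetical and not Volantis specs:

```python
def aggregate_bandwidth_gbps(lanes, per_lane_gbps):
    """Total throughput of a parallel interconnect scales linearly
    with the number of lanes."""
    return lanes * per_lane_gbps

def energy_per_bit_pj(total_power_w, total_gbps):
    """Convert watts and Gbps to picojoules per bit:
    1 W / 1 Gbps = 1e-9 J/bit = 1000 pJ/bit."""
    return total_power_w / total_gbps * 1000

# Hypothetical comparison: one fast 800G serial link vs 64 slower 25G
# lanes running in parallel.
serial_gbps = aggregate_bandwidth_gbps(lanes=1, per_lane_gbps=800)
parallel_gbps = aggregate_bandwidth_gbps(lanes=64, per_lane_gbps=25)
print(serial_gbps, parallel_gbps)  # 800 1600

# If the 64-lane link burned 1.6 W in total, it would run at 1 pJ/bit.
print(energy_per_bit_pj(1.6, parallel_gbps))  # 1.0
```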