Latest news with #EnChargeAI


Business Wire
29-05-2025
- Business
- Business Wire
EnCharge AI Announces EN100, First-of-its-Kind AI Accelerator for On-Device Computing
SANTA CLARA, Calif.--(BUSINESS WIRE)--EnCharge AI today announced the EnCharge EN100, the industry's first AI accelerator built on precise and scalable analog in-memory computing. Designed to bring advanced AI capabilities to laptops, workstations, and edge devices, EN100 delivers 200+ TOPS of total compute power within the power constraints of edge and client platforms such as laptops.

"EN100 represents a fundamental shift in AI computing architecture, rooted in hardware and software innovations that have been de-risked through fundamental research spanning multiple generations of silicon development," said Naveen Verma, CEO at EnCharge AI. "These innovations are now being made available as products for the industry to use, as scalable, programmable AI inference solutions that break through the energy efficiency limits of today's digital solutions. This means advanced, secure, and personalized AI can run locally, without relying on cloud infrastructure. We hope this will radically expand what you can do with AI."

Previously, the models driving the next generation of the AI economy—multimodal and reasoning systems—required massive data center processing power, and the cost, latency, and security drawbacks of that cloud dependency put countless AI applications out of reach. EN100 shatters these limitations. By fundamentally reshaping where AI inference happens, EN100 lets developers deploy sophisticated, secure, personalized applications locally. This breakthrough enables organizations to rapidly integrate advanced capabilities into existing products, democratizing powerful AI technologies and bringing high-performance inference directly to end users.

EN100, the first of the EnCharge EN series of chips, features an optimized architecture that processes AI tasks efficiently while minimizing energy use. It is available in two form factors engineered to transform on-device capabilities:
- M.2 for laptops: Delivering 200+ TOPS of AI compute power in an 8.25W power envelope, EN100 M.2 enables sophisticated AI applications on laptops without compromising battery life or portability.
- PCIe for workstations: Featuring four NPUs reaching approximately 1 PetaOPS, the EN100 PCIe card delivers GPU-level compute capacity at a fraction of the cost and power consumption, making it ideal for professional AI applications that use complex models and large datasets.

EnCharge AI's comprehensive software suite delivers full platform support across the evolving model landscape with maximum efficiency. This purpose-built ecosystem combines specialized optimization tools, high-performance compilation, and extensive development resources, all supporting popular frameworks like PyTorch and TensorFlow.

Compared to competing solutions, EN100 demonstrates up to ~20x better performance per watt across various AI workloads. With up to 128GB of high-density LPDDR memory and bandwidth reaching 272 GB/s, EN100 handles sophisticated AI tasks that typically require specialized data center hardware, such as generative language models and real-time computer vision. The programmability of EN100 ensures optimized performance for today's AI models and the ability to adapt to the models of tomorrow.
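To put the headline figures in perspective, here is a minimal back-of-the-envelope sketch using only the numbers quoted in the release (200 TOPS at 8.25W for the M.2 card, 272 GB/s of memory bandwidth). The example model size and quantization are hypothetical assumptions for illustration, not EnCharge specifications.

```python
# Back-of-the-envelope arithmetic from the figures quoted in the release.
# The 7B-parameter / 4-bit example model below is a hypothetical assumption,
# not an EnCharge specification.

TOPS = 200            # M.2 card compute, "200+ TOPS" (release figure)
WATTS = 8.25          # M.2 power envelope (release figure)
BANDWIDTH_GBPS = 272  # LPDDR memory bandwidth in GB/s (release figure)

# Compute efficiency implied by the M.2 figures.
tops_per_watt = TOPS / WATTS
print(f"Implied efficiency: {tops_per_watt:.1f} TOPS/W")  # ~24.2 TOPS/W

# Token-generation ceiling for a bandwidth-bound language model: generating
# each token streams roughly the full weight set from memory once.
params_billion = 7                                  # hypothetical model size
bits_per_weight = 4                                 # hypothetical quantization
weights_gb = params_billion * bits_per_weight / 8   # = 3.5 GB of weights
tokens_per_sec = BANDWIDTH_GBPS / weights_gb
print(f"Bandwidth-bound ceiling: ~{tokens_per_sec:.0f} tokens/s")  # ~78
```

Under these assumptions, the quoted bandwidth is comfortably in the range where a mid-sized generative model runs interactively on device, which is consistent with the use cases the release names.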
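The release names PyTorch and TensorFlow support but gives no API details, so the following is only a hedged illustration of what a typical deployment flow might look like: a model is exported from PyTorch into a portable graph, which a vendor toolchain would then compile for the accelerator. The PyTorch calls are standard; every EnCharge-specific name in the comments is hypothetical.

```python
# Hypothetical deployment-flow sketch. The PyTorch export step is standard;
# everything accelerator-specific is a placeholder, since the release does
# not document EnCharge's actual toolchain API.
import torch

class TinyClassifier(torch.nn.Module):
    """Small stand-in model used only to demonstrate the export flow."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_input = (torch.randn(1, 64),)

# Step 1: export to a portable, framework-neutral graph (standard PyTorch 2.x).
exported = torch.export.export(model, example_input)
print(exported)

# Step 2 (hypothetical): a vendor compiler would lower the exported graph to
# accelerator binaries, e.g. something like:
#   en100_binary = encharge_sdk.compile(exported, target="en100-m2")
# No such API is documented in the release; this line is illustrative only.
```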
"The real magic of EN100 is that it makes transformative efficiency for AI inference easily accessible to our partners, which can be used to help them achieve their ambitious AI roadmaps," says Ram Rangarajan, Senior Vice President of Product and Strategy at EnCharge AI. "For client platforms, EN100 can bring sophisticated AI capabilities on device, enabling a new generation of intelligent applications that are not only faster and more responsive but also more secure and personalized." Early adoption partners have already begun working closely with EnCharge to map out how EN100 will deliver transformative AI experiences, such as always-on multimodal AI agents and enhanced gaming applications that render realistic environments in real-time. While the first round of EN100's Early Access Program is currently full, interested developers and OEMs can sign up to learn more about the upcoming Round 2 Early Access Program, which provides a unique opportunity to gain a competitive advantage by being among the first to leverage EN100's capabilities for commercial applications at About EnCharge AI EnCharge AI is the leader in advanced AI compute solutions for deployments from edge-to-cloud. EnCharge's robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today's best-in-class solutions. The high-performance solutions will enable the immense potential of AI to be accessible at scale, in power, size, and weight constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems. For more information about EnCharge AI, please visit


Reuters
13-02-2025
- Business
- Reuters
EnCharge AI raises over $100 million in funding to bring AI inference chips to market
Feb 13 (Reuters) - Startup EnCharge AI raised more than $100 million in a Series B funding round led by Tiger Global to bring more efficient and less expensive AI chips to market, the company said on Thursday. The company did not disclose details on valuation.

EnCharge AI develops analog chips that are integrated into semiconductors used for storage. These in-memory chips are designed for inference, the phase in which AI models are used rather than trained. While most AI inference chips are housed in vast server clusters within data centers, EnCharge AI's chips are designed for edge computing in user-facing devices like laptops. The company says its approach of embedding analog processing in memory chips allows its accelerators to perform AI tasks with up to 20 times less energy consumption than some of the leading AI chips.

Battery-powered devices like laptops and smartphones need efficient chips to process AI, and analog chips situated inside memory semiconductors are a viable solution, CEO Naveen Verma told Reuters. "It turns out that these platforms can now really overcome many of the barriers in terms of cost and sustainability, but also in terms of privacy and security, which the enterprise and also a lot of consumer applications care very much about."

Groq, founded by a former Alphabet (GOOGL.O) chip engineer, and Cerebras are among the companies developing specialized chips for AI inference. Other investors in the round included Samsung Electronics' VC arm and HH-CTBC, a partnership between Taiwan's Foxconn and CTBC Venture Capital.
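Reuters describes the core technique only at a high level: multiply-accumulate operations carried out in analog, inside the memory array that holds the weights. As a purely conceptual illustration of that idea (not EnCharge's design, whose circuit details are not public), the NumPy sketch below shows why such a scheme avoids energy-dominant data movement: the weight matrix stays in place, a full matrix-vector product is read out in one parallel operation, and analog imprecision is modeled as a small noise term.

```python
# Conceptual NumPy simulation of an analog in-memory matrix-vector multiply.
# This is a generic illustration of the technique Reuters describes, not a
# model of EnCharge's actual (undisclosed) hardware.
import numpy as np

rng = np.random.default_rng(0)

# Weights live in the memory array itself and never move; in a crossbar,
# each weight corresponds to a cell conductance.
weights = rng.standard_normal((256, 64))

# Input activations are applied as voltages on the array's input lines.
x = rng.standard_normal(64)

# Ideal digital result, for reference.
y_digital = weights @ x

# Analog readout: the multiply-accumulate happens in the physics of the
# array (currents summing on each output column), at the cost of device
# noise, modeled here as a small Gaussian perturbation of the result.
noise_std = 0.01 * np.abs(y_digital).mean()
y_analog = y_digital + rng.normal(0.0, noise_std, size=y_digital.shape)

rel_err = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"Relative readout error: {rel_err:.4f}")
# The efficiency claim rests on avoiding weight movement: the O(M*N) weight
# fetches of a digital MAC loop collapse into one parallel analog readout.
```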