Latest news with #edgeAI
Yahoo
31-07-2025
- Business
- Yahoo
Ambiq Announces Closing of its Upsized Initial Public Offering and Full Exercise of Underwriters' Option to Purchase Additional Shares
Ambiq Upsized Initial Public Offering

AUSTIN, Texas, July 31, 2025 (GLOBE NEWSWIRE) -- Ambiq Micro, Inc. ('Ambiq'), a technology leader in ultra-low-power semiconductor solutions for edge AI, today announced the closing of its upsized initial public offering of 4,600,000 shares of its common stock, including the full exercise of the underwriters' option to purchase 600,000 additional shares, at a public offering price of $24.00 per share. The gross proceeds to Ambiq from the offering, before deducting underwriting discounts and commissions and other offering expenses payable by Ambiq, were $110.4 million. The shares began trading on the New York Stock Exchange under the ticker symbol 'AMBQ' on July 30, 2025.

BofA Securities and UBS Investment Bank acted as joint lead book-running managers for the offering. Needham & Company and Stifel acted as joint book-running managers for the offering.

A registration statement relating to the offering of securities was declared effective by the U.S. Securities and Exchange Commission on July 29, 2025. The offering was made only by means of a prospectus. Copies of the final prospectus relating to the offering may be obtained by contacting: BofA Securities, NC1-022-02-25, 201 North Tryon Street, Charlotte, North Carolina 28255-0001, Attention: Prospectus Department, or by email at ; or UBS Securities LLC, Attention: Prospectus Department, 1285 Avenue of the Americas, New York, New York 10019, by telephone at (888) 827-7275 or by emailing ol-prospectus-request@

This press release shall not constitute an offer to sell or the solicitation of an offer to buy these securities, nor shall there be any sale of these securities in any state or other jurisdiction in which such offer, solicitation or sale would be unlawful prior to the registration or qualification under the securities laws of any such state or other jurisdiction.

About Ambiq

Ambiq's mission is to enable intelligence (artificial intelligence (AI) and beyond) everywhere by delivering the lowest-power semiconductor solutions. Ambiq enables its customers to deliver AI compute at the edge, where power consumption challenges are the most severe. Ambiq's technology innovations, built on its patented and proprietary subthreshold power optimized technology (SPOT®), fundamentally deliver a multi-fold improvement in power consumption over traditional semiconductor designs. Ambiq has powered over 270 million devices to date.

Contact

IR
Shelton Group
sheltonir@
972-239-5119

PR
Charlene Wan
VP of Corporate Marketing
cwan@

A photo accompanying this announcement is available at
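The offering figures quoted in the release are internally consistent; a minimal back-of-the-envelope check, using only the values stated above (the script itself is illustrative):

```python
# Illustrative check of the offering figures quoted in the Ambiq release above.
total_shares = 4_600_000        # shares sold, including the overallotment
overallotment = 600_000         # underwriters' option, fully exercised
price_per_share = 24.00         # public offering price in USD

base_offering = total_shares - overallotment
gross_proceeds = total_shares * price_per_share

print(f"Base offering: {base_offering:,} shares")              # 4,000,000
print(f"Gross proceeds: ${gross_proceeds / 1e6:.1f} million")  # 110.4
```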
Yahoo
22-07-2025
- Automotive
- Yahoo
Ambarella Explores Strategic Options, Including Potential Sale, Amidst Surging Edge AI Demand
Ambarella Inc. (NASDAQ:AMBA) is one of the best small-cap AI stocks to buy according to analysts. On June 24, Bloomberg reported that Ambarella is exploring strategic options, including a potential sale. Bloomberg's Ryan Gould and Dinesh Nair reported that the company is working with bankers and has reached out to potential buyers.

Ambarella designs chips primarily used in video recording, streaming, and ADAS for self-driving cars. Its technology focuses on edge AI processing and human vision applications, such as video security, electronic mirrors, drive recorders, driver/cabin monitoring, autonomous driving, and robotics.

Potential buyers could include rival chip companies seeking to enhance their automotive portfolios, or private equity firms. Ambarella last posted an annual profit in 2017 but is forecasting 28% revenue growth in FY2025, driven by increasing demand for its edge AI products, which now account for 75% of its sales. Ambarella Inc. (NASDAQ:AMBA) develops semiconductor solutions that enable AI processing, advanced image signal processing, and high-definition (HD) and ultra-HD compression.

While we acknowledge the potential of AMBA as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the .

Disclosure: None. This article is originally published at Insider Monkey.

Yahoo
21-07-2025
- Business
- Yahoo
AI chipmaker Ambiq Micro to go public at up to $25 a share
Ambiq Micro Inc., a provider of ultra-low-power semiconductors for edge AI applications, has filed for an initial public offering on the New York Stock Exchange under the ticker symbol 'AMBQ.'

According to an SEC filing, the Austin-based firm plans to offer 6.8 million shares at a price range of $22 to $25 per share, implying a potential valuation of up to $426 million at the top of the range. Underwriters for the offering include BofA Securities, UBS Investment Bank, Needham & Company, and Stifel. The company has also granted underwriters a 510,000-share overallotment option, which could bring in additional proceeds if fully exercised.

Ambiq's chips, which consume two to five times less power than conventional counterparts, have shipped in more than 270 million devices to date. Over 40% of the 42 million units shipped in 2024 ran AI algorithms, targeting use cases in wearable tech, digital health, smart homes, and industrial edge systems.

Revenue momentum is strongest in geographies outside Mainland China, where net sales grew over 56% year-over-year in the first half of 2025. During the same period, sales in China fell by up to 84%, reflecting a broader pivot in the company's go-to-market strategy.

According to preliminary results, Ambiq estimates its gross profit margin rose to 46.3% in the six months ended June 30, 2025, up from 35.7% a year prior. Gross profit could range between $15.2 million and $15.9 million, suggesting improved operating leverage as a result of more integrated system-on-chip (SoC) and software sales.

The company's product lines, including the Apollo family and the in-development Atomiq SoC family, leverage its proprietary Sub-threshold Power Optimized Technology (SPOT) platform to drive energy efficiency in AI workloads. Atomiq is expected to deliver the highest performance and lowest power consumption among Ambiq's products to date.

Despite the growth narrative, Ambiq cautions that it faces material risks, including dependency on Taiwan Semiconductor Manufacturing Company for fabrication and a lack of long-term customer commitments. The company also noted it has identified material weaknesses in internal financial controls as it transitions to public-market reporting standards.

Looking ahead, Ambiq seeks to extend its SPOT architecture beyond its current microcontroller offerings into dedicated AI processors and application-specific chips. Management believes a licensable SPOT platform could position the company as a foundational low-power enabler in the broader $22.5 billion edge AI semiconductor market forecast for 2028.
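For readers tracking the numbers in the filing article above, a minimal back-of-the-envelope sketch of what they imply. The implied share count and revenue figures below are derived estimates, not disclosed values, and the script is purely illustrative:

```python
# Rough, illustrative arithmetic based on figures quoted in the article above.
# The "implied" values are estimates derived from those figures, not disclosures.

shares_offered = 6_800_000
price_low, price_high = 22.00, 25.00
valuation_top = 426e6                      # stated valuation at the top of the range

# Implied post-offering share count at $25/share (estimate).
implied_share_count = valuation_top / price_high
print(f"Implied shares outstanding: ~{implied_share_count / 1e6:.1f} million")

# Implied H1 2025 revenue from preliminary gross profit and gross margin.
gross_profit_low, gross_profit_high = 15.2e6, 15.9e6
gross_margin = 0.463
print(f"Implied H1 2025 revenue: "
      f"${gross_profit_low / gross_margin / 1e6:.1f}M to "
      f"${gross_profit_high / gross_margin / 1e6:.1f}M")
```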

National Post
10-07-2025
- Business
- National Post
Liquid AI Releases World's Fastest and Best-Performing Open-Source Small Foundation Models
Next-generation edge models outperform top global competitors; now available open source on Hugging Face

CAMBRIDGE, Mass. — Liquid AI announced today the launch of its next-generation Liquid Foundation Models (LFM2), which set new records in speed, energy efficiency, and quality in the edge model class. This release builds on Liquid AI's first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios.

Liquid AI has open-sourced LFM2, introducing the novel architecture in full transparency to the world. LFM2's weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.

'At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.'

The release of LFM2 marks a milestone in global AI competition and is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China's leading open-source small language models, including those developed by Alibaba and ByteDance.

In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency, and instruction-following benchmarks. Key highlights:

- LFM2 exhibits 200 percent higher throughput and lower latency on CPU compared to Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date.
- The model is not only the fastest but also, on average, performs significantly better than models in each size class on instruction following and function calling (the main attributes of LLMs in building reliable AI agents). This makes LFM2 an ideal choice for local and edge use cases.
- LFMs built on this new architecture and training infrastructure show a 300 percent improvement in training efficiency over previous LFM generations, making them the most cost-efficient way to build capable general-purpose AI systems.

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy. These capabilities are essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education (before counting defense, space, and cybersecurity allocations) pushes the TAM for compact, private foundation models toward the $1 trillion mark by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. It offers ultra-efficient small multimodal foundation models with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This gives Liquid AI the opportunity to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, and on-prem intelligence.
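Since the release notes that LFM2 weights are downloadable from Hugging Face, here is a minimal sketch of how one might load and run such a checkpoint with the Hugging Face transformers library. The repo ID LiquidAI/LFM2-1.2B is an assumption used for illustration; consult Liquid AI's Hugging Face organization for the actual model names and any version requirements noted on the model card.

```python
# Illustrative sketch: loading an LFM2 checkpoint from Hugging Face.
# The repo ID below is an assumption; check Liquid AI's Hugging Face page
# for the published model names. New architectures may also require a
# recent transformers release.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LiquidAI/LFM2-1.2B"  # hypothetical/illustrative repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Summarize the benefits of on-device language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Short greedy decode, enough for a quick local smoke test.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```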
Yahoo
08-07-2025
- Business
- Yahoo
Advantech Unveils Next-Generation Edge AI Compute Solutions Powered by Qualcomm Snapdragon X Elite
TAIPEI, July 7, 2025 /PRNewswire/ -- Advantech is proud to introduce its latest suite of high-performance edge AI compute solutions powered by the Snapdragon® X Elite platform: the AOM-6731, AIMB-293, and SOM-6820. Built on this groundbreaking platform, these products are engineered to meet the demanding requirements of modern industrial applications, delivering exceptional processing power, integrated AI acceleration with up to 45 TOPS of AI performance, and robust, lightning-fast 5G and Wi-Fi 7 connectivity in an industrial PC.

The solutions are powered by the 12-core Snapdragon X Elite and the 10-core Snapdragon® X Plus with leading Qualcomm Oryon™ CPUs, reaching speeds of up to 3.4GHz. This high-performance processing not only enables rapid data handling and seamless multitasking but also outperforms traditional x86 solutions, using 28% less power on average for everyday tasks, including Teams video calls, local video playback, web browsing, and Microsoft 365. Enhancing AI capabilities, these devices integrate the Qualcomm® Hexagon™ NPU, providing up to 45 TOPS.

The solutions built on the Snapdragon X Elite platform are equipped with LPDDR5X memory, offering a 1.3x speed boost (from 6400MT/s to 8533MT/s) while cutting power consumption by 20% compared to standard LPDDR5. In addition, the integration of UFS 3.1 Gear 4 storage dramatically increases data transfer speeds from 1,000Mbps (PCIe Gen3 NVMe) to an impressive 16,000Mbps. For even greater durability and shock resistance, UFS 4.0 storage options are available, ensuring optimal performance in harsh industrial environments.

For multimedia-intensive applications, the integrated Snapdragon Adreno 5th-generation VPU supports 4K60p full-duplex H.264 video encoding/decoding. Additionally, the Adreno GPU, with OpenCL, OpenGL, and Microsoft DirectX 12 support, ensures superior graphics performance for vision-centric tasks.

Advantech's products take connectivity to the next level with integrated Wi-Fi 7 and 5G, delivering ultra-fast, low-latency network performance and ensuring uninterrupted data streaming and real-time communication even in the most demanding industrial settings. With Wi-Fi 7's multi-gigabit speeds and enhanced network reliability, combined with the expansive coverage and high-speed capabilities of 5G, these solutions support data-intensive AI applications and robust remote operations. The result is a truly agile and future-ready infrastructure that optimizes real-time processing and connectivity, empowering industries to harness the full potential of edge AI in today's fast-paced digital landscape.

The AI module AOM-6731, the Mini-ITX motherboard AIMB-293, and the COM Express Type 6 module SOM-6820 will be available for engineering evaluations starting in March 2025. For further details on these models, please visit the Advantech website.

SOURCE Advantech
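A quick check of the memory and storage throughput ratios quoted in the Advantech release above; the figures are taken from the text and the script is illustrative only:

```python
# Illustrative check of the throughput ratios quoted in the release above.
lpddr5_mts, lpddr5x_mts = 6400, 8533              # memory transfer rates, MT/s
nvme_quoted_mbps, ufs_quoted_mbps = 1_000, 16_000  # storage figures as quoted

print(f"LPDDR5X vs LPDDR5: {lpddr5x_mts / lpddr5_mts:.2f}x")        # ~1.33x, the quoted 1.3x boost
print(f"UFS 3.1 vs quoted PCIe Gen3 figure: {ufs_quoted_mbps / nvme_quoted_mbps:.0f}x")  # 16x
```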