
Latest news with #RaminHasani

Liquid AI Releases World's Fastest and Best-Performing Open-Source Small Foundation Models

Business Upturn

11-07-2025

  • Business
  • Business Upturn


By Business Wire India | Published on July 11, 2025, 11:30 IST

Liquid AI announced today the launch of its next-generation Liquid Foundation Models (LFM2), which set new records in speed, energy efficiency, and quality in the edge model class. This release builds on Liquid AI's first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios.

Liquid AI open-sourced LFM2, introducing the novel architecture in full transparency to the world. LFM2's weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.

'At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.'

The release of LFM2 marks a milestone in global AI competition: it is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China's leading open-source small language models, including those developed by Alibaba and ByteDance. In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency, and instruction-following benchmarks.
Key highlights:

  • On CPU, LFM2 exhibits 200 percent higher throughput and lower latency than Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date.
  • Not only is the model the fastest; on average it also performs significantly better than models in each size class on instruction following and function calling (the main attributes of LLMs in building reliable AI agents). This makes LFM2 the ideal choice for local and edge use cases.
  • LFMs built on this new architecture and the new training infrastructure show a 300 percent improvement in training efficiency over previous versions of LFMs, making them the most cost-efficient way to build capable general-purpose AI systems.

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy. These capabilities are essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education, before counting defense, space, and cybersecurity allocations, pushes the TAM for compact, private foundation models toward the $1 trillion mark by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. It offers ultra-efficient small multimodal foundation models with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This gives Liquid AI the opportunity to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, on-prem intelligence.

About Liquid AI: Liquid AI is at the forefront of artificial intelligence innovation, developing foundation models that set new standards for performance and efficiency.
With the mission to build efficient, general-purpose AI systems at every scale, Liquid AI continues to push the boundaries of how much intelligence can be packed into phones, laptops, cars, satellites, and other devices.

Disclaimer: The above press release comes to you under an arrangement with Business Wire India. Business Upturn takes no editorial responsibility for the same.

Business Wire India, established in 2002, is India's premier media distribution company, ensuring guaranteed media coverage through its network of 30+ cities and top news agencies.

Liquid AI Releases World's Fastest and Best-Performing Open-Source Small Foundation Models

National Post

10-07-2025

  • Business
  • National Post


Next-generation edge models outperform top global competitors; now available open source on Hugging Face

CAMBRIDGE, Mass. — Liquid AI announced today the launch of its next-generation Liquid Foundation Models (LFM2), which set new records in speed, energy efficiency, and quality in the edge model class. This release builds on Liquid AI's first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios.

Liquid AI open-sourced LFM2, introducing the novel architecture in full transparency to the world. LFM2's weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.

'At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.'

The release of LFM2 marks a milestone in global AI competition and is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China's leading open-source small language models, including those developed by Alibaba and ByteDance.

In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency, and instruction-following benchmarks.
Key highlights:

  • On CPU, LFM2 exhibits 200 percent higher throughput and lower latency than Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date.
  • Not only is the model the fastest; on average it also performs significantly better than models in each size class on instruction following and function calling (the main attributes of LLMs in building reliable AI agents). This makes LFM2 the ideal choice for local and edge use cases.
  • LFMs built on this new architecture and the new training infrastructure show a 300 percent improvement in training efficiency over previous versions of LFMs, making them the most cost-efficient way to build capable general-purpose AI systems.

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy. These capabilities are essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education, before counting defense, space, and cybersecurity allocations, pushes the TAM for compact, private foundation models toward the $1 trillion mark by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. It offers ultra-efficient small multimodal foundation models with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This gives Liquid AI the opportunity to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, on-prem intelligence.

G42, Liquid AI partner to deliver efficient AI solutions to enterprises

Zawya

18-06-2025

  • Business
  • Zawya


ABU DHABI: G42, the Abu Dhabi-based global technology group, and Liquid AI, a leading efficient foundation models company headquartered in Cambridge, Massachusetts, today announced that they have entered into a multifaceted commercial partnership to facilitate the creation, training and commercialisation of generative AI solutions powered by Liquid Foundation Models.

The goal of the partnership is to jointly develop and deploy private generative AI solutions internationally, from the Middle East and North Africa to the Global South, and across many sectors, such as investment and banking, consumer electronics, telecommunications, biotech, and energy. G42 companies such as Core42 and Inception are involved in various aspects of the partnership, from AI infrastructure for AI development to co-designing multimodal foundation models and deployment.

'This partnership reflects our shared vision for AI that is sovereign, efficient, and enterprise-ready,' said Dr. Andrew Jackson, group chief AI officer of G42. 'By combining G42's infrastructure with Liquid AI's model innovation, we're advancing scalable, trusted AI solutions for global industries.'

'The scale and impact that G42 has brought to the AI world is inspiring,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'This partnership, complementary to the global AI momentum, enables a top-down approach to delivering efficient, capable, and private general-purpose AI models at scale to enterprises faster than ever.'

In light of the recent US-UAE AI acceleration partnership, this alliance is another testament to the commitment of both initiatives to building responsible and inclusive AI for all.

G42, Liquid AI partner to deliver efficient AI solutions to enterprises

Al Etihad

17-06-2025

  • Business
  • Al Etihad


17 June 2025 18:56

ABU DHABI (WAM) – G42, the Abu Dhabi-based global technology group, and Liquid AI, a leading efficient foundation models company headquartered in Cambridge, Massachusetts, today announced that they have entered into a multifaceted commercial partnership to facilitate the creation, training and commercialisation of generative AI solutions powered by Liquid Foundation Models.

The goal of the partnership is to jointly develop and deploy private generative AI solutions internationally, from the Middle East and North Africa to the Global South, and across many sectors, such as investment and banking, consumer electronics, telecommunications, biotech, and energy.

G42 companies such as Core42 and Inception are involved in various aspects of the partnership, from AI infrastructure for AI development to co-designing multimodal foundation models and deployment.

'This partnership reflects our shared vision for AI that is sovereign, efficient, and enterprise-ready,' said Dr. Andrew Jackson, group chief AI officer of G42. 'By combining G42's infrastructure with Liquid AI's model innovation, we're advancing scalable, trusted AI solutions for global industries.'

'The scale and impact that G42 has brought to the AI world is inspiring,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'This partnership, complementary to the global AI momentum, enables a top-down approach to delivering efficient, capable, and private general-purpose AI models at scale to enterprises faster than ever.'

In light of the recent US-UAE AI acceleration partnership, this alliance is another testament to the commitment of both initiatives to building responsible and inclusive AI for all.

ADDING MULTIMEDIA: G42 and Liquid AI Partner to Deliver Private, Local and Efficient AI Solutions to Enterprises at Scale

Business Wire

17-06-2025

  • Business
  • Business Wire


ABU DHABI, United Arab Emirates & CAMBRIDGE, Mass.--(BUSINESS WIRE)--G42, the Abu Dhabi-based global technology group, and Liquid AI, a leading efficient foundation models company headquartered in Cambridge, Massachusetts, today announced that they have entered into a multifaceted commercial partnership to facilitate the creation, training and commercialization of generative AI solutions powered by Liquid Foundation Models.

The goal of the partnership is to jointly develop and deploy private generative AI solutions internationally, from the Middle East and North Africa to the Global South, and across many sectors, such as investment and banking, consumer electronics, telecommunications, biotech, and energy. G42 companies such as Core42 and Inception are involved in various aspects of the partnership, from AI infrastructure for AI development to co-designing multimodal foundation models and deployment.

'This partnership reflects our shared vision for AI that is sovereign, efficient, and enterprise-ready,' said Dr. Andrew Jackson, group chief AI officer of G42. 'By combining G42's infrastructure with Liquid AI's model innovation, we're advancing scalable, trusted AI solutions for global industries.'

'The scale and impact that G42 has brought to the AI world is inspiring,' said Ramin Hasani, co-founder and CEO of Liquid AI. 'This partnership, complementary to the global AI momentum, enables a top-down approach to delivering efficient, capable, and private general-purpose AI models at scale to enterprises faster than ever.'

In light of the recent U.S.-UAE AI acceleration partnership, this alliance is another testament to the commitment of both initiatives to building responsible and inclusive AI for all.

About G42: G42 is a technology holding group and a global leader in creating visionary artificial intelligence for a better tomorrow. Born in Abu Dhabi and operating worldwide, G42 champions AI as a powerful force for good across industries.
From molecular biology to space exploration and everything in between, G42 realizes exponential possibilities today.

About Liquid AI: Liquid AI is at the forefront of artificial intelligence innovation, developing foundation models that set new standards for performance and efficiency. With the mission to build efficient general-purpose AI systems at every scale, Liquid AI continues to push the boundaries of what's possible in AI technology.
