
Latest news with #LiquidAI

The 2025 Tech Power Players in the foundational AI sector

Boston Globe

3 days ago

  • Business
  • Boston Globe


The team behind the company, now chasing better-known rivals such as OpenAI's ChatGPT, included three MIT students and their adviser, computer scientist Daniela Rus. Rus has been a fixture on the AI scene since she came to MIT in 2003, fresh off a MacArthur 'genius' grant for her work developing robots. Nine years later, the university named Rus to lead the school's famed Computer Science and Artificial Intelligence Laboratory (CSAIL).

Born in Communist Romania during the Cold War, Rus and her family immigrated to the United States in 1982. She studied at the University of Iowa before earning a doctorate at Cornell University in 1992. She taught at Dartmouth College before moving to MIT.

Inspired by the simple brain structure of a roundworm, Rus and her cofounders, Ramin Hasani, Mathias Lechner, and Alexander Amini, developed an AI technique with fewer software 'neurons' than the large language models of OpenAI and others. That means Liquid AI requires less computing power (and electricity). The company, valued at more than $2 billion, has about 55 employees at its Kendall Square headquarters.

Aaron Pressman can be reached at

Liquid AI is revolutionizing LLMs to work on edge devices like smartphones with new ‘Hyena Edge' model

Business Mayor

26-04-2025

  • Business Mayor


Liquid AI, the Boston-based foundation model startup spun out of the Massachusetts Institute of Technology (MIT), is seeking to move the tech industry beyond its reliance on the Transformer architecture underpinning most popular large language models (LLMs), such as OpenAI's GPT series and Google's Gemini family.

Yesterday, the company announced 'Hyena Edge,' a new convolution-based, multi-hybrid model designed for smartphones and other edge devices, in advance of the International Conference on Learning Representations (ICLR) 2025. The conference, one of the premier events for machine learning research, is taking place this year in Vienna, Austria.

Hyena Edge is engineered to outperform strong Transformer baselines on both computational efficiency and language model quality. In real-world tests on a Samsung Galaxy S24 Ultra smartphone, the model delivered lower latency, a smaller memory footprint, and better benchmark results than a parameter-matched Transformer++ model.

Unlike most small models designed for mobile deployment, including SmolLM2, the Phi models, and Llama 3.2 1B, Hyena Edge steps away from traditional attention-heavy designs. Instead, it strategically replaces two-thirds of grouped-query attention (GQA) operators with gated convolutions from the Hyena-Y family.

The new architecture is the result of Liquid AI's Synthesis of Tailored Architectures (STAR) framework, announced in December 2024, which uses evolutionary algorithms to automatically design model backbones. STAR explores a wide range of operator compositions, rooted in the mathematical theory of linear input-varying systems, to optimize for multiple hardware-specific objectives such as latency, memory usage, and quality.

To validate Hyena Edge's real-world readiness, Liquid AI ran tests directly on the Samsung Galaxy S24 Ultra smartphone.
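To make the "gated convolution" idea concrete, here is a minimal, single-channel toy sketch of a gated causal convolution, the kind of operator the article says replaces most attention blocks in Hyena Edge. All weights and the sigmoid gate below are invented for illustration; real Hyena-style operators use learned projections and long implicit filters, and this is not Liquid AI's actual code.

```python
import math

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: the output at step t sees only x[:t+1]."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j in range(len(kernel)):
            if t - j >= 0:
                acc += kernel[j] * x[t - j]
        out.append(acc)
    return out

def gated_conv(x, kernel, gate_weight):
    """Gated convolution: elementwise sigmoid gate times conv(x).

    The gate here is a sigmoid of a scaled copy of the input; a learned
    projection would normally produce it.
    """
    conv_out = causal_conv1d(x, kernel)
    gates = [1.0 / (1.0 + math.exp(-gate_weight * xi)) for xi in x]
    return [g * c for g, c in zip(gates, conv_out)]

seq = [0.5, -1.0, 2.0, 0.0, 1.5]
y = gated_conv(seq, kernel=[0.6, 0.3, 0.1], gate_weight=2.0)
```

Because the convolution is causal and the gate is pointwise, each output position depends only on current and past inputs, which is what lets such operators stream token by token on a device instead of attending over the whole sequence.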
Results show that Hyena Edge achieved up to 30% faster prefill and decode latencies than its Transformer++ counterpart, with the speed advantage increasing at longer sequence lengths. Prefill latencies at short sequence lengths also outpaced the Transformer baseline, a critical performance metric for responsive on-device applications. In terms of memory, Hyena Edge consistently used less RAM during inference across all tested sequence lengths, positioning it as a strong candidate for environments with tight resource constraints.

Hyena Edge was trained on 100 billion tokens and evaluated across standard benchmarks for small language models, including Wikitext, Lambada, PiQA, HellaSwag, Winogrande, ARC-easy, and ARC-challenge. On every benchmark, Hyena Edge either matched or exceeded the performance of the GQA-Transformer++ model, with noticeable improvements in perplexity scores on Wikitext and Lambada, and higher accuracy on PiQA, HellaSwag, and Winogrande. These results suggest that the model's efficiency gains do not come at the cost of predictive quality, a common tradeoff for edge-optimized architectures.

For those seeking a deeper dive into Hyena Edge's development process, a recent video walkthrough provides a compelling visual summary of the model's evolution. The video highlights how key performance metrics, including prefill latency, decode latency, and memory consumption, improved over successive generations of architecture refinement. It also offers a rare behind-the-scenes look at how the internal composition of Hyena Edge shifted during development. Viewers can see dynamic changes in the distribution of operator types, such as self-attention (SA) mechanisms, various Hyena variants, and SwiGLU layers. These shifts offer insight into the architectural design principles that helped the model reach its current level of efficiency and accuracy.
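The prefill/decode split reported above is worth spelling out: prefill processes the whole prompt in one pass, while decode generates one token at a time. The harness below is a hypothetical micro-benchmark shape for measuring the two separately; `toy_step` is a stand-in cost function, not Liquid AI's runtime or measurement methodology.

```python
import time

def bench(model_step, prompt_len, gen_len):
    """Time prefill (full prompt at once) and decode (one token per step)."""
    t0 = time.perf_counter()
    model_step(list(range(prompt_len)))   # prefill: entire prompt in one call
    prefill_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    for tok in range(gen_len):
        model_step([tok])                 # decode: one token per call
    decode_s = time.perf_counter() - t0
    return prefill_s, decode_s

def toy_step(tokens):
    # stand-in "model": cost grows with the number of tokens processed
    return sum(t * t for t in tokens)

prefill_s, decode_s = bench(toy_step, prompt_len=256, gen_len=32)
```

Running such a harness at several sequence lengths is how one would observe the pattern the article describes, namely a speed advantage that widens as sequences grow.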
By visualizing the trade-offs and operator dynamics over time, the video provides valuable context for understanding the architectural breakthroughs underlying Hyena Edge's performance.

Liquid AI said it plans to open-source a series of Liquid foundation models, including Hyena Edge, over the coming months. The company's goal is to build capable and efficient general-purpose AI systems that can scale from cloud datacenters down to personal edge devices.

The debut of Hyena Edge also highlights the growing potential for alternative architectures to challenge Transformers in practical settings. With mobile devices increasingly expected to run sophisticated AI workloads natively, models like Hyena Edge could set a new baseline for what edge-optimized AI can achieve. Hyena Edge's success, both in raw performance metrics and in showcasing automated architecture design, positions Liquid AI as one of the emerging players to watch in the evolving AI model landscape.
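The evolutionary search idea behind automated architecture design can be caricatured in a few lines: candidate backbones are lists of operators, scored against competing objectives, and evolved by selection and mutation. Everything below, including the operator names and the (latency, quality) cost table, is invented for illustration; the real STAR framework is far richer than this sketch.

```python
import random

OPS = ["attention", "gated_conv", "swiglu"]
# invented per-operator (latency, quality) costs for the toy objective
COST = {"attention": (3.0, 1.0), "gated_conv": (1.0, 0.9), "swiglu": (0.5, 0.3)}

def score(arch):
    """Scalarized objective: reward quality, penalize latency."""
    latency = sum(COST[op][0] for op in arch)
    quality = sum(COST[op][1] for op in arch)
    return quality - 0.2 * latency

def mutate(arch, rng):
    """Swap one randomly chosen operator for another."""
    arch = list(arch)
    arch[rng.randrange(len(arch))] = rng.choice(OPS)
    return arch

def evolve(generations=50, depth=8, pop_size=16, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(OPS) for _ in range(depth)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fittest half
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=score)

best = evolve()
```

Even this toy version tends to converge on convolution-heavy backbones under its made-up cost table, which mirrors the article's point that a search procedure, rather than a human designer, can decide where attention is actually worth its price.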

Machine Learning Pioneer Ramin Hasani Joins Info-Tech's "Digital Disruption" Podcast to Explore the Future of AI and Liquid Neural Networks

Yahoo

19-03-2025

  • Science
  • Yahoo


The fourth episode of Info-Tech Research Group's video podcast, Digital Disruption, features MIT researcher and Liquid AI co-founder Ramin Hasani, who shares insights into the evolution of AI and how Liquid Neural Networks are driving the next wave of machine learning innovation.

TORONTO, March 19, 2025 /PRNewswire/ - As artificial intelligence continues to reshape industries, researchers are pushing the boundaries of machine learning to create more adaptive and efficient systems. In the fourth episode of Info-Tech Research Group's Digital Disruption video podcast, titled "A Worm is Changing the Future of AI," host Geoff Nielson speaks with Ramin Hasani, CEO and co-founder of Liquid AI and a scientist at MIT's Computer Science and Artificial Intelligence Lab (CSAIL), about the revolutionary potential of Liquid Neural Networks and their real-world applications.

Hasani's research, which draws inspiration from biology and physics, has led to the development of Liquid Neural Networks, a groundbreaking approach that enhances AI's ability to adapt, learn, and make decisions with minimal computational resources. His work has earned international recognition, including the HPC Innovation Excellence Award, and has been featured in leading AI publications and TEDx talks.

"As organizations race to adopt AI, they need to understand not just where the technology is today, but where it's headed," says Geoff Nielson, senior vice president of brand at Info-Tech Research Group and host of Digital Disruption. "In this Digital Disruption episode, I talk with Ramin about his pioneering work that is not only advancing AI's capabilities but also challenging conventional models in ways that could fundamentally change how we develop and deploy intelligent systems. It's a conversation that will make listeners rethink what the trajectory of AI will be."
Digital Disruption, Episode 4: "A Worm is Changing the Future of AI"

In the latest episode of Digital Disruption, Ramin Hasani and Geoff Nielson discuss:

  • The evolution of AI models and why traditional deep learning architectures face significant scalability and efficiency challenges
  • The biological influence behind Liquid Neural Networks, including how insights from a microscopic worm helped inspire a new class of machine-learning models
  • How Liquid AI is being applied today, from robotics and automation to edge computing and drug discovery
  • The next frontier for AI, including the role of explainability, efficiency, and real-time adaptability in shaping future intelligent systems

Episode 4 of Digital Disruption, featuring Ramin Hasani, is now available on YouTube, Apple Music, and Spotify. IT and business professionals are encouraged to subscribe for insights from top industry experts shaping the future of digital transformation. For more details, visit the Digital Disruption podcast page and follow Info-Tech Research Group on LinkedIn and X for updates. To learn more about guest opportunities and participation in upcoming episodes, please contact pr@

About Info-Tech Research Group

Info-Tech Research Group is one of the world's leading research and advisory firms, proudly serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations. To learn more about Info-Tech's divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software buying insights.
Media professionals can register for unrestricted access to research across IT, HR, and software and hundreds of industry analysts through the firm's Media Insiders program. To gain access, contact pr@

SOURCE Info-Tech Research Group

Chinese AI app DeepSeek could be bad for stocks but good for local startups

Boston Globe

28-01-2025

  • Business
  • Boston Globe


As Globe tech columnist Hiawatha Bray noted, DeepSeek published a white paper revealing how it developed the model inexpensively.

'DeepSeek is the next piece in the AI arms race,' said computer science professor Elke Rundensteiner at Worcester Polytechnic Institute. Even while spending billions less than its US rivals, and being forced to use slower chips due to US export limits, 'it rivals current AI giants in performance.'

So far, California has dominated Boston and other regions in raising venture capital backing for AI startups. Out west, most of the money has gone to companies developing underlying tech like ChatGPT developer OpenAI. Boston's ecosystem has focused on building applications that use AI, or in the case of MIT spinoff Liquid AI, creating its own version of much cheaper AI models.

VC Jeff Bussgang at Flybridge Capital Partners sees the plummeting cost of AI as a boon for local startups. 'We saw it with cloud computing and we are seeing it now with AI,' Bussgang said. 'As a result, we will see an explosion of demand and AI-based apps will benefit tremendously.'

Cheaper AI could also benefit public companies in the area that have been adding AI features. Shares of HubSpot and Klaviyo have gained about 11 percent each so far this week, as the companies have been incorporating AI services into their marketing and customer service software.

But DeepSeek's low-cost AI tech may not be quite so beneficial for the stock market at large, where seven tech giants each valued at over $1 trillion dominate market indexes. There, the fallout could be more severe, and some have drawn comparisons to the dot-com bubble. From the start of 1995 to March 2000, the Nasdaq Composite Index increased almost 600 percent.
Then over the weekend of March 18, 2000, Barron's magazine published a cover story. Within a year, the index had fallen more than 60 percent and didn't reach 5,000 again for 14 years.

The leaders of the current tech big seven, including Nvidia chief executive Jensen Huang and Apple chief executive Tim Cook, will all address Wall Street analysts in coming weeks as they report fourth-quarter results. Their responses to the DeepSeek shockwave could make or break the Internet bubble analogy. 'This is likely to act as an overhang on the technology and energy sectors until a broad array of company management teams provide their own insight/perspectives during the Q4'24 earnings season,' a team of analysts at Goldman Sachs wrote on Monday.

Aaron Pressman can be reached at
