Latest news with #EdgeAI


Forbes
24-07-2025
- Forbes
Cracking The Code: Navigating The Edge AI Development Life Cycle
Rajesh Subramaniam is Founder and CEO of embedUR systems.

How many intelligent devices are running in your home right now? I bet it's more than you think. The current average is 25 devices per household, and the number is only going up every year. What's more, many of these devices, from fridges to fans, now come equipped with AI accelerators tucked into their chipsets. Whether or not you're aware of it, your thermostat may be learning your habits, and your washing machine may be whispering to the cloud.

This quiet evolution marks a new frontier in technology: edge AI. It's the convergence of embedded systems and AI, designed to run efficiently right where the data is generated: on the edge. But getting from an idea to a working AI-enabled product is anything but straightforward. The development process is fragmented, the talent pool is bifurcated and the available tools are a hodgepodge designed for AI development in the cloud, not the edge. I've spent the last two years focused on one central question: How do we make edge AI easier?

Edge AI Development Pain Points

Let's start with the development workflow itself. Building an AI solution for an edge device is a series of deeply interdependent challenges. You start with model discovery: finding a neural network architecture that might solve the problem you're working on. Then comes sourcing and annotating relevant data, fine-tuning the model, validating its accuracy, testing it on real devices, optimizing it for specific chipsets and finally deploying it into production.

That's a lot of moving pieces, and it's where engineers get stuck: using the output from one step as the input to the next, hoping the two are compatible, and discovering they mostly are not. A lot of jerry-rigging is needed to string development pipelines together, because until now there has not been a unified development environment for edge AI. The challenge is that most developers are forced to stitch this pipeline together from scattered tools.
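The workflow described in the article (discovery, annotation, fine-tuning, validation, optimization, deployment) can be sketched as a chain of functions. Every name below is a hypothetical stub, not any vendor's API, but it illustrates the structural point being made: each stage consumes the previous stage's output, so a format or assumption mismatch anywhere breaks the whole chain.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Model:
    task: str
    size_kb: int
    accuracy: float = 0.0
    quantized: bool = False

def discover(task):
    # Model discovery: choose a candidate architecture for the task.
    return Model(task=task, size_kb=4096)

def fine_tune(model, labeled_samples):
    # Fine-tuning on sourced, annotated data (stubbed as an accuracy bump).
    return replace(model, accuracy=0.92)

def validate(model, min_accuracy=0.90):
    # Validation gate: fail here, not after flashing the device.
    if model.accuracy < min_accuracy:
        raise ValueError(f"{model.task}: accuracy {model.accuracy:.2f} below target")
    return model

def optimize(model):
    # Chipset optimization: quantization typically shrinks the footprint ~4x.
    return replace(model, quantized=True, size_kb=model.size_kb // 4)

def deploy(model, flash_budget_kb=2048):
    # Deployment: enforce the device's resource constraints up front.
    if not model.quantized or model.size_kb > flash_budget_kb:
        raise RuntimeError("model does not fit the target device")
    return model

artifact = deploy(optimize(validate(fine_tune(discover("keyword-spotting"), range(500)))))
```

Because each stage returns the same `Model` type, the handoffs compose cleanly; in practice each of these stages lives in a different tool with a different artifact format, which is exactly the friction the article describes.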
You might use one platform to find a model, a separate one to label data and something entirely different to benchmark your results. There are constant handoffs, and each transition brings the risk of versioning problems, performance degradation or flat-out failure when trying to get a model to run on resource-constrained hardware. On top of that, most embedded engineers aren't AI experts, and most AI experts don't come from embedded systems. Bridging this language and tooling divide is one of the core problems we're trying to solve.

A New Mindset And A New Toolchain

Traditionally, embedded software followed a familiar pattern: Write the code, compile it, test it and ship it. Now, though, you have to fit an AI model into that life cycle. But AI doesn't behave like conventional software. You need to train AI models with a large amount of high-quality data. You also need to make sure they're accurate, secure, upgradeable and able to run efficiently on limited hardware, and they still need to integrate cleanly with the rest of the software stack.

What's really needed is a toolset that allows embedded developers to stay in their comfort zone while unlocking the power of AI. Think of it like a sandbox: You identify the type of application you're building and get model recommendations from a curated library. Then the system walks you through fine-tuning, validating and benchmarking the model. It should also help with things like security and upgrade paths. This is where I see us heading: tools that abstract the complexity of AI while integrating seamlessly with existing embedded workflows. That means packaging up best-in-class models, simplifying the training process and making on-device validation dead simple.

Standardization And The Path Forward

Our goal is to bring some structure to the edge AI development life cycle. Right now, there are too many tools and frameworks and no common standards for building, testing or deploying AI models in an embedded context.
By pushing for standardization, we're trying to make it easier for traditional developers to adopt AI. Once the life cycle is defined and toolchains are aligned, more engineers will feel confident jumping in. Consistency will help build trust and reduce friction in the process.

It's hard to overstate the implications of this shift to embedded edge AI. Think about the early days of the internet or the rise of smartphones: we're at that kind of inflection point. The number of embedded clients per household is only going to continue to soar, from smart doorbell cameras that recognize family and friends to voice assistants that control everything from lighting to entertainment with natural commands. That means it's essential to solve the issue of integration.

The sheer scale and reach of edge AI applications are staggering, maybe even a little scary, but mostly it's exciting. Because what we're really talking about is democratization. AI was once limited to massive data centers and elite development teams. Now it's finding its way into everyday devices at a price point that's accessible to everyone.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Yahoo
24-07-2025
- Business
- Yahoo
Cincoze MXM GPU Computers: The AI Solutions for Manufacturing, Transportation, and Defense
TAIPEI, Taiwan, July 24, 2025--(BUSINESS WIRE)--Rugged embedded computer brand Cincoze's Red Dot Award-winning MXM GPU computer series (GM-1100) offers the high performance, compact design and expansion flexibility that make it an ideal choice for space-constrained edge AI applications. The GM-1100 series supports the latest 14th Gen Intel® Core™ processors and NVIDIA MXM GPU modules, providing strong computing performance and graphics processing capabilities for real-time decision-making and high-speed edge AI computing. The GM-1100 measures only 260 x 200 x 85 mm, yet it maximizes expansion flexibility and can easily meet the application needs of multiple vertical industries, especially transportation, manufacturing, and defense.

Transportation: Improving Safety and Flow

Airport e-gates and X-ray scanners powered by the GM-1100 enhance customs clearance efficiency and aid in automating security checks. The GM-1100 efficiently handles multiple tasks simultaneously, performing image processing and real-time analysis by leveraging the combined power of the CPU and GPU. It offers rich native and expandable I/O to support LAN, COM, USB, and DIO interfaces, for flexible connectivity with peripherals such as cameras, passport readers, and gate controllers. Its wide-temperature design (-40°C to 70°C), combined with its fanless architecture, effectively overcomes the challenges of high temperatures and dust when the unit is installed in an electrical enclosure. The result is fast, accurate identity recognition and prohibited-goods detection, optimizing customs clearance efficiency and overall safety.

Manufacturing: Boosting Efficiency and Yield

Automated optical inspection (AOI) and robotic bin-picking systems benefit from the powerful performance and rapid data transfer of the GM-1100, making it an optimal computing solution.
The potent AI inference capabilities of the MXM GPU enable swift and precise identification of product defects and object locations, significantly boosting inspection and overall operational efficiency. The high-speed I/O interfaces, including up to 10GbE LAN, 20Gbps USB 3.2 Type-C, and an M.2 Key M slot for expansion with high-speed NVMe SSDs, fully meet the high-speed transmission and extensive storage requirements of high-resolution image processing. To address installation environments with high electromagnetic interference, the GM-1100 adheres to industrial-grade EMC standards; it has obtained CE, UKCA, and FCC certifications and complies with ICES-003 Class A. It supports multiple installation methods, such as wall, side, DIN rail, and VESA mounting, for enhanced deployment flexibility.

Defense: Enhancing Intelligence and Reconnaissance

Military reconnaissance and information gathering can leverage the rugged design and efficient computing of the GM-1100. Its MXM GPU module enables high-speed parallel computing for real-time image analysis and terrain recognition, facilitating battlefield map construction and mission assessment. The built-in M.2 Key B expansion slot supports a GNSS module for high-precision positioning. Its wide-voltage design (9-48V) allows flexible installation in military vehicles, surface vessels, and other mobile equipment, and it is equipped with an IGN (power ignition sensing) delayed power on/off function that prevents system damage and data loss from temporary shutdowns or unstable power. Having passed the MIL-STD-810H shock and vibration test and E-mark vehicle certification, the GM-1100 operates stably in harsh environments, making it a reliable computing core for military defense systems.

About Cincoze

Rugged embedded computer brand Cincoze provides industry-leading solutions for edge computing, AIoT, and critical applications in harsh environments.
Our product lines include rugged embedded computers, industrial panel PCs and monitors, and GPU embedded computers, which can quickly meet the application needs of vertical markets, especially manufacturing, vehicle, rail, transportation, energy, and warehousing & logistics. Over the years, we have launched a number of innovative products and won multiple patent awards and international certifications.

Tags: GPU computer / Edge computer / Intelligent Transportation / Machine Vision / Military
Yahoo
17-07-2025
- Business
- Yahoo
Revenue Data from 2024, Estimates for 2025, and Projected CAGRs Through 2030
Explore the comprehensive trends of the global edge AI market from 2024 to 2030. Analyze market revenues, industry challenges, and technological advancements across key regions. Discover insights on ESG developments, patent activity, and the strategic approaches of leading companies like Microsoft and Nvidia.

Global Edge AI Market

Dublin, July 17, 2025 (GLOBE NEWSWIRE) -- The "Edge AI Market" report has been added to the publisher's offering. The global market for edge AI was valued at $8.7 billion in 2024 and is estimated to increase from $11.8 billion in 2025 to $56.8 billion by 2030, at a compound annual growth rate (CAGR) of 36.9% from 2025 through 2030.

This report examines the trends in the global edge AI market, using 2024 as the base year and projecting data from 2025 through 2030. The report examines the market's drivers and challenges, the emerging technologies, patent activity and the competitive landscape for the leading companies. It analyzes environmental, social and corporate governance (ESG) developments. The report concludes with profiles of the significant edge AI companies and a look at their market and technological strategies.

Edge AI, or artificial intelligence at the edge, refers to the use of AI algorithms and machine learning (ML) models on local devices like sensors, IoT devices, smartphones, drones, cameras and edge servers. Unlike AI that relies on processing in data centers, edge AI analyzes data where it is generated, thereby enabling real-time processing. Edge AI is ideal for low-latency applications performed in milliseconds and without internet access. The concept combines two emerging technologies: edge computing, which involves local data processing, and AI, which leverages ML to mimic human reasoning.
This combination allows devices to make independent decisions, such as a security camera detecting intrusions. Edge AI is useful in many applications, such as self-driving cars, AI-powered instruments, smart virtual assistants, predictive maintenance, intelligent forecasting, security cameras, smart appliances and health monitoring devices.

The global edge AI market is driven by increasing demand for real-time data processing, advances in edge computing technologies, and the proliferation of IoT devices. The adoption of edge devices, real-time processing needs, and industry-specific applications all drive the market's growth. For instance, healthcare applications, such as remote patient monitoring through on-device data analysis, and manufacturing applications, such as predictive maintenance using sensor data, contribute significantly to demand for edge AI.

The report includes:
- 60 data tables and 50 additional tables
- Analyses of global market trends for artificial intelligence in an edge-computing environment, with revenue data from 2024, estimates for 2025, and projected CAGRs through 2030
- Estimates of the size and revenue prospects for the global edge AI market, along with a market share analysis by offering (component), deployment type, end-use industry and region
- Facts and figures pertaining to market dynamics, technological advancements, regulations, industry structure and the impacts of macroeconomic variables
- Insights derived from Porter's Five Forces model, as well as global supply chain and SWOT analyses
- An analysis of patents and emerging trends and developments in patent activity
- An overview of sustainability trends and ESG developments, with emphasis on consumer attitudes and the ESG scores and practices of leading companies
- An analysis of the industry structure, including companies' market shares and rankings, strategic alliances, M&A activity and a venture funding outlook
- Company profiles of major players within the industry, including Microsoft, Nvidia Corp., Alphabet Inc.
(Google Inc.), Intel Corp., and Inc.

Key Attributes:
- No. of Pages: 124
- Forecast Period: 2025-2030
- Estimated Market Value (USD) in 2025: $11.8 Billion
- Forecasted Market Value (USD) by 2030: $56.8 Billion
- Compound Annual Growth Rate: 36.9%
- Regions Covered: Global

Key Topics Covered:

Chapter 1 Introduction: Market Outlook; Scope of Report; Market Summary; Market Dynamics and Growth Factors; Segmental Analysis; Regions; Conclusion

Chapter 2 Market Overview: Current and Future Market Outlook; Analysis of Macroeconomic Factors; AI Chip Shortage; Porter's Five Forces Analysis; Impact of U.S. Tariffs; Event Timelines; Impact on the Edge AI Market

Chapter 3 Market Dynamics: Market Drivers (Demand for Real-Time Data Transmission; IoT Devices and Industrial Robotics; Advances in AI and ML Technologies); Market Challenges (Limited Computing and Storage Resources; Risk of Malware and Security Flaws); Market Opportunities (Integration of Large Language Models; Smart City Initiatives and 5G Networks; Autonomous and Connected Vehicles)

Chapter 4 Emerging Technologies and Patent Analysis: Emerging Trends/Technologies (Generative AI at the Edge; Quantum Computing; Tiny Machine Learning (TinyML)); Patent Analysis (Patent Review by Year; Published Patents; Findings)

Chapter 5 Market Segmentation Analysis: Market Breakdown by Offering (Hardware; Software; Services); by End-User Industry (IT and Telecom; Healthcare; Automotive; Retail and Consumer Goods; Manufacturing; Other Industries); by Region (North America; Europe; Asia-Pacific; Rest of the World)

Chapter 6 Competitive Intelligence: Leading Companies in the Edge AI Market; Market Share Analysis; Strategic Analysis; Recent Developments

Chapter 7 Sustainability in the Edge AI Industry: An ESG Perspective (Overview; ESG Risk Ratings for Leaders; ESG Practices in the Edge AI Market; Concluding Remarks)
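The report's headline figures are internally consistent, which a quick compounding check confirms. The helper function below is ours, not the report's; dollar amounts are in billions, taken from the summary above.

```python
# Sanity-check the forecast: $11.8B in 2025 compounding at a 36.9% CAGR
# for five years should land near the projected $56.8B for 2030.

def project(value, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

projected_2030 = project(11.8, 0.369, 5)     # ~56.7, matching the ~$56.8B forecast
implied_cagr = (56.8 / 11.8) ** (1 / 5) - 1  # ~0.369, i.e. the quoted 36.9%
```

Running the check both directions (projecting forward, and backing out the implied rate) shows the 2025 estimate, 2030 forecast and CAGR agree to rounding.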
Chapter 8 Appendix: Companies Featured

Advanced Micro Devices Inc.; Alphabet Inc. (Google Inc.); Gorilla Technology Group; Hailo Technologies Ltd.; Huawei Technologies Co. Ltd.; IBM Corp.; Infineon Technologies AG; Intel Corp.; MediaTek; Meta; Microsoft; Nvidia Corp.; Qualcomm Technologies Inc.; Veea Inc.

Attachment: Global Edge AI Market

CONTACT: Laura Wood, Senior Press Manager. For E.S.T. office hours call 1-917-300-0470; for U.S./Canada toll free call 1-800-526-8630; for GMT office hours call +353-1-416-8900.


Forbes
16-07-2025
- Business
- Forbes
How Federated Learning Moves AI Closer To The Edge
Vikram Gupta is Chief Product Officer, SVP & GM of the IoT Processor Business Division at Synaptics, a leading EdgeAI semiconductor company. In my previous articles, I explored how the rapid growth of edge AI is calling for a new class of AI-native compute platforms and how multimodal sensing—including vision and audio—is enabling more intuitive, context-aware user experiences. These trends mark a decisive shift from centralized cloud processing to intelligent, personalized and privacy-conscious computing at the edge. Building on that foundation, the next frontier is not just how AI runs at the edge, but how it learns and evolves there. This is where federated machine learning (FML) enters the picture. Despite the ubiquity of AI in our everyday lives, the vast majority of AI model development and processing still happens in the cloud, often far away from where we interact with it. The approach has served us well, with centralized and powerful compute engines doing the heavy lifting involved in collecting data and training sophisticated learning models. As AI proliferates and billions of connected devices generate data at the edge, traditional centralized model training is becoming increasingly impractical, constrained by privacy concerns, regulatory pressures and latency limitations. At the same time, the push for more personalized, context-aware experiences is accelerating AI processing toward a fragmented landscape of "far edge" devices, such as smartwatches, wearables and industrial sensors, that rely on real-time, local understanding of their environments. This transformation is not only enabling today's intelligent experiences but also laying the groundwork for an entirely new class of cloud-agnostic, AI-driven applications—many of which we have yet to imagine—that will operate independently and become seamlessly woven into the fabric of everyday life. This shift has given rise to the new paradigm known as federated machine learning. 
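The core FML loop the article describes can be sketched as federated averaging (FedAvg): each device trains on its own private data and shares only model updates, which a coordinator averages. All names below are illustrative, the model is a one-parameter linear fit so the loop stays readable, and a real deployment would also encrypt or securely aggregate the shared updates, as the article notes.

```python
# Minimal federated-averaging (FedAvg) sketch: local training on-device,
# only weights travel to the coordinator, raw data never leaves a device.

def local_update(w, data, lr=0.1):
    # One local gradient step on a device's private samples
    # (model y = w*x, squared loss).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg(global_w, device_datasets, rounds=50):
    # Coordinator loop: broadcast the global weight, collect each
    # device's locally updated weight, and average the updates.
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in device_datasets]
        global_w = sum(local_ws) / len(local_ws)
    return global_w

# Two devices each hold private samples of the same relationship y = 3x.
devices = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (3.0, 9.0)]]
w = fedavg(0.0, devices)  # converges toward 3.0 without pooling any raw data
```

The same pattern scales to the "processing zones" idea mentioned below: the coordinator can itself sit on a near-edge hub rather than in the cloud, with each tier averaging the updates from the tier beneath it.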
By enabling localized intelligence directly on the devices where data is created, FML introduces a wider range of more private, personalized and responsive alternatives to cloud-centric models. Realizing this vision means evolving the AI ecosystem from system architecture and silicon design to software tooling. It also extends into how data is collected, used and protected. The need for centralized computing resources won't necessarily go away, but a more federated future is driving a broader range of synergistic processing approaches.

One size does not fit all in the world of edge AI. Meeting the need for broader and more diverse deployment of AI, the rise of FML allows systems to become progressively more intelligent and autonomous by using on-device data and sharing only encrypted model updates. Devices that can benefit from FML are increasingly present in our everyday lives, such as smart home assistants learning speech patterns locally, wearables monitoring health metrics without cloud sync and industrial machines predicting failures based on their unique deployment environments.

Recent announcements from companies like Google and OpenAI point to a future where AI is moving beyond phones into a new generation of devices. The evolution of devices such as extended reality (XR) wearables raises questions: Do these devices need a cloud connection? A phone tether? Or can they operate independently, or even coordinate locally through a hub? FML introduces the idea of processing zones, which can range from on-device to near-edge aggregation hubs to the centralized cloud. This transition to the future of edge AI depends on flexible, multitiered intelligence.

Ecosystem Complexity At The Edge: Fragmentation, Tooling And Hardware Diversity

The adoption of AI at the edge is not without unique challenges. Unlike the relatively structured, centralized processing model of the current data center-centric approach, the edge is messy.
It features different operating systems (such as RTOS, Linux and Android variants with proprietary firmware), chip architectures (such as Arm, RISC-V and x86) and AI toolkits. On top of that, many devices lack the processing power, memory or energy budget for robust on-device inference, let alone training. Tooling for deploying and updating models is fragmented, particularly at scale. FML doesn't scale unless tools and hardware converge around modularity, efficiency and openness.

The Chip Supplier's New Role: A Scalable, Neutral Enabler

As this federated future of AI unfolds, success will hinge on delivering flexible, scalable solutions that span silicon, software and tools capable of adapting to diverse devices and dynamic ecosystems. By embracing openness, efficiency and intelligent decentralization, companies can unlock the full potential of edge AI. The shift toward distributed intelligence is redefining how we interact with technology, making it more private, responsive and relevant to real-world environments. Real progress in edge AI depends on open-source tools, accessible frameworks and broad collaboration across the ecosystem. By focusing on practical solutions and inclusive innovation, this transformation can bring smarter experiences closer to where they matter most.
Yahoo
12-07-2025
- Business
- Yahoo
Lantronix (LTRX) Expands Board in a Cooperation Agreement With an Investor Group
Lantronix, Inc. (NASDAQ:LTRX) is one of the most popular AI penny stocks to buy according to billionaires. On June 30, the company expanded its Board of Directors after finalizing a cooperation agreement with an investor group comprising Emre Aciksoz, Haluk L. Bayraktar, and Chain of Lakes Investment Fund LLC.

Under the terms of the agreement, James C. Auker was appointed to the Lantronix Board of Directors and will be nominated for election at the company's 2025 Annual Meeting of Stockholders. The agreement stipulates that the size of the Board will increase from five to six directors with Auker's appointment. For their part, the investor group agreed to customary standstill and voting commitments to support cooperative governance and management practices. The agreement aims to enhance shareholder value through strategic collaboration, and Lantronix committed to engaging a financial advisor within 60 days of Auker's appointment to explore opportunities for value creation.

Lantronix, Inc. (NASDAQ:LTRX) is a global provider of compute and connectivity solutions for IoT and edge AI applications. Its portfolio includes AI-enabled gateways, embedded systems, and camera modules designed for smart cities, industrial automation, and autonomous navigation. While we acknowledge the potential of LTRX as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk.

Disclosure: None. This article was originally published at Insider Monkey.