Open-source AutoML eases edge AI deployment for developers

Techday NZ, 18-07-2025
An open-source AutoML solution called AutoML for Embedded, co-developed by Analog Devices and Antmicro, is now available as part of the Kenning framework, aimed at easing the deployment of machine learning models on embedded edge devices.
AutoML for Embedded is designed to streamline and automate many of the typical tasks developers encounter when attempting to implement artificial intelligence on microcontrollers and other resource-constrained hardware. These tasks often include data preprocessing, model selection, hyperparameter tuning, and device-specific optimisation.
Workflow and compatibility
The solution is distributed as a Visual Studio Code plugin built on the Kenning library, emphasising cross-platform compatibility. It integrates with CodeFusion Studio and supports ADI's MAX78002 AI accelerator microcontroller unit (MCU) and the MAX32690 MCU, enabling direct model deployment to these hardware platforms.
The workflow also supports rapid prototyping and testing through Renode-based simulation environments and the Zephyr real-time operating system (RTOS). According to the developers, this flexibility allows users to construct and deploy machine learning models on a wide variety of target platforms, avoiding vendor lock-in.
Step-by-step tutorials, reproducible pipelines, and sample datasets are included to assist users in moving from raw data to edge AI deployment without requiring specialist data science expertise.
Developer-oriented features
The solution is the outcome of collaboration between Analog Devices and Antmicro, who have combined hardware knowledge with open-source approaches. "Building on the flexibility of our open-source AI benchmarking and deployment framework, Kenning, we were able to develop an automated flow and VS Code plugin that vastly reduces the complexity of building optimised edge AI models," said Michael Gielda, Vice President of Business Development at Antmicro. "Enabling workflows based on proven open-source solutions is the backbone of our end-to-end development services that help customers take full control of their product. With flexible simulation using Renode and seamless integration with the highly configurable and standardised Zephyr RTOS, the road to transparent and efficient edge AI development using AutoML in Kenning is open."
How the automation works
AutoML for Embedded uses Sequential Model-based Algorithm Configuration (SMAC) to automate the search for optimal model architectures and training parameters. Hyperband with Successive Halving allocates computational resources to the most promising candidate models. A key feature is automated verification that candidate models will fit within the memory limits of the target device, improving the chances of successful deployment on constrained systems.
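The resource-allocation idea behind Successive Halving, and the memory pre-filter described above, can be illustrated with a short sketch. This is illustrative only, not Kenning's actual implementation; the `evaluate` function and `fits_device` helper are hypothetical:

```python
def successive_halving(candidates, evaluate, min_budget=1, eta=3):
    """Illustrative Successive Halving: evaluate every surviving
    candidate at the current budget, keep only the top 1/eta
    performers, and multiply the budget by eta for the next round."""
    budget = min_budget
    survivors = list(candidates)
    while len(survivors) > 1:
        # Rank candidates by score at the current training budget.
        ranked = sorted(survivors,
                        key=lambda c: evaluate(c, budget),
                        reverse=True)
        survivors = ranked[: max(1, len(ranked) // eta)]
        budget *= eta
    return survivors[0]


def fits_device(model_bytes, device_memory_bytes):
    """Hypothetical pre-filter: discard candidates whose estimated
    footprint exceeds the target device's memory."""
    return model_bytes <= device_memory_bytes
```

Candidates that fail the memory check would be dropped before any budget is spent on them, which is the point of verifying device fit early in the search.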
After the search and optimisation stages, models can be further refined, evaluated, and benchmarked using standard workflows within the Kenning framework. Detailed reports on model size, inference speed, and accuracy inform user decisions prior to deployment.
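As an illustration of how such reports might drive a deployment decision, the following sketch filters candidates by the metrics the article mentions. The report fields and thresholds are hypothetical, not Kenning's actual report schema:

```python
from dataclasses import dataclass


@dataclass
class ModelReport:
    # Hypothetical fields mirroring the reported metrics:
    # model size, inference speed, and accuracy.
    name: str
    size_kb: float
    latency_ms: float
    accuracy: float


def pick_deployable(reports, flash_kb, max_latency_ms):
    """Keep only models that fit the device and meet the latency
    budget, then pick the most accurate of the remainder."""
    viable = [r for r in reports
              if r.size_kb <= flash_kb and r.latency_ms <= max_latency_ms]
    return max(viable, key=lambda r: r.accuracy) if viable else None
```

A model that is marginally more accurate but too large for the target flash would be excluded up front, which is the kind of trade-off the benchmark reports are meant to surface.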
Applications and demonstrations
AutoML for Embedded has already been applied to use cases such as anomaly detection on sensor time-series data. In a detailed demonstration, a model created by the tool was deployed on the ADI MAX32690 MCU and tested in both a physical hardware setup and its digital twin using Renode simulation, enabling performance monitoring in real time.
Potential application areas outlined by the project include image classification and object detection on low-power camera systems, predictive maintenance and anomaly detection in industrial IoT sensors, natural language processing for on-device text analysis, and real-time action recognition for sports and robotics settings.
The package is made available to developers via the Visual Studio Code Marketplace and GitHub, reflecting its open-source nature and broad accessibility.