Latest news with #Apache2.0


Hans India
13 hours ago
- Hans India
Google launches AI Edge Gallery app to run Hugging Face models offline on phones
Keywords: Google AI Edge Gallery, offline AI models, Hugging Face, Android AI apps, iOS AI app, AI on-device, edge computing, local AI models, AI Prompt Lab, Google Gemma, Apache 2.0 license, experimental alpha release, AI development, Google open source AI, mobile AI

Google has quietly unveiled a new app called AI Edge Gallery, enabling users to download and run AI models from Hugging Face directly on their smartphones, entirely offline. Currently available for Android (with iOS support coming soon), this experimental alpha release allows local execution of image generation, Q&A, code writing, and more using compatible models such as Google's Gemma 3n. Users can browse models by task, such as 'AI Chat' or 'Ask Image,' and launch them via an intuitive interface. A built-in Prompt Lab supports quick, single-turn tasks like summarizing and rewriting content, offering customizable templates and settings for developers to explore.

While cloud-based models remain more powerful, the local alternative prioritizes data privacy, speed (on modern devices), and offline accessibility, which is especially useful where internet access is unreliable. Performance will vary with device specs and model size. Licensed under Apache 2.0, the app is open for commercial and personal use, and Google is inviting developer feedback to improve its capabilities. Developers can download Google AI Edge Gallery from GitHub by following the provided instructions.
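For context on what running a Hugging Face model locally looks like in practice, here is a minimal sketch in Python of the download-once, run-offline pattern using the huggingface_hub and transformers libraries. This is not the AI Edge Gallery app itself (which ships its own on-device runtime), and the model id shown is only an illustrative assumption.

```python
# Minimal sketch of "download once, then run offline" with standard Hugging Face tooling.
# It illustrates the general pattern the article describes; it is not the AI Edge
# Gallery app, and "google/gemma-2-2b-it" is only an example model id (gated models
# require accepting the license on huggingface.co before the download succeeds).
from huggingface_hub import snapshot_download
from transformers import pipeline

# Step 1 (online): fetch the model weights into the local cache.
local_dir = snapshot_download("google/gemma-2-2b-it")

# Step 2 (offline): load and generate from the cached copy only.
generator = pipeline("text-generation", model=local_dir)
result = generator(
    "Summarize in one sentence: edge AI runs models locally on the device.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```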


Techday NZ
22-05-2025
- Business
- Techday NZ
Plainsight unveils OpenFilter to simplify vision AI pipelines
Plainsight has launched OpenFilter, an open source project designed to simplify and accelerate the development, deployment, and scaling of production-grade computer vision applications. OpenFilter is available under the Apache 2.0 licence and is designed to help enterprises build, deploy, and manage vision AI pipelines using modular, reusable components, referred to as "filters". These filters combine code and AI models into building blocks for assembling custom vision pipelines. The project aims to address key challenges that organisations face when implementing AI-driven computer vision in production environments, such as cost, scalability, and the complexity of infrastructure integration.

Priyanshu Sharma, Senior Data Engineer at BrickRed Systems, explained the practical benefits seen in manufacturing and logistics implementations. "OpenFilter has revolutionised how we deploy vision AI for our manufacturing and logistics clients. With its modular filter architecture, we can quickly build and customise pipelines for tasks like automated quality inspection and real-time inventory tracking, without having to rewrite core infrastructure. This flexibility has enabled us to deliver robust, scalable solutions that meet our clients' evolving needs, while dramatically reducing development time and operational complexity," Sharma said.

Plainsight claims that OpenFilter's features - including frame deduplication and priority scheduling - lower GPU inference costs, while its abstractions are intended to shorten deployment timelines from weeks to days. The system's extensible architecture is designed to future-proof investments, offering compatibility not only with computer vision but also adaptable extensions for audio, text, and multimodal AI use cases. OpenFilter aims to bridge a common gap in computer vision adoption, where projects can stall due to fragmented tooling and difficulties in scaling from prototype to production.

The platform includes several features: a core runtime available as open source, pre-built filters for tasks such as object tracking and image segmentation, and a pipeline management system that can handle various video inputs like RTSP streams, webcams, and image files. It enables routing of processed data to destinations including databases, MQTT brokers, or APIs. The system is designed to support deployment across a wide range of hardware, from CPUs and GPUs to edge devices, allowing for resource optimisation in different environments. OpenFilter supports broad model integration, letting users deploy models from frameworks such as PyTorch and OpenCV, or custom models like YOLO, without vendor lock-in.

Typical use cases for OpenFilter span a variety of sectors. In manufacturing, the platform can be used for automated quality inspection, defect detection, and fill-level monitoring. Retailers and food service operations may use it for drive-through analytics or inventory tracking, while logistics operators could automate vehicle tracking or workflow processes. Additional applications include precision agriculture, surveillance, people counting, and event detection for IoT and edge environments.

Andrew Smith, CTO of Plainsight, commented on the broader aim for OpenFilter's architecture. "Filters are the building blocks for operationalising vision AI," Smith said. "Instead of wrestling with brittle pipelines and bespoke infrastructure, developers can snap together reusable components that scale from prototypes to production. It's how we make computer vision feel more like software engineering - and less like science experiments."

Chris Aniszczyk, CTO of CNCF, endorsed the open source nature of OpenFilter, saying, "OpenFilter is a leap forward for open source, giving developers and data scientists a powerful, collaborative platform to build and scale computer vision AI. Its modular design and permissive Apache 2.0 license make it easy to adapt solutions for everything from agriculture and manufacturing to retail and logistics, helping organisations of all types and sizes unlock the value of vision-based AI."

Kit Merker, CEO of Plainsight, described the broader ambition for OpenFilter in the industry. "OpenFilter is the abstraction the AI industry has been waiting for. We're making it possible for anyone - not just experts - to turn camera data into real business value, faster and at lower cost," Merker said. "By treating vision workloads as modular filters, we give developers the power to build, scale, and update applications with the same ease and flexibility as modern cloud software. This isn't just about productivity, it's about democratising computer vision, unlocking new use cases, and making AI accessible and sustainable for every organisation. We believe this is the foundation for the next wave of AI-powered transformation."

Plainsight has made OpenFilter available to the public under the Apache 2.0 licence and offers an Early Access Programme for enterprises interested in a commercial version of the platform.
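To make the filter-as-building-block idea concrete, the following is a small hypothetical sketch in Python. The Frame, Filter, and run_pipeline names are illustrative assumptions invented for this article and do not represent OpenFilter's actual API.

```python
# Hypothetical sketch of a modular vision "filter" pipeline in the spirit of the
# article; the names here are illustrative assumptions, not OpenFilter's real API.
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

@dataclass
class Frame:
    image: bytes                                   # raw frame data, e.g. decoded from an RTSP stream
    metadata: dict = field(default_factory=dict)   # results accumulated by earlier filters

# A "filter" is simply a callable that takes a Frame and returns a (possibly enriched) Frame.
Filter = Callable[[Frame], Frame]

def deduplicate(frame: Frame) -> Frame:
    frame.metadata["duplicate"] = False            # placeholder: hash/compare against the previous frame
    return frame

def detect_objects(frame: Frame) -> Frame:
    frame.metadata["objects"] = []                 # placeholder: run a PyTorch/YOLO model here
    return frame

def run_pipeline(frames: Iterable[Frame], filters: List[Filter]) -> Iterable[Frame]:
    """Apply each filter in order, then yield the frame for routing (database, MQTT broker, API)."""
    for frame in frames:
        for f in filters:
            frame = f(frame)
        yield frame

# Example: two reusable filters snapped together into one pipeline.
for out in run_pipeline([Frame(image=b"...")], [deduplicate, detect_objects]):
    print(out.metadata)
```

The design point such an architecture aims for is that each filter stays independently testable and swappable, which is what allows a pipeline to grow from prototype to production without rewrites.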
Yahoo
21-05-2025
- Business
- Yahoo
Mistral's new Devstral AI model was designed for coding
AI startup Mistral on Wednesday announced a new AI model focused on coding: Devstral. Devstral, which Mistral says was developed in partnership with AI company All Hands AI, is openly available under an Apache 2.0 license, meaning it can be used commercially without restriction. Mistral claims that Devstral outperforms other open models like Google's Gemma 3 27B and Chinese AI lab DeepSeek's V3 on SWE-Bench Verified, a benchmark measuring coding skills.

"Devstral excels at using tools to explore codebases, editing multiple files and power[ing] software engineering agents," writes Mistral in a blog post provided to TechCrunch. "[I]t runs over code agent scaffolds such as OpenHands or SWE-Agent, which define the interface between the model and the test cases [...] Devstral is light enough to run on a single [Nvidia] RTX 4090 or a Mac with 32GB RAM, making it an ideal choice for local deployment and on-device use."

Devstral arrives as AI coding assistants — and the models powering them — grow increasingly popular. Just last month, JetBrains, the company behind a range of popular app development tools, released its first "open" AI model for coding. In recent months, AI outfits including Google, Windsurf, and OpenAI have also unveiled models, both openly available and proprietary, optimized for programming tasks.

AI models still struggle to write quality software — code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like the ability to understand programming logic. Yet their promise to boost coding productivity is pushing companies — and developers — to rapidly adopt them. One recent poll found that 76% of devs used or were planning to use AI tools in their development processes last year.

Mistral previously waded into the assistive programming space with Codestral, a generative model for code. But Codestral wasn't released under a license that permitted devs to use the model for commercial applications; its license explicitly banned "any internal usage by employees in the context of [a] company's business activities."

Devstral, which Mistral is calling a "research preview," can be downloaded from AI development platforms, including Hugging Face, and also tapped through Mistral's API. It's priced at $0.1 per million input tokens and $0.3 per million output tokens, tokens being the raw bits of data that AI models work with. (A million tokens is equivalent to about 750,000 words, or roughly 163,000 words longer than "War and Peace.") Mistral says it's "hard at work building a larger agentic coding model that will be available in the coming weeks."

Devstral isn't a small model per se, but it's on the smaller side at 24 billion parameters. (Parameters roughly correspond to a model's problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.)

Mistral, founded in 2023, is a frontier model lab aiming to build a range of AI-powered services, including a chatbot platform, Le Chat, and mobile apps. It's backed by VCs including General Catalyst, and has raised over €1.1 billion (roughly $1.24 billion) to date. Mistral's customers include BNP Paribas, AXA, and Mirakl.

Devstral is Mistral's third product launch this month. A few weeks ago, Mistral launched Mistral Medium 3, an efficient general-purpose model. Around the same time, the company rolled out Le Chat Enterprise, a corporate-focused chatbot service that offers tools like an AI "agent" builder and integrates Mistral's models with third-party services like Gmail, Google Drive, and SharePoint.
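For readers who want to try the hosted route mentioned above, here is a hedged sketch of a plain-HTTP call to Mistral's chat completions endpoint. The endpoint path follows Mistral's published API, but the model identifier used below is an assumption and should be checked against Mistral's current model listing.

```python
# Hedged sketch: calling Devstral via Mistral's hosted chat-completions API.
# The endpoint follows Mistral's published API; "devstral-small-2505" is an
# assumed model id and should be verified against Mistral's model list.
import os
import requests

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "devstral-small-2505",  # assumed identifier
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

At the listed prices, a request that consumes 2,000 input tokens and returns 500 output tokens would cost roughly $0.0002 + $0.00015, or about $0.00035.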