
Latest news with #QualcommTechnologies

Nota AI Demonstrates On-Device AI Breakthrough at Embedded Vision Summit 2025 in Collaboration with Qualcomm AI Hub

Korea Herald

7 days ago

  • Business
  • Korea Herald

SEOUL, South Korea, May 26, 2025 /PRNewswire/ -- Nota AI, a global leader in AI optimization, showcased its latest edge AI innovations alongside Qualcomm Technologies, Inc. at the Embedded Vision Summit 2025, held May 20–22 in Santa Clara, California. The Embedded Vision Summit is a prominent global conference for innovators incorporating computer vision and AI in products, attended by more than 70 companies and over 1,400 industry experts worldwide.

Nota AI prominently featured its collaboration with Qualcomm Technologies, emphasizing the optimization of its proprietary AI model optimization platform, NetsPresso®, for use with the Qualcomm® AI Hub. Both companies used video presentations at their booths to demonstrate the enhanced efficiency and scalability achieved through this collaboration. Nota AI's CTO, Tae-Ho Kim, further highlighted these advancements in a Qualcomm Technologies-hosted Deep Dive Session, detailing how the integrated platforms significantly streamline the workflow for developing and deploying AI models on edge devices.

"This collaboration shows how we're making edge AI deployment faster, lighter, and more efficient," said Tae-Ho Kim, CTO of Nota AI. "We're excited to deepen our collaboration with Qualcomm Technologies and extend our reach across global edge and IoT applications."

Additionally, Nota AI unveiled the NetsPresso Optimization Studio, the latest enhancement to its AI model optimization platform, NetsPresso®. Optimization Studio offers users an intuitive, visual interface designed to simplify AI model optimization. Developers can quickly visualize critical layer details and model performance required for efficient quantization, enabling rapid, data-driven decisions based on actual device performance metrics.

Also featured was Nota Vision Agent (NVA), a generative AI-based video analytics solution. NVA enables real-time video event detection, natural language video search, and automated report generation, helping enterprise users maximize situational awareness and operational efficiency. The solution has already proven its commercial viability through a recent supply agreement with the Dubai Roads and Transport Authority (RTA) — a first for a Korean company in this domain.

Meanwhile, on May 22, Nota AI filed for a preliminary IPO listing, making it the first AI optimization company from Korea to do so via the country's technology-special track. The IPO plan is attracting significant market attention, backed by Nota AI's robust global expansion and strong product competitiveness. Earlier in April, Nota AI was also recognized as one of the "Top 100 Global Innovative AI Startups" by the global market research firm CB Insights.

Looking ahead, Nota AI plans to accelerate its presence across key global markets — including the Middle East, Southeast Asia, and Europe.

Gemma 3n AI model brings real-time multimodal power to mobiles

Techday NZ

22-05-2025

  • Business
  • Techday NZ

Gemma 3n, a new artificial intelligence model architected for mobile and on-device computing, has been introduced as an early preview for developers. Developed in partnership with mobile hardware manufacturers, Gemma 3n is designed to support real-time, multimodal AI experiences on phones, tablets, and laptops. The model extends the capabilities of the Gemma 3 family by focusing on performance and privacy in mobile scenarios.

The new architecture is the result of collaboration with companies such as Qualcomm Technologies, MediaTek, and Samsung System LSI. The objective is to optimise the model for fast, responsive AI that can operate directly on device, rather than relying on cloud computing. This marks an extension of the Gemma initiative towards enabling AI applications in everyday devices, utilising a shared foundation that will underpin future releases across platforms like Android and Chrome. According to information provided, Gemma 3n is also the core of the next generation of Gemini Nano, which is scheduled for broader release later in the year, bringing expanded AI features to Google apps and the wider on-device ecosystem. Developers can begin working with Gemma 3n today as part of the early preview, helping them to build and experiment with local AI functionalities ahead of general availability.

The model has performed strongly in chatbot benchmark rankings. One chart included in the announcement ranks AI models by Chatbot Arena Elo scores, with Gemma 3n noted as ranking highly amongst both popular proprietary and open models. Another chart demonstrates the model's mix-and-match performance with respect to model size.

Gemma 3n benefits from Google DeepMind's Per-Layer Embeddings (PLE) innovation, which leads to substantial reductions in RAM requirements. The model is available in 5 billion and 8 billion parameter versions, but, according to the release, it can operate with a memory footprint comparable to much smaller models—2 billion and 4 billion parameters—enabling operation with as little as 2GB to 3GB of dynamic memory. This allows the use of larger AI models on mobile devices or via cloud streaming, where memory overhead is often a constraint. The company states, "Gemma 3n leverages a Google DeepMind innovation called Per-Layer Embeddings (PLE) that delivers a significant reduction in RAM usage. While the raw parameter count is 5B and 8B, this innovation allows you to run larger models on mobile devices or live-stream from the cloud, with a memory overhead comparable to a 2B and 4B model, meaning the models can operate with a dynamic memory footprint of just 2GB and 3GB."

Additional technical features of Gemma 3n include optimisations that allow the model to respond approximately 1.5 times faster on mobile devices compared to previous Gemma versions, with improved output quality and lower memory usage. The announcement highlights innovations such as Per-Layer Embeddings, KVC sharing, and advanced activation quantisation as contributing to these improvements.

The model also supports what the company calls "many-in-1 flexibility." Utilising a 4B active memory footprint, Gemma 3n incorporates a nested 2B active memory footprint submodel through the MatFormer training process. This design allows developers to balance performance and quality needs without operating separate models, composing submodels on the fly to match a specific application's requirements. Upcoming technical documentation is expected to elaborate on this mix-and-match capability.
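To make the memory figures above concrete, here is a minimal back-of-envelope sketch, not taken from the announcement, of how an effective parameter count translates into weight memory; the weight precisions used are illustrative assumptions rather than published Gemma 3n figures.

```python
# Minimal sketch with illustrative assumptions: approximate memory needed just to
# hold a model's weights, for a few effective parameter counts and precisions.
# It ignores activations and the KV cache; the precisions are not published figures.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB for a given parameter count and precision."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# Raw (5B, 8B) versus effective (2B, 4B) parameter counts, as described in the release.
for params in (2.0, 4.0, 5.0, 8.0):
    row = ", ".join(
        f"{bits}-bit: ~{weight_memory_gb(params, bits):.1f} GB" for bits in (16, 8, 4)
    )
    print(f"{params:.0f}B params -> {row}")
```

The point of the arithmetic is simply that keeping fewer parameters resident (as Per-Layer Embeddings is described as doing) and quantising weights are the two levers that bring a multi-billion-parameter model into a 2GB to 3GB dynamic memory budget.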
Security and privacy are also prioritised. The development team states that local execution "enables features that respect user privacy and function reliably, even without an internet connection."

Gemma 3n brings enhanced multimodal comprehension, supporting the integration and understanding of audio, text, images, and video. Its audio functionality supports high-quality automatic speech recognition and multilingual translation. Furthermore, the model can accept inputs in multiple modalities simultaneously, enabling the parsing of complex multimodal interactions. The company describes the expansion in audio capabilities: "Its audio capabilities enable the model to perform high-quality Automatic Speech Recognition (transcription) and Translation (speech to translated text). Additionally, the model accepts interleaved inputs across modalities, enabling understanding of complex multimodal interactions." A public release of these features is planned for the near future.

Gemma 3n features improved performance in multiple languages, with notable gains in Japanese, German, Korean, Spanish, and French. This is reflected in benchmark scores such as a 50.1% result on WMT24++ (ChrF), a multilingual evaluation metric.

The team behind Gemma 3n views the model as a catalyst for "intelligent, on-the-go applications." They note that developers will be able to "build live, interactive experiences that understand and respond to real-time visual and auditory cues from the user's environment," and design advanced applications capable of real-time speech transcription, translation, and multimodal contextual text generation, all executed privately on the device.

The company also outlined its commitment to responsible development. "Our commitment to responsible AI development is paramount. Gemma 3n, like all Gemma models, underwent rigorous safety evaluations, data governance, and fine-tuning alignment with our safety policies. We approach open models with careful risk assessment, continually refining our practices as the AI landscape evolves."

Developers have two initial routes for experimentation: exploring Gemma 3n via a cloud interface in Google AI Studio using browser-based access, or integrating the model locally through Google AI Edge's suite of developer tools. These options enable immediate testing of Gemma 3n's text and image processing capabilities. The announcement states: "Gemma 3n marks the next step in democratizing access to cutting-edge, efficient AI. We're incredibly excited to see what you'll build as we make this technology progressively available, starting with today's preview."
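For the cloud route mentioned above, the following is a minimal sketch of calling a Gemma-family model from Python via the Google Generative AI SDK. It is an illustration under assumptions, not part of the announcement: the model identifier is a hypothetical placeholder, and whether Gemma 3n is exposed under that name in Google AI Studio should be checked against the model list there.

```python
# Minimal sketch, assuming the google-generativeai package and a Google AI Studio
# API key in the GOOGLE_API_KEY environment variable. The model identifier below
# is a hypothetical placeholder for a Gemma 3n preview model, not a confirmed name.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemma-3n-e4b-it")  # hypothetical model id
response = model.generate_content("In one sentence, what is on-device AI?")
print(response.text)
```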

Qualcomm, Xiaomi ink multi-year pact to integrate on-device AI in edge devices

Time of India

21-05-2025

  • Business
  • Time of India

NEW DELHI: Qualcomm Technologies and Xiaomi have expanded their partnership with a new multi-year agreement to integrate on-device artificial intelligence (AI) in the latter's smartphones, automobiles, AR/VR glasses, wearables, tablets, and other edge devices. Later this year, the Chinese brand will be one of the first to adopt the next-generation premium Snapdragon 8-series.

On-device AI refers to running AI models locally on the device, rather than relying on cloud or internet-driven processing. On-device processing is seen as crucial for enhanced privacy and quicker results with minimal latency for applications such as facial recognition, personalised recommendations, genAI, and more.

'We look forward to continuing the next 15 years of our collaboration and leveraging Qualcomm's cutting-edge Snapdragon platforms and technologies to deliver even more innovative and high-quality products to our customers worldwide,' said Lei Jun, CEO of Xiaomi, in a joint statement Wednesday.

'We value the relationship we have built resulting from 15 years of close collaboration and are excited to continue this journey for many years to come, with Snapdragon platforms powering Xiaomi's premium smartphones,' Cristiano Amon, president and CEO of Qualcomm Incorporated, said in the statement. 'We look forward to expanding our work together in automotive, smart home products, wearables, AR/VR glasses, tablets, and more,' Amon added.

Xiaomi's various products, including the Mi 15 series, the SU7 electric vehicle (EV), the Xiaomi Buds 5 Pro, the Xiaomi Watch 2 Pro, and others, are powered by Qualcomm chipsets and modems.

XREAL Unveils 'Project Aura' at Google I/O -- An Optical See-Through XR Device for Android XR

Cision Canada

20-05-2025

  • Cision Canada

MOUNTAIN VIEW, Calif., May 20, 2025 /CNW/ -- XREAL today announced a strategic partnership with Google to expand the ecosystem of spatial computing devices built on Android XR. As part of this collaboration, XREAL unveiled Project Aura at the Google I/O developer conference — XREAL's next-generation extended reality (XR) device designed specifically for the Android XR platform.

Project Aura is the second official device announced for Android XR and marks a major milestone for the platform: the introduction of an optical see-through (OST) XR device. A lightweight, tethered, cinematic, Gemini AI-powered device, Project Aura brings a large field-of-view experience to the Android XR family — setting a new standard for immersive, wearable computing.

This collaboration also includes Qualcomm Technologies, Inc., bringing together leading innovation across hardware, silicon, and software to build the next wave of XR experiences. Project Aura builds on XREAL's proven track record in lightweight XR hardware, the Android XR software stack, and Qualcomm Technologies' Snapdragon® XR chipsets optimized for spatial computing.

"Google is thrilled to welcome XREAL to the Android XR family and to build great XR experiences on Project Aura," said Shahram Izadi, General Manager and Vice President of XR at Google. "Android XR is the first Android platform built in the Gemini era, and it will support a rich ecosystem of immersive devices, both video see-through (VST) and optical see-through (OST). By combining our platform with XREAL's leadership in portable XR hardware, we're expanding spatial experiences to OST form factors that are truly intuitive and accessible, representing a pivotal moment in our ecosystem."

"At XREAL, we've always pushed the boundaries of what XR hardware can do — combining performance, comfort, and design into something people can wear every day," said Chi Xu, Co-founder and CEO of XREAL. "Partnering with Google on Android XR takes this vision to the next level. Project Aura reflects the power of this collaboration — merging a robust platform with advanced chipsets and our expertise in optical systems. We believe this is a breakthrough moment for real-world XR."

"Qualcomm Technologies is excited to have Snapdragon play a significant role in XREAL's new Android XR solution," said Ziad Asghar, Senior Vice President and General Manager of XR at Qualcomm Technologies, Inc. "This collaboration marks a significant step forward in the expansion of the Android XR ecosystem. Working with XREAL, Snapdragon allows amazing immersive experiences to come to life in a unique optical see-through product. We are thrilled to see immersive experiences coming to more verticals, opening up new possibilities for both consumers and developers."

The unveiling of Project Aura marks a call to action for developers. XREAL, Google, and Qualcomm Technologies invite the developer community to begin envisioning new applications and use cases for this next generation of XR. Developers already building for headsets on the platform will be able to easily bring their apps to Project Aura. While Project Aura makes its public debut today, further details will be announced at Augmented World Expo (AWE) in June 2025, and later this year. To learn more about Project Aura and stay updated, please visit:

XREAL is a global leader in augmented reality, creating lightweight AR glasses and spatial computing platforms that blend the digital and physical worlds. Known for its XREAL Air series and Nebula interface, the company is expanding into enterprise and AI-powered experiences — backed by collaborations with Google, Qualcomm Technologies, and a global developer ecosystem.

Snapdragon is a trademark or registered trademark of Qualcomm Incorporated. Snapdragon is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.

Qualcomm and e& join forces to advance UAE's 5G and edge AI

Broadcast Pro

20-05-2025

  • Business
  • Broadcast Pro

Additionally, Qualcomm Technologies plans to leverage its new Qualcomm Engineering Centre in Abu Dhabi to support this initiative.

Qualcomm Technologies and UAE-based technology group e& have announced a strategic collaboration aimed at accelerating digital transformation across key sectors in the UAE through the development and commercialisation of advanced 5G, edge computing and AI technologies. The partnership focuses on leveraging edge computing—processing data closer to its source—to enhance speed, reduce latency, and improve data security. By bringing computing power to the edge of the network, the collaboration seeks to enable smarter, faster, and more secure solutions for the government, enterprise, and industrial sectors.

As part of the initiative, Qualcomm and e& will co-develop industrial and enterprise-grade 5G edge AI gateways to bring artificial intelligence capabilities directly to the network edge. These technologies will support core verticals in boosting operational efficiency and connectivity. In addition, the collaboration will include the rollout of edge AI devices—such as PCs and extended reality (XR) systems powered by Qualcomm's Snapdragon platforms—which will integrate large language models to deliver on-device generative AI and secure AI inferencing across enterprise and government applications.

Smart mobility is another major focus area, with the two companies working on solutions aimed at enhancing road safety and improving user experiences while preparing the UAE's transportation infrastructure for the future. The partnership will also support connected industrial IoT solutions to bolster efficiency in sectors like manufacturing and logistics.

To further this effort, Qualcomm Technologies plans to utilise its recently established Qualcomm Engineering Centre in Abu Dhabi. The centre will play a key role in evaluating new use cases and accelerating the adoption of 5G and edge AI across critical areas such as energy, retail, smart mobility, and industrial automation.

Speaking about the collaboration, Cristiano Amon, President and CEO, Qualcomm Incorporated, said: 'The cooperation between Qualcomm Technologies and e& will drive significant collaboration across some of the most transformative technology areas, including 5G, next-generation computing and intelligence at the edge. We look forward to working with e& to accelerate innovation and technology advancement across its ecosystem of enterprise and government customers in the UAE and beyond.'

Hatem Dowidar, Group Chief Executive Officer, e&, added: 'e& and Qualcomm Technologies have a history of cooperation, and this new agreement will help drive the digital transformation of enterprises, significantly enhancing the UAE's role in the global technology landscape. Together, we're bringing powerful AI to the edge – from smart industrial gateways and wearables to mobility and infrastructure — enabling faster, more reliable, and secure experiences across sectors like manufacturing, transport, and government. These innovations will drive real-time intelligence, operational efficiency, and future-ready public services across the UAE.'
