Latest news with #onDeviceAI
Yahoo
21-05-2025
- Business
- Yahoo
SK hynix Develops UFS 4.1 Solution Based on 321-High NAND
- Optimized for on-device AI with best-in-class sequential read performance and low power requirements
- Thickness reduced by 15% to fit into ultra-slim flagship smartphones
- Portfolio with the world's highest 321-layer product to strengthen SK hynix's leadership as a full-stack AI memory provider

SEOUL, South Korea, May 21, 2025 /PRNewswire/ -- SK hynix Inc. (or "the company") announced today that it has developed a UFS 4.1 solution product adopting the world's highest 321-layer 1Tb triple-level-cell (TLC) 4D NAND flash for mobile applications.

The development comes amid increasing requirements for high performance and low power in NAND solution products to ensure stable operation of on-device AI. The company expects the UFS 4.1 product, optimized for AI workloads, to help strengthen its memory leadership in the flagship smartphone market.

With growing demand for on-device AI making the balance between a device's computation capabilities and battery efficiency more important, the mobile market now requires both thinness and low power from mobile devices. The latest product delivers a 7% improvement in power efficiency compared with the previous generation based on 238-high NAND, and a slimmer 0.85mm thickness, down from 1mm before, to fit into an ultra-slim smartphone.

The product also supports a data transfer speed of 4,300MB/s, the fastest sequential read* for fourth-generation UFS, while providing best-in-class performance by also improving random read and write speeds**, critical for multitasking, by 15% and 40%, respectively. Immediate provision of the data required for on-device AI, along with faster application launch and response times, is expected to enhance the user experience.

* Sequential Read/Write: speed of reading and writing the data of a file sequentially
** Random Read/Write: speed of reading and writing the data of dispersed files

SK hynix plans to win customer qualification within the year and ship in volume from the first quarter of next year. The product will be offered in two capacities: 512GB and 1TB.

Ahn Hyun, President and Chief Development Officer, said that SK hynix plans to complete development of 321-high 4D NAND-based SSDs for both consumers and data centers within the year. "We are on track to expand our position as a full-stack AI memory provider in the NAND space by building a product portfolio with an AI technological edge."

About SK hynix Inc.
SK hynix Inc., headquartered in Korea, is the world's top-tier semiconductor supplier offering Dynamic Random Access Memory chips ("DRAM") and flash memory chips ("NAND flash") for a wide range of distinguished customers globally. The Company's shares are traded on the Korea Exchange, and its Global Depository Shares are listed on the Luxembourg Stock Exchange.

SOURCE SK hynix Inc.


The Verge
19-05-2025
- The Verge
Microsoft is opening its on-device AI models up to web apps in Edge
Web developers will be able to start leveraging on-device AI in Microsoft's Edge browser soon, using new APIs that can give their web apps access to Microsoft's Phi-4-mini model, the company announced at its Build conference today. And Microsoft says the APIs will be cross-platform, so it sounds like they will work with the Edge browser on macOS as well.

The 3.8-billion-parameter Phi-4-mini is Microsoft's latest small, on-device model, rolled out in February alongside the company's larger Phi-4. With the new APIs, web developers will be able to add prompt boxes and offer writing assistance tools for text generation, summarizing, and editing. And within the next couple of months, Microsoft says it will also release a text translation API.

Microsoft is putting these 'experimental' APIs forth as potential web standards, and in addition to being cross-platform, it says they'll also work with other AI models. Developers can start trialing them in the Edge Canary and Dev channels now, the company says.

Google offers similar APIs for its Chrome browser. With them, developers can use Chrome's built-in models to offer things like text translation, prompt boxes for text and image generation, and calendar event creation based on webpage content.


Android Authority
16-05-2025
- Business
- Android Authority
Google is about to unleash Gemini Nano's power for third-party Android apps
TL;DR
- Google is expanding access to Gemini Nano, its on-device AI model, through new ML Kit GenAI APIs.
- These new APIs, likely to be announced at I/O 2025, will enable developers to easily implement features like text summarization, proofreading, rewriting, and image description generation in their apps.
- Unlike the experimental AI Edge SDK, ML Kit's GenAI APIs will be in beta, support image input, and be available on a wider range of Android devices beyond the Pixel 9 series.

Generative AI technology is changing how we communicate and create content online. Many people ask AI chatbots like Google Gemini to perform tasks such as summarizing an article, proofreading an email, or rewriting a message. However, some people are wary of using these AI chatbots, especially when the tasks involve highly personal or sensitive information. To address these privacy concerns, Google offers Gemini Nano, a smaller, more optimized version of its AI model that runs directly on the device instead of on a cloud server. While access to Gemini Nano has so far been limited to a single device line and text-only input, Google will soon significantly expand its availability and introduce image input support.

Late last month, Google published the session list for I/O 2025, which includes a session titled 'Gemini Nano on Android: Building with on-device gen AI.' The session's description states it will 'introduce a new set of generative AI APIs that harness the power of Gemini Nano. These new APIs make it easy to implement use cases to summarize, proofread, and rewrite text, as well as to generate image descriptions.'

In October, Google opened up experimental access to Gemini Nano via the AI Edge SDK, allowing third-party developers to experiment with text-to-text prompts on the Pixel 9 series. The AI Edge SDK enables text-based features like rephrasing, smart replies, proofreading, and summarization, but it notably does not include support for generating image descriptions, a feature Google highlighted for the upcoming I/O session. Thus, the 'new set of generative AI APIs' mentioned in the session's description likely refers either to something entirely different from the AI Edge SDK or to a newer version of it. Fortunately, we don't have to wait until next week to find out.

Earlier this week, Google quietly published documentation on ML Kit's new GenAI APIs. ML Kit is an SDK that allows developers to leverage machine learning capabilities in their apps without needing to understand how the underlying models work. The new GenAI APIs allow developers to 'harness the power of Gemini Nano to deliver out-of-the-box performance for common tasks through a simplified, high-level interface.' Like the AI Edge SDK, it is 'built on AICore,' enabling 'on-device execution of AI foundation models like Gemini Nano, enhancing app functionality and user privacy by keeping data processing local.'

In other words, ML Kit's GenAI APIs make it simple for developers to use Gemini Nano for various features in their apps privately and with high performance. These features currently include summarizing, proofreading, or rewriting text, as well as generating image descriptions. All four of these features match what's mentioned in the I/O session's description, suggesting that Google intends to formally announce ML Kit's GenAI APIs next week.
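To make that 'simplified, high-level interface' concrete, here is a minimal Kotlin sketch of what an on-device summarization call through these GenAI APIs might look like. The package, class, and method names (SummarizerOptions, Summarization.getClient, runInference) are illustrative assumptions modeled on the features described above, not a confirmed API surface; the public ML Kit GenAI documentation is the authoritative reference.

```kotlin
import android.content.Context
// NOTE: the GenAI package and class names below are assumptions for
// illustration only; check Google's public ML Kit GenAI documentation.
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions
import kotlinx.coroutines.tasks.await

// Hypothetical sketch: summarize a chat conversation on-device with Gemini Nano.
suspend fun summarizeConversation(context: Context, conversation: String): String {
    // Configure the summarizer for conversation input and a bulleted English summary.
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.CONVERSATION)
        .setOutputType(SummarizerOptions.OutputType.THREE_BULLETS)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    // Inference is executed locally through AICore, so the conversation text
    // never leaves the device.
    val request = SummarizationRequest.builder(conversation).build()
    return summarizer.runInference(request).await().summary
}
```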
Here's a summary of all the features offered by ML Kit's GenAI APIs:

- Summarization: Summarize articles or chat conversations as a bulleted list. Generates up to three bullet points. Languages: English, Japanese, and Korean.
- Proofreading: Polish short content by refining grammar and fixing spelling errors. Languages: English, Japanese, German, French, Italian, Spanish, and Korean.
- Rewrite: Rewrite short chat messages in different tones or styles. Styles: Elaborate, Emojify, Shorten, Friendly, Professional, Rephrase. Languages: English, Japanese, German, French, Italian, Spanish, and Korean.
- Image description: Generate a short description of a given image. Languages: English.

Compared to the existing AI Edge SDK, ML Kit's GenAI APIs will be offered in 'beta' instead of 'experimental access.' This 'beta' designation could mean Google will allow apps to use the new GenAI APIs in production. Currently, developers cannot release apps using the AI Edge SDK, meaning no third-party apps can leverage Gemini Nano at this time.

Another difference is that the AI Edge SDK is limited to text input, whereas ML Kit's GenAI APIs support images. This image support enables the image description feature, allowing apps to generate short descriptions of any given image.

The biggest difference between the current version of the AI Edge SDK and ML Kit's GenAI APIs, though, is device support. While the AI Edge SDK only supports the Google Pixel 9 series, ML Kit's GenAI APIs can be used on any Android phone that supports the multimodal Gemini Nano model. This includes devices like the HONOR Magic 7, Motorola Razr 60 Ultra, OnePlus 13, Samsung Galaxy S25, Xiaomi 15, and more.

Developers who are interested in trying out Gemini Nano in their apps can get started by reading the public documentation for the ML Kit GenAI APIs.
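Because the multimodal Gemini Nano model is only present on certain phones, an app would likely need to check feature availability, and possibly trigger a model download, before generating an image description. The Kotlin sketch below illustrates that flow under stated assumptions: ImageDescription, ImageDescriberOptions, FeatureStatus, and the download/inference calls are hypothetical names, not the documented API.

```kotlin
import android.content.Context
import android.graphics.Bitmap
// NOTE: all GenAI identifiers below are illustrative assumptions; consult the
// official ML Kit GenAI documentation for the real package and class names.
import com.google.mlkit.genai.common.FeatureStatus
import com.google.mlkit.genai.imagedescription.ImageDescription
import com.google.mlkit.genai.imagedescription.ImageDescriberOptions
import com.google.mlkit.genai.imagedescription.ImageDescriptionRequest
import kotlinx.coroutines.tasks.await

// Hypothetical sketch: generate a short English description of an image,
// handling phones where the multimodal Gemini Nano model must be downloaded
// first or is not supported at all.
suspend fun describeImage(context: Context, image: Bitmap): String? {
    val describer = ImageDescription.getClient(
        ImageDescriberOptions.builder(context).build()
    )

    when (describer.checkFeatureStatus().await()) {
        FeatureStatus.UNAVAILABLE -> return null            // unsupported device: caller falls back
        FeatureStatus.DOWNLOADABLE -> describer.downloadFeature().await()
        else -> Unit                                        // assume the model is ready to use
    }

    // Inference runs on-device via AICore; per the feature list above, English
    // is the only supported language for image descriptions at launch.
    val request = ImageDescriptionRequest.builder(image).build()
    return describer.runInference(request).await().description
}
```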