
Apple Boosts On-Device AI With Live Translation, Screen Smarts, and Dev Access
During Apple's annual WWDC 2025 keynote live stream on Monday, the company outlined the next phase of Apple Intelligence, adding real-time translation, expanded visual recognition tools, and on-device AI access for developers. The features span the entire ecosystem, Apple said, including iPhone, iPad, Mac, Apple Watch, and Vision Pro, and will launch with the fall software updates.
This marks a shift from Apple's earlier emphasis on consumer-facing AI features toward deeper, system-level integration. Apple Intelligence will power Live Translation in calls and messages, identify (and interact with) content on screen, and push further into the Image Playground and Genmoji features. For developers, Apple is opening up access to its on-device foundation models, making it easier to build private, AI-enhanced experiences across apps without sending data to the cloud. This echoes Apple's long-standing stance on privacy and hardware, which contrasts with more cloud-based AI providers like OpenAI and Google.
"Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems," said Craig Federighi, Apple's senior vice president of Software Engineering, during the keynote. "We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create." On Device Live Translation A new Apple Intelligence feature enables real-time translation in Messages, letting users chat across languages without leaving the app.
On-Device Live Translation
A new Apple Intelligence feature enables real-time translation in Messages, letting users chat across languages without leaving the app.
Apple also promises that Apple Intelligence will come to eight more languages by the end of the year: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (Traditional), and Vietnamese.
The expanded language support will no doubt help the Live Translation feature, which is integrated into the Messages, FaceTime, and Phone apps and runs entirely on Apple's on-device models. Your conversations stay yours, the company promises, and even Apple cannot read them.
Messages will translate your text on the fly, letting you communicate with speakers of other languages in real time. In FaceTime, translated live captions let you read along with what your conversation partner is saying, while the Phone app turns those captions into audio translations throughout the call.
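Apple has not said whether Live Translation itself will be exposed to third-party apps. As a loose illustration of what on-device translation already looks like in code, here is a hedged SwiftUI sketch built on Apple's existing Translation framework; it shows the general approach only, is not a confirmed Live Translation API, and the exact signatures should be treated as assumptions.

```swift
import SwiftUI
import Translation

// Sketch only: uses the existing Translation framework to illustrate
// on-device translation in general. Live Translation's own API, if any,
// has not been documented, so treat names and signatures as assumptions.
struct TranslatedMessageView: View {
    let incomingText: String
    @State private var translatedText = ""
    @State private var configuration: TranslationSession.Configuration? =
        .init(source: Locale.Language(identifier: "es"),
              target: Locale.Language(identifier: "en"))

    var body: some View {
        Text(translatedText.isEmpty ? incomingText : translatedText)
            // The task runs when the configuration is set and hands back
            // a session bound to the requested language pair.
            .translationTask(configuration) { session in
                do {
                    let response = try await session.translate(incomingText)
                    translatedText = response.targetText
                } catch {
                    // Fall back to the original text if translation fails.
                    translatedText = incomingText
                }
            }
    }
}
```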
Genmoji and Image Playground Updates
Image Playground, powered by Apple Intelligence, lets users generate styled visuals on-device, including oil painting effects, using prompts and preset suggestions.
If you're into messing around with AI image generation, the Genmoji and Image Playground improvements should be right up your alley.
In addition to creating your own Genmoji from text descriptions, you can now mix emoji together and combine them with descriptions. You'll also be able to change expressions and details like hairstyles in Image Playground and Genmoji when you base your creations on family and friends.
Image Playground gets more ChatGPT integrations, letting you create images in new styles like oil painting or vector art with a tap or a text description. This, of course, is not processed on-device; Apple will send the images and descriptions to ChatGPT, which will then return the image to you. You'll have to opt in for that to happen, as well.
Visual Intelligence Sees Your Screen
Finally, you can have Apple Intelligence interact with the things you see on your screen. This new feature will let you take a screenshot and ask ChatGPT questions about what you're seeing. In true corporate style, the demo showed someone taking a screenshot of a jacket they wanted to buy, then highlighting a lamp they also wanted to buy. Whether the feature will do more than find you places to buy things remains to be seen.
One interesting use case, though, is the ability to take a screenshot of an event poster and have Visual Intelligence add it to your calendar. That sounds pretty great, to be honest. Add that concert to my calendar, Apple!
Workout Buddy Brings Apple Intelligence to Apple Watch
A user runs down a flight of outdoor stairs, tracking fitness progress with Apple Watch's Move ring feature and Workout Buddy.
Fitness fan? Workout Buddy on Apple Watch will take your fitness and workout data and generate "personalized, motivational insights" during your session. Apple wrote:
"To offer meaningful inspiration in real time, Workout Buddy analyzes data from a user's current workout along with their fitness history, based on data like heart rate, pace, distance, Activity rings, personal fitness milestones, and more. A new text-to-speech model then translates insights into a dynamic generative voice built using voice data from Fitness+ trainers, so it has the right energy, style, and tone for a workout. Workout Buddy processes this data privately and securely with Apple Intelligence."
Workout Buddy needs an Apple Watch, Bluetooth headphones, and an Apple Intelligence-capable iPhone nearby.
Originally published on Tech Times
