
Apple Intelligence Gets New Capabilities Across Devices At WWDC 2025
Cupertino: Apple on Monday announced new Apple Intelligence features across iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro, unlocking new capabilities for users.
Developers can now access the Apple Intelligence on-device foundation model to power private, intelligent experiences within their apps, the company said during its 'WWDC25' event here.
In Messages, Live Translation can automatically translate messages. And when on a phone call, the translation is spoken aloud throughout the conversation. With Live Translation, users can communicate across languages whether they're typing in Messages, or speaking during FaceTime or Phone conversations.
Additionally, Shortcuts can now tap into Apple Intelligence directly, and developers will be able to access the on-device large language model at the core of Apple Intelligence, giving them direct access to intelligence that is powerful, fast, built with privacy, and available even when users are offline.
These Apple Intelligence features are available for testing starting today, and will be available to users with supported devices set to a supported language this fall.
'The models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems,' said Craig Federighi, Apple's senior vice president of Software Engineering.
'We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create,' he added.
Apple Intelligence features will be coming to eight more languages by the end of the year: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese.
Genmoji and Image Playground provide users with even more ways to express themselves. In addition to turning a text description into a Genmoji, users can now mix together emoji and combine them with descriptions to create something new.
When users make images inspired by family and friends using Genmoji and Image Playground, they have the ability to change expressions or adjust personal attributes, like hairstyle, to match their friend's latest look, said Apple.
In Image Playground, users can tap into brand-new styles with ChatGPT, like an oil painting style or vector art. For moments when users have a specific idea in mind, they can tap Any Style and describe what they want. Image Playground sends a user's description or photo to ChatGPT and creates a unique image. Users are always in control, and nothing is shared with ChatGPT without their permission.
Workout Buddy is a first-of-its-kind workout experience on Apple Watch with Apple Intelligence that incorporates a user's workout data and fitness history to generate personalised, motivational insights during their session.

Related Articles


Hindustan Times
13 minutes ago
Meta AI on WhatsApp to support image analysis like ChatGPT and Gemini for free
More than a year after WhatsApp introduced Meta AI, which allows users to interact with AI more intuitively, the platform is now stepping up its game. In a significant move towards deeper AI integration, WhatsApp is reportedly working on a feature that will allow Meta AI to analyse user-shared images and documents. Based on the input, Meta AI will offer instant insights, like verifying authenticity or describing visual content.

What is this feature? In a recent post on X, WABetaInfo revealed that users will soon be able to forward images or files directly to Meta AI and ask questions about them. This functionality, reminiscent of what's available on ChatGPT (paid version) or Gemini, expands Meta AI's utility beyond text prompts.

But what's the catch? WhatsApp plans to offer these features for free, potentially making high-end AI capabilities more accessible to millions.

How does this feature impact users' privacy? While there have always been concerns about AI storing private information and data, Meta clarifies that the AI can only access content that users explicitly share. However, the feature's usage terms include a clause stating that shared inputs may be used to improve its AI systems. This might lead to some hesitation among privacy-conscious users despite the feature's utility.

Feature availability and accessibility: The image analysis feature is currently being rolled out to select beta testers on both iOS and Android. According to WABetaInfo, users on WhatsApp beta for iOS version 25.17.10.78 and Android version 2.25.18.14 may be able to try it out, depending on their eligibility and update history. Some users might receive access by installing certain previous updates. Since the feature is still in the testing phase, a wider release is expected once Meta completes internal testing and gathers user feedback.

What is the new AI tab? Alongside image analysis, WhatsApp is also gearing up to roll out a dedicated 'AI' tab.
This feature will enable users to build their own custom AI chatbots without any coding. Through a guided, step-by-step interface, users will be able to create bots tailored for personal use or simple business functions. With these developments, Meta is clearly aiming to make AI more approachable, powerful, and personal, right from within the world's most widely used messaging app.


The Hindu
20 minutes ago
F1 movie: Apple unveils world's first haptic movie trailer for Joseph Kosinski's 'F1'
Apple has launched a groundbreaking promotional feature for its upcoming racing film F1, introducing the world's first haptic-enabled movie trailer. The immersive trailer, which can be viewed on iPhones running iOS 18.4 via the Apple TV app, gives users a tactile experience of key moments from the film.

The trailer uses the iPhone's Taptic Engine to synchronise vibrations with the on-screen action, allowing viewers to feel events like roaring engines and tire skids, and even subtle moments like the click of a seatbelt or a bouncing ball. It mirrors the multi-sensory effects of 4DX theaters but delivers the experience through a smartphone for the first time.

"Experience the new @F1Movie trailer on iPhone in a way only Apple can deliver," Tim Cook (@tim_cook) posted on June 11, 2025.

Starring Brad Pitt as Sonny Hayes, F1 follows a former Formula One driver who is called out of retirement to mentor a rising talent, played by Damson Idris. Directed by Joseph Kosinski, the film is set for theatrical release on June 27 and will later be available on Apple TV.

To access the haptic trailer, users must update their iPhones to iOS 18.4, open the Apple TV app, and select the specially marked F1 trailer. Viewers will feel dynamic feedback throughout the trailer, such as acceleration bursts and tension-filled crashes. This marks a first in cinematic promotion, bringing haptic technology that's commonly used in gaming into the world of film.


News18
33 minutes ago
OpenAI's Sam Altman Tells Us How Much Water Is Used When You Talk To ChatGPT
OpenAI and other tech companies need resources like water and power to keep their AI systems running and ChatGPT answering your questions.

OpenAI and other AI giants like Google have people excited about the technology, but there are concerns about how much water and electricity is needed to keep these systems running. ChatGPT is one of the most popular chatbots on the market, and people have been wondering how much OpenAI needs to invest in hardware and other resources to make it faster and more reliable. Now Sam Altman has come out publicly to share some stats on how much energy and water the AI chatbot needs to respond to your queries.

ChatGPT Needs Water And Energy, But How Much? Altman shared the details in a recent blog post, which also lays out how he expects AI to evolve and, hopefully, consume fewer resources in the near future. Altman claims that one ChatGPT query needs around 0.34 watt-hours of power, roughly what a light bulb consumes in a few minutes. That might not sound like much for a single query, but multiplied by the billions of queries the chatbot handles daily, it adds up to heavy electricity usage.

He makes a similar point about the water used to generate responses. Altman puts the figure at about 0.000085 gallons of water per query, roughly one-fifteenth of a teaspoon, which again seems harmless, but in the broader scheme of things amounts to a lot of gallons. Having said that, none of these figures have been independently verified, so take them with a pinch of salt. In his defence, Altman expects the cost of running AI to come down gradually, as with any other technology shift, but that is likely to happen only in the long run, and the concern is that by then the taps might run dry for general use.

First Published: June 12, 2025, 11:27 IST
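Altman's per-query figures are easy to scale up. A minimal back-of-envelope sketch in Python, assuming (hypothetically, since neither the article nor Altman's post gives an exact count) one billion queries per day; the per-query constants are from Altman's blog post:

```python
# Rough scale of ChatGPT's resource use from Altman's per-query figures.
# ASSUMPTION (not from the article): ~1 billion queries per day.
QUERIES_PER_DAY = 1_000_000_000

ENERGY_PER_QUERY_WH = 0.34      # watt-hours per query, per Altman's blog post
WATER_PER_QUERY_GAL = 0.000085  # gallons per query, about 1/15 of a teaspoon

# Aggregate over a day: convert watt-hours to megawatt-hours for readability.
daily_energy_mwh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH / 1_000_000
daily_water_gal = QUERIES_PER_DAY * WATER_PER_QUERY_GAL

print(f"{daily_energy_mwh:.0f} MWh of electricity per day")   # 340 MWh
print(f"{daily_water_gal:,.0f} gallons of water per day")     # 85,000 gallons
```

At that assumed volume, the tiny per-query numbers translate to roughly 340 MWh of electricity and 85,000 gallons of water every day, which is why the aggregate footprint draws scrutiny even when a single query looks negligible.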