
Here is everything Apple announced at its annual developer conference
June 9 (Reuters) – Apple unveiled upgrades to operating systems across its devices on Monday, including overhauled visual elements, a fresh naming system for software updates and new features in its Apple Intelligence suite.
At its annual Worldwide Developers Conference, the company also said it would open up the underlying technology it uses for Apple Intelligence to developers.
These are some of the key announcements from the event:
'LIQUID GLASS'
Apple is rolling out a new 'Liquid Glass' design language across its software, bringing sleek translucence and a glass-like shine to app interfaces.
Inspired by visionOS on the Vision Pro mixed-reality headset, the design adapts to light and dark modes and reacts dynamically to movement using real-time rendering.
The new design will be implemented in buttons, sliders, media controls and larger elements such as tab bars and sidebars, along with matching redesigned toolbars and navigation.
Apple is releasing updated Application Programming Interfaces so that developers can begin adapting their apps ahead of the new design rollout later this year.
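As a rough illustration of what adoption might look like for developers, here is a minimal SwiftUI sketch. The `glassEffect` modifier and its parameters follow the API Apple previewed for the new design and should be read as assumptions; exact names and availability may differ in the shipping SDK.

```swift
import SwiftUI

// Minimal sketch of adopting the translucent "Liquid Glass" material in SwiftUI.
// Assumes the `glassEffect` modifier previewed for iOS 26; exact names and
// signatures may change before the SDK ships.
struct PlayerControls: View {
    var body: some View {
        HStack(spacing: 16) {
            Button(action: { /* previous track */ }) {
                Image(systemName: "backward.fill")
            }
            Button(action: { /* play or pause */ }) {
                Image(systemName: "play.fill")
            }
            Button(action: { /* next track */ }) {
                Image(systemName: "forward.fill")
            }
        }
        .padding()
        // Apply the light-reactive, glass-like material to the control group.
        .glassEffect(.regular, in: Capsule())
    }
}
```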
OPERATING SYSTEMS
This year's major iOS release would have been called iOS 19 under the old numbering, following iOS 18.
Apple is instead switching to a year-based naming convention: versions will carry the year after their release, much as carmakers label model years, so this year's update arrives as iOS 26.
Several parts of the operating systems are getting a major visual overhaul as part of the redesign.
The Phone app now includes call screening, allowing it to answer calls or wait on hold for you. The Messages app is also getting updates that include customizable chat backgrounds.
Apple also said it would add generative AI to its Xcode coding tools to help developers write code, test it and resolve errors. The company said Xcode would also support third-party models such as OpenAI's ChatGPT.
APPLE INTELLIGENCE
New additions to the operating system include Live Translation, which uses on-device AI models to translate conversations in real time across text messages, phone calls and FaceTime.
Apple Pay is also getting Apple Intelligence integration, enabling it to track orders even for purchases made outside Apple Pay.
Meanwhile, Image Playground is getting a boost with a new feature that allows users to generate images with the help of OpenAI's ChatGPT.
Apple will now allow developers to tap into its on-device foundation model for their own apps. Through the new Foundation Models framework, developers can build intelligent, privacy-focused experiences that also work offline.
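To give a sense of how a call into the framework could look, here is a minimal sketch. The `LanguageModelSession` type, the `instructions` parameter and the `respond(to:)` method follow the API names Apple showed at the conference and are assumptions; details may change before the framework ships.

```swift
import FoundationModels

// Minimal sketch of calling Apple's on-device model through the new
// Foundation Models framework, as previewed at WWDC. API names such as
// LanguageModelSession and respond(to:) follow the conference sessions
// and may change before release.
func summarizeNotes(_ notes: String) async throws -> String {
    // The session runs entirely on device, so the prompt never leaves the phone.
    let session = LanguageModelSession(
        instructions: "Summarize the user's notes in two short sentences."
    )
    let response = try await session.respond(to: notes)
    return response.content
}
```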
VISUAL INTELLIGENCE
Apple will also let users learn more about what's on their iPhone screens via Visual Intelligence.
Users can search across Google, Etsy and other supported apps to find visually similar images or products. If the tool detects that you're viewing an event, iOS 26 will suggest adding it to your calendar.
This feature will be accessible using the same button combination used to take a screenshot on an iPhone.




