Here is everything Apple announced at its annual developer conference

Observer | 10-06-2025
Apple unveiled upgrades to operating systems across its devices on Monday, including overhauled visual elements, a fresh naming system for software updates and new features in its Apple Intelligence suite.
At its annual Worldwide Developers Conference, the company also said it would open up the underlying technology it uses for Apple Intelligence to developers. These are some of the key announcements from the event:
LIQUID GLASS
Apple is rolling out a new "Liquid Glass" design language across its software, bringing sleek translucence and a glass-like shine to app interfaces.
Inspired by visionOS on the Vision Pro augmented reality device, the design adapts to light and dark modes and reacts dynamically to movement using real-time rendering. The new design will appear in buttons, sliders, media controls and larger elements such as tab bars and sidebars, along with matching redesigned toolbars and navigation. Apple is releasing updated application programming interfaces (APIs) so that developers can begin adapting their apps ahead of the redesign's rollout later this year.
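For developers, adoption is expected to happen largely through those updated UI frameworks. The snippet below is a minimal SwiftUI sketch of what opting a custom control into the new material might look like; the glassEffect modifier name, its parameters and the exact API surface are assumptions based on the announcement, not confirmed code.

```swift
import SwiftUI

// Sketch only: a small custom media control picking up a translucent,
// light-reactive look. The `glassEffect` modifier and its arguments are
// assumptions drawn from Apple's announcement; check the shipped SDK.
struct PlaybackBar: View {
    @State private var isPlaying = false

    var body: some View {
        HStack(spacing: 16) {
            Button(action: { isPlaying.toggle() }) {
                Image(systemName: isPlaying ? "pause.fill" : "play.fill")
            }
            Slider(value: .constant(0.4)) // placeholder progress value
        }
        .padding()
        // Hypothetical: apply the translucent material to the whole
        // control, clipped to a capsule shape.
        .glassEffect(.regular, in: .capsule)
    }
}
```

Standard system controls are expected to take on the redesign when apps are rebuilt against the new SDKs, so explicit adoption along these lines would mainly matter for custom views.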
OPERATING SYSTEMS
This year's major iOS release would originally have been called iOS 19, following the usual sequence after iOS 18. However, Apple is changing its naming convention: future versions will be numbered for the year after their release, much as car manufacturers label new model years. This year's release is therefore iOS 26.
Several parts of the operating systems are getting a major visual overhaul as part of the redesign. The Phone app now includes call screening, allowing it to answer calls or wait on hold for you. The Messages app is also getting updates, including customizable chat backgrounds. Apple also said it would add generative AI to its Xcode coding tools to help developers write code, test it and resolve errors, and that it would bring other models, such as ChatGPT, into Xcode.
APPLE INTELLIGENCE
New additions to the operating systems include Live Translation, which uses on-device AI models to translate conversations in real time across text messages, phone calls and FaceTime. Apple Pay is also getting Apple Intelligence integration, enabling it to track orders even for purchases made outside Apple Pay.
Meanwhile, Image Playground is getting a boost with a new feature that allows users to generate images with the help of OpenAI's ChatGPT. Apple will now allow developers to tap into its on-device foundation model for their own apps: through the new Foundation Models framework, developers can build intelligent, privacy-focused experiences that also work offline.
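As a rough illustration of what calling the on-device model from a third-party app could look like, here is a short Swift sketch. The LanguageModelSession type, its respond(to:) method and the response's content property are assumptions about the framework's surface based on the announcement, so treat this as a sketch rather than confirmed API.

```swift
import FoundationModels

// Sketch only: asking the on-device model for a short piece of text.
// Type and method names are assumptions; consult the framework docs.
struct TripPlanner {
    func suggestPackingList(for destination: String) async throws -> String {
        // Hypothetical session configured with brief instructions.
        let session = LanguageModelSession(
            instructions: "You suggest short, practical packing lists."
        )
        // Runs on device, so it can work offline and the prompt
        // does not need to leave the phone.
        let response = try await session.respond(
            to: "Write a five-item packing list for a weekend in \(destination)."
        )
        return response.content
    }
}
```

The appeal for developers is that the model call stays local, which is what enables the offline, privacy-focused experiences Apple described.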
VISUAL INTELLIGENCE
Apple will also let users learn more about what's on their iPhone screens via Visual Intelligence. Users can search across Google, Etsy and other supported apps to find visually similar images or products. If the tool detects that you're viewing an event, iOS 26 will suggest adding it to your calendar. This feature will be accessible using the same button combination used to take a screenshot on an iPhone.

Related Articles

We have a choice, or do we?

Observer | 2 days ago

We have a choice in how we lead our lives. We have a choice in taking a career path. Steve Jobs, co-founder of Apple, gave some impactful, thought-provoking quotes throughout his life. Some of his most famous include "Stay hungry. Stay foolish", encouraging a mindset of continuous learning and open-mindedness, and "Think different", emphasising the importance of unconventional thinking. He also famously said, "Your time is limited, so don't waste it living someone else's life", highlighting the value of authenticity and pursuing one's passions, and "The only way to do great work is to love what you do", emphasising that satisfaction and high-quality output stem from genuine passion for one's work.

Now here comes the choice part: can we afford to choose any profession we like, just because we have an aptitude for it? We have to understand the job market too. What are the jobs that are in demand? Once we understand that, we can shape ourselves to the requirements of the job market. This is where another quote from Jobs, author of 'Make Something Wonderful', makes us think. A pioneer of innovation who encouraged people to think boldly, he is often quoted for this line: 'Innovation distinguishes leaders from followers'.

But the fact is that everyone needs a platform to prove themselves. The youth often begin with new ideas, believing strongly that they know better than those who have been working before them. We have all been there. But time also teaches us the value of experience. We eventually learn to respect methods that work well and are foolproof. So we learn from our seniors while still thinking of new ways to improve and move forward.

There are challenges, however: there is a need for more jobs, and would-be entrepreneurs need capital to start their ventures. So they ask, 'Do you want to make others' dreams come true, or do you want to make yours a reality?' After all, time is limited.

Finance is something not all of us think about until we reach the crunch. Would it not be nice if the subject were taught much like science and social studies? How much should a person save in a month? I asked a finance expert and he replied, 'At least 30 per cent of your monthly earnings'. You can only shop, or enjoy your retail therapy, after saving.

Shopping is a place where we regularly practise choice. We are so hands-on when we are shopping, knowing exactly what we want, yet we are not that focused when it comes to other aspects of our lives. It is all about knowing what we want, as well as practising it.

Going back to our challenges in finding jobs and fulfilling the dream of entrepreneurship, it goes without saying that we must not give up, but we have to keep in mind the alternatives that could be available. If plans A, B and C did not work, do not lose hope, because there are many more letters in the alphabet. We must learn from each experience, though.

There are times when we might feel tired and that we have run out of resources, especially energy. But that is when we must realise that time is the most precious resource, and if we have come this far, we must surely understand that we have been blessed with time. We are encouraged to count our blessings daily because that leads us to practising gratitude. The human tendency, however, is to complain about what we perceive to be our shortcomings. To understand better, all we have to do is look around and see what others are going through as well.

Should AI be banned in schools?

Observer | 2 days ago

Students have started to find academic work, be it for school, college or university, super easy. In fact, educators such as teachers in schools and professors in colleges and universities, along with knowledge workers such as the administrative staff in these institutions, have recently started writing and posting creative, intelligent pieces on social media platforms like never before. What has really changed? In simple words, the rise of Artificial Intelligence (AI) apps.

What, then, is the problem? Unfortunately, machines produce the majority of the beautiful work and content that we see delivered. This is serious, in my humble opinion, and I see a disaster coming in terms of the quality of graduates whom we expect to lead the nation tomorrow. My article this week is about the danger and negative impact AI is having on the education sector, and whether AI should be banned in schools to save students and their teachers and educators from becoming obsolete.

Today, homework is solved easily and submitted instantly. A presentation is generated and delivered flawlessly. Academics and administrative staff working for these institutions continuously publish interesting and thought-provoking articles on social media networks, in contrast to the pre-generative-AI era; some of those administrative staff had never been so engaged on social media or in similar publications before. However, a dangerous truth lies under this polished surface. Virtually none of the academic stakeholders (students, teachers, professors or administrative staff) are learning or originally producing the content; the machines, powered by AI, are doing the job for them.

Being a technologist myself, and someone who has promoted the use of technology since a very young age, I would not deny the positive transformation that AI tools such as ChatGPT, Gemini (previously known as Bard), Copilot, Claude and Perplexity have brought to all industries and sectors, including education. However, reliance on these technologies, especially the generative AI consumer apps, is very dangerous. Why? Because there will be no quality work or growth from the students and the generation we will depend on to run our nation tomorrow. Simply logging into one of the generative AI apps, keying in a prompt (a smart question) and then generating an essay or report in seconds involves no serious thinking or creative work; it is a copy-and-paste approach that even a savvy toddler would probably be able to manage in the very near future. I am not against using AI apps as a helper, but I am against relying on them entirely.

What should therefore be done? My suggestion is simply to ban AI, especially in schools, until a proper framework and policies are in place. Why? Because the cost of doing nothing, and continuing as we are today, may be students who can no longer think, write or solve problems without generative AI apps. I know some would argue the same was once said about calculators, yet that debate has matured, and the same will most probably happen with generative AI apps as they grow in scope and in what they can generate and do. A 2024 survey by Turnitin showed that over 67% of students have used AI to complete assignments, many without fully understanding the content.

Teachers report students submitting work well above their ability, then failing when asked to explain it. The message is clear: students are not learning more, they are just copying better. Furthermore, I have personally witnessed a number of teachers and educators who now post amazing articles using AI tools. For proof, check LinkedIn today and compare it with the pre-generative-AI era.

To conclude, I would like to emphasise that the damage AI is doing to schools may be permanent if action is not taken now. Students who graduate tomorrow will depend heavily on AI for any task or job given to them. The question is, if AI can do the job, then why employ graduates as extra baggage? I leave the thought of whether to ban AI in schools with you until we catch up again next week. Stay tuned.

AI is not your friend

Observer | 4 days ago

Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone, children included, should form relationships with AI 'friends' or 'companions'. Meanwhile, multinational tech companies are pushing the concept of 'AI agents' designed to assist us in our personal and professional lives, handle routine tasks and guide decision-making. But the reality is that AI systems are not, and never will be, friends, companions or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.

The most deceptive term of all is 'artificial intelligence'. These systems are not truly intelligent, and what we call 'AI' today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair nor neutral. Nor are they becoming any smarter. AI systems rely on data to function, and increasingly that includes data generated by tools like ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding.

More fundamentally, intelligence is not just about solving tasks; it is also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large datasets, performing logical deductions and making calculations. When it comes to social intelligence, machines can only simulate emotions, interactions and relationships. A medical robot, for example, could be programmed to cry when a patient cries, yet no one would argue that it feels genuine sadness. The same robot could just as easily be programmed to slap the patient, and it would carry out that command with equal precision, and with the same lack of authenticity and self-awareness. The machine doesn't 'care'; it simply follows instructions. And no matter how advanced such systems become, that is not going to change.

Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy: the capacity to recognise ethical norms and behave accordingly. AI systems, by contrast, are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning.

Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimise travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people. This is partly because machines are incapable of grasping the principle of generalisability: the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept, what we often refer to as 'good reasons'. Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.

The term 'data-based systems' (DS) is thus more appropriate than 'artificial intelligence', as it reflects what AI can actually do: generate, collect, process and evaluate data to make observations and predictions. It also clarifies the strengths and limitations of today's emerging technologies. At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data, nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are 'doing' or of anything happening around them.

This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Developing human-rights-based DS and establishing an International Data-Based Systems Agency at the United Nations would be important first steps in that direction.

Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media, more accurately described as 'anti-social media' given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI 'friends' and 'companions'. At the same time, these companies continue to ignore the so-called 'black box problem': the untraceability, unpredictability and lack of transparency of the algorithmic processes behind automated evaluations, predictions and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.

The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.

© Project Syndicate, 2025
