
AI is not your friend
The reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.
The most deceptive term of all is 'artificial intelligence'. These systems are not truly intelligent, and what we call 'AI' today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair, nor neutral.
Nor are they becoming any smarter. AI systems rely on data to function, and that data increasingly includes output generated by tools like ChatGPT. The result is a feedback loop that recycles existing output without producing deeper understanding.
More fundamentally, intelligence is not just about solving tasks; it's also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large datasets, performing logical deductions and making calculations.
When it comes to social intelligence, however, machines can only simulate emotions, interactions and relationships. A medical robot, for example, could be programmed to cry when a patient cries, yet no one would argue that it feels genuine sadness. The same robot could just as easily be programmed to slap the patient, and it would carry out that command with equal precision and the same lack of authenticity and self-awareness. The machine doesn't 'care'; it simply follows instructions. And no matter how advanced such systems become, that is not going to change.
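To see how hollow such 'behaviour' is, consider a minimal, purely hypothetical sketch in Python (the rule table and `respond` function are invented for illustration, not drawn from any real system). Swapping one configured action for another changes the robot's conduct entirely while leaving its 'reasoning' untouched:

```python
# Hypothetical illustration: a rule-driven "caregiver" robot.
# The machine's mechanism is identical whether the configured
# action is compassionate or cruel -- it just looks up a rule.

RULES = {
    "patient_crying": "simulate_crying",   # mimics empathy
    # "patient_crying": "slap_patient",    # same mechanism, different config
}

def respond(observation: str) -> str:
    """Return the configured action; no understanding is involved."""
    return RULES.get(observation, "do_nothing")

print(respond("patient_crying"))  # -> simulate_crying
```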
Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy: the capacity to recognise ethical norms and behave accordingly. AI systems, by contrast, are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate carry no inherent ethical meaning.
Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules that optimise travel time. If running over pedestrians would serve that goal, the car might do so unless explicitly instructed otherwise, because it cannot grasp the moral implications of harming people.
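A toy route-planner makes the mechanism visible. This is a minimal sketch in Python, with the routes, cost function and penalty term all invented for illustration (no real driving system works this simply): unless harm is explicitly encoded as a cost by a human designer, the optimiser treats pedestrians as irrelevant to its objective.

```python
# Hypothetical illustration: route choice as pure cost minimisation.
# Each route is (name, travel_minutes, pedestrians_in_path).
routes = [
    ("main_road", 12, 0),
    ("shortcut_through_plaza", 7, 3),
]

def route_cost(minutes: int, pedestrians: int, harm_penalty: float) -> float:
    """Travel time plus an explicit, human-supplied penalty for harm."""
    return minutes + harm_penalty * pedestrians

# With no penalty encoded, the optimiser picks the harmful shortcut.
fastest = min(routes, key=lambda r: route_cost(r[1], r[2], harm_penalty=0))
# Only an externally imposed penalty changes the choice.
safest = min(routes, key=lambda r: route_cost(r[1], r[2], harm_penalty=1e6))

print(fastest[0])  # -> shortcut_through_plaza
print(safest[0])   # -> main_road
```

The point of the sketch is that the 'ethics' lives entirely in the `harm_penalty` a person chooses to supply; the optimiser itself is indifferent.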
This is partly because machines are incapable of grasping the principle of generalisability — the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept. These are what we often refer to as 'good reasons'. Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.
The term 'data-based systems' (DS) is thus more appropriate than 'artificial intelligence', as it reflects what AI can actually do: generate, collect, process and evaluate data to make observations and predictions. It also clarifies the strengths and limitations of today's emerging technologies.
At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data — nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are 'doing' or of anything happening around them.
This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Developing human-rights-based DS and establishing an International Data-Based Systems Agency at the United Nations would be important first steps in that direction.
Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media — more accurately described as 'anti-social media', given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI 'friends' and 'companions'.
At the same time, these companies continue to ignore the so-called 'black box problem': the untraceability, unpredictability and lack of transparency in the algorithmic processes behind automated evaluations, predictions and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.
The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.
© Project Syndicate, 2025
