
Ignore the tech titans — AI is not your friend

Arab News | a day ago
https://arab.news/p82jg
Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone, children included, should form relationships with AI 'friends' or 'companions.' Meanwhile, multinational tech companies are pushing the concept of 'AI agents' designed to assist us in our personal and professional lives, handle routine tasks, and guide decision-making.
But the reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.
The most deceptive term of all is 'artificial intelligence.' These systems are not truly intelligent, and what we call 'AI' today is simply a set of technical tools designed to mimic certain cognitive functions. They are incapable of true comprehension and are not objective, fair, or neutral.
Nor are they becoming any smarter. AI systems rely on data to function, and increasingly that includes data generated by tools such as ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding.
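To make that feedback loop concrete, consider a minimal toy sketch (an illustrative example added here, not part of the original argument): treat the 'model' as nothing more than a Gaussian fitted to data, and re-fit each new generation on samples drawn from the previous generation's output. The estimate drifts rather than deepens, because no new information ever enters the loop.

```python
# Toy illustration of the recycling loop: each "generation" of the model is
# fitted only to samples produced by the previous generation, not to new
# human-generated data. The estimate drifts; nothing new is learned.
import numpy as np

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=1_000)  # original, human-made data

mean, std = real_data.mean(), real_data.std()
for generation in range(1, 6):
    synthetic = rng.normal(mean, std, size=200)    # model output fed back in as "data"
    mean, std = synthetic.mean(), synthetic.std()  # re-fit on the recycled output
    print(f"generation {generation}: mean={mean:+.3f}, std={std:.3f}")
```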
More fundamentally, intelligence is not just about solving tasks; it is also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large datasets, performing logical deductions, and making calculations.
When it comes to social intelligence, however, machines can only simulate emotions, interactions, and relationships. A medical robot, for example, could be programmed to cry when a patient cries, yet no one would argue that it feels genuine sadness. The same robot could just as easily be programmed to slap the patient, and it would carry out that command with equal precision — and with the same lack of authenticity and self-awareness. The machine does not 'care'; it simply follows instructions. And no matter how advanced such systems become, that is not going to change.
Simply put, machines lack moral agency. Their behavior is governed by patterns and rules created by people, whereas human morality is rooted in autonomy — the capacity to recognize ethical norms and behave accordingly. By contrast, AI systems are designed for functionality and optimization. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning.
Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimize travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people.
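A minimal sketch of this point (a hypothetical toy optimizer, not a real driving system): the objective below scores candidate maneuvers purely by travel time, so the harmful option wins unless a safety rule is explicitly imposed from outside; the rule is then obeyed, not understood.

```python
# Hypothetical toy optimizer (not a real autonomous-driving system): it ranks
# candidate maneuvers by travel time alone. The ethical constraint has to be
# hand-coded by people; the optimizer itself attaches no meaning to it.
candidate_actions = [
    {"name": "brake for pedestrian",     "travel_time_s": 95, "harms_pedestrian": False},
    {"name": "swerve around pedestrian", "travel_time_s": 80, "harms_pedestrian": False},
    {"name": "drive straight through",   "travel_time_s": 70, "harms_pedestrian": True},
]

def naive_score(action):
    return action["travel_time_s"]          # optimizes travel time only

def constrained_score(action):
    if action["harms_pedestrian"]:          # externally imposed rule, not a moral judgment
        return float("inf")
    return action["travel_time_s"]

print(min(candidate_actions, key=naive_score)["name"])        # -> "drive straight through"
print(min(candidate_actions, key=constrained_score)["name"])  # -> "swerve around pedestrian"
```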
This is partly because machines are incapable of grasping the principle of generalizability — the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept. These are what we often refer to as 'good reasons.' Unlike machines, humans are able to engage in generalizable moral reasoning and, therefore, can judge whether their actions are right or wrong.
The term 'data-based systems' (DS) is thus more appropriate than 'artificial intelligence,' as it reflects what AI can actually do: generate, collect, process, and evaluate data to make observations and predictions. It also clarifies the strengths and limitations of today's emerging technologies.
At their core, these are systems that use highly sophisticated mathematical processes to analyze vast amounts of data — nothing more. Humans may interact with them, but communication is entirely one-way. Data-based systems have no awareness of what they are 'doing' or of anything happening around them.
This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Developing human-rights-based DS and establishing an international data-based systems agency at the UN would be important first steps in that direction.
Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media — more accurately described as 'antisocial media,' given its addictive and corrosive nature. Now, those same companies are promoting a radical new vision: replacing human connection with AI 'friends' and 'companions.'
At the same time, these companies continue to ignore the so-called 'black box problem': the untraceability, unpredictability, and lack of transparency in the algorithmic processes behind automated evaluations, predictions, and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.
The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.
• Peter G. Kirchschlager, Professor of Ethics and Director of the Institute of Social Ethics (ISE) at the University of Lucerne, is a visiting professor at ETH Zurich.
© Project Syndicate

Related Articles

How Saudi Arabia's KAUST is building the tools, talent, and vision for generative AI
Arab News | 4 hours ago

DHAHRAN: As Saudi Arabia accelerates its push to become a global hub for artificial intelligence, the research labs at the King Abdullah University of Science and Technology are working to ensure the Kingdom doesn't just consume AI — but helps build it. On KAUST's Red Sea campus, the newly established Center of Excellence in Generative AI — CoE Gen AI — is at the heart of this effort. One of its founding members, Prof. Peter Wonka, is leading a team developing foundational models and tailored tools designed to align with Saudi Arabia's national priorities — from personalized education and energy modeling to AI-generated Arabic content.

'This is a really exciting time to be involved in AI,' Wonka, who is originally from Austria, told Arab News. 'This is the time of tremendous progress.'

Saudi Arabia's ambitions are part of a global race to dominate generative AI. PwC estimates that AI will contribute about SR 878 billion ($235 billion), or roughly 12.4 percent of Saudi Arabia's gross domestic product, by 2030. According to research from McKinsey, generative AI could add between $2.6 trillion and $4.4 trillion annually to the global economy — with around 75 percent of that impact concentrated in customer operations, marketing, software engineering, and research and development. Under its national AI strategy, Saudi Arabia has pledged approximately $20 billion in cumulative investments by 2030, aimed at building AI infrastructure, research, and talent development, according to Reuters.

'It has been a very exciting year discussing with various industries in Saudi Arabia about their AI adoption and possible projects we can work on together,' said Wonka. 'Our 30 faculty cover a lot of expertise, making us an excellent and reliable partner for many industries in the Kingdom.'

The CoE Gen AI currently includes faculty members working across five major research initiatives, supported by KAUST's Visual Computing Center, where Wonka serves as associate director. Each project maps to one of Saudi Arabia's national research, development, and innovation priorities — health and wellness, sustainability and essential needs, energy and industrial leadership, and economies of the future — alongside foundational AI research that supports applications across domains.

One of KAUST's major translational focuses is education. Earlier this year, Arab News reported that AI would be integrated into the school curriculum starting in the coming academic year. The introduction of this nationwide AI curriculum aims to support the Kingdom's Human Capability Development Program, part of the Vision 2030 plan for national development and diversification. The curriculum was unveiled in July by the National Curriculum Center, with the Ministry of Education, Ministry of Communications and Information Technology, and the Saudi Data and Artificial Intelligence Authority. It features age-appropriate AI modules in the form of interactive and hands-on teaching.

But Wonka cautions that simply adding AI isn't enough. 'It only makes sense with an additional understanding of other subjects and with a good grasp of fundamentals,' he said.

At the university level, AI tools also have the potential to be misused. The result is a widening gap between students who treat AI as a shortcut and those who strategically integrate it into a broader learning workflow. 'People that possibly grew up without AI have better fact-checking skills,' said Wonka, emphasizing that over-reliance on these tools can produce incorrect or low-quality outputs.
He encourages students to use generative tools to brainstorm ideas, rewrite for clarity, or structure outlines — but warns against letting the model replace original research. 'A paper written entirely by AI has no research, no ideas,' he said.

That tension between power and reliability is central to KAUST's AI safety research. 'These tools can give wrong answers very confidently and sometimes even mix up elementary facts,' he said. 'Still, it may be too optimistic to hope for AI tools that give no wrong answers. A more pragmatic approach would be to learn how to use AI tools despite the fact that they may give wrong answers from time to time.'

Beyond education, generative AI is poised to impact nearly every sector. 'For business communication, AI is already used for spell-checking, editing, and drafting complete letters or emails,' said Wonka. 'Doctors will heavily lean on AI tools to support diagnosing their patients … Cars will have more and more automatic driving features to assist humans in driving. Creative work, such as graphic design and photography, will greatly rely on AI assistance.'

Yet, while the technology is advancing quickly, Wonka remains cautious. 'That is not something I would feel very comfortable predicting,' he said. 'A lot of people are just speculating.' Instead, he envisions a more subtle shift — AI gradually embedding itself into daily workflows. 'What I'm betting on happening almost certainly is more that AI will be integrated in everybody's life to make it 25 percent more efficient.'

A key part of the CoE Gen AI's mission is to build tools that are culturally relevant and locally deployable. 'There are a lot of different companies that are interested in having local use of AI so that they don't send all their data to the big companies — because they're really worried about that,' said Wonka.

To that end, KAUST is also investing in Arabic-language AI systems for tutoring, content generation, and search. 'The main emphasis of our work in AI in intelligent tutoring is to develop tools for personalized learning, personalized testing, and dual language support in both English and Arabic,' said Wonka. In visual content generation, his team is focused on personalization and the creation of culturally relevant content in both languages.

Meanwhile, in the realm of smart cities, KAUST researchers are exploring how AI can support digital twins, architectural data analysis, and urban planning. 'Cognitive cities are able to learn, adapt, predict, and proactively respond,' he said. One open question, he added, is how far AI will go in replacing human roles altogether.

HIGHLIGHTS: KAUST has launched a new Center of Excellence in Generative AI to develop foundational models aligned with national priorities. The research spans five core areas, including personalized education, sustainable energy modeling, Arabic content generation, and smart city planning. Prof. Peter Wonka leads the initiative, emphasizing both academic rigor and practical industry collaboration within the Kingdom.

'Will we see most doctors' offices staffed only by robots, or will there only be human doctors using AI tools? It's important for everyone to invest time in learning about AI and using AI tools.'

Success, he said, is measured both academically and practically. 'For translational impact, the university considers metrics such as the success of startup companies, the number and size of in-Kingdom collaborations and their impact, patents and their generated revenue, as well as successful research collaborations within the Kingdom and abroad that are externally funded.

'Ultimately, a lot of the success of the university is about collaborations. The CoE Gen AI aims to support in-Kingdom businesses and government entities to develop impactful AI projects.'

Wonka joined KAUST in 2012 after stints at Arizona State University and Georgia Tech. With a background in computer science, computer vision, and urban modeling, he has helped shape KAUST's AI vision from its early stages to its current, ambitious scope. Now, as one of the Kingdom's leading researchers in generative AI, he is focused on ensuring that Saudi Arabia, his home for the last 14 years, is not just reacting to the global AI wave — but actively shaping it.

'I do believe that you'll see a lot of impact coming from AI,' he said. 'And I think as a researcher, it's really exciting to be on the forefront of something that is that exciting.'

The critical groundwork needed to win the AI race
Arab News | 4 hours ago

Artificial intelligence has become a high-stakes global race that everyone is watching closely. But far from merely leading the race, Saudi Arabia has set out to help design the racetrack. While most countries are debating regulations and scrambling for compute, the Kingdom is building a vertically integrated AI engine at an unprecedented scale, from sovereign data centers and large chip procurement deals to venture capital and large language models. Saudi Arabia is executing a top-down play to become a global force in AI. And it is moving fast.

AI is more than algorithms. The critical groundwork lies in digital infrastructure, reliable data, regulatory alignment, and talent. Saudi Arabia understands this better than most, and is moving with intent to shape the global AI landscape.

At the center of this strategy is Humain, the newly launched state-owned infrastructure titan, with multi-gigawatt ambitions, hundreds of thousands of chip orders, and partnerships spanning Nvidia, AMD, and Qualcomm. Complemented by an additional $10 billion in venture capital, the Kingdom is committed to scalability. It offers abundant and low-cost energy for AI compute, making the entire AI system not only viable but globally competitive. In addition to projects with Google Cloud and Groq's new Riyadh region, Humain aims to become one of the world's largest AI infrastructure providers. Its first phase includes scaling capacity to support 6.6GW by 2034, including 18,000 Nvidia Blackwell chips. Humain represents more than its tens of billions of dollars of investments; it signals to the AI world what few countries can claim: intent backed by execution at scale.

AI is nothing without digital infrastructure. High-performance computing centers, specialized data hubs, fiber deployments, and energy-efficient hardware are the foundation on which this technology thrives. Saudi Arabia recognizes this and is backing ambitions with one of the most aggressive infrastructure buildouts globally.

If AI infrastructure is the engine, data is the fuel. Indeed, the quality of it determines how far you can go. The more relevant and robust the datasets, the sharper and more contextually aware the AI. Unlike countries that use foreign cloud providers for data storage and processing outside their borders, Saudi Arabia is treating data as a sovereign asset, where it should remain under national domains. Owning specialized and well-structured data means owning your future in AI. It is a simple, yet powerful, formula: Proprietary data equals competitive advantage.

A bank that uses its own transactional data to train an AI fraud detection tool will always outperform one using general, third-party datasets (see the sketch below for a toy illustration). This is the kind of edge the Kingdom is building into its national and business-level frameworks. Through upcoming legislation, AI companies will be regulated under the laws of their country of origin — an ambitious attempt to balance openness with compliance and trust. This data strategy is not isolated from the infrastructure, which in Saudi Arabia will give clients full visibility on how their data is used and processed in real time. Transparency by design is Saudi Arabia's approach in a globalized world of data scrutiny and regulation.
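The bank example above, where a fraud model trained on the bank's own transactions beats one trained on generic third-party data, can be illustrated with a minimal sketch (hypothetical synthetic data, added here for illustration only). The gap comes from distribution shift: the fraud signal in the generic dataset lives in different features than in the bank's own traffic.

```python
# Toy sketch with synthetic data (not from the article): training on in-domain
# data outperforms training on generic data when evaluated on the bank's own
# transactions, because the fraud pattern differs between the two sources.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_transactions(n, fraud_shift):
    """Two features per transaction; fraud cases are shifted by fraud_shift."""
    y = rng.integers(0, 2, size=n)                      # 0 = legitimate, 1 = fraud
    X = rng.normal(size=(n, 2)) + y[:, None] * fraud_shift
    return X, y

own_shift     = np.array([2.0, 0.0])   # at this bank, fraud shows up in feature 0
generic_shift = np.array([0.0, 2.0])   # in third-party data, it shows up in feature 1

X_own, y_own         = make_transactions(5_000, own_shift)
X_generic, y_generic = make_transactions(5_000, generic_shift)
X_test, y_test       = make_transactions(2_000, own_shift)   # the bank's own traffic

own_model     = LogisticRegression().fit(X_own, y_own)
generic_model = LogisticRegression().fit(X_generic, y_generic)

print("accuracy, trained on own data:    ", round(own_model.score(X_test, y_test), 3))
print("accuracy, trained on generic data:", round(generic_model.score(X_test, y_test), 3))
```

Under these assumptions the in-domain model separates fraud well while the generic one is close to chance; real fraud systems are far more complex, but the directional point about data relevance is the same.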
Saudi Arabia's AI efforts are not solely focused on servers, chips, and top-notch infrastructure, but are also about shaping the future workforce and their skills. Initiatives led by the Saudi Data and Artificial Intelligence Authority and major partnerships with global tech leaders are helping to build talent pipelines that match the scale of infrastructure investments.

One of the key challenges remains the global shortage of skilled AI professionals. Education initiatives in Saudi universities are a long-term fix, but for now, the Kingdom will have to continue attracting world-class tech talent. The UAE ranks higher in AI talent attraction globally, but Saudi Arabia is rapidly narrowing the gap.

Saudi Arabia's focus on creating locally trained models in Arabic like ALLaM is a strategic move to avoid reliance on AI systems that do not represent regional cultures or languages accurately. By owning the regulation process and embedding an AI framework that prioritizes transparency and ethical considerations, the Kingdom ensures that AI adoption comes with accountability and responsibility for all its stakeholders.

The global AI race is heating up, yes, and the path to leadership is paved by smart, steady, and strategic decisions. Saudi Arabia's investments in infrastructure, data sovereignty, clear regulation, and education initiatives are the foundation stones for what could become one of the most advanced AI ecosystems in the world.

• Javier Alvarez is senior managing director & technology head for the Middle East at FTI Delta.

Pakistan says satellite launch with China reflects friendship 'higher than the skies'
Arab News | 5 hours ago

BEIJING: Pakistan's Planning Minister Ahsan Iqbal said recently that Islamabad and Beijing's collaboration, which resulted in the successful launch of a Pakistani remote sensing satellite, shows that the bilateral friendship between the two nations is 'higher than the skies.'

China launched the Pakistani satellite (PRSS-1) from the Xichang Satellite Launch Center in southwest China's Sichuan Province on Jul. 31. The satellite, which will be used primarily for land resource surveys and disaster prevention and mitigation, will help promote the development of Pakistan, Iqbal said in a recent interview with China Central Television (CCTV).

'This [satellite] is becoming a very important tool for development of mankind in future,' Iqbal said. 'Because through satellite technology and communication, you can observe earth to prevent or to manage disasters.'

He said satellites can be used to manage the agriculture sector 'better,' and even cities as well. The Pakistani minister said satellites offer and promise many economic applications, adding that 'this is key to our futures.'

'With this launch of satellite, I can proudly say that Pakistan-China friendship, which used to be higher than the Himalayas, now is higher than the sky,' he concluded.

The satellite launch marked another step in Pakistan's growing engagement with outer space with Chinese assistance. The two countries are also preparing to send the first Pakistani astronaut into space aboard China's Tiangong space station, with training programs currently underway.
