
Should AI be banned in schools?
Students have started to find academic work, be it for school, college, or university, remarkably easy. Educators, from schoolteachers to college and university professors, along with knowledge workers such as the administrative staff of these institutions, have recently started writing and posting creative, intelligent pieces on social media platforms like never before. The question, however, is what has really changed? In simple words, the rise and widespread use of Artificial Intelligence (AI) apps. And what exactly is the problem? Unfortunately, machines now do the majority of the beautiful work and content that we see produced and delivered.
This is serious, in my humble opinion, and I foresee a disaster in the quality of the graduates we expect to lead the nation tomorrow. My article this week is about the danger and negative impact AI is having on the education sector, and about whether AI should be banned in schools to save students, and the teachers and educators around them, from becoming obsolete.
Today, homework is solved effortlessly and submitted instantly. Presentations are generated and delivered flawlessly. Academics and the administrative staff of these institutions now publish a steady stream of interesting, thought-provoking articles on social media, in sharp contrast to the pre-generative-AI era.
Some of those administrative staff had never been so active on social media or in similar publications before. A dangerous truth, however, lies beneath this polished surface: virtually none of these academic stakeholders, be they students, teachers, professors, or administrative staff, are learning or producing original content. The machines, powered by AI, are doing the job for them.
Being a technologist myself, and someone who has promoted the use of technology from a very young age, I would not deny the positive transformation that AI tools like ChatGPT, Gemini (previously known as Bard), Copilot, Claude, and Perplexity have brought to every industry and sector, including education itself.
However, reliance on these technologies, especially consumer generative-AI apps, is very dangerous. Why? Because it leaves no room for quality work or growth among students, the very generation we will depend on to run our nation tomorrow.
Simply logging into a generative AI app, keying in a prompt (a cleverly worded question) and generating an essay or report in seconds involves no serious thinking and no creative work; it is a copy-and-paste approach that even a savvy toddler will probably manage in the very near future. I am not against using AI apps as helpers; I am against relying on them entirely.
What, therefore, should be done? My suggestion is simply to ban AI, especially in schools, until proper frameworks and policies are in place. Why? Because the cost of doing nothing, of continuing as we are today, may be students who can no longer think, write, or solve problems without generative AI apps. I know some would make the same argument about calculators, and that debate has since matured; the same will most probably happen with generative AI apps as they grow in scope and in what they can generate and do. Until that maturity arrives, however, schools need guardrails, not a free-for-all.
A 2024 survey by Turnitin found that over 67% of students had used AI to complete assignments, many without fully understanding the content. Teachers report students submitting work well above their ability, then failing when asked to explain it.
The message is clear: students are not learning more; they are just copying better. Furthermore, I have personally witnessed a number of teachers and educators who now post impressive articles using AI tools. For proof, check LinkedIn today and compare it with the pre-generative-AI era.
To conclude my article this week, I would like to emphasise that the damage AI is doing to schools may be permanent if action is not taken now. The students who graduate tomorrow will depend heavily on AI for any task or job they are given. The question is: if AI can do the job, why employ graduates as extra baggage? I leave the thought of whether to ban AI in schools with you until we catch up again next week. Stay tuned.
