Paul W. Bennett: AI isn't revolutionizing learning. It's mimicking original thought

National Post, 31-07-2025
Generative AI platforms such as ChatGPT have entered classrooms, universities, and homework routines with astonishing speed and little attention to the long-term consequences. A recent Canadian news report, aired on CBC's The National, also revealed that teachers have mostly been left to fend for themselves.
The current infatuation with AI is part of a recurrent pattern, but the latest educational fad is far more fundamental in its impact on teaching and learning in classrooms. It's time to ask: Are these tools eating away at our brain power and leading schools astray?
Technology evangelists and educators espousing '21st century learning' tout AI's ability to save time, individualize instruction, and increase access to information. But little has been done to assess its effects on students' ability to think independently, write clearly, and engage deeply with knowledge.
Encouragingly, leading cognitive scientists, evidence-based researchers, and experienced frontline teachers are beginning to right the balance.
There is mounting evidence that ChatGPT and similar AI tools are short-circuiting deeper learning, eroding critical-thinking capacities, and undermining the teaching of writing. Our brains, it turns out, need knowledge to function at their best.
Generative AI, powered by large language models, is proving to encourage passivity in learners. Leading cognitive scientist Barbara Oakley, an American expert on learning how to learn, warns that 'mental effort is essential to build real understanding.' Meaningful learning, according to Oakley, is built through deliberate practice, cognitive struggle, and retrieval of knowledge, all processes undermined when students delegate intellectual labour to AI tools.
By bypassing the productive discomfort of writing and problem-solving, students risk becoming consumers of content rather than producers of thought. The process of wrestling with an argument, organizing one's thoughts, and finding the right words is foundational to critical thinking.
Generative AI, however, short-circuits this developmental trajectory by offering polished outputs without much heavy lifting. If students grow accustomed to outsourcing the most demanding aspects of thinking and writing, a habit known as cognitive offloading, they lose the capacity to do that work themselves.
American education commentator Natalie Wexler, author of The Knowledge Gap, sees AI as the latest educational trend that emphasizes skills over content, inhibiting our capacity to grasp knowledge in context. True critical thinking, she argues, cannot be taught in isolation from a deep base of knowledge. In her view, students need a well-stocked mental library of facts, concepts, and contexts to think critically and write effectively.

Generative AI, by providing surface-level responses to prompts, may reinforce the illusion that knowledge is readily available and easily synthesized, even when it lacks depth or coherence. Students may come to view knowledge acquisition as unnecessary, assuming that AI can fill in the gaps. This undermines both the cognitive effort required to develop coherent explanations and the long-term retention that underpins higher-order thinking and genuine problem-solving.

Related Articles

Scam centers exposed as Meta purges millions of WhatsApp accounts

Canada News.Net · 15 hours ago

NEW YORK CITY, New York: Meta, the parent company of WhatsApp, has announced that it removed 6.8 million WhatsApp accounts in the first half of the year due to their ties with international scam networks. These accounts were linked to organized criminal "scam centers" operating across borders, targeting people through online fraud. The mass takedown is part of Meta's broader effort to fight online scams, which have become more frequent and sophisticated.

In a statement released this week, Meta said it is also introducing new tools on WhatsApp to help users identify and avoid scams. One such feature is a safety notice that appears when someone not in your contact list adds you to a group chat. Another ongoing test will prompt users to pause before replying to suspicious messages.

According to Meta, criminal scam centers, often run through forced labor and organized crime, are among the most active sources of digital fraud. These scammers frequently switch platforms to avoid detection, sometimes beginning on dating apps or SMS and then moving to social media or payment platforms. Meta cited recent scam campaigns that used Facebook, Instagram, TikTok, Telegram, and even ChatGPT to spread fraudulent schemes. These included fake offers to pay for social media engagement, pyramid schemes, and misleading cryptocurrency investment pitches.

One such campaign, reportedly run from a scam center in Cambodia, was disrupted by Meta in collaboration with OpenAI, the creators of ChatGPT. The scammers had been using AI-generated messages to trick users and expand their operations across platforms. With scams growing more elaborate, Meta is urging users to remain cautious and use the new security tools being rolled out across its services.

Stored in Canada, owned in the U.S.: Sovereignty concerns grow over health data

CTV News · 21 hours ago

Canada's population-based health data is a valuable national asset, not just for improving care, but also for advancing the global health AI race. But experts are sounding the alarm that this data may be at risk from foreign surveillance, monetization and a lack of adequate domestic protections. A new report, published in the Canadian Medical Association Journal, outlines both the opportunities and vulnerabilities tied to Canada's health information. The report urges immediate and multipronged action to protect the data's security and sovereignty.

'The good news is our health data is valuable,' said Dr. Kumanan Wilson, University of Ottawa professor and both the CEO and chief scientific officer of the Bruyère Health Research Institute, in an interview. Wilson says Canada's health data also has monetary value because we are in the age of artificial intelligence and Canada has a lot of what AI needs. 'We have population-based data because we have a public health system,' he said. 'The U.S. doesn't have that. Our data is more valuable than their data.'

This, Wilson says, creates a significant economic opportunity for Canada to lead in health AI, but only if the country can ensure the data stays secure and is used appropriately. 'I would rather have a situation where Canadian companies are building AI algorithms based on our data (rather) than U.S. companies, and that Canada can benefit from it,' he said.

'Backdoor access'

At the centre of the concern is where and how health data is stored. Electronic medical records from hospitals and clinics are often stored on cloud servers, and their management is dominated by three U.S. providers: Epic, Cerner, and MEDITECH. While many are physically located in Canada, they are typically owned and operated by U.S. tech giants, such as Amazon Web Services, Google Cloud and Microsoft Azure. This setup, experts warn, creates a backdoor for U.S. authorities to demand access.
'Just because it's on Canadian soil doesn't necessarily provide the protection, because it is still held by a U.S. company,' said Wilson. The risk isn't theoretical. Following the 2001 Patriot Act and the 2018 Clarifying Lawful Overseas Use of Data Act (CLOUD Act), U.S. law enforcement agencies can legally compel American companies to hand over data, even if it's stored in another country. 'The U.S. government could still mandate transfer because these are U.S. companies and they will be required to do what the U.S. government asked them to do,' Wilson said. 'We know that this administration can cause companies to do what it wants through offering contracts or access to government contracts and government money.'

In an email, Epic said most Canadian customers have their own database and control over it. The company said it is not subject to the U.S. CLOUD Act, as it 'does not meet the definitions for the type of companies to which it applies.' Epic added that the health data of its Canadian customers is stored in Canada, and that 'most customers manage the servers and encryption keys for their data.' For clients who use Epic to host their systems, the company said the data still resides in Canada, with Epic managing the servers and keys. When asked about potential Canadian data localization laws, the company responded that Epic staff have 'years of extensive training and deep expertise' and warned that having another company manage its software could 'significantly increase the risk of data corruption, cyber security breaches, and patient safety errors.'
'Canada could lead the world in health AI'

To counter these risks, the report recommends a combination of technological and legislative fixes:

  • Encryption by design to make any intercepted data unreadable without a secure decryption key
  • A blocking statute to prevent companies from complying with foreign data requests
  • Data localization laws to ensure health data remains within Canada
  • Investment in sovereign Canadian cloud infrastructure

'If you're a vendor that wants to operate within Canada, you're going to have to adhere to some rules,' said cybersecurity expert Ritesh Kotak in a video interview. 'Those rules could be the fact that there's data localization requirements, meaning health data, which is highly sensitive, must reside within a particular geographical boundary.'

Kotak also emphasized the importance of encrypting data so that even if it ends up in foreign hands, it cannot be read. 'The easiest way to think about this is when you go on a website and you put in a password … if a hacker was to get the data, they wouldn't get the plain text of what you're putting in, they would get mumbo jumbo,' he said.

When it comes to security controls, sovereign data storage infrastructure is a key component. 'We need to move in the direction of sovereign Canadian data servers controlled by Canadian companies,' Wilson said. 'Though the U.S. companies are dominant in the market because they are good at what they do… we have to make sure we can match that.'

Still, both Wilson and Kotak agree that digitized systems are crucial for modern care. 'We cannot go back to pen and paper,' Kotak said. 'We got to leverage the advancements in technology that are occurring … but we have to think these things through before just hitting 'I agree' and allowing any vendor to come in and introduce a piece of software that may possess additional risks.' Wilson echoed the sentiment, framing the issue not just as one of risk, but of missed opportunity.
'Canada could lead the world in health AI because of our public health system and our population health data,' he said. 'What I would hate to see is the country that is south of our border… use our own data to grow their economy and have a competitive advantage against us.'
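Kotak's password analogy can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the function names are ours, not from any system described above) of why a stolen stored value reads as "mumbo jumbo": the site keeps only a salted one-way hash, never the plain-text password.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 turns the plain-text password into an irreversible digest;
    # someone who steals this value cannot recover the original password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Recompute the digest and compare in constant time.
    return hmac.compare_digest(hash_password(password, salt), stored)

salt = os.urandom(16)  # random salt, stored alongside the hash
stored = hash_password("correct horse battery staple", salt)

print(stored.hex())  # what an attacker would see: unreadable hex "mumbo jumbo"
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The same principle underlies the report's "encryption by design" recommendation: data is kept in a form that is useless without a secret the attacker does not hold.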
