Students' use of AI spells death knell for critical thinking

The Guardian, 2 March 2025

Regarding your report (UK universities warned to 'stress-test' assessments as 92% of students use AI, 26 February), for centuries universities have seen themselves as repositories of knowledge and truth. That self-image began breaking down when experts were no longer valued, critical thinking was undermined and public discourse became increasingly polarised.
In this world, traditional sources of knowledge are increasingly rejected. Books, journal articles and old media are challenged by new modes of presenting and retrieving information, most notably apps and social media. The result is the 'Tinderfication' of knowledge.
Curated reading lists, for example, which academics spend time researching to highlight key thinkers and writings, are often overlooked by students in favour of a Google search. If a student does not like what they read, they can simply swipe left. Algorithms can then send students in unexpected directions, often diverting them away from academic rigour towards non-academic resources.
It is important that students have access to learning materials 24/7. But does knowledge become another convenience food? It is available at the touch of a button online, is effectively delivered to your door and there are so many outlets to choose from. There might be quantity, but not necessarily quality: AI is the ultimate convenience food.
This raises fundamental questions about not just what we mean by knowledge, but also what the role of education, and academics, will be in the future. I can appreciate the benefits of AI in the sciences, economics or mathematics, where facts are often unquestionable, but what about the humanities and social sciences, where much is contestable?
We are rapidly losing ground to profound societal changes that could have unimaginable consequences for universities if we do not respond quickly.
Prof Andrew Moran, London Metropolitan University
As a university lecturer in the humanities, where essays remain a key means of assessment, I am not surprised to hear that there has been an explosive increase in the use of AI. It is aggressively promoted as a time-saving good by tech companies, and wider political discourse only reinforces this view without questioning AI's limitations and ethics.
While AI may be useful in several academic contexts – in writing basic reports and conducting initial research, for example – its use by students to write essays is indicative of the devaluing of humanities subjects and a misunderstanding of what original writing in disciplines such as history, literature and philosophy enables: critical thinking.
'How can I tell what I think till I see what I say?' asked the great novelist EM Forster. He meant that writing is a sophisticated form of thinking, and that learning to write well, feeling one's way through the development of an idea or argument, is at the heart of that thinking. When we ask AI to write an essay, we are not simply outsourcing labour; we are outsourcing our thinking and its development, which over time will only render us more confused and less intelligent.
In a neoliberal technological age in which we are often obsessed with a product rather than the process by which it was made, it is hardly surprising that the true value of writing is being overlooked. Students are simply taking their cues from a world losing touch with the irreplaceable value of human creativity and critical thinking.
Dr Ben Wilkinson, Sheffield
Have an opinion on anything you've read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.


Related Articles

Layoffs sweep America as AI leads job cut 'bloodbath'

Daily Mail, 5 hours ago

Elon Musk and hundreds of other tech mavens wrote an open letter two years ago warning that AI would 'automate away all the jobs' and upend society. And it seems as if we should have listened to them. Layoffs are sweeping America, nixing thousands of roles at Microsoft, Walmart, and other titans, with the newly unemployed speaking of a 'bloodbath' on the scale of the pandemic. This time it's not blue-collar and factory workers facing the ax - it's college grads with white-collar roles in tech, finance, law, and consulting. Entry-level jobs are vanishing the fastest, stoking fears of recession and a generation of disillusioned graduates left stranded with CVs no one wants.

Recent graduates are now more likely to be unemployed than other workers, data shows. Chatbots have already taken over data entry and customer service posts. Next-generation 'agentic' AI can solve problems, adapt, and work independently. These 'smartbots' are already spotting market trends, running logistics operations, writing legal contracts, and diagnosing patients. The markets have seen the future: AI investment funds are growing by as much as 60 per cent a year.

'The AI layoffs have begun, and they're not stopping,' says tech entrepreneur Alex Finn. Luddites who don't embrace the tech 'will be completely irrelevant in the next five years,' he posted on X.

Procter & Gamble, which makes diapers, laundry detergent, and other household items, this week said it would cut 7,000 jobs, or about 15 per cent of non-manufacturing roles. Its two-year restructuring plan involves shedding managers whose work can be automated. Microsoft last month announced a cull of 6,000 staff - about three per cent of its workforce - targeting managerial flab, after a smaller round of performance-related cuts in January. LA-based tech entrepreneur Jason Shafton said the software giant's layoffs spotlight a trend 'redefining' the job market. 'If AI saves each person 10 per cent of their time (and let's be real, it's probably more), what does that mean for a company of 200,000?' he wrote.

Retail titan Walmart, America's biggest private employer, is slashing 1,500 tech, sales, and advertising jobs in a streamlining effort. Citigroup, cybersecurity firm CrowdStrike, Disney, online education firm Chegg, Amazon, and Warner Bros. Discovery have culled dozens or even hundreds of workers in recent weeks. Musk himself led a federal sacking spree during his 130-day stint at the Department of Government Efficiency, which ended on May 30. Federal agencies lost some 135,000 workers to firings and voluntary resignations on his watch, and 150,000 more roles are set to be mothballed.

Employers had already announced 220,000 job cuts by the end of February, the highest layoff rate since 2009. In announcing cuts, executives often talk about restructuring and tough economic headwinds. Many are spooked by President Donald Trump's on-and-off tariffs, which sent stock markets into free-fall and prompted CEOs to second-guess their long-term plans. Others say something deeper is happening as companies embrace the next generation of chatbots and AI.

Robots and machines have for decades displaced factory workers. AI chatbots have more recently replaced routine, repetitive roles in data entry and customer service. A new and more sophisticated technology - agentic AI - now operates more independently: perceiving its environment, setting goals, making plans, and executing them.
AI-powered software now writes reports, analyzes spreadsheets, creates legal contracts, designs logos, and even drafts press releases, all in seconds. Banks are axing graduate recruitment schemes. Law firms are replacing paralegals with AI-driven tools. Even tech startups, the birthplace of innovation, are swapping junior developers for code-writing bots.

Managers increasingly seek to become 'AI first' and test whether tasks can be done by AI before hiring a human. That's now company policy at Shopify and is how fintech firm Klarna shrank its headcount by 40 per cent, CEO Sebastian Siemiatkowski told CNBC last month. Experienced workers are encouraged to automate tasks and get more work done; recent graduates are struggling to get a foot in the door.

From a distance, the job market looks relatively buoyant, with unemployment holding steady at 4.2 per cent for the third consecutive month, the Labor Department reported on Friday. But it's unusually high - close to 6 per cent - among recent graduates. The Federal Reserve Bank of New York recently said job prospects for these workers had 'deteriorated noticeably'. That spells trouble not just for young workers, but for the long-term health of businesses - and the economy. Economists warn of an AI-induced downturn as millions lose jobs, spending plummets, and social unrest festers. It's been dubbed an industrial revolution for the modern era, but one measured in years, not decades.

Dario Amodei, CEO of Anthropic, one of the world's most powerful AI firms, says we're at the start of a storm. AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20 per cent in the next one to five years, he told Axios. Lawmakers have their heads in the sand and must stop 'sugar-coating' the grim reality of the late 2020s, Amodei said. 'Most of them are unaware that this is about to happen,' he said. 'It sounds crazy, and people just don't believe it.'

Young people who've been culled are taking to social media to vent their anger as the door to a middle-class lifestyle closes on them. Patrick Lyons calls it 'jarring and unexpected' how he lost his Austin-based program management job in an 'emotionless business decision' by Microsoft. 'There's nothing the 6,000 of us could have done to prevent this,' he posted. A young coder, known by her TikTok handle dotisinfluencing, posts a daily video diary about the 'f***ing massacre' of layoffs at her tech company as 'AI is taking over'. Her job search is going badly. She claims one recruiter appeared more interested in taking her out for drinks than offering a paycheck. 'I feel like s***,' she added. Ben Wolfson, a young Meta software engineer, says entry-level software jobs dried up in 2023. 'Big tech doesn't want you, bro,' he said.

Critics say universities are churning out graduates into a market that simply doesn't need them. A growing number of young professionals say they feel betrayed - promised opportunity, but handed a future of 'AI-enhanced' redundancy. Others see an opportunity to take a payout and try something different. Donald King posted a recording of the meeting in which he was unceremoniously laid off from his data science job at consulting firm PwC. 'RIP my AI factory job,' he said. 'I built the thing that destroyed me.' He now posts from Porto, in Portugal - a popular spot for digital nomads - where he's founded a marketing startup.
Industry insiders say it won't be long before another generation of AI arrives to automate new sectors. As AI improves, the line between 'safe' and 'automatable' work gets blurrier by the day. Human workers are advised to stay one step ahead and build AI into their own jobs to increase productivity.

Optimists point to careers such as radiology, where humans initially looked set to be outmoded by machines that could speedily read medical scans and pinpoint tumors. But the layoffs didn't happen. The technology was adopted - and radiologists adapted, using AI to sharpen images, automate some tasks, and boost productivity. Some radiology units even expanded their increasingly efficient human workforce.

Others say AI is a scapegoat for 2025's job cuts - that executives are downsizing for economic reasons and blaming technology so as not to panic shareholders. But for those who have lost their jobs, the future looks bleak.

The AI Risk Equation: Delay vs Safety – Calculating the True Cost: By Erica Andersen

Finextra, 5 hours ago

In the race to adopt artificial intelligence, too many enterprises are slamming on the brakes while neglecting the accelerator. As the saying goes, "AI may not be coming for your job, but a company using AI is coming for your company." The pressure to integrate AI solutions is becoming intense, and organizations that have missed early adoption windows are increasingly turning to external vendors for quick fixes. The longer enterprises wait, the faster and riskier the adoption becomes when they are finally forced to make it: by delaying, they have to learn fast with no experience under their belt. This article explores the significant risks of unchecked AI deployment and offers guidance for navigating the challenges.

When AI Tools Go Rogue

Remember the UK Post Office Horizon scandal? A conventional software system led to hundreds of innocent people being prosecuted, some imprisoned, and lives utterly destroyed. That was just normal software. The AI tools your organization might be preparing to unleash represent an entirely different beast. AI is like an adolescent - moody, unpredictable, and occasionally dangerous.

Consider Air Canada's chatbot debacle: it confidently provided customers with incorrect bereavement policy information, and the courts ruled that Air Canada had to honor what its digital representative had erroneously promised. While in this case one might argue the chatbot was more humane than the company's actual policies, the financial implications were significant. The critical question is: can your AI tool be trusted to behave and do its job, or will it go on a rampage and wreck your business? Deploying AI with robust oversight is a critical skill organizations must master if deployments are to succeed rather than become a game of Russian roulette. Companies starting now are gaining a significant edge in learning how to control this critical technology.

The Zillow Cautionary Tale

Zillow's failed foray into real estate flipping highlights the dangers of AI that relies solely on past data. The algorithm, confident in its predictions, failed to account for rapidly changing market conditions, such as a drop in demand or nearby property issues - it could take months for Zillow's model to recognize the impact on a valuation. Savvy sellers capitalized on this lag, unloading properties to Zillow before its algorithm detected that prices were plummeting; the failed venture ultimately cost the company about 10% of its workforce. The problem? Zillow's AI was backward-looking, trained on historical data, and unable to adapt to a dynamic environment. The same issue plagues stock-picking algorithms and other systems that perform beautifully on historical data but collapse when faced with new market conditions. If your AI is making decisions based solely on past data without accounting for environmental changes, you're setting yourself up for a Zillow-style catastrophe. To mitigate this risk, ensure your AI's training data represents current and anticipated future conditions, and consider the risks carefully. This is particularly crucial for financial systems, where tail risks are more frequent than models predict. Medical applications, such as analyzing skin conditions, are much less susceptible to changing environments, as long as the AI is trained on a representative sample of the population.
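The practical defense against this failure mode is continuous monitoring for distribution shift. As a minimal sketch of the idea (not Zillow's actual system - the window, threshold, and data below are all hypothetical), one can compare a model's recent prediction error against its error at validation time and raise an alarm when the gap suggests the market has moved away from the training data:

import numpy as np

def drift_alarm(errors, baseline_mae, window=90, tolerance=1.5):
    """Flag distribution shift when the rolling mean absolute error
    of recent predictions exceeds the validation-time baseline by
    more than `tolerance` times. All thresholds are illustrative."""
    recent = np.asarray(errors[-window:])
    rolling_mae = np.abs(recent).mean()
    return rolling_mae > tolerance * baseline_mae, rolling_mae

# Hypothetical example: a home-price model validated at ~5% error
# starts missing badly as the market turns under it.
baseline_mae = 0.05                                  # MAE on held-out data
errors = list(np.random.normal(0.0, 0.05, 300))      # stable period
errors += list(np.random.normal(0.12, 0.08, 90))     # market shifts
alarm, mae = drift_alarm(errors, baseline_mae)
if alarm:
    print(f"Drift alarm: rolling MAE {mae:.3f} vs baseline {baseline_mae:.3f}")
    # In production: pause automated offers and trigger human review.

The design point is that the alarm compares the model against its own historical performance, so it needs no knowledge of why the environment changed, only that the model has stopped matching it.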
Startup Corner-Cutting: From Unicorns to Bankruptcy

Your vendor might be cutting corners. While they may not be another Theranos, the risk is real. Take the UK tech unicorn that recently collapsed into bankruptcy amid financial reporting discrepancies. It has since emerged that the business was a fraud, and people using the service are left with orphaned applications. Startups face intense pressure to deliver results, which can lead to critical oversights, with inconvenient truths often swept under the rug.

One common pitfall is bias in training data. When your system makes judgments about people, inherent biases can lead to discriminatory outcomes - and can even perpetuate and amplify them. Even tech giants aren't immune. Amazon attempted to build an AI resume-screening tool to identify top talent by analyzing its current workforce's resumes. The problem? AWS, its massive cloud division, was predominantly male, so the AI learned to favor male candidates. Even after purging overtly gender-identifying information, the system still detected subtle language patterns more common in men's resumes and continued its bias. If you're using AI to determine whether someone qualifies for financing, how can you be sure the system isn't perpetuating existing biases? My advice: before deploying AI that makes decisions about people, carefully evaluate the data and the potential for bias, and consider implementing bias detection and mitigation techniques. Better yet, start now with an internal trial to see the problems that bias in the data might cause. Organizations getting hands-on experience right now will be well ahead of peers who have not started.
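One widely used detection technique is the disparate impact ratio: each group's rate of favorable decisions divided by the rate for the most favored group, with values below roughly 0.8 conventionally treated as a red flag (the "four-fifths rule"). A minimal sketch, using made-up decision data rather than output from any real system:

from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns each group's approval rate divided by the highest group's
    approval rate; ratios under ~0.8 warrant investigation."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical loan-approval audit: group A approved 80%, group B 55%.
decisions = ([("A", 1)] * 80 + [("A", 0)] * 20
             + [("B", 1)] * 55 + [("B", 0)] * 45)
for group, ratio in disparate_impact(decisions).items():
    flag = "  <-- below 0.8, investigate" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")

Here group B's ratio comes out to about 0.69, exactly the kind of gap an internal trial can surface before a regulator or journalist does.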
The Hallucination Problem

Then there are "hallucinations" in generative AI - a polite term for making things up, which is exactly what's happening. Just ask Elon Musk, whose chatbot Grok fabricated a story about NBA star Klay Thompson throwing bricks through windows in Sacramento. Sacramento might be bland, but it did not drive Klay to throw bricks through his neighbors' windows. Such fabrications are potentially damaging to reputations, including your company's.

How can you prevent similar embarrassments? Keep humans in the decision loop - at minimum, you'll have someone to blame when things go wrong. It wasn't the AI you purchased from "Piranha AI, backed by Shady VC" that approved those questionable loans; it was Johnny from accounting who signed off on them. A practical approach is designing your AI to show its work. When the system generates outputs by writing code to extract database information, this transparency - the "explainable AI" approach - allows you to verify both the results and the logic used to arrive at them. Other techniques can reduce or eliminate the effect of hallucinations, but you need hands-on experience to understand when they occur, what they say, and what risk they expose your organization to.

The Economic and Societal Costs of AI Failures

The costs of AI security and compliance failures extend far beyond immediate losses:

Direct Financial Costs: AI security breaches can lead to significant financial losses through theft, ransom payments, and operational disruption. The average cost of a data breach reached $4.45 million in 2023, and AI-enhanced attacks could drive this figure higher.

Regulatory Penalties: Non-compliant AI systems increasingly face steep regulatory penalties. Under GDPR, companies can be fined up to 4% of annual global revenue.

Reputational Damage: When AI systems make discriminatory decisions or privacy violations occur, the reputational damage can far exceed the direct financial losses and persist for years.

Market Confidence Erosion: Systematic AI failures across an industry can erode market confidence, potentially triggering investment pullbacks and valuation corrections.

Societal Trust Decline: Each high-profile AI failure diminishes public trust in technology and institutions, making future innovation harder to adopt.

The Path Forward

As you enter this dangerous world, you face a difficult choice: do you delay implementing AI and then scramble to catch up, or do you take the more cautious route and start working on AI projects now? The reality is that your competitors are likely adopting AI, and you will have to as well in the not-so-distant future. Some late starters will implement laughably bad systems that cripple their operations. Don't assume that purchasing from established vendors guarantees protection - many products assume you will manage the risks. Trying to run a major AI project with no experience is like trying to drive a car with no training: close calls are the best you can hope for. The winners will be companies that carefully select the best AI systems while implementing robust safeguards. Consider the following steps:

Prioritize Human Oversight: Implement robust human review processes for AI outputs.

Focus on Data Quality: Ensure your training data is accurate, representative, and accounts for potential biases.

Demand Explainability: Choose AI systems that provide transparency into their decision-making processes.

Establish Ethical Guidelines: Develop clear ethical guidelines for AI development and deployment. Alternatively, an AI consultancy can provide guidance - but vet them carefully, or you might end up with another problem rather than a solution.

Apply Proper Security and Compliance Measures: This isn't just good ethics - it's good business.

In the race to AI adoption, remember: it's better to arrive safely than to crash spectacularly before reaching the finish line. Those who have already started their AI journey are learning valuable lessons about what works and what doesn't. The longer you wait, the riskier your position becomes. For everyone else, all you can hope for is more empty chambers in your Russian roulette revolver.

Written by Oliver King-Smith, CEO of smartR AI.

Deep Dive: Agentic AI in Payments and Commerce: By Sam Boboev

Finextra, 6 hours ago

The fintech world is entering a new era where AI can do more than chat or make recommendations - it can act. In this 'agentic' age of commerce, autonomous AI agents are increasingly capable of making purchases, managing finances, and executing transactions on behalf of users without direct human intervention. What began as experimental chatbots has rapidly evolved into full-fledged agentic AI systems with 'advanced human-like reasoning and interaction capabilities' that are 'transforming the finance and retail sectors', among others. In just the past few months, major payment networks, fintech giants, and startups alike have unveiled tools to empower these AI agents to shop, pay, and transact in the real world. This deep dive explores how the concept of agentic AI emerged in payments and commerce, what key solutions are being built - from PayPal's Agent Toolkit to Visa's Intelligent Commerce - and what it all means for consumers, merchants, and the broader fintech ecosystem.

The significance of this shift is hard to overstate. Some compare it to the leap from physical stores to e-commerce, or from web shopping to mobile. As Visa's Chief Product and Strategy Officer Jack Forestell put it, 'Just like the shift from physical shopping to online, and from online to mobile, Visa is setting a new standard for a new era of commerce' with AI agents. The idea is that soon millions of consumers will trust AI assistants not only to find the perfect product or best deal, but also to buy it for them - all while handling payments seamlessly in the background. According to Forestell, 'Soon people will have AI agents browse, select, purchase and manage on their behalf. These agents will need to be trusted with payments, not only by users, but by banks and sellers as well'. In other words, the race is on to build the trust, infrastructure, and standards that will let AI-driven commerce flourish safely.

This isn't just hype from incumbents. A wave of startups and developers is also charging into the agentic payments gold rush, with late 2024 and early 2025 seeing 'a surge of launches by startups [aiming] to capitalize on the new AI agent economy'. Fintech innovators see an opportunity to remove friction from transactions by letting AI do the heavy lifting. But they also recognize huge challenges around security, identity, and fraud when algorithmic agents start handling money. Are we really ready to let AI agents loose on our wallets? This article delves into how the industry is addressing those questions and reimagining commerce itself - from autonomous shopping assistants to AI-powered back-office bots - all through the lens of factual developments and solutions that have emerged in the past year.

The Rise of Agentic AI in Commerce

Not long ago, 'autonomous AI agents' sounded like science fiction. Yet rapid advances in generative AI (GenAI) and large language models over 2023-2024 have made it possible for software bots to carry out complex tasks with minimal supervision. Instead of just answering questions, AI can now be agentic - capable of making decisions and taking actions to achieve specific goals. In practical terms, an agentic AI might not only recommend a product but actually place the order, or not only flag a fraudulent transaction but automatically shut down the affected account. The concept took center stage as companies like OpenAI released frameworks for AI agents that can use tools and APIs. This opened the door for integrating payment capabilities directly into AI workflows.
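Concretely, 'tools' in these frameworks are typically just functions described by a schema that the model can ask the host application to invoke. The sketch below is a generic illustration of that pattern with a consent guardrail, not PayPal's or Visa's actual API - the function name, schema, and spending limit are all hypothetical:

# A hypothetical payment tool that an agent framework could expose.
# The schema tells the model what the tool does; the host code
# enforces the guardrails before any money moves.
PAYMENT_TOOL_SCHEMA = {
    "name": "make_payment",
    "description": "Pay a merchant on the user's behalf.",
    "parameters": {
        "type": "object",
        "properties": {
            "merchant_id": {"type": "string"},
            "amount_usd": {"type": "number"},
            "memo": {"type": "string"},
        },
        "required": ["merchant_id", "amount_usd"],
    },
}

USER_SPEND_LIMIT_USD = 100.00  # illustrative per-transaction consent limit

def make_payment(merchant_id: str, amount_usd: float, memo: str = "") -> dict:
    """Execute a payment the agent requested, enforcing user consent.
    Amounts above the user's limit are escalated to a human instead of
    being sent - the 'human in the loop' safeguard."""
    if amount_usd > USER_SPEND_LIMIT_USD:
        return {"status": "needs_approval", "reason": "over spend limit"}
    # A real system would call a payment API with a tokenized credential here.
    return {"status": "paid", "merchant_id": merchant_id,
            "amount_usd": amount_usd, "memo": memo}

# The host application routes the model's tool call through the guardrail:
print(make_payment("acme-books", 24.99, "agent purchase"))  # paid
print(make_payment("acme-travel", 1899.00))                 # escalated

The design choice worth noticing is that the model never holds the credential or moves money itself; it only proposes a structured call, and deterministic host code decides whether the user's consent covers it - the question Loganathan raises below.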
Financial services quickly became fertile ground for these innovations. According to a PwC executive playbook, 'multimodal GenAI agentic frameworks have emerged as transformative catalysts, enabling businesses to accelerate process automation at an unprecedented scale', with finance and retail among the sectors already seeing impact. Early experiments had AI agents assist with tasks like investment research, loan document analysis, and customer support. By late 2024, attention turned to payments and commerce - arguably the next frontier for agentic AI. After all, shopping and financial transactions involve myriad routine decisions and steps (searching for products, comparing options, entering payment details, and so on) that an AI could potentially handle faster and more efficiently than a human.

Crucially, the infrastructure to support such autonomy was starting to fall into place. Payment APIs have proliferated, digital wallets and tokenization are widespread, and e-commerce is API-driven - all of which makes it easier to plug an AI agent into the commerce loop. In October 2024, industry observers like Sardine noted that 'AI agents are the hottest trend in banking right now, offering massive productivity gains by automating complex tasks and making decisions at lightning speed - tasks that once required human oversight'. However, as Sardine's Head of Strategy Ravi Loganathan cautioned, this promise comes with challenges: 'How do you know the AI agent is operating within your consent? How do you link each payment back to a verified identity? How do we prevent fraud against the agents or prevent the agents from committing fraud?'. These questions underscored the need for new frameworks and safeguards before handing the keys (and the credit cards) to AI.

By early 2025, the concept of agentic commerce had moved from theory to reality. In April and May 2025, a flurry of announcements from top payments companies signaled that autonomous shopping and payments are officially here. Mastercard unveiled its Agent Pay initiative; Visa introduced Intelligent Commerce; PayPal, Stripe, and Coinbase each launched toolkits for AI agent transactions; and startups like PayOS came out of stealth to tie everything together. Each of these efforts contributes a piece to the emerging ecosystem of agent-enabled commerce. Let's examine these key products and solutions driving the agentic AI revolution in payments.
