
Does college still have a purpose in the age of ChatGPT?
Highlights
The rise of artificial intelligence in academia has led to an increase in students outsourcing their homework, making it difficult for professors to differentiate between student-written work and AI-generated content.
Despite the benefits of incorporating AI into college curricula, particularly in enhancing engagement and preparing students for the workforce, there are concerns that it undermines the critical thinking and intellectual growth that humanities education aims to foster.
To address the challenges posed by AI in education, institutions must establish clear policies on acceptable use of technology, implement stricter in-class assessments, and develop better tools for detecting AI-generated text.
For many college students these days, life is a breeze. Assignments that once demanded days of diligent research can be accomplished in minutes. Polished essays are available, on demand, for any topic under the sun. No need to trudge through Dickens or Demosthenes; all the relevant material can be instantly summarized after a single chatbot prompt.
Welcome to academia in the age of artificial intelligence. As several recent reports have shown, outsourcing one's homework to AI has become routine. Perversely, students who still put in the hard work often look worse by comparison with their peers who don't. Professors find it nearly impossible to distinguish computer-generated copy from the real thing — and, even weirder, have started using AI themselves to evaluate their students' work.
It's an untenable situation: computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege. At a time when academia is under assault from many angles, this looks like a crisis in the making.
Incorporating AI into college curricula surely makes sense in many respects. Some evidence suggests it may improve engagement. Already it's reshaping job descriptions across industries, and employers will increasingly expect graduates to be reasonably adept at using it. By and large this will be a good thing as productivity improves and innovation accelerates.
But much of the learning done in college isn't vocational. The humanities, in particular, have a higher calling: to encourage critical thinking, form habits of mind, broaden intellectual horizons — to acquaint students with 'the best that has been thought and said,' in Matthew Arnold's phrase. Mastering Aristotle or Aquinas or Adam Smith requires more than a sentence-long prompt, and is far more rewarding.
Nor is this merely the dilettante's concern. Synthesizing competing viewpoints and making a considered judgment; evaluating a work of literature and writing a critical response; understanding, by dint of hard work, the philosophical basis for modern values: Such skills not only make one more employable but also shape character, confer perspective and mold decent citizens. A working knowledge of civics and history doesn't hurt.
For schools, the first step must be to get serious. Too many have hazy or ambiguous policies on AI; many seem to be hoping the problem will go away. They must clearly articulate when enlisting such tools is acceptable — ideally, under a professor's guidance and with a clear pedagogical purpose — and what the consequences will be for misuse. There's plenty of precedent: Honor codes, for instance, have been shown to reduce cheating, particularly when schools take them seriously, students know precisely what conduct is impermissible, and violations are duly punished.
Another obvious step is more in-class assessment. Requiring students to take tests with paper and pencil should not only prevent cheating on exam day but also offer a semester-long incentive to master the material. Likewise oral exams. Schools should experiment with other creative and rigorous methods of evaluation with AI in mind. While all this will no doubt require more work from professors, they should see it as eminently in their self-interest.
Longer-term, technology may be part of the solution. As a Bloomberg Businessweek investigation found last year, tools for detecting AI-generated text are still imperfect: simultaneously easy to evade and prone to false positives. But as more schools crack down, the market should mature, the software improve and the temptation to cheat recede. Already, students are resorting to screen recordings and other methods of proving they've done the work; if that becomes customary, so much the better.
College kids have always cheated and always will. The point is to make it harder, impose consequences and — crucially — start shaping norms on campus for a new and very strange era. The future of the university may well depend on it.

Related Articles


Scroll.in
6 days ago
Aristotle would scoff at Mark Zuckerberg's suggestion that AI can solve the loneliness epidemic
Mark Zuckerberg recently suggested that AI chatbots could combat social isolation by serving as 'friends' for people experiencing loneliness. He cited statistics that the average American has fewer than three friends but yearns for as many as 15. He was close: According to a 2021 report from the Survey Center on American Life, about half of Americans have fewer than four close friends. Zuckerberg then posited that AI could help bridge this gap by providing constant, personalized interactions. 'I would guess that over time we will find the vocabulary as a society to be able to articulate why that is valuable,' he added.

Loneliness and social disconnection are serious problems. But can AI really be a solution? Might relying on AI for emotional support create a false sense of connection and possibly exacerbate feelings of isolation? And while AI can simulate certain aspects of companionship, doesn't it lack the depth, empathy and mutual understanding inherent to human friendship?

Researchers have started exploring these questions. But as a moral philosopher, I think it's worth turning to a different source: the ancient Greek philosopher Aristotle. Though it might seem odd to consult someone who lived over 2,000 years ago on questions of modern technology, Aristotle offers enduring insights about friendships – and which ones are particularly valuable.

More important than spouses, kids or money

In his philosophical text Nicomachean Ethics, Aristotle maintained that true friendship is essential for 'eudaimonia,' a Greek word typically translated as 'flourishing' or 'well-being.' For Aristotle, friends are not just nice to have – they're a central component of ethical living and essential for human happiness and fulfillment. 'Without friends, no one would choose to live,' he writes, 'though he had all other goods.' A solitary existence, even one of contemplation and intellectual achievement, is less complete than a life with friends.

Friendship contributes to happiness by providing emotional support and solidarity. It is through friendship that individuals can cultivate their virtues, feel a sense of security and share their accomplishments.

Empirical evidence seems to support the connection between friendship and eudaimonia. A 2023 Pew Research Center report found that 61 per cent of adults in the US say having close friends is essential to living a fulfilling life – a higher proportion than those who cited marriage, children or money. A British study of 6,500 adults found that those who had regular interactions with a wide circle of friends were more likely to have better mental health and be happier. And a meta-analysis of nearly 150 studies found that a lack of close friends can increase the risk of death as much as smoking, drinking or obesity.

Different friends for different needs

But the benefit of friendship that Aristotle focuses on the most is the role it plays in the development of virtue. He distinguishes three tiers of friendship. The first is what he calls 'friendships of utility,' a friendship based on mutual benefit. Each party is primarily concerned with what they can gain from the other. These might be colleagues at work or neighbours who look after each other's pets when one of them is on vacation. The problem with these friendships is that they are often fleeting and dissolve once one person stops benefiting from the relationship.

The second is 'friendships of pleasure,' which are friendships based on shared interests. These friendships can also be transient, depending on how long the shared interests last. Passionate love affairs, people belonging to the same book club and fishing buddies all fall into this category. This type of friendship is important, since you tend to enjoy your passions more when you can share them with another person. But this is still not the highest form of friendship.

According to Aristotle, the third and strongest form of friendship is a 'virtuous friendship.' This is based on mutual respect for each other's virtues and character. Two people with this form of friendship value each other for who they truly are and share a deep commitment to the well-being and moral development of one another. These friendships are stable and enduring.

In a virtuous friendship, each individual helps the other become a better version of themselves through encouragement, moral guidance and support. As Aristotle writes: 'Perfect friendship is the friendship of men who are good and alike in virtue. … Now those who wish well to their friends for their sake are most truly friends; for they do this by reason of their own nature and not incidentally; therefore their friendship lasts as long as they are good – and goodness is an enduring thing.'

In other words, friendships rooted in virtue not only bring happiness and fulfilment but also facilitate personal growth and moral development. And it happens naturally within the context of the relationship. According to Aristotle, a virtuous friend provides a mirror in which one can reflect upon one's own actions, thoughts and decisions. When one friend demonstrates honesty, generosity or compassion, the other can learn from these actions and be inspired to cultivate these virtues in themselves.

No nourishment for the soul

So, what does this mean for AI friends? By Aristotle's standards, AI chatbots – however sophisticated – cannot be true friends. They may be able to provide information that helps you at work, or engage in lighthearted conversation about your various interests. But they fundamentally lack the qualities that define a virtuous friendship.

AI is incapable of mutual concern or genuine reciprocity. While it can be programmed to simulate empathy or encouragement, it does not truly care about the individual – nor does it ask anything of its human users.

Moreover, AI cannot engage in the shared pursuit of the good life. Aristotle's notion of friendship involves a shared journey on the path to eudaimonia, during which each person helps the other live wisely and well. This requires the kind of moral development that only human beings, who face real ethical challenges and make real decisions, can undergo.

It is best, I think, to regard AI as a tool. A good shovel or rake can improve your quality of life, but owning them does not mean you no longer need friends – nor do they replace the friends whose shovels and rakes you used to borrow. While AI may offer companionship in a limited and functional sense, it cannot meet the Aristotelian criteria for virtuous friendship. It may fill a temporary social void, but it cannot nourish the soul.

If anything, the rise of AI companions should serve as a reminder of the urgent need to foster real friendships in an increasingly disconnected world.



