Does college still have a purpose in the age of ChatGPT?


Time of India · 28-05-2025
Highlights
  • The rise of artificial intelligence in academia has led to an increase in students outsourcing their homework, making it difficult for professors to differentiate between student-written work and AI-generated content.
  • Despite the benefits of incorporating AI into college curricula, particularly in enhancing engagement and preparing students for the workforce, there are concerns that it undermines the critical thinking and intellectual growth that humanities education aims to foster.
  • To address the challenges posed by AI in education, institutions must establish clear policies on acceptable use of technology, implement stricter in-class assessments, and develop better tools for detecting AI-generated text.
For many college students these days, life is a breeze. Assignments that once demanded days of diligent research can be accomplished in minutes. Polished essays are available, on demand, for any topic under the sun. No need to trudge through Dickens or Demosthenes; all the relevant material can be instantly summarized after a single chatbot prompt.
Welcome to academia in the age of artificial intelligence. As several recent reports have shown, outsourcing one's homework to AI has become routine. Perversely, students who still put in the hard work often look worse by comparison with their peers who don't. Professors find it nearly impossible to distinguish computer-generated copy from the real thing — and, even weirder, have started using AI themselves to evaluate their students' work.
It's an untenable situation: computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege. At a time when academia is under assault from many angles, this looks like a crisis in the making.
Incorporating AI into college curricula surely makes sense in many respects. Some evidence suggests it may improve engagement. Already it's reshaping job descriptions across industries, and employers will increasingly expect graduates to be reasonably adept at using it. By and large this will be a good thing as productivity improves and innovation accelerates.
But much of the learning done in college isn't vocational. Humanities, in particular, have a higher calling: to encourage critical thinking, form habits of mind, broaden intellectual horizons — to acquaint students with 'the best that has been thought and said,' in Matthew Arnold's phrase. Mastering Aristotle or Aquinas or Adam Smith requires more than a sentence-long prompt, and is far more rewarding.
Nor is this merely the dilettante's concern. Synthesizing competing viewpoints and making a considered judgment; evaluating a work of literature and writing a critical response; understanding, by dint of hard work, the philosophical basis for modern values: Such skills not only make one more employable but also shape character, confer perspective and mold decent citizens. A working knowledge of civics and history doesn't hurt.
For schools, the first step must be to get serious. Too many have hazy or ambiguous policies on AI; many seem to be hoping the problem will go away. They must clearly articulate when enlisting such tools is acceptable — ideally, under a professor's guidance and with a clear pedagogical purpose — and what the consequences will be for misuse. There's plenty of precedent: Honor codes, for instance, have been shown to reduce cheating, particularly when schools take them seriously, students know precisely what conduct is impermissible, and violations are duly punished.
Another obvious step is more in-class assessment. Requiring students to take tests with paper and pencil should not only prevent cheating on exam day but also offer a semester-long incentive to master the material. Likewise oral exams. Schools should experiment with other creative and rigorous methods of evaluation with AI in mind. While all this will no doubt require more work from professors, they should see it as eminently in their self-interest.
Longer-term, technology may be part of the solution. As a Bloomberg Businessweek investigation found last year, tools for detecting AI-generated text are still imperfect: simultaneously easy to evade and prone to false positives. But as more schools crack down, the market should mature, the software improve and the temptation to cheat recede. Already, students are resorting to screen recordings and other methods of proving they've done the work; if that becomes customary, so much the better.
College kids have always cheated and always will. The point is to make it harder, impose consequences and — crucially — start shaping norms on campus for a new and very strange era. The future of the university may well depend on it.


Related Articles

Beta Release: Aristotle AI Promises Smarter Conversations

Economic Times

an hour ago



Vlad Tenev, the co-founder and CEO of Robinhood, has co-founded an AI company. His new venture, Harmonic, is a different beast altogether: a startup focused on artificial intelligence, now entering public view with the beta launch of its chatbot app, Aristotle, on both iOS and Android.

At the surface level, Aristotle is another entrant in the increasingly saturated AI assistant space. Think ChatGPT, but with more branding flair. Harmonic pitches Aristotle as a reasoning-first model: not just a chatbot that spits out facts, but one that aims to engage in deeper, more thoughtful conversations. The name alone sets expectations high, evoking philosophical inquiry rather than transactional queries.

So far, the app interface is clean, and the conversation flow feels more nuanced than what you'd get from a baseline language model. Aristotle seems to push users to consider 'why' and 'how' more than just 'what.' It occasionally probes with follow-up questions, which can be refreshing - or redundant, depending on what you're looking for.

That said, what Harmonic is doing right is focus. The company isn't chasing enterprise contracts or trying to be everything to everyone. This is a mobile-first, consumer-facing experience, optimized for individual users rather than institutions. In a market where many AI startups are pivoting toward enterprise sales just to survive, this alone sets Harmonic apart - for now.

Aristotle achieved a gold medal performance on the 2025 International Math Olympiad (IMO) through a formal test (meaning the problems were translated into a machine-readable format). Google and OpenAI also developed AI models that achieved gold medal performance on this year's IMO, but through informal tests taken in natural language. The model itself hasn't been open-sourced or benchmarked publicly, so it's hard to gauge technical merit beyond anecdotal usage. Tenev claims Aristotle is 'built to reason.'

Ultimately, Aristotle is interesting.
The app feels polished, the ideas behind it are ambitious, and Harmonic's commitment to building a truly conversational AI is commendable. But unless the product can show measurable improvements in logic, comprehension, or trustworthiness over established models, it risks becoming just another pretty interface on top of the same backend. For now, it's a beta worth keeping an eye on — not because of what it is, but because of what it might evolve into.

How AI and Charter Schools Could Close the Tutoring Gap

Mint

16-06-2025



(Bloomberg Opinion) -- The greatest school in history isn't Oxford, Cambridge, Harvard or any other university you know. And no matter how hard you try, your kids won't get in. Why? Partly because it was so selective it only admitted one student — but mainly because it closed in 336 BC. For me, Aristotle's seven-year tutelage of Alexander is the education against which all others should be judged (after all, more than 2,300 years later we still refer to the lone pupil as 'The Great'). It's the ultimate testament to the power of tutoring — a power that artificial intelligence is poised to unlock.

The problem with tutoring is it can't scale. Or it couldn't. Because even as we're besieged by concerns that AI-aided plagiarism is destroying education, we're starting to see evidence that AI-enabled tutoring might supercharge it. Getting the technology right, though, will require lots of real-life experimentation. While there's a limit to how much our traditional public school system allows for this kind of test-and-learn approach, this need creates an opportunity for the country's growing crop of charter schools to make a unique contribution to the future of education.

The wealthy's appreciation of tutoring did not die with Alexander. I paid rent my first year out of college as a private math tutor, and today there are a host of companies offering tutoring services, with those at the high end often charging more than $1,000 per hour. But for every student who can afford tutoring, there are hundreds more who could benefit from it. A meta-analysis of dozens of experiments with K-12 tutoring, conducted with students of all socioeconomic statuses, found that the additional academic attention significantly boosts student performance. And even if you could overcome the cost issue — with more than 50 million students in US primary and secondary schools, there will never be enough tutors to work with them all.
Early experiments with AI-based tutoring suggest it might help fill the gap. In a study of three middle schools in Pennsylvania and California, researchers found that a hybrid human-AI tutoring model — where the technology supported human tutors, allowing them to work with many more pupils — generated significant improvements in math performance, with the biggest increases going to the lowest-performing students. And in a study of four high schools in Italy, researchers replaced traditional homework in English classes with interactive sessions with OpenAI's ChatGPT-4 and found that all the AI-aided groups did at least as well as those engaged in traditional homework — with some performing significantly better.

It could help at a college level, too. In a Harvard University physics course, for example, professors trained an AI tutor to work with some students (replacing their normal class time) while others had a traditional instructor-guided class. Students with AI tutors performed better — in fact they learned twice as much — and were more engaged with the lessons than those in the normal class, even though they had less interaction with a human instructor.

The most impressive findings may come from the developing world. Rising Academies, a network of private schools with more than 250,000 students across Africa, has implemented Rori, an AI-based math tutor for students, and Tari, a support system for teachers, both powered by Anthropic's Claude and accessible via WhatsApp. Students who used Rori for two 30-minute sessions twice a week for 8 months showed an improvement in their math performance 'equal or greater than a year of schooling.'

None of this means AI-aided tutoring is a panacea. But it does suggest that such tutors are, if well-designed and implemented, very likely to be helpful even if they remain inferior to the best human options. Since many families can't access or afford traditional tutoring, what matters is whether they are better than no tutors at all.
But 'well-designed and implemented' is a crucial part of that sentence. We don't yet know what the best practices are for AI tutors. Learning this will require extensive experimentation. And, much as it pains me to say this as a proud product of public schools, that kind of free-form experimentation is likely to be a struggle for public school bureaucracy.

Research by the Department of Education and the Center on Reinventing Public Education at Arizona State University suggests that charter schools, which operate with more freedom in how they staff and teach, are often more innovative than traditional public schools. And because charters are not private schools, they cannot charge tuition or be selective about whom they admit. This lets them generate useful data about what does and doesn't work.

Of course, this doesn't mean that charter schools are better than their public counterparts. Most innovations fail. But however painful failure is for an individual school, it can actually benefit the system, because even bad outcomes produce useful information. Successful AI-based tutoring programs pioneered at charters can and will be adopted by public schools, and failed ones avoided. Given the potentially revolutionary change in education AI is driving, learning should be our primary goal — and charters are likely to be our best instrument toward it.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Gautam Mukunda writes about corporate management and innovation. He teaches leadership at the Yale School of Management and is the author of 'Indispensable: When Leaders Really Matter.'
