Using ChatGPT to Write Your College Essay Won't Help You Get Into Your School of Choice


Yahoo, March 25, 2025

This story about generative AI was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.
Will Geiger estimates that he read about 10,000 college application essays over the course of a years-long career in college admissions and scholarships before ChatGPT came on the scene in 2022.
Shortly afterwards, Geiger began to notice that essays felt less and less like they had been written by 17- or 18-year-olds. He saw more hyperorganized five-paragraph essays; more essays that were formatted as a letter to someone; and certain examples and words being used over and over again by different students.
Geiger began to see less humanity shining through and more instances of words — like 'cornerstone' and 'bedrock' — that are not commonly used by typical teenagers.
'They felt a little bit sterile,' said Geiger, the cofounder and CEO of a company called Scholarships360, an online platform used by more than 300,000 students last year to find and apply for scholarships.
Curious, Scholarships360 staffers deployed AI-detection software called GPTZero. It checked almost 1,000 essays submitted for one scholarship and determined that about 42 percent of them had likely been composed with the help of generative AI.
With college acceptances beginning to roll in for high school seniors, and juniors starting to brainstorm the essays they'll submit with their applications in the fall, Geiger is concerned. When students use AI to help write their essays, he said, they are wasting a valuable opportunity.
'The essay is one of the few opportunities in the admissions process for a student to communicate directly with a scholarship committee or with an admissions reader,' Geiger said. 'That provides a really powerful opportunity to share who you are as a person, and I don't think that an AI tool is able to do that.'
Madelyn Ronk, a 20-year-old student at Penn State Beaver, said she never considered using ChatGPT to write the personal statement required for her transfer application from community college last year. A self-described Goody Two-shoes, she didn't want to get in trouble. But there was another reason: She didn't want to turn in the same essay as anyone else.
'I want to be unique. I feel like when people use AI constantly, it just gives the same answer to every single person,' said Ronk, who wrote her essay about volunteering for charitable organizations in her hometown. 'I would like my answer to be me. So I don't use AI.'
Geiger said students' fears about submitting a generic essay are valid — they're less likely to get scholarships that way. But that doesn't mean they have to avoid generative AI altogether. Some companies offer services to help students use AI to improve their work, rather than to cheat — such as getting help writing an outline, using proper grammar or making points effectively. Generative AI can proofread an essay, and can even tell a student whether their teacher is likely to flag it as AI-assisted.
PackBack, for example, is an online platform whose software can chat with students and give feedback as they are writing. The bot might flag grammatical errors or the use of passive voice or whether students are digressing from their point. Craig Booth, the company's chief technology officer, said the software is designed to introduce students to ethical uses of AI.
A 2024 survey of college applicants found that about 50 percent had used AI for brainstorming essays, 47 percent had used it to create an outline, and about 20 percent had used it to generate first drafts.
Not all scholarship providers or colleges have policies on exactly how AI can or cannot be used in prospective student essays. For example, Common App forbids the use of generative AI but doesn't check individual essays unless someone files a report of suspected fraud. Jackson Sternberg, a spokesperson for Common App, declined to share how many reports of fraud they get each year or how they handle their investigations.
Tools like GPTZero aren't reliable 100 percent of the time. The Markup, a news outlet focused on technology, reported on a study that found writing by non-native-English speakers is far more likely to get flagged as being AI-generated than writing by native English speakers. And several other studies have found that the accuracy rates of such tools vary widely.
Because detection software isn't always accurate, Geiger said, Scholarships360 doesn't base scholarship decisions on whether essays were flagged as being generated by AI. But, he said, many of the students whose essays were flagged weren't awarded a given scholarship because 'if your writing is being mistaken for AI,' whether you used the technology or not, for a scholarship or admissions essay, 'it's probably going to be missing the mark.'
Jonah O'Hara, who serves as chair of the admissions practices committee at the National Association for College Admission Counseling, said that using AI isn't 'inherently evil,' but colleges and scholarship providers need to be transparent about their expectations, and students need to disclose when they're using it and for what. Colleges that are using AI in the admissions review process also need to be transparent about that with prospective students, he said.
O'Hara, who is director of college counseling at Rocky Hill Country Day School in Rhode Island, said that he has always discouraged students from using a thesaurus in writing college application essays, or using any words that aren't normal for them.
'If you don't use 'hegemony' and 'parsimonious' in text messages with your friends, then why would you use them in an essay to college? That's not you,' O'Hara said. 'If you love the way polysyllabic words roll off your tongue, then, of course, if it's your voice, then use it.'
Generative AI is, functionally, the latest evolution of the thesaurus, and O'Hara wonders whether it has 'put a shelf life on the college essay.'
There was a time when some professors offered self-scheduled, unproctored take-home exams, O'Hara recalled. Students had to sign an honor statement promising that everything they submitted was their own work. But the onus was on the professors to write cheat-proof exams. O'Hara said if the college essay is going to survive, he thinks this is the direction administrators will have to go.
'If we get to a point where colleges cannot confidently determine [its] authenticity,' he said, 'then they may abandon it entirely.'
Contact staff writer Olivia Sanchez at 212-678-8402 or osanchez@hechingerreport.org.
Originally Appeared on Teen Vogue


Related Articles

Gemini could soon rival ChatGPT with its new privacy feature (APK teardown)

Android Authority, 23 minutes ago

TL;DR

  • Google is working on a temporary chat feature for Gemini.
  • The feature could be similar to ChatGPT's Temporary Chats, which give users a blank slate for conversation and don't save any memory.
  • Users will be able to access Gemini's temporary chat feature by tapping a new disappearing-clock icon.

OpenAI's ChatGPT and Google's Gemini have emerged as popular AI assistants. Both are usually neck and neck when it comes to features, but given the pace of innovation, a few things are missing here and there. It seems Google wants to close the gap on its end, as it could bring a ChatGPT-like temporary chat feature to Gemini in the future.

An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

We've spotted code within Google app v16.22.44 beta that indicates Google is working on a temporary chat feature. We managed to activate the feature to give you an early look at it.

In the screenshot above, you can see a new disappearing-clock icon right next to the New Chat button in the sidebar (which is an upcoming tablet-friendly feature that hasn't rolled out yet). You can tap the icon to presumably start a temporary chat. We couldn't get the feature to work, but we presume it will work similarly to ChatGPT's Temporary Chat feature. When you start a Temporary Chat in ChatGPT, the AI assistant doesn't display the chat in your chat history, saving you the trouble of deleting queries from your history.
Further, the conversation begins as a blank slate: ChatGPT won't be aware of your previous conversations, nor have any past or future memory. However, it will still follow custom instructions if they are enabled. OpenAI may keep a copy of your conversation for up to 30 days for 'safety purposes,' but it won't be used to improve its models. The feature is very similar to what most users recognize as 'incognito mode' in their browsers.

Google has yet to share details about the temporary chat feature in Gemini. We'll keep you updated when we learn more.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.

OpenAI cofounder tells new graduates the day is coming when AI 'will do all the things that we can'

Business Insider, 38 minutes ago

Ilya Sutskever says it might take years, but he believes AI will one day be able to accomplish everything humans can. Sutskever, the cofounder and former chief scientist of ChatGPT maker OpenAI, spoke about the technology while giving a convocation speech at the University of Toronto, his alma mater, last week.

"The real challenge with AI is that it is really unprecedented and really extreme, and it's going to be very different in the future compared to the way it is today," he said. Sutskever said that while AI is already better at some things than humans, "there are so many things it cannot do as well and it's so deficient, so you can say it still needs to catch up on a lot of things." But, he said, he believes "AI will keep getting better and the day will come when AI will do all the things that we can do."

"How can I be so sure of that?" he continued. "We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things? This is the one-sentence summary for why AI will be able to do all those things, because we have a brain and the brain is a biological computer."

As is customary at convocation and commencement ceremonies, Sutskever also gave advice to the new graduates. He implored them to "accept reality as it is, try not to regret the past, and try to improve the situation."

"It's so easy to think, 'Oh, some bad past decision or bad stroke of luck, something happened, something is unfair,'" he said. "It's so easy to spend so much time thinking like this while it's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?'"

Sutskever hasn't always taken his own advice on the matter, though. He's said before that he regrets his involvement in the November 2023 ousting of OpenAI CEO Sam Altman.
Sutskever was a member of the board, which fired Altman after saying it "no longer has confidence" in his ability to lead OpenAI and that he was "not consistently candid in his communications." A few days later, however, Sutskever expressed regret for his involvement in the ouster and was one of hundreds of OpenAI employees who signed an open letter threatening to quit unless Altman was reinstated as CEO. "I deeply regret my participation in the board's actions," Sutskever said in a post on X at the time. "I never intended to harm OpenAI."

AI should unlock clinical trial data, say experts

Yahoo, an hour ago

Artificial intelligence (AI) has the potential to uncover valuable insights hidden in protocols and clinical study reports across the pharmaceutical industry, according to experts at the Outsourcing in Clinical Trials East Coast 2025 conference in King of Prussia.

During a panel discussion on AI in clinical research, Prasanna Rao, chief products and innovation officer at Saama, emphasised the importance of making protocol data available to large language models (LLMs), so tools such as ChatGPT can support informed patient decision-making.

'Companies are very protective of clinical protocol data, and I'm not sure why,' said Rao. 'These documents are already distributed broadly, including to patients. There's nothing particularly sensitive in them. I believe the real concern is that big pharma is nervous about AI analysing these protocols and potentially uncovering insights that people might overlook, but there's no real reason for that fear.'

Rao urged the industry to be more transparent: 'We need to make this kind of data available. Patients in a trial want to fully understand what they're signing up for. Imagine a patient uploads a protocol and asks, 'What will I go through in this study?' AI could then analyse related data and provide a clear summary, helping them make better decisions.'

Cole Eshbach, senior clinical trial manager at Endo, cautioned that while AI is powerful, it's not always the right tool for every trial situation.

'There's a popular saying right now: AI can be like a nail looking for a hammer,' said Eshbach. 'Just because AI is available doesn't mean it's the best solution for your needs. If you don't have a large trial database or the necessary infrastructure, it might not be the most cost-effective option. Sometimes, a simpler tool you already have can do the job just as well.'

Bryan Clayton, founder and CEO of BC Consulting Group, encouraged attendees to experiment with AI tools through everyday conversation.
'My advice to people is simple: talk to ChatGPT, or whatever AI tool you're using, like you would talk to a person. First, treat it like a conversation. Second, ask it what you should be asking. It might sound odd, but say, 'Hey ChatGPT, what should I ask you about this protocol or topic?' It'll guide you. If you keep the dialogue going, you'll start to see its potential and realise, 'Wow, I can actually use this to do something valuable.''

Looking ahead, Rao pointed to agentic AI, systems that can autonomously plan and execute complex tasks, as a major advancement poised to transform clinical trials.

'I built an agent to help with a specific task: checking if anyone else was running a similar Phase III trial. It gave me answers in seconds,' shared Rao. 'That's the power of this technology. It's the same kind of deep research AI others use, but you can build your own with open-source tools, which is more cost-effective.'

He added that Saama is actively deploying such agents across its platform. 'It's still early, but in the next year or two, we'll see major time and cost savings. Tasks like data gathering that used to take a lot of manual effort can now be automated. That frees up knowledge workers to focus on more important work.'

Eshbach closed the discussion with a long-term perspective. 'We're not where we need to be yet, but five years from now, the impact of AI could be enormous. Every industry shift now includes AI. It's going to transform our field, and the sooner we adapt, the more we'll benefit: saving money, speeding up timelines, and improving things for sites and participants.'

"AI should unlock clinical trial data, say experts" was originally created and published by Clinical Trials Arena, a GlobalData owned brand.
