Latest news with #EllaStapleton


Forbes
23-06-2025
- Forbes
3 Ways To Use AI So It Won't Dumb You Down At School Or Work
We've all heard about smart tech. What about using tech wisely? For years I worked as a college essay coach, helping students create narratives to accompany their applications. What surprised me most in all that time was a universal discomfort with writing. Not only did my students struggle to construct powerful narratives; many also had difficulty simply generating ideas. It's no surprise, then, that so many college students now turn to AI to complete their work. A survey released in August 2024 found that nearly 90% now use it to complete academic assignments. 'And they are using it regularly: Twenty-four percent reported using AI daily, and 54% on at least a weekly basis.'

Though many higher education institutions officially condemn AI as a form of cheating, many professors are apparently also using AI as a helpful resource, for everything from creating syllabi to explaining to students why they received a particular grade. In May, The New York Times reported the story of Ella Stapleton, a college senior irked by what appears to be an academic double standard. 'Ms. Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of A.I. as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.'

The Hidden Costs of Letting AI Do Your Thinking

'Necessity is the mother of invention' is a famous saying describing our natural tendency to fashion solutions to life's challenges. Ever since we crawled out of caves toward the bright lights of civilization, humankind has sought tools to lighten our mental and physical loads. The wheel is the most obvious example of an implement devised to ease the burden of transport. More recently, teleconferencing applications like Zoom and Microsoft Teams enabled remote work during the COVID-19 pandemic. Most of us would agree these innovations produced a net positive effect, resulting in a progressively better society.

Can we say the same about students and professors turning to AI for help with critical thinking? Not according to a revealing new study from MIT's Media Lab. The researchers asked 54 subjects, ranging in age from 18 to 54, to write SAT essays using ChatGPT, Google Search, or just their own faculties. As Time reports: '…ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels." Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.'

Slide Rules to Smartphones: This Isn't a New Problem

As recently as the 1960s and early 1970s, engineers and astronauts relied on slide rules for complex calculations like those depicted in the film Apollo 13. Unlike a calculator, or ChatGPT, which spits out answers instantly, a slide rule forces its user to keep thinking, sharpening mental arithmetic, approximation skills, and logical reasoning. 'Use it or lose it' is another famous saying apt for this discussion. Now that every smartphone comes equipped with a built-in calculator, there's little incentive for anyone to regularly exercise math skills, and without such practice those skills often atrophy once formal schooling ends. Ditto for applications like Waze and Maps: many people now rely entirely on their phones for basic navigation from home to work.
Are We Growing Too Dependent on Tech?

As far back as 2018, pundits warned about the dangers of cognitive diminishment due to an overreliance on artificial intelligence. Statesman Henry Kissinger was one such person. 'AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data,' he wrote in a revealing piece for The Atlantic. Less than 10 years later, his prescience looks disturbingly spot-on. Much like the Internet in its stunning ubiquity, AI is fast becoming the go-to tool, not just for students and teachers, but for business professionals everywhere. Talk about necessity! Among other things, artificial intelligence now helps companies achieve unprecedented levels of productivity: automating repetitive tasks, improving customer service, personalizing marketing outreach, optimizing talent management, strengthening cybersecurity, and enhancing market research, to name a few.

But as Kissinger warned and the MIT study reveals, there's danger here. If students increasingly outsource thinking to computers, what will happen to future generations? Will we end up like the pathetically helpless and overfed automatons floating onboard the WALL-E spaceship? Will other dystopian fare like Idiocracy come true? Not if we wake up to the problem and do something about it. Now.

3 Ways to Use AI as a Second Brain, Not a Crutch

The AI genie is out of the bottle. Students, professors, and business professionals alike are going to use it. There's no stopping that. What we can do is rethink our relationship to innovation. We've heard about smart technology for more than a decade. Now it's time for what I dub wise technology: a strategy for how humans can use AI without being used by it. Here are my top three suggestions.

1. Schooling's real purpose is not to get good grades; it's to actually learn. If you turn off your mind and turn on AI to do your assignments, you're the one who will suffer long-term. First things first: change your mindset. Avoid academic shortcuts. Instead, do the hard work to educate yourself, and don't stop when you graduate. Carry that lifelong mentality into the workforce and beyond. There's nothing more important than developing your own faculties.

2. AI can boost your imagination, serving as the ultimate thought partner. It only becomes a threat to your cognitive abilities when you close your own mind to its genius. Instead, reopen it, using AI as a brainstormer and a collaborator. Leverage it as a force multiplier to develop world-changing ideas, products, and art, not as a talent calculator. The former requires your active participation; the latter relegates you to little more than an order taker.

3. We know AI hallucinates. It gets things wrong. But that isn't the only reason not to blindly follow it. Pushing back against AI lets you flex your own mental muscles and learn the why behind the answers it gives you. This process strengthens your mental abilities, so you learn from an outsourced brain the way a mentee learns from a mentor.

What a Wise Philosopher Can Teach Us About Smart Tech

More than 2,000 years ago, Socrates, a wise man himself, expanded people's minds by asking them a series of questions. His process came to be known as the Socratic method, and it led to the development of modern philosophy. Nowadays we may look back at him and say, 'Wow. What a genius!' Socrates didn't see it that way.
Instead, all his intellectual searching led him to sagaciously remark: 'The only true wisdom is in knowing you know nothing.' Now, as we stand at an inflection point, with AI advancing by the second, people young and old would do well to adopt a similarly wise mindset. Specifically, we must strive to be ceaselessly curious about our world and ourselves. After all, it's this very curiosity that enables AI's own ceaseless intellectual growth. Now that's something we can learn from.

Yahoo
26-05-2025
- Business
- Yahoo
'He's Telling Us Not To Use AI — Then Using It Himself': Student Demands $8,000 Tuition Refund After Discovering Professor's ChatGPT Use
A college senior's frustration over her professor's behind-the-scenes use of artificial intelligence has sparked debate about transparency, fairness, and the changing dynamics of higher education in the age of ChatGPT.

Ella Stapleton, a business major at Northeastern University, noticed something unusual in her professor's lecture materials. The notes included strange errors, a reference to "ChatGPT" in the bibliography, and AI-generated images that appeared distorted — some even showing people with extra limbs. That was enough to prompt Stapleton to dig deeper. When she confirmed that her professor had been using AI tools without informing students, she filed a formal complaint and requested a tuition refund of more than $8,000 — the amount she paid for the course. "He's telling us not to use it, and then he's using it himself," Stapleton told The New York Times, referring to what she viewed as a double standard.

The professor, Rick Arrowood, admitted to using several AI platforms to help develop his lecture materials, including ChatGPT, the AI search engine Perplexity, and the presentation generator Gamma. He told the Times that he now realizes he should have taken a closer look at the materials before sharing them with students. "In hindsight...I wish I would have looked at it more closely," Arrowood said. He added that professors should be transparent about AI use and thoughtful about how it's integrated into teaching. "If my experience can be something people can learn from, then, OK, that's my happy spot."

After a series of meetings, Northeastern University decided not to approve Stapleton's refund request. In a statement to Fortune, Northeastern Vice President for Communications Renata Nyul said the university supports the use of AI to improve teaching, research, and operations. However, the school expects responsible use: "The university provides an abundance of resources to support the appropriate use of AI and continues to update and enforce relevant policies enterprise-wide." Northeastern's policy requires both students and faculty to provide proper attribution when using AI-generated content in any submitted work. It also emphasizes the importance of checking AI output for accuracy.

The situation highlights a shift in the conversation around AI in education. Early concerns focused on students using generative AI to shortcut their assignments. Now, it's students who are raising flags about how professors are using the same tools. Data from a Tyton Partners survey shows that students are still leading the charge in AI adoption: about 59% of students reported using generative AI tools regularly, compared with roughly 40% of instructors and administrators. But the student backlash at Northeastern shows that it's not just about access or usage — it's about trust. Students are beginning to question what they're paying for and whether professors are delivering on the promise of a human-led learning experience. As colleges continue to explore how AI fits into the classroom, the question may no longer be whether AI should be used, but how transparently and responsibly it should be integrated.


Hans India
21-05-2025
- Business
- Hans India
ChatGPT Adds PDF Download Option for Deep Research Reports
In a welcome update for users, OpenAI has introduced a PDF download feature for Deep Research reports in ChatGPT, making it easier to save and share long-form AI-generated research without losing formatting. The feature is designed to address a persistent user complaint: copying and pasting the research into other applications often distorted its layout. With the new functionality, users can now export reports as PDFs directly from the platform, ensuring that formatting and structure remain intact.

What is Deep Research?

Deep Research is a tool within ChatGPT that allows users to conduct multi-step investigations on complex topics. It works by scouring hundreds of sources on the internet and summarizing the information in a detailed report within minutes—work that might take a human several hours to compile manually. Until now, the only way to extract the report was to copy and paste the text, which disrupted formatting. The new download option changes that, offering a polished, professional-looking document for offline reading, printing, or sharing.

How to Download Deep Research Reports as PDFs

The update was first noticed by a user on X (formerly Twitter) and is now fully available on the web version of ChatGPT. After generating a Deep Research report, users can simply:
• Click the share icon in the upper right corner of the report,
• Select 'Download as PDF' from the dropdown menu,
• Save the file to their device.

This convenience is accessible to all ChatGPT users, regardless of subscription tier—including Free, Plus, Team, Pro, Enterprise, and Edu plans. OpenAI confirmed the feature's rollout on May 17. Alongside the PDF feature, OpenAI also introduced a GitHub connector for Deep Research, allowing developers to integrate research tasks directly with GitHub projects, further expanding the tool's utility.

Meanwhile, an AI Controversy at Northeastern University

In separate news, a Northeastern University professor is facing criticism for relying on ChatGPT and other AI tools to prepare classroom materials while discouraging students from doing the same. Ella Stapleton, a business student, spotted strange images and errors in class notes, some directly referencing ChatGPT. She filed a complaint and requested a tuition refund, which the university denied. Professor Rick Arrowood admitted using ChatGPT, Perplexity, and Gamma, but added, 'In hindsight, I wish I would have looked at it more closely.' He acknowledged he should have reviewed the AI-generated content more thoroughly.
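For readers who still end up working from copied report text rather than the new export button, a rough local workaround is sketched below. It assumes the report has been pasted into a Markdown file and leans on the third-party markdown and WeasyPrint Python packages; the file names and library choices are illustrative assumptions, not anything built into ChatGPT or Deep Research.

import markdown                # converts Markdown text to HTML (pip install markdown)
from weasyprint import HTML    # renders HTML to PDF (pip install weasyprint)

def report_to_pdf(markdown_path: str, pdf_path: str) -> None:
    """Render a copied Deep Research report (saved as Markdown) into a PDF."""
    with open(markdown_path, encoding="utf-8") as f:
        report_text = f.read()

    # Convert Markdown to HTML first so headings, lists, and tables survive,
    # then let WeasyPrint lay the HTML out as a paginated PDF document.
    html_body = markdown.markdown(report_text, extensions=["tables"])
    HTML(string=f"<html><body>{html_body}</body></html>").write_pdf(pdf_path)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    report_to_pdf("deep_research_report.md", "deep_research_report.pdf")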


Evening Standard
19-05-2025
- Science
- Evening Standard
US student seeks college refund after she spotted her teacher was using ChatGPT
In February, Ella Stapleton was going over the lecture notes for her organisational behaviour class when she came across a directive addressed to ChatGPT. According to The New York Times, the content used expressions like 'expand on all areas' and displayed typical indicators of artificial-intelligence-generated content, including clumsy wording, warped visuals, and errors that resembled machine output.


Hindustan Times
19-05-2025
- Hindustan Times
ChatGPT now lets you download Deep Research reports as PDFs - here's how
OpenAI has introduced a feature that lets ChatGPT users download Deep Research reports as PDF files, making it easier to save and share detailed findings. The update comes after users faced issues with copying reports, which often ruined the original layout and formatting.

What is Deep Research?

Deep Research is a feature that helps users conduct multi-step investigations on complex topics. When you enter a prompt, ChatGPT searches through hundreds of websites and compiles the information into a single report. The process takes minutes, compared to the hours it might take a person to do the same work manually. Previously, users could only copy the report text, but pasting it elsewhere disrupted the formatting. The new PDF download option preserves the report's structure and makes it more convenient to save or print.

Steps to Download Deep Research as a PDF

A user on X first noticed the rollout of this feature, which is now live on the web version of ChatGPT. After generating a Deep Research report, users will see an option to export it as a PDF, allowing easier sharing and archiving without altering the format. To download a Deep Research report as a PDF, follow these steps:
Click the share icon in the upper right corner of the Deep Research report
Select 'Download as PDF' from the menu
Tap to save the file on your device

Feature Now Available for All ChatGPT Users

On May 17, OpenAI confirmed the rollout of the feature for all users. It is available across ChatGPT's free and paid plans, including ChatGPT Plus, Team, Pro, Enterprise, and Edu. Alongside this update, OpenAI also announced the launch of a GitHub connector for Deep Research.

Professor Faces Backlash for AI Use in Class

In other news, a controversy has arisen at Northeastern University involving a professor who used ChatGPT to prepare lecture notes while advising students not to rely on AI tools. Business student Ella Stapleton spotted errors and odd images in the lecture materials, including a direct reference to ChatGPT. She raised concerns with the university and requested a tuition refund, but the school denied her claim after several meetings. Professor Rick Arrowood admitted to using AI tools like ChatGPT, Perplexity, and Gamma to create his lectures, saying he reviewed the content but missed some AI-generated mistakes. Speaking to The New York Times, he said, 'In hindsight, I wish I would have looked at it more closely,' expressing regret for not examining the materials more thoroughly.