Latest news with #GrahamBurnett
Yahoo
07-05-2025
- Yahoo
AI isn't replacing student writing – but it is reshaping it
I'm a writing professor who sees artificial intelligence as more of an opportunity for students than a threat. That sets me apart from some of my colleagues, who fear that AI is accelerating a glut of superficial content, impeding critical thinking and hindering creative expression. They worry that students are simply using it out of sheer laziness or, worse, to cheat.

Perhaps that's why so many students are afraid to admit that they use ChatGPT. In The New Yorker magazine, historian D. Graham Burnett recounts asking his undergraduate and graduate students at Princeton whether they'd ever used ChatGPT. No one raised their hand. 'It's not that they're dishonest,' he writes. 'It's that they're paralyzed.' Students seem to have internalized the belief that using AI for their coursework is somehow wrong.

Yet, whether my colleagues like it or not, most college students are using it. A February 2025 report from the Higher Education Policy Institute in the U.K. found that 92% of university students are using AI in some form. As early as August 2023 – a mere nine months after ChatGPT's public release – more than half of first-year students at Kennesaw State University, the public research institution where I teach, reported that they believed AI is the future of writing.

It's clear that students aren't going to magically stop using AI. So I think it's important to point out some ways in which AI can actually be a useful tool that enhances, rather than hampers, the writing process.

Helping with the busywork

A February 2025 OpenAI report on ChatGPT use among college-aged users found that more than one-quarter of their ChatGPT conversations were education-related. The report also revealed that the top five uses for students were writing-centered: starting papers and projects (49%); summarizing long texts (48%); brainstorming creative projects (45%); exploring new topics (44%); and revising writing (44%).

These figures challenge the assumption that students use AI merely to cheat or write entire papers. Instead, they suggest students are leveraging AI to free up more time for deeper processes and metacognitive behaviors – deliberately organizing ideas, honing arguments and refining style. If AI allows students to automate routine cognitive tasks – like information retrieval or ensuring that verb tenses are consistent – it doesn't mean they're thinking less. It means their thinking is changing.

Of course, students can misuse AI if they use the technology passively, reflexively accepting its outputs and ideas. And overreliance on ChatGPT can erode a student's unique voice or style. However, as long as students learn how to use AI intentionally, this shift can be seen as an opportunity rather than a loss.

Clarifying the creative vision

It has also become clear that AI, when used responsibly, can augment human creativity. For example, science comedy writer Sarah Rose Siskind recently gave a talk to Harvard students about her creative process. She spoke about how she uses ChatGPT to brainstorm joke setups and explore various comedic scenarios, which allows her to focus on crafting punchlines and refining her comedic timing.

Note how Siskind used AI in ways that didn't supplant the human touch. Instead of replacing her creativity, AI amplified it by providing structured and consistent feedback, giving her more time to polish her jokes.

Another example is the Rhetorical Prompting Method, which I developed alongside fellow Kennesaw State University researchers.
Designed for university students and adult learners, it's a framework for conversing with an AI chatbot, one that emphasizes the importance of agency in guiding AI outputs. When writers use precise language to prompt, critical thinking to reflect, and intentional revision to sculpt inputs and outputs, they direct AI to help them generate content that aligns with their vision.

There's still a process

The Rhetorical Prompting Method mirrors best practices in process writing, which encourages writers to revisit, refine and revise their drafts. When using ChatGPT, though, it's all about thoughtfully revisiting and revising prompts and outputs.

For instance, say a student wants to create a compelling PSA for social media to encourage campus composting. She considers her audience. She prompts ChatGPT to draft a short, upbeat message in under 50 words that's geared to college students. Reading the first output, she notices it lacks urgency. So she revises the prompt to emphasize immediate impact. She also adds specifics that are important to her message, such as the location of an information session. The final PSA reads: 'Every scrap counts! Join campus composting today at the Commons. Your leftovers aren't trash – they're tomorrow's gardens. Help our university bloom brighter, one compost bin at a time.'

The Rhetorical Prompting Method isn't groundbreaking; it's riffing on a process that's been tested in the writing studies discipline for decades. But I've found that it works by showing writers how to prompt intentionally. I know this because we asked users about their experiences. In an ongoing study, my colleagues and I polled 133 people who used the Rhetorical Prompting Method for their academic and professional writing:

- 92% reported that it helped them evaluate writing choices before and during their process.
- 75% said that they were able to maintain their authentic voice while using AI assistance.
- 89% responded that it helped them think critically about their writing.

The data suggests that learners take their writing seriously. Their responses reveal that they are thinking carefully about their writing styles and strategies. While this data is preliminary, we continue to gather responses in different courses, disciplines and learning environments.

All of this is to say that, while there are divergent points of view over when and where it's appropriate to use AI, students are certainly using it. And being provided with a framework can help them think more deeply about their writing. AI, then, is not just a tool that's useful for trivial tasks. It can be an asset for creativity. If today's students – who are actively using AI to write, revise and explore ideas – see AI as a writing partner, I think it's a good idea for professors to start thinking about helping them learn the best ways to work with it.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Jeanne Beatrix Law, Kennesaw State University

Jeanne Beatrix Law does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
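For readers who want to experiment with the prompt-revise loop from the composting PSA example above in script form rather than in a chat window, here is a minimal sketch. It assumes the OpenAI Python SDK (v1+) with an API key already configured in the environment; the model name and the exact prompt wording are illustrative placeholders, not details drawn from the article or from the Rhetorical Prompting Method itself.

```python
# Minimal sketch of a prompt -> review -> revise loop for the composting PSA.
# Assumptions: the `openai` Python package (v1+) is installed and an API key is
# available in the environment; "gpt-4o-mini" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

def draft_psa(prompt: str) -> str:
    """Send a single prompt and return the model's draft text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First pass: audience and length constraints, as in the article's example.
first_prompt = (
    "Draft a short, upbeat social media PSA (under 50 words) encouraging "
    "college students to join campus composting."
)
print("First draft:\n", draft_psa(first_prompt))

# The writer judges the first draft to lack urgency, so the prompt is revised
# to stress immediate impact and add a concrete detail (the information session).
revised_prompt = (
    first_prompt
    + " Emphasize immediate impact and mention the information session at the Commons."
)
print("\nRevised draft:\n", draft_psa(revised_prompt))
```

The point of the sketch is the revision step, not the API call: the writer, not the model, decides what the first draft lacks and encodes that judgment in the next prompt.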


Business Mayor
05-05-2025
- Science
- Business Mayor
The great cognitive migration: How AI is reshaping human purpose, work and meaning
Humans have always migrated to survive. When glaciers advanced, when rivers dried up, when cities fell, people moved. Their journeys were often painful, but necessary, whether across deserts, mountains or oceans. Today, we are entering a new kind of migration — not across geography but across cognition.

AI is reshaping the cognitive landscape faster than any technology before it. In the last two years, large language models (LLMs) have achieved PhD-level performance across many domains. This shift is reshaping our mental map much as an earthquake can upset the physical landscape. The rapidity of the change has led to a seemingly watchful inaction: We know a migration is coming soon, but we are unable to imagine exactly how or when it will unfold.

But, make no mistake, the early stage of a staggering transformation is underway. Tasks once reserved for educated professionals (including authoring essays, composing music, drafting legal contracts and diagnosing illnesses) are now performed by machines at breathtaking speed. Not only that, but the latest AI systems can make fine-grained inferences and connections long thought to require unique human insight, further accelerating the need for migration. For example, in a New Yorker essay, Princeton history of science professor Graham Burnett marveled at how Google's NotebookLM made an unexpected and illuminating link between theories from Enlightenment philosophy and a modern TV advertisement.

As AI grows more capable, humans will need to embrace new domains of meaning and value in areas where machines still falter, and where human creativity, ethical reasoning, emotional resonance and the weaving of generational meaning remain indispensable. This 'cognitive migration' will define the future of work, education and culture, and those who recognize and prepare for it will shape the next chapter of human history. Like climate migrants who must leave their familiar surroundings due to rising tides or growing heat, cognitive migrants will need to find new terrain where their contributions can have value. But where and how exactly will we do this?

Moravec's Paradox provides some insight. The phenomenon is named for Austrian scientist Hans Moravec, who observed in the 1980s that tasks humans find difficult are often easy for a computer, and vice versa. Or, as computer scientist and futurist Kai-Fu Lee has said: 'Let us choose to let machines be machines, and let humans be humans.'

Moravec's insight provides us with an important clue. People excel at tasks that are intuitive, emotional and deeply tied to embodied experience, areas where machines still falter. Successfully navigating a crowded street, recognizing sarcasm in conversation and intuiting that a painting feels melancholy are all feats of perception and judgment that millions of years of evolution have etched deep into human nature. In contrast, machines that can ace a logic puzzle or summarize a thousand-page novel often stumble at tasks we consider second nature.

The human domains AI cannot yet reach

As AI rapidly advances, the safe terrain for human endeavor will migrate toward creativity, ethical reasoning, emotional connection and the weaving of deep meaning. The work of humans in the not-too-distant future will increasingly demand uniquely human strengths, including the cultivation of insight, imagination, empathy and moral wisdom.
Like climate migrants seeking new fertile ground, cognitive migrants must chart a course toward these distinctly human domains, even as the old landscapes of labor and learning shift under our feet.

Not every job will be swept away by AI. Unlike geographic migrations, which may have clearer starting points, cognitive migration will unfold gradually at first, and unevenly across different sectors and regions. The diffusion of AI technologies and their impact may take a decade or two. Many roles that rely on human presence, intuition and relationship-building may be less affected, at least in the near term. These include a range of skilled professions, from nurses to electricians to frontline service workers, that often require nuanced judgment, embodied awareness and trust, human attributes for which machines are not always suited.

Cognitive migration, then, will not be universal. But the broader shift in how we assign value and purpose to human work will still ripple outward. Even those whose tasks remain stable may find their work and its meaning reshaped by a world in flux.

Some promote the idea that AI will unlock a world of abundance where work becomes optional, creativity flourishes and society thrives on digital productivity. Perhaps that future will come. But we cannot ignore the monumental transition it will require. Jobs will change faster than many people can realistically adapt. Institutions, built for stability, will inevitably lag. Purpose will erode before it is reimagined. If abundance is the promised land, then cognitive migration is the required, if uncertain, journey to reach it.

Just as in climate migration, not everyone will move easily or equally. Our schools are still training students for a world that is vanishing, not the one that is emerging. Many organizations cling to efficiency metrics that reward repeatable output, the very thing AI can now outperform us on. And far too many individuals will be left wondering where their sense of purpose fits in a world where machines can do what they once proudly did.

Human purpose and meaning are likely to undergo significant upheaval. For centuries, we have defined ourselves by our ability to think, reason and create. Now, as machines take on more of those functions, the questions of our place and value become unavoidable. If AI-driven job losses occur on a large scale without a commensurate ability for people to find new forms of meaningful work, the psychological and social consequences could be profound. It is possible that some cognitive migrants could slip into despair.

AI scientist Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his groundbreaking work on the deep learning neural networks that underpin LLMs, has warned in recent years about the potential harm that could come from AI. In an interview with CBS, he was asked whether he despairs about the future. He said he did not because, ironically, he found it very hard to take [AI] seriously. He said: 'It's very hard to get your head around the point that we are at this very special point in history where in a relatively short time, everything might totally change. A change on a scale we've never seen before. It's hard to absorb that emotionally.'

There will be paths forward.
Some researchers and economists, including MIT economist David Autor, have begun to explore how AI could eventually help rebuild middle-class jobs, not by replacing human workers but by expanding what humans can do. But getting there will require deliberate design, social investment and time. The first step is acknowledging the migration that has already begun.

Migration is rarely easy or fast. It often takes generations to adapt fully to new environments and realities. Many individuals will likely struggle through a multi-stage grieving process of denial, anger, bargaining, depression and, finally, acceptance before they can move toward new forms of contribution and meaning. And some may never fully migrate. Coping with change, at both the individual and societal level, will be one of the greatest challenges of the AI era.

The age of AI is not just about building smarter machines and the benefits they will offer. It is also about migrating toward a deeper understanding and embrace of what makes us human.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.