The Case For Human Writing

Forbes, May 21, 2025

John Warner's book about writing (Hachette Book Group)
In the book 'More Than Words,' writer-educator John Warner makes the case for renewing the concept of writing as a fundamentally human activity.
Large Language Model bots like ChatGPT offer not intelligence, but automation, he argues. Generative AI promises 'to turn text production into a commodity.' ChatGPT does not read or think or understand language, but renders words as tokens on which it performs complex math. 'When ChatGPT strings together its tokens in the form of syntax, it is not wrestling with an idea. It is arranging language.'
Warner's concerns go deeper than LLM hype and its frequent public mistakes, like the newspaper insert, carried by major newspapers just last week, that recommended books that have never been written and cited people who don't exist (as reported by real humans Damon Beres and Charlie Warzel at The Atlantic). Nor does Warner simply argue that we must circle the wagons and repel the cyber-invaders.
Warner has argued before that much of what LLMs will destroy deserves to be destroyed. We should, he argues, take the advent of chatbots as 'an opportunity to reconsider exactly what we value and why we value those things.'
Trends in education have taught generations a devalued view of writing (and of reading as well). Warner observes that we are on our second or third generation 'of students who experience school not as an opportunity for learning, but a grim march through proficiencies, attached to extremely high stakes, stakes often measured by tests that are not reflective of genuine learning.' The result he has seen in his own classrooms is students who have been 'incentivized to not write but instead to produce writing related simulations.' In education, writing has become performance rather than communication, and if we want students to simply follow a robotic algorithm to create a language product--well, that is exactly the task that an LLM is well-suited to perform.
Why not put words to their best use: to communicate human thoughts and feelings, ideas and emotions that one person intentionally tries to convey to another human being?
After examining the depths and details of the challenge, Warner does offer some hope and advice in the final section of the book: resist, renew, and explore.
Warner encourages us to resist 'technological determinism,' the argument that AI is inevitable and therefore we should neither resist nor regulate it, as well as the huge hype, the manufactured sense that this is the future and you must get on board. Warner also points out the constant tendency to anthropomorphize AI: even though it is a machine that does not think, understand, or empathize, people constantly project those qualities onto it.
Warner encourages renewing the sense of and appreciation for the human. And he calls on readers to explore their understanding of the field, in particular finding guides, people who have invested the time and study and thought to provide deeper insights into this growing field.
I asked Warner what he thought had changed since he'd wrapped up the book. 'It's really just an intensification of the stuff I cover in the book,' he said. 'The hype is greater. The threat of giving ourselves over to these things is greater.'
The book is an impassioned argument for the value of human language. At one point Warner responds to the notion that AI somehow improves on human work, noting that LLMs are machines. 'To declare the machines superior means believing that what makes humans human is inherently inferior.' To those who argue that chatbots teamed up with humans will be able to create more, better, faster writing, Warner says no.
'I'll tell you why not. Because ChatGPT cannot write. Generating syntax is not the same thing as writing. Writing is an embodied act of thinking and feeling. Writing is communicating with intention. Yes, the existence of a product at the end of the process is an indicator that writing has happened, but by itself, it does not define what writing is or what it means to the writer or the audience for that writing.'
'More Than Words' is a bracing and encouraging defense of the human in creation and communication. It's a valuable read for anyone who works with students, values reading and writing, or wishes for an antidote to the constant AI hucksterism of the moment.


Related Articles

Goldman Sachs wants students to stop using ChatGPT in job interviews with the bank

Yahoo

Goldman Sachs is cautioning its young job-seekers against using AI during the interview process. Instead, the $176 billion bank is encouraging applicants to study up on the firm in preparation. Other businesses like Anthropic and Amazon have also warned candidates against deploying AI—and if they're caught, they could be disqualified. While many companies are boasting about all the efficiencies that will come with AI, some are dissuading potential hires from using it to get a leg up in interviews with recruiters and hiring managers.

Goldman Sachs' campus recruitment team for the bank's private investing academy in EMEA recently sent out an email to students reminding them of its expectations for interviews, as reported by eFinancialCareers. Goldman uses video interviewing platform HireVue to pre-assess candidates and maintains a set of best practices for job-seekers. Based on the best practices guidelines, the young applicants are encouraged to prepare for interviews by studying the $176 billion firm's financial results, business principles, and core values. But they can't bank on AI to help them out. 'As a reminder, Goldman Sachs prohibits the use of any external sources, including ChatGPT or Google search engine, during the interview process,' the email noted, according to someone who saw the message.

HireVue is an AI-powered talent evaluation platform, known for asking behavioral questions that reveal applicants' skills. Gen Z job-seekers might be tempted to use ChatGPT or other chatbots to game the recruitment process—but it's discouraged, and isn't the most viable option. The typical Goldman Sachs virtual interview allows for 30 seconds of prep after the question, followed by a two-minute response time, according to research from eFinancialCareers. That makes it hard for job-seekers to quickly type a prompt into the chatbot, churn out an answer, and decide what the line of attack is. Plus, the responses aren't tailored and unique to the individual, potentially hurting the interviewee more than helping.

Goldman's job-seeker AI policy could seem ironic, as half of the firm's 46,000 employees have access to the technology. But other companies are navigating that same paradox as they try to fully flesh out their AI strategies in an ever-changing technological environment.

Goldman Sachs isn't the only major company warning its applicants not to use AI during recruitment. The $61.5 billion AI giant Anthropic went on a hiring spree last month, but told job-seekers that they can't use the advanced technology to fill out their applications. The company argued that it wants to test the communication skills of potential hires, and AI use clouds that assessment. 'Please do not use AI assistants during the application process,' Anthropic wrote in the description for its hundreds of job postings. 'We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.'

Retail giant Amazon also doesn't like it when potential talent uses AI tools during the recruitment process. Earlier this year, the $2 trillion behemoth shared guidelines with internal recruiters, stressing that candidates who are caught using AI during job interviews should be disqualified. According to Amazon, the tools give an 'unfair advantage' that masks analysis of someone's 'authentic' capabilities. 'To ensure a fair and transparent recruitment process, please do not use gen AI tools during your interview unless explicitly permitted,' the guidelines, as reported by Business Insider, noted. 'Failure to adhere to these guidelines may result in disqualification from the recruitment process.'

Nvidia chief calls AI ‘the greatest equalizer' — but warns Europe risks falling behind

San Francisco Chronicle

PARIS (AP) — Will artificial intelligence save humanity — or destroy it? Lift up the world's poorest — or tighten the grip of a tech elite? Jensen Huang — the global chip tycoon widely predicted to become one of the world's first trillionaires — offered his answer on Wednesday: neither dystopia nor domination. AI, he said, is a tool for liberation.

Wearing his signature biker jacket and mobbed by fans for selfies, the Nvidia CEO cut the figure of a tech rockstar as he took the stage at VivaTech in Paris. 'AI is the greatest equalizer of people the world has ever created,' Huang said, kicking off one of Europe's biggest technology industry fairs.

Huang's core argument: AI can level the playing field, not tilt it. Critics argue Nvidia's dominance risks concentrating power in the hands of a few. But Huang insists the opposite — that by slashing computing costs and expanding access, 'we're democratizing intelligence' for startups and nations alike.

But beyond the sheeny optics, Nvidia used the Paris summit to unveil a wave of infrastructure announcements across Europe, signaling a dramatic expansion of the AI chipmaker's physical and strategic footprint on the continent. In France, the company is deploying 18,000 of its new Blackwell chips with startup Mistral AI. In Germany, it's building an industrial AI cloud to support manufacturers. Similar rollouts are underway in Italy, Spain, Finland and the U.K., including a new AI lab in Britain. Other announcements include a partnership with AI startup Perplexity to bring sovereign AI models to European publishers and telecoms, a new cloud platform with Mistral AI, and work with BMW and Mercedes-Benz to train AI-powered robots for use in auto plants.

The announcements underscore how central AI infrastructure has become to global strategy — and how Nvidia, now the world's most valuable chipmaker, is positioning itself as the engine behind it. As the company rolls out ever more powerful systems, critics warn the model risks creating a new kind of 'technological priesthood' — one in which only the wealthiest companies or governments can afford the compute power, energy, and elite engineering talent required to participate. That, they argue, could choke the bottom-up innovation that built the tech industry in the first place.

Huang pushed back. 'Through the velocity of our innovation, we democratize,' he said, responding to a question by The Associated Press. 'We lower the cost of access to technology.' As Huang put it, these factories 'reason,' 'plan,' and 'spend a lot of time talking to' themselves, powering everything from ChatGPT to autonomous vehicles and diagnostics.

But some critics warn that without guardrails, such all-seeing, self-reinforcing systems could go the way of Skynet in 'The Terminator' movie — vast intelligence engines that outpace human control. To that, Huang offers a counter-model: layered AI governance by design. 'In the future,' he said, 'the AI that is doing the task is going to be surrounded by 70 or 80 other AIs that are supervising it, observing it, guarding it, ensuring that it doesn't go off the rails.'

He likened the moment to a new industrial revolution. Just as electricity transformed the last one, Huang said, AI will power the next — and that means every country needs a national intelligence infrastructure. That's why, he explained, he's been crisscrossing the globe meeting heads of state. 'They all want AI to be part of their infrastructure,' he said. 'They want AI to be a growth manufacturing industry for them.'
Europe, long praised for its leadership on digital rights, now finds itself at a crossroads. As Brussels pushes forward with world-first AI regulations, some warn that over-caution could cost the bloc its place in the global race. With the U.S. and China surging ahead and most major AI firms based elsewhere, the risk isn't just falling behind — it's becoming irrelevant.

Huang has a different vision: sovereign AI. Not isolation, but autonomy — building national AI systems aligned with local values, independent of foreign tech giants. 'The data belongs to you,' Huang said. 'It belongs to your people, your country... your culture, your history, your common sense.'

But fears over AI misuse remain potent — from surveillance and deepfake propaganda to job losses and algorithmic discrimination. Huang doesn't deny the risks. But he insists the technology can be kept in check — by itself.

The VivaTech event was part of Huang's broader European tour. He had already appeared at London Tech Week and is scheduled to visit Germany. In Paris, he joined French President Emmanuel Macron and Mistral AI CEO Arthur Mensch to reinforce his message that AI is now a national priority.
