I'm writing a novel without using AI – and I can prove it

Spectator · 6 days ago
Everyone's seen stories about the creep of AI into art of all kinds. Recently the people behind the music-fabrication website Suno have been making outrageous statements to the effect that people don't enjoy learning musical instruments and writing their own songs, so why not let AI do it for them? This is very new, very disturbing and very consequential. I could talk about graphic art and video and film-making, but you'll know what's been going on there. I'll just cut to the chase and get to how AI tools are impacting and will continue to impact the writing of fiction.
I anticipate a future in which human authorship will need to be proven. A few years ago I simply wouldn't have believed that this landscape could be possible. In 2017, a team called Botnik fed the seven Harry Potter novels through their predictive text keyboard, resulting in a chapter from a new Harry Potter story: Harry Potter and the Portrait of What Looked Like a Large Pile of Ash. With some human selection, what emerged were extracts such as '"If you two can't clump happily, I'm going to get aggressive," confessed the reasonable Hermione' and 'To Harry, Ron was a loud, slow, and soft bird.'
Things have come on since then. Now, if you ask ChatGPT or any of the other engines to write about the moon landings in the style of Finnegans Wake, which I have done, it will produce something pretty plausible: possibly no better than you could have managed yourself given an hour or two, but it took two seconds.
As a result, novelists are already writing novels with AI. Are they as good as human novels? No, not yet. It's a process, probably, of gradual supplantation. First the writer uses AI to brainstorm ideas, then gets the AI to write a scene based on the most promising idea, then gets AI to supply a whole chapter, then the whole of the book. Gradually human oversight is reduced and then eliminated. In 2024 the winner of Japan's most prestigious literary award, the Akutagawa prize, admitted that she had written her novel with the help of artificial intelligence, though this confession was made after she received the prize money. She was praised for her honesty. Perhaps the majority of serious current novelists are experimenting with it, because it is just too tempting. I would guess that in future most novels will be written with AI help, because authors have deadlines, they are weak, and they fear the blank screen.
There are people out there saying: never fear, AI writing is just autocomplete on steroids, it will never have emotions, it will never write creatively, it will never be original and it will never truly engage a human reader. I used to say things like that. Now I don't. AI probably can't think and probably isn't conscious – although Geoffrey Hinton, who helped make it, argues that it can and is – but that doesn't matter. All it needs to do is convincingly mimic thought and consciousness, as well as mimicking creativity and originality. After all, who's more likely to be original, a human or a machine that has access to every book ever written? Is there anything new under the sun? If there is, won't an infinitely resourced machine be able to shine its own light on it? That's when human novelists will be completely, irrevocably superseded.
The terrifying thing is that it doesn't matter if machine novelists are not very good, or even if they never get as good as a human writer, since for a majority of people they will be good enough. They will out-compete, and out-autocomplete, human writers, just as AI bands are mimicking human bands with enough success to suck revenue away from human musicians on Spotify. Writers' livelihoods are at stake because consumers won't care enough.
Except… what if there is a market for novels that are demonstrably written by humans? What if there is, in ten years' time, a market for the artisan novel, quaintly written on the premise that no machine had a hand, or a robotic arm, in its creation?
How, though, could this be proven? It's possible at the moment to detect AI text, but only if the writer has been careless, and the tools to do so are clunky and sometimes inaccurate. After generating the text, the writer can 'humanise' it, either by hand, or by employing a humanising program. So I'm proposing something. I want to write one of the world's first provably, demonstrably non-AI-assisted novels. And this is how I'm going to do it. In fact, this is how I have already started doing it.
During every writing session I livestream my desktop and have an additional camera on my workspace and keyboard. I have a main novel file, some character files, a plot file and a scrap file, and sometimes other files besides; they all live in one folder, and whenever I pull one up, that is visible on screen. There is no access to the internet, and certainly nothing AI-generated. At the end of each writing session in Google Docs, I save a named version. At the start of the next session I open Google Docs and show that version at the top of the version history, date- and time-stamped, demonstrating that it is the last one I worked on and that it hasn't been altered since. Then I start working again, livestreaming and recording, and at the end of the session I save a new named version so I can return to it.
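For the technically minded, the idea could be pushed one step further. My protocol as described relies only on the livestream and Google Docs' own date- and time-stamps, but a session log can also be made independently tamper-evident by hashing each saved draft and chaining it to the previous record, so that altering any earlier version breaks every hash after it. Here is a minimal Python sketch of that idea; the chain-file name, draft names and helper functions are illustrative assumptions, not part of the protocol itself.

```python
# Illustrative sketch only: a tamper-evident log of writing sessions.
# The file names and helpers here are hypothetical; the protocol as
# described in the article uses only livestreams and Google Docs versions.
import hashlib
import json
import pathlib
import time

CHAIN_FILE = pathlib.Path("maxhap_chain.json")  # assumed log location

def file_digest(path: pathlib.Path) -> str:
    """SHA-256 digest of one saved draft version."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def append_session(draft: pathlib.Path) -> dict:
    """Record one writing session, chained to the previous record.

    Each record's hash covers the previous record's hash, so silently
    altering an earlier draft invalidates every later entry.
    """
    chain = json.loads(CHAIN_FILE.read_text()) if CHAIN_FILE.exists() else []
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "draft": draft.name,
        "draft_hash": file_digest(draft),
        "prev_record_hash": prev_hash,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    CHAIN_FILE.write_text(json.dumps(chain, indent=2))
    return record

# After each session: append_session(pathlib.Path("draft_2025-01-14.docx"))
```

Anyone holding the chain file and the saved drafts could then recompute every hash and confirm that nothing was edited between sessions.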
I call this protocol the Maximal Human Authorship Protocol, or MaxHAP. It, or something like it, is going to be required in future, because without it no one will ever again be able to say, and be believed: 'I'm a writer.' Does that matter? It matters to me, because I've been writing for a long time, and writing is among the things I value most in the world. I want to protect the notion of a verifiably human author, and the dignity of that author.
In future, the writer will have only a little dignity. Let's not make it none.

