
AI is sowing discord in the world of fanfiction
When readers asked her about her use of artificial intelligence (AI), which she did not hide, she replied: "Every twist and turn of the plot and all character decisions were my own. The AI was just a helper in the process to get the vision out of my head." "Translated from bullshit into plain English: 'No, I didn't write any of it. I used a plagiarism machine to do it for me,'" replied a bitter internet user.
Since 2022, the rise of AI text generators has sown discord in the world of fanfiction – stories written by fans that extend, complete or transform an existing work. "AI is a very sensitive topic in the community. Discussions can quickly become heated," said Fah. At 25, this resident of Hauts-de-France (northern France) has written about 30 stories and is part of the volunteer team at the Fanfictions website.
Related Articles

LeMonde
3 hours ago
ChatGPT: OpenAI launches GPT-5, the latest version of its language processing tool
"GPT-3 sort of felt like talking to a high school student. (...) GPT-4 felt like you're talking to a college student. GPT-5 is the first time that it really feels like talking to a PhD-level expert." This is how Sam Altman, the CEO of OpenAI, introduced the latest version of the language processing software that powers the ChatGPT chatbot.

Launched on Thursday, August 7, GPT-5 was made available to all of the chatbot's users. It was, unsurprisingly, designed to be faster and more accurate, even for the most complex questions. "I tried going back to GPT-4 and it was quite miserable," Altman said at a press conference held the day before. GPT-5 is clearly superior, he said, adding: "It reminds me of when the iPhone went from those giant-pixel old ones to the retina display, and then I went back to using one of those big pixelated things, and I was like, 'Wow, I can't believe how bad we had it.'"

This version, the California-based company stated, will make fewer mistakes and generate fewer "hallucinations." GPT-4 would often fabricate responses when it lacked sufficient information, but for GPT-5, they had "trained the model to be honest," said Alex Beutel, safety research lead, during the presentation. OpenAI claimed the model was now able to inform users about its limitations.

The company also said GPT-5 would be better at producing computer code. In a demonstration, OpenAI fed the software a prompt describing a planned website featuring a quiz and a mini-game; it generated 600 lines of code in a matter of seconds and let users view and test the result.


Euronews
8 hours ago
Here's why OpenAI's ChatGPT-5 drop matters
OpenAI is dropping the long-awaited GPT-5, the latest model of its popular artificial intelligence (AI) chatbot ChatGPT, on Thursday. According to the company, the new model has advanced writing capabilities, invents answers less often when it lacks the right information to answer a query, and responds better to health-related questions by flagging potential medical concerns.

GPT-5 has 'agentic' capabilities that it will use when developing code, meaning it will be able to perform some tasks and make decisions without human intervention. The updated model can make websites, apps, and games, and can be integrated with users' Google tools, like Gmail and Google Calendar, OpenAI said. People using a research preview version of ChatGPT will also be able to select between four 'personalities' for the chatbot.

AI enthusiasts saw the drop coming after a since-deleted description of the upgrade was published on the code-hosting platform GitHub in the early hours of Thursday morning.

The launch of GPT-5 follows OpenAI's introduction of the open-weight gpt‑oss series earlier this week. It is the first time the company has released a model's weights since 2019's GPT-2, meaning users can fine-tune the models to meet their specific needs.

Why does this launch matter?

The launch is the latest development on the road to agentic AI, where agents can take multi-step actions and use tools such as web browsing without human prompts. That could look like an AI agent at a customer service centre automatically asking questions, looking up information in internal documents, responding with a solution, or referring the customer to a human if necessary, according to one example given by Amazon.

Earlier this year, OpenAI CEO Sam Altman predicted that the first AI agents will soon be 'joining the workforce' and will 'materially change the output of companies'. In a June blog post, Altman used the example of an AI agent in a software engineering role, saying the agent will 'eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long'. 'It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things and bad at others,' he wrote. Agents could also be considered 'real-but-relatively-junior' virtual coworkers that could be scaled to 'every field of knowledge work,' he continued.

An OpenAI launch of AI agents would not be the first from an AI company: Microsoft and Google have already launched AI agents that can be customised based on what services a company needs to provide or what tasks it wants to automate.


Euronews
13 hours ago
Sweden's leader uses ChatGPT. Should politicians use AI chatbots?
Swedish Prime Minister Ulf Kristersson has stirred up public debate over politicians' use of artificial intelligence (AI) after telling local media he uses ChatGPT to brainstorm and seek a 'second opinion' on how to run the country.

Kristersson told the Swedish newspaper Dagens Industri that he uses ChatGPT and the French service LeChat, and that his colleagues also use AI in their everyday work. 'I use it myself quite often, if for nothing else than for a second opinion. "What have others done? And should we think the complete opposite?" Those types of questions,' he said.

The comment sparked a backlash, with critics arguing that voters had elected Kristersson, not ChatGPT, to lead Sweden. Technology experts in Sweden have since raised concerns about politicians using AI tools in such a way, citing the risk of making political decisions based on inaccurate information. Large language models' (LLMs) training data can be incomplete or biased, causing chatbots to give incorrect answers, or so-called 'hallucinations'. 'Getting answers from LLMs is cheap, but reliability is the biggest bottleneck,' Yarin Gal, an associate professor of machine learning at the University of Oxford, previously told Euronews Next.

Experts were also concerned about sensitive state information being used to train later models of ChatGPT, which is made by OpenAI and whose servers are based in the United States. Kristersson's press team brushed aside the security concerns. 'Of course, it's not security-sensitive information that ends up there. It's used more as a sounding board,' Tom Samuelsson, Kristersson's press secretary, told the newspaper Aftonbladet.

Should politicians use AI chatbots?

This is not the first time a politician has faced scrutiny over their use of AI, or even the first time in Sweden. Last year, Olle Thorell, a Social Democrat in Sweden's parliament, used ChatGPT to write 180 written questions to the country's ministers. He faced criticism for overburdening ministers' staff, who are required to answer within a set time frame.

Earlier this year, United Kingdom tech secretary Peter Kyle's use of ChatGPT came under fire after the British magazine New Scientist revealed he had asked the chatbot why AI adoption is so slow in the UK business community and which podcasts he should appear on to 'reach a wide audience that's appropriate for ministerial responsibilities'.

Some politicians make no secret of their AI use. In a newspaper column, Scottish Member of Parliament Graham Leadbitter said he uses AI to write speeches because it helps him sift through dense reading and gives him 'a good basis to work from', but emphasised that he still calls the shots. 'I choose the subject matter, I choose the evidence I want it to access, I ask for a specific type of document, and I check what's coming out accords with what I want to achieve,' Leadbitter wrote in The National. And in 2024, the European Commission rolled out its own generative AI tool, called GPT@EC, to help staff draft and summarise documents on an experimental basis.

ChatGPT available to US public servants

Meanwhile, OpenAI announced a partnership this week with the US government to grant the country's entire federal workforce access to ChatGPT Enterprise at the nominal cost of $1 for the next year. The announcement came shortly after the Trump administration launched its AI Action Plan, which aims to expand AI use across the federal government to boost efficiency and slash time spent on paperwork, among other initiatives.

In a statement, OpenAI said the programme would involve 'strong guardrails, high transparency, and deep respect' for the 'public mission' of federal government workers. The company said it has seen the benefits of AI in the public sector through its pilot programme in Pennsylvania, where public servants reportedly saved an average of about 95 minutes per day on routine tasks using ChatGPT.

'Whether managing complex budgets, analysing threats to national security, or handling day-to-day operations of public offices, all public servants deserve access to the best technology available,' OpenAI said.