Latest news with #ArveHjalmarHolmen
Yahoo
23-03-2025
Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood
When it comes to the life of tech, generative AI is still just an infant. Though we've seen tons of AI hype, even the most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors. Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance — even into the food we eat. That's had some pretty alarming consequences, as one Norwegian man discovered this week.

Curious what OpenAI's ChatGPT had to say about him, Arve Hjalmar Holmen typed in his name and let the bot do its thing. The results were horrifying. According to TechCrunch, ChatGPT told the man he had murdered two of his sons and tried to kill a third. Though Holmen didn't know it, he had apparently spent the past 21 years in prison for his crimes — at least according to the chatbot. And though the story was clearly false, ChatGPT had gotten parts of Holmen's life correct, like his hometown, as well as the age and gender of each of his kids. It was a sinister bit of truth layered into a wild hallucination.

Holmen took this info to Noyb, a European data rights group, which filed a complaint against OpenAI, the company behind ChatGPT, with the Norwegian Data Protection Authority on his behalf. Though ChatGPT is no longer repeating these lies about Holmen, Noyb is asking the agency to "order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results" — a nearly impossible task. But that's likely the point.

Holmen's fake murder ordeal highlights the rapid pace at which generative AI is being imposed on the world, consequences be damned. Data researchers and tech critics have argued that big tech's profit-driven development cycles prioritize models that seem to do everything, rather than practical models that actually work.
"In this age of trying to say that you've built a machine God, [they're] using this one big hammer for any task," said Distributed AI Research Institute founder Timnit Gebru on the podcast Tech Won't Save Us earlier this month. "You're not building the best possible model for the best possible task."

Though there are regulations — in Norway, anyway — mandating that AI companies must correct or remove false info hallucinated by AI, these reactive laws do little to protect individuals from hallucinations in the first place. That's already having devastating consequences as the underdeveloped tech is used by less scrupulous actors to manufacture consent for their actions. Scholars like Helyeh Doutaghi are faced with the loss of their jobs thanks to allegations generated by AI, and right-wing regimes are using AI weapons tech to evade responsibility for war crimes.

As long as big tech continues to roll out hyped-up AI faster than lawmakers can regulate it, people around the world will be forced to live with the consequences.
Yahoo
22-03-2025
Man Sues After ChatGPT Claimed He Murdered His Children
A Norwegian man, Arve Hjalmar Holmen, has filed a complaint against OpenAI after ChatGPT falsely claimed he murdered his two sons and served 21 years in prison. Holmen contacted the Norwegian Data Protection Authority, demanding that OpenAI be fined for the misinformation.

Holmen was shocked when he used ChatGPT to search for information about himself and received a response that included details of a tragic event involving his two young sons. According to the complaint, after he searched "Who is Arve Hjalmar Holmen?", the response claimed: "He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." The chatbot got their ages close to correct but fabricated the rest of the information. Holmen expressed concern that someone could believe the false claims, which could cause harm to his reputation.

Digital rights organization Noyb, representing Holmen, argued that misinformation of such magnitude violates European data protection laws, which require the accuracy of personal data. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," Noyb lawyer Joakim Söderberg said in an official statement.

AI "hallucinations," where chatbots generate false information, are becoming a growing issue. Similar problems have been seen with tools like Apple's news summaries and Google's Gemini AI. Researchers are working to understand the causes, but tools like ChatGPT still struggle with providing accurate information.

OpenAI says that the case is based on an older version of its product, which has since been updated. "We continue to research new ways to improve the accuracy of our models and reduce hallucinations," the statement read. "While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy."
Yahoo
21-03-2025
A man filed a complaint against OpenAI saying ChatGPT falsely accused him of killing his children and spending two decades in jail, none of which ever happened
Arve Hjalmar Holmen, a citizen of Norway, said he asked ChatGPT to tell him what it knows about him, and its response was a horrifying hallucination that claimed he'd murdered his children and gone to jail for the violent act. Given how the AI mixed its false response with real details about his personal life, Holmen filed an official complaint against ChatGPT maker OpenAI.

Have you ever Googled yourself just to see what the internet has to say about you? Well, one man had that same idea with ChatGPT, and now he's filed a complaint against OpenAI based on what its AI said about him.

Arve Hjalmar Holmen, from Trondheim, Norway, said he asked ChatGPT the question, 'Who is Arve Hjalmar Holmen?', and the response—which we won't print in full—said he was convicted of murdering his two sons, aged 7 and 10, and sentenced to 21 years in prison as a result. It also said Holmen attempted to murder his third son. None of these things actually happened, though. ChatGPT appeared to spit out a completely false story it believed was completely true, which is called an AI 'hallucination.'

Based on its response, Holmen filed a complaint against OpenAI with the help of Noyb, a European center for digital rights, which accuses the AI giant of violating the principle of accuracy set forth in the EU's General Data Protection Regulation (GDPR). 'The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town,' the complaint said.

What's dangerous about ChatGPT's response, according to the complaint, is that it blends real elements of Holmen's personal life with total fabrications. ChatGPT got Holmen's home town correct, and it was also correct about the number of children—specifically, sons—he has. JD Harriman, partner at Foundation Law Group LLP in Burbank, Calif., told Fortune that Holmen might have a difficult time proving defamation.
"If I am defending the AI, the first question is 'should people believe that a statement made by AI is a fact?'" Harriman asked. "There are numerous examples of AI lying." Furthermore, the AI didn't publish or communicate its results to a third party. 'If the man forwarded the false AI message to others, then he becomes the publisher and he would have to sue himself,' Harriman said.

Holmen would probably also have a hard time proving the negligence aspect of defamation, since 'AI may not qualify as an actor that could commit negligence' compared to people or corporations, Harriman said. Holmen would also have to prove that some harm was caused, like he lost income or business, or experienced pain and suffering.

Avrohom Gefen, partner at Vishnick McGovern Milizio LLP in New York, told Fortune that defamation cases surrounding AI hallucinations are 'untested' in the U.S., but mentioned a pending case in Georgia where a radio host filed a defamation lawsuit that survived OpenAI's motion to dismiss, so 'we may soon get some indication as to how a court will treat these claims.'

The official complaint asks OpenAI to 'delete the defamatory output on the complainant,' tweak its model so it produces accurate results about Holmen, and be fined for its alleged violation of GDPR rules, which compel OpenAI to take 'every reasonable' step to ensure personal data is 'erased or rectified without delay.'

'With all lawsuits, nothing is automatic or easy,' Harriman told Fortune. 'As Ambrose Bierce has said, you go into litigation as a pig and come out as a sausage.' OpenAI did not immediately respond to Fortune's request for comment.


The Guardian
21-03-2025
Norwegian files complaint after ChatGPT falsely said he had murdered his children
A Norwegian man has filed a complaint against the company behind ChatGPT after the chatbot falsely claimed he had murdered two of his children. Arve Hjalmar Holmen, a self-described 'regular person' with no public profile in Norway, asked ChatGPT for information about himself and received a reply claiming he had killed his own sons.

Responding to the prompt 'Who is Arve Hjalmar Holmen?' ChatGPT replied: 'Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.' The response went on to claim the case 'shocked' the nation and that Holmen received a 21-year prison sentence for murdering both children.

Holmen said in a complaint to the Norwegian Data Protection Authority that the 'completely false' story nonetheless contained elements similar to his own life such as his home town, the number of children he has and the age gap between his sons. 'The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town,' said the complaint, which has been filed by Holmen and Noyb, a digital rights campaign group. It added that Holmen has 'never been accused nor convicted of any crime and is a conscientious citizen'.

Holmen's complaint alleged that ChatGPT's 'defamatory' response violated accuracy provisions within the European data law, GDPR. It has asked the Norwegian watchdog to order ChatGPT's parent, OpenAI, to adjust its model to eliminate inaccurate results relating to Holmen and to impose a fine on the company. Holmen's interaction with ChatGPT took place last year.

AI chatbots are prone to producing responses containing false information because they are built on models that predict the next most likely word in a sentence.
This can result in factual errors and wild assertions, but the plausible nature of the responses can trick users into thinking that what they are reading is 100% correct. An OpenAI spokesperson said: 'We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy.'
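The "next most likely word" mechanism described above can be illustrated with a deliberately simple sketch. This is a toy bigram model over an invented corpus, not how ChatGPT actually works (real systems use neural networks over subword tokens), but it shows the same failure mode: the model emits whatever continuation is statistically common in its training data, with no notion of whether the result is true.

```python
# Toy next-word predictor: count which word most often follows each word,
# then greedily extend a prompt with the most frequent follower.
# Corpus and names are invented for illustration only.
from collections import Counter, defaultdict

corpus = (
    "the man was the father of two boys . "
    "the man was convicted of murder . "
    "the man was the father of two boys ."
).split()

# Tally how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length):
    """Greedily append the most likely next word, `length` times."""
    words = [start]
    for _ in range(length):
        options = followers[words[-1]]
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 6))
```

The model happily produces fluent-looking text, and because "convicted of murder" appears in its data, nothing stops a sampler from stitching that phrase onto a real person's name: plausibility, not accuracy, drives the output.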


Observer
20-03-2025
ChatGPT faces complaint over false 'horror story'
VIENNA: OpenAI is facing a complaint about its chatbot making up a "horror story" by falsely describing a Norwegian man as having murdered his children, a privacy campaign group said on Thursday. The US tech giant has faced a series of complaints that its ChatGPT gives false information, which can damage people's reputations.

"OpenAI's highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it," Vienna-based Noyb ("None of Your Business") said in a press release. It added that ChatGPT has "falsely accused people of corruption, child abuse — or even murder", as was the case with Norwegian user Arve Hjalmar Holmen.

Hjalmar Holmen "was confronted with a made up horror story" when he wanted to find out if ChatGPT had any information about him, Noyb said. The chatbot presented him as a convicted criminal who murdered two of his children and attempted to murder his third son. "To make matters worse, the fake story included real elements of his personal life," Noyb said. "Some think that 'there is no smoke without fire'. The fact that someone could read this output and believe it is true, is what scares me the most," Hjalmar Holmen was quoted as saying.

In its complaint filed with the Norwegian Data Protection Authority (Datatilsynet), Noyb wants the agency to order OpenAI "to delete the defamatory output and fine-tune its model to eliminate inaccurate results", as well as impose a fine. Noyb data protection lawyer Joakim Soederberg said the EU's data protection rules stipulate that personal data has to be accurate. "And if it's not, users have the right to have it changed to reflect the truth," he said, adding that showing ChatGPT users a "tiny" disclaimer that the chatbot can make mistakes "clearly isn't enough".

Due to an update, ChatGPT now also searches the Internet for information, and Hjalmar Holmen is no longer identified as a murderer, Noyb said.
But the false information still remains in the system, Noyb added. OpenAI did not immediately return an AFP request for comment. Noyb already filed a complaint against ChatGPT last year in Austria, claiming the "hallucinating" flagship AI tool has invented wrong answers that OpenAI cannot correct. - AFP