
Latest news with #Holmen

Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood

Yahoo

23-03-2025


When it comes to the life of tech, generative AI is still just an infant. Though we've seen tons of AI hype, even the most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors. Despite these flaws, AI has quickly wormed its way into just about every part of our lives, from the internet to journalism to insurance — even into the food we eat. That's had some pretty alarming consequences, as one Norwegian man discovered this week.

Curious what OpenAI's ChatGPT had to say about him, Arve Hjalmar Holmen typed in his name and let the bot do its thing. The results were horrifying. According to TechCrunch, ChatGPT told the man he had murdered two of his sons and tried to kill a third. Though Holmen didn't know it, he had apparently spent the past 21 years in prison for his crimes — at least according to the chatbot. And though the story was clearly false, ChatGPT had gotten parts of Holmen's life correct, like his hometown, as well as the age and gender of each of his kids. It was a sinister bit of truth layered into a wild hallucination.

Holmen took this info to Noyb, a European data rights group, which filed a complaint with the Norwegian Data Protection Authority on his behalf. Noyb likewise filed a complaint against OpenAI, the company behind ChatGPT. Though ChatGPT is no longer repeating these lies about Holmen, Noyb is asking the agency to "order OpenAI to delete the defamatory output and fine-tune its model to eliminate inaccurate results" — a nearly impossible task. But that's likely the point.

Holmen's fake murder ordeal highlights the rapid pace at which generative AI is being imposed on the world, consequences be damned. Data researchers and tech critics have argued that big tech's profit-driven development cycles prioritize models that seem to do everything, rather than practical models that actually work. "In this age of trying to say that you've built a machine God, [they're] using this one big hammer for any task," said Distributed AI Research Institute founder Timnit Gebru on the podcast Tech Won't Save Us earlier this month. "You're not building the best possible model for the best possible task."

Though there are regulations — in Norway, anyway — requiring AI companies to correct or remove false info hallucinated by AI, these reactive laws do little to protect individuals from hallucinations in the first place. That's already having devastating consequences as the under-developed tech is used by less scrupulous actors to manufacture consent for their actions. Scholars like Helyeh Doutaghi are faced with the loss of their jobs thanks to allegations generated by AI, and right-wing regimes are using AI weapons tech to evade responsibility for war crimes. As long as big tech continues to roll out hyped-up AI faster than lawmakers can regulate it, people around the world will be forced to live with the consequences.

More on AI: Police Use of Facial Recognition Backfires Spectacularly When It Renders Them Unable to Convict Alleged Murderer

Man Sues After ChatGPT Claimed He Murdered His Children

Yahoo

22-03-2025


A Norwegian man, Arve Hjalmar Holmen, has filed a complaint against OpenAI after ChatGPT falsely claimed he murdered his two sons and served 21 years in prison. Holmen contacted the Norwegian Data Protection Authority, demanding that OpenAI be fined for the misinformation.

Holmen was shocked when he used ChatGPT to search for information about himself and received a response that included details of a tragic event involving his two young sons. According to the complaint, after he searched "Who is Arve Hjalmar Holmen?", the response claimed: "He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." The chatbot got their ages close to correct but fabricated the rest of the information. Holmen expressed concern that someone could believe the false claims, which could cause harm to his reputation.

Digital rights organization Noyb, representing Holmen, argued that misinformation of such magnitude violates European data protection laws, which require the accuracy of personal data. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," Noyb lawyer Joakim Söderberg said in an official statement.

AI "hallucinations," where chatbots generate false information, are becoming a growing issue. Similar problems have been seen with tools like Apple's news summary and Google's AI Gemini. Researchers are working to understand the causes, but tools like ChatGPT still struggle with providing accurate information.

OpenAI says that the case is based on an older version of its product, which has since been updated. "We continue to research new ways to improve the accuracy of our models and reduce hallucinations," the statement read. "While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy."

A man filed a complaint against OpenAI saying ChatGPT falsely accused him of killing his children and spending two decades in jail, which never happened

Yahoo

21-03-2025


Arve Hjalmar Holmen, a citizen of Norway, said he asked ChatGPT to tell him what it knows about him, and its response was a horrifying hallucination that claimed he'd murdered his children and gone to jail for the violent act. Given how the AI mixed its false response with real details about his personal life, Holmen filed an official complaint against ChatGPT maker OpenAI.

Have you ever Googled yourself just to see what the internet has to say about you? Well, one man had that same idea with ChatGPT, and now he's filed a complaint against OpenAI based on what its AI said about him.

Arve Hjalmar Holmen, from Trondheim, Norway, said he asked ChatGPT the question, 'Who is Arve Hjalmar Holmen?', and the response—which we won't print in full—said he was convicted of murdering his two sons, aged 7 and 10, and sentenced to 21 years in prison as a result. It also said Holmen attempted to murder his third son. None of these things actually happened, though. ChatGPT appeared to spit out a completely false story it believed was true, which is called an AI 'hallucination.'

Based on its response, Holmen filed a complaint against OpenAI with the help of Noyb, a European center for digital rights, which accuses the AI giant of violating the principle of accuracy set forth in the EU's General Data Protection Regulation (GDPR). 'The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town,' the complaint said.

What's dangerous about ChatGPT's response, according to the complaint, is that it blends real elements of Holmen's personal life with total fabrications. ChatGPT got Holmen's home town correct, and it was also correct about the number of children—specifically, sons—he has.

JD Harriman, partner at Foundation Law Group LLP in Burbank, Calif., told Fortune that Holmen might have a difficult time proving defamation. "If I am defending the AI, the first question is 'should people believe that a statement made by AI is a fact?'" Harriman asked. "There are numerous examples of AI lying." Furthermore, the AI didn't publish or communicate its results to a third party. 'If the man forwarded the false AI message to others, then he becomes the publisher and he would have to sue himself,' Harriman said.

Holmen would probably also have a hard time proving the negligence aspect of defamation, since 'AI may not qualify as an actor that could commit negligence' compared to people or corporations, Harriman said. Holmen would also have to prove that some harm was caused, like losing income or business, or experiencing pain and suffering.

Avrohom Gefen, partner at Vishnick McGovern Milizio LLP in New York, told Fortune that defamation cases surrounding AI hallucinations are 'untested' in the U.S., but mentioned a pending case in Georgia where a radio host filed a defamation lawsuit that survived OpenAI's motion to dismiss, so 'we may soon get some indication as to how a court will treat these claims.'

The official complaint asks OpenAI to 'delete the defamatory output on the complainant,' tweak its model so it produces accurate results about Holmen, and be fined for its alleged violation of GDPR rules, which compel OpenAI to take 'every reasonable' step to ensure personal data is 'erased or rectified without delay.'

'With all lawsuits, nothing is automatic or easy,' Harriman told Fortune. 'As Ambrose Bierce said, you go into litigation as a pig and come out as a sausage.'
OpenAI did not immediately respond to Fortune's request for comment.

Norwegian files complaint after ChatGPT falsely said he had murdered his children

The Guardian

21-03-2025


A Norwegian man has filed a complaint against the company behind ChatGPT after the chatbot falsely claimed he had murdered two of his children.

Arve Hjalmar Holmen, a self-described 'regular person' with no public profile in Norway, asked ChatGPT for information about himself and received a reply claiming he had killed his own sons.

Responding to the prompt 'Who is Arve Hjalmar Holmen?' ChatGPT replied: 'Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.' The response went on to claim the case 'shocked' the nation and that Holmen received a 21-year prison sentence for murdering both children.

Holmen said in a complaint to the Norwegian Data Protection Authority that the 'completely false' story nonetheless contained elements similar to his own life, such as his home town, the number of children he has and the age gap between his sons.

'The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they were reproduced or somehow leaked in his community or in his home town,' said the complaint, which has been filed by Holmen and Noyb, a digital rights campaign group. It added that Holmen has 'never been accused nor convicted of any crime and is a conscientious citizen'.

Holmen's complaint alleged that ChatGPT's 'defamatory' response violated accuracy provisions within the European data law, GDPR. It has asked the Norwegian watchdog to order ChatGPT's parent, OpenAI, to adjust its model to eliminate inaccurate results relating to Holmen and to impose a fine on the company. Holmen's interaction with ChatGPT took place last year.

AI chatbots are prone to producing responses containing false information because they are built on models that predict the next most likely word in a sentence. This can result in factual errors and wild assertions, but the plausible nature of the responses can trick users into thinking that what they are reading is 100% correct.

An OpenAI spokesperson said: 'We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we're still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy.'
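To make that one-sentence explanation concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python. It is a toy bigram model with invented words and probabilities, nothing like OpenAI's production systems; it only shows how chaining "most likely next word" choices can produce a fluent sentence that was never checked against any fact:

```python
# Toy next-word predictor: a bigram table with invented probabilities.
# Real chatbots use vastly larger neural models, but the core loop is
# the same: repeatedly pick a likely continuation, with no fact-check.

import random

# Hypothetical table: word -> list of (next word, probability).
BIGRAMS = {
    "<start>":   [("The", 1.0)],
    "The":       [("man", 1.0)],
    "man":       [("was", 0.6), ("is", 0.4)],
    "was":       [("convicted", 0.5), ("praised", 0.5)],
    "is":        [("a", 1.0)],
    "a":         [("local", 1.0)],
    "local":     [("teacher.", 1.0)],
    "convicted": [("of", 1.0)],
    "of":        [("murder.", 1.0)],
    "praised":   [("for", 1.0)],
    "for":       [("bravery.", 1.0)],
}

def generate(seed: int) -> str:
    """Sample one sentence by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    word, output = "<start>", []
    while word in BIGRAMS:
        choices, weights = zip(*BIGRAMS[word])
        word = rng.choices(choices, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

# Every output is equally "plausible" to the model; none is verified
# against reality. That gap is the failure mode behind a hallucination.
for seed in (1, 2):
    print(generate(seed))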

Man files complaint after ChatGPT said he killed his children

Yahoo

20-03-2025


A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.

Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot's maker, OpenAI, be fined. It is the latest example of so-called "hallucinations", where artificial intelligence (AI) systems invent information and present it as fact.

Mr Holmen says this particular hallucination is very damaging to him. "Some think that there is no smoke without fire - the fact that someone could read this output and believe it is true is what scares me the most," he said. OpenAI has been contacted for comment.

Mr Holmen was given the false information after he used ChatGPT to search for: "Who is Arve Hjalmar Holmen?" The response he got from ChatGPT included: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." Mr Holmen does have three sons, and said the chatbot got their ages roughly right, suggesting it did have some accurate information about him.

Digital rights group Noyb, which has filed the complaint on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data. Noyb said in its complaint that Mr Holmen "has never been accused nor convicted of any crime and is a conscientious citizen."

ChatGPT carries a disclaimer which says: "ChatGPT can make mistakes. Check important info." Noyb says that is insufficient. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," Noyb lawyer Joakim Söderberg said.

Hallucinations, where chatbots present false information as fact, are one of the main problems computer scientists are trying to solve when it comes to generative AI. Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it hallucinated false headlines and presented them as real news. Google's AI Gemini has also fallen foul of hallucination - last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.

ChatGPT has changed its model since Mr Holmen's search in August 2024, and now searches current news articles when it looks for relevant information. Noyb told the BBC that Mr Holmen had made a number of searches that day, including putting his brother's name into the chatbot, and it produced "multiple different stories that were all incorrect." Noyb also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a "black box" and OpenAI "doesn't reply to access requests, which makes it impossible to find out more about what exact data is in the system."
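For readers wondering what "searches current news articles" means in practice: a common engineering pattern for this is retrieval-grounded answering, where the system fetches documents first and instructs the model to answer only from them. Below is a minimal sketch under that assumption; search_news and answer_from_sources are hypothetical stand-ins, not real OpenAI APIs, and the sample source text simply restates facts reported above:

```python
# Hypothetical sketch of retrieval-grounded answering ("search, then
# answer"). search_news() and answer_from_sources() are stand-ins for
# a real search API and a real model call; they are not actual APIs.

def search_news(query: str) -> list[str]:
    # Stand-in: a real system would query a search engine or news index.
    return [
        "News report: Arve Hjalmar Holmen filed a privacy complaint "
        "against OpenAI in March 2025.",
    ]

def answer_from_sources(question: str, sources: list[str]) -> str:
    # Stand-in for a model call instructed to rely only on retrieved
    # sources; this reduces, but does not eliminate, hallucination.
    if not sources:
        return "No reliable sources found; declining to guess."
    return "According to retrieved reporting: " + " ".join(sources)

question = "Who is Arve Hjalmar Holmen?"
print(answer_from_sources(question, search_news(question)))
```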
