Woman files for divorce after ChatGPT ‘exposes' husband's affair through coffee cup
In a digital landscape increasingly shared with artificial intelligence, one woman's reliance on technology for personal matters has sparked a wider debate about trust, interpretation, and the consequences of our digital interactions.
Image: Feyza Daştan/Pexels
Artificial Intelligence (AI) is seamlessly integrated into our lives, whether it's managing work tasks, controlling smart homes, or offering advice on parenting, fitness, and even mental health.
Platforms like ChatGPT have become trusted companions for many, handling everything from quick research queries to complex emotional dilemmas.
However, what happens when we place too much trust in AI?
One woman in Greece found herself at the centre of a media storm when she used ChatGPT in an unconventional way, asking it to interpret the coffee grounds at the bottom of her husband's cup.
Known as tasseography, this ancient fortune-telling practice involves interpreting patterns in coffee grounds, tea leaves, or wine sediments to predict the future.
What followed was a dramatic chain of events that led to her filing for divorce, allegedly based on ChatGPT's "reading."
When fortune-telling meets AI
According to reports in the Greek City Times, the unnamed woman, a mother of two married for 12 years, uploaded a photograph of her husband's coffee cup to ChatGPT, asking it to analyse the patterns.
The AI's response was shocking: it suggested that her husband was considering starting an affair with a woman whose name began with the letter "E".
It went even further, claiming that the affair had already begun and that the "other woman" had plans to destroy their marriage.
For the husband, this was just another episode in his wife's long-standing fascination with the supernatural.
Image: Lucas Pezeta/Pexels
He claimed on the Greek morning show To Proino that she had previously sought guidance from astrologers, only to later realise their predictions were baseless.
But this time, her belief in ChatGPT's "divine" analysis led to irreversible damage.
Despite his attempts to laugh off the situation, his wife took the accusations seriously.
Within days, she asked him to move out, informed their children of a pending divorce, and even involved a lawyer.
The husband, who denies any wrongdoing, is now contesting her claims, arguing that they are based on fictional interpretations by an AI and have no legal merit.
This bizarre story strikes at the heart of a growing concern: our increasing reliance on AI and its impact on how we perceive reality.
While tools like ChatGPT are designed to generate plausible responses based on the data they've been trained on, they don't possess true insight, intuition, or the ability to divine the future.
The AI chatbot, for instance, wasn't created to interpret coffee grounds or predict human behaviour.
It's a language model that generates responses based on probabilities, influenced by the prompt it receives.
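The mechanism is worth making concrete: at each step, the model assigns a score to every possible next token and then samples from the resulting probability distribution. The sketch below illustrates that sampling idea in a few lines of Python; the vocabulary and scores are invented purely for illustration and bear no relation to ChatGPT's actual internals.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng=None):
    # Pick the next token at random, weighted by probability.
    # The model never "knows" anything; it only weighs likelihoods.
    rng = rng or random.Random()
    probs = softmax(logits)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy vocabulary and scores a model might assign after some prompt.
vocab = ["affair", "pattern", "coffee", "future"]
logits = [1.2, 2.5, 0.3, 1.8]
print(sample_next_token(vocab, logits))
```

Because the output is sampled, the same prompt can yield different answers on different runs, which is one more reason a "reading" of coffee grounds is plausible-sounding text, not insight.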
Yet, incidents like this highlight how easily people can misinterpret AI's capabilities, especially when they lack a clear understanding of its limitations.
On Reddit, reactions to this story were a mix of humour and concern.
Some joked about AI taking over psychic jobs, while others expressed deeper fears about how tools like ChatGPT are blurring the line between reality and fiction.
One commenter aptly noted, "We're going to see a wave of people whose ability to comprehend reality has been utterly annihilated by LLM tools."
As AI continues to evolve, users must understand what these tools can and cannot do.
ChatGPT, for example, can provide thoughtful responses, summarise information, and even offer creative input.
But it is not omniscient, nor can it accurately predict human behaviour or events.
Moreover, this story underscores the psychological vulnerability that some people may have when interacting with AI.
For individuals already inclined to believe in fortune-telling or the supernatural, AI might appear as a modern, more credible alternative.
This misplaced trust can lead to significant emotional and relational consequences, as seen in this case.
As we continue to integrate AI into our daily lives, it's essential to remember that these tools are not infallible.
They are here to assist us, not replace critical thinking, human intuition, or professional expertise.
And perhaps most importantly, they are not fortune-tellers, no matter how convincing their responses may seem.
