Groom Accuses Sister of 'Ruining' His Wedding After She Told Him a Secret About Their Dad
A teenage girl is questioning whether she was right to share some sensitive information with her brother during his wedding reception
According to a post shared to Reddit's AITAH (Am I The A------?) forum, a 16-year-old girl told her brother at his wedding that their dad had used ChatGPT to generate his speech.
"I didn't mean to cause drama," she wrote in the post, sharing that her family members told her that she had ruined the wedding with her comment to her brother.

A teenage girl is questioning whether she was right to share some sensitive information about her brother's wedding after her whole family became angry with her.
In a post shared to Reddit's r/AITAH (Am I The A------?) forum, a 16-year-old girl opened up about attending her 25-year-old brother's wedding the weekend prior. According to the teen, "It was a beautiful ceremony, until it wasn't."
"My brother and our dad have always had a tense relationship," she shared. "They're civil, but there's a lot of unspoken resentment there. My brother always felt like Dad wasn't emotionally available growing up, always worked too much, missed birthdays, etc. Dad, on the other hand, insists he 'did his best' and doesn't believe in 'dredging up the past.' You get the picture."
According to the Redditor, her dad knocked on her bedroom door just two nights before the wedding to ask her to help him write a speech for the big day — but he wanted to AI-generate it.
"He said he was 'too tired' to be sentimental, but knew he had to say something," she shared. "So he asked me to help him figure out how to use ChatGPT."
"I showed him how to log on, typed in a prompt and that was it. He copied it, edited maybe two words, and saved it in his Notes app," the teen continued. "I asked him if he really wanted to say something he didn't write himself and he just shrugged and said, 'It says it better than I ever could.' I didn't say anything else at the time, but I figured that he wouldn't ACTUALLY do it."
According to the Reddit post, the wedding day turned out "beautiful," and people were moved by her father's speech.
"People were crying," she shared. "Even my brother looked really emotional like he was actually touched. He hugged Dad afterwards."
"Later that evening, after a few drinks and while things were winding down (of course I wasn't drinking) my brother found me and said, 'I didn't expect Dad to say all that. Do you think he meant it?' " the Redditor continued. "I hesitated of course but I ended up feeling bad so I figured I'd just tell the truth. I ended up saying something like, 'I mean he used ChatGPT to write it. So maybe?' "
She shared that everything quickly blew up after she told her brother the truth, adding that she "didn't say it in a mean way" and that she "wasn't trying to ruin anything."
"I genuinely thought he had a right to know. But I could see the exact moment his face changed. He went quiet and walked away. I didn't see him for the rest of the night," she added.
The following day, the teen said, she woke up to nearly a dozen text messages from family members — including her mother, who is divorced from her dad — all saying that she had "crushed" the nice moment between her brother and father and that she "could've kept [her] mouth shut."
"I didn't mean to cause drama," she finished the heartfelt post, before asking the Reddit community if she truly was in the wrong. "I just didn't like seeing my brother connect with something that wasn't even real. He's now barely speaking to Dad again, and to me either."
Replies to the post were mixed, with many commenters arguing that while the teen's dad was wrong to fake such a sentimental moment on a major day in his son's life, it was still not a good idea to share that information on the wedding day.
"Sure, your dad took the easy way out by having AI write the speech, but he did choose to use those words and who is to say that they did not reflect his true feelings and that is why he chose to use them," one reply read (although the original poster later clarified that her dad only "skimmed" the speech after generating it).
"Bottom line is, he read those words and I'd give him the benefit of the doubt that they were sincere. So I'd say [you're the a------] because I think you outed him because of your own resentment, not a deep desire for the truth," the reply continued.
"Massive YTA," another commenter added. "I know you still have a lot of growing up to do, but you really need to work on how to treat others."
Others agreed that while the original poster [OP] probably should not have shared this information at the wedding reception, she was simply a teenager whose family put her in a "lose-lose situation."
"You had a choice between lying to your brother in order to save the night, which would have led him to probably resent you for the lie later on, if you ever decided to set matters straight, or if the truth came out (which I believe it would have, given that your brother was genuinely surprised by, and curious about, the speech) - or you had the option of telling him the truth, as you did," another reply argued.
"While it sounds like you could have handled the matter a little bit more sensitively, it feels like your family are shooting the messenger in channeling their anger towards you, rather than your dad, who's ultimately responsible for having played his son's emotions by reading out an AI-generated speech," the reply continued. "... I don't think you're an a------, OP - I think you're 16 years old, and need to learn to handle things with a little bit more sensitivity, but that you were put in a really awkward position, and picked the lesser of two evils."
Read the original article on People
