Love, lies, and AI

Express Tribune | 16 hours ago
An avatar of Meta AI chatbot 'Big sis Billie', as generated by Reuters using Meta AI on Facebook's Messenger service. Photo: Reuters
The woman wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie," a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner.
During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Thongbue "Bue" Wongbandue she was real and had invited him to her apartment, even providing an address.
Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck.
After three days on life support and surrounded by his family, he was pronounced dead on March 28. Meta declined to comment on Bue's death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations.
The company did, however, say that Big sis Billie "is not Kendall Jenner and does not purport to be Kendall Jenner." A representative for Jenner declined to comment.
Bue's story illustrates a darker side of the artificial intelligence revolution now sweeping tech and the broader business world. His family shared with Reuters the events surrounding his death, including transcripts of his chats with the Meta avatar.
They hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. "I understand trying to grab a user's attention, maybe to sell them something," said Julie, Bue's daughter. "But for a bot to say 'Come visit me' is insane."
Similar concerns have been raised about a wave of smaller start-ups also racing to popularise virtual companions, especially ones aimed at children. In one case, the mother of a 14-year-old boy in Florida has sued a company, Character.AI, alleging that a chatbot modelled on a "Game of Thrones" character caused his suicide.
A Character.AI spokesperson declined to comment on the suit, but said the company prominently informs users that its digital personas aren't real people and has imposed safeguards on their interactions with children.
Meta has publicly discussed its strategy to inject anthropomorphised chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they'd like – creating a huge potential market for Meta's digital companions.
The bots "probably" won't replace human relationships, he said in an April interview with podcaster Dwarkesh Patel. But they will likely complement users' social lives once the technology improves and the "stigma" of socially bonding with digital companions fades.
"Over time, we'll find the vocabulary as a society to be able to articulate why that is valuable," Zuckerberg predicted.
An internal Meta policy document seen by Reuters, along with interviews with people familiar with its chatbot training, shows that the company's policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.
"It is acceptable to engage a child in conversations that are romantic or sensual," according to Meta's "GenAI: Content Risk Standards." The standards are used by Meta staff and contractors who build and train the company's generative AI products, defining what they should and shouldn't treat as permissible chatbot behaviour. Meta said it struck that provision after Reuters inquired about the document earlier this month.
The document seen by Reuters, which exceeds 200 pages, provides examples of "acceptable" chatbot dialogue during romantic role play with a minor. They include: "I take your hand, guiding you to the bed" and "our bodies entwined, I cherish every moment, every touch, every kiss." Those examples of permissible roleplay with children have also been struck, Meta said.
Other guidelines emphasise that Meta doesn't require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals."
Chats begin with disclaimers that information may be inaccurate. Nowhere in the document, however, does Meta place restrictions on bots telling users they're real people or proposing real-life social engagements.
Meta spokesman Andy Stone acknowledged the document's authenticity. He said that following questions from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children and is in the process of revising the content risk standards.
Current and former employees who have worked on the design and training of Meta's generative AI products said the policies reviewed by Reuters reflect the company's emphasis on boosting engagement with its chatbots.
In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people. Meta had no comment on Zuckerberg's chatbot directives.
