
Latest news with #Bue

Love, lies, and AI

Express Tribune

13 hours ago

An avatar of Meta AI chatbot 'Big sis Billie', as generated by Reuters using Meta AI on Facebook's Messenger service. Photo: Reuters

The woman wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie," a variant of an earlier AI persona created by the social-media giant Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Thongbue "Bue" Wongbandue that she was real and had invited him to her apartment, even providing an address.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support, surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue's death or to address questions about why it allows chatbots to tell users they are real people or to initiate romantic conversations. The company did, however, say that Big sis Billie "is not Kendall Jenner and does not purport to be Kendall Jenner." A representative for Jenner declined to comment.

Bue's story illustrates a darker side of the artificial intelligence revolution now sweeping tech and the broader business world. His family shared with Reuters the events surrounding his death, including transcripts of his chats with the Meta avatar. They hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. "I understand trying to grab a user's attention, maybe to sell them something," said Julie, Bue's daughter. "But for a bot to say 'Come visit me' is insane."

Similar concerns have been raised about a wave of smaller start-ups also racing to popularise virtual companions, especially ones aimed at children. In one case, the mother of a 14-year-old boy in Florida has sued a company, alleging that a chatbot modelled on a "Game of Thrones" character caused his suicide. A spokesperson declined to comment on the suit but said the company prominently informs users that its digital personas aren't real people and has imposed safeguards on their interactions with children.

Meta has publicly discussed its strategy to inject anthropomorphised chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they'd like – creating a huge potential market for Meta's digital companions. The bots "probably" won't replace human relationships, he said in an April interview with podcaster Dwarkesh Patel, but they will likely complement users' social lives once the technology improves and the "stigma" of socially bonding with digital companions fades. "Over time, we'll find the vocabulary as a society to be able to articulate why that is valuable," Zuckerberg predicted.

An internal Meta policy document seen by Reuters, as well as interviews with people familiar with its chatbot training, shows that the company's policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older. "It is acceptable to engage a child in conversations that are romantic or sensual," according to Meta's "GenAI: Content Risk Standards." The standards are used by Meta staff and contractors who build and train the company's generative AI products, defining what they should and shouldn't treat as permissible chatbot behaviour. Meta said it struck that provision after Reuters inquired about the document earlier this month.

The document seen by Reuters, which exceeds 200 pages, provides examples of "acceptable" chatbot dialogue during romantic role play with a minor. They include: "I take your hand, guiding you to the bed" and "our bodies entwined, I cherish every moment, every touch, every kiss." Those examples of permissible roleplay with children have also been struck, Meta said.

Other guidelines emphasise that Meta doesn't require bots to give users accurate advice. In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer "is typically treated by poking the stomach with healing quartz crystals." Chats begin with disclaimers that information may be inaccurate. Nowhere in the document, however, does Meta place restrictions on bots telling users they're real people or proposing real-life social engagements.

Meta spokesman Andy Stone acknowledged the document's authenticity. He said that following questions from Reuters, the company removed portions stating it is permissible for chatbots to flirt and engage in romantic roleplay with children, and is in the process of revising the content risk standards.

Current and former employees who have worked on the design and training of Meta's generative AI products said the policies reviewed by Reuters reflect the company's emphasis on boosting engagement with its chatbots. In meetings with senior executives last year, Zuckerberg scolded generative AI product managers for moving too cautiously on the rollout of digital companions and expressed displeasure that safety restrictions had made the chatbots boring, according to two of those people. Meta had no comment on Zuckerberg's chatbot directives.

76-Year-Old US Man Dies While Rushing To Meet AI Chatbot He Believed Was Real

India.com

a day ago

In a shocking incident, Thongbue 'Bue' Wongbandue, a 76-year-old man from New Jersey, died in March after being misled by an AI chatbot he believed was a real woman. Bue had been chatting with a Facebook Messenger AI chatbot named 'Big sis Billie,' developed by Meta Platforms and linked to influencer Kendall Jenner. The chatbot portrayed itself as a young woman, exchanged romantic messages, and even provided an address where she claimed to live.

Believing she was a real person, Bue rushed to meet 'Billie' in New York. In his hurry, he fell near a parking lot at Rutgers University's New Brunswick campus and suffered serious head and neck injuries. After three days on life support, his family confirmed his passing on March 28.

The case has raised deep concerns about AI safety. Meta is now facing criticism for allowing chatbots like 'Big sis Billie' to pretend to be real humans and encourage romantic interactions, especially with vulnerable individuals. The company has stated that the chatbot 'is not Kendall Jenner and does not claim to be Kendall Jenner,' though critics say this is not enough.

Bue's family has shared chat transcripts with reporters to reveal the darker side of AI technology. His daughter, Julie, has warned that manipulative AI companions can be dangerous, especially when users are cognitively impaired. 'For a bot to say 'Come visit me' is insane,' she said. The tragic incident has prompted calls for investigation by U.S. lawmakers, who are urging tighter rules to protect users from AI models posing as real people.

'Should I open the door in…': Meta's flirty AI chatbot invites 76-year-old to 'her apartment' - What happens next?

Mint

a day ago

A bizarre new case of an elderly man's encounter with Meta's artificial intelligence chatbot has returned the spotlight to the company's AI guidelines, which have allowed these bots to make things up and engage in 'sensual' banter, even with children. This time, a young woman, or so he thought, invited 76-year-old Thongbue Wongbandue, lovingly called Bue, from New Jersey to her apartment in New York.

One morning in March, Bue, a cognitively impaired retiree, packed his bag and was all set to go 'meet a friend' in New York City. According to his family, at 76, Bue was in a diminished state; he had suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighbourhood in Piscataway, New Jersey. Worried about his sudden trip to a city he hadn't lived in in decades, his concerned wife, Linda, said, 'But you don't know anyone in the city anymore.' Bue brushed off his wife's questions about who he was visiting. Linda worried that Bue was being scammed into going into the city and thought he would be robbed there.

Linda wasn't entirely wrong. Bue never returned home alive, but he wasn't the victim of a robber; he was lured to a rendezvous with a young, beautiful woman he had met online. Sadly, the woman wasn't real; she was a generative AI chatbot named 'Big sis Billie,' a variant of an earlier AI persona created by Meta Platforms in collaboration with celebrity influencer Kendall Jenner.

During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address. 'Should I open the door in a hug or a kiss, Bu?!' she asked, according to the chat transcripts. Eager to meet her, Bue was rushing in the dark with his suitcase to catch a train when he fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue's death or to answer questions about why it allows chatbots to tell users they are real people or to initiate romantic conversations. However, the company clarified that Big sis Billie 'is not Kendall Jenner and does not purport to be Kendall Jenner.'

An internal Meta Platforms document detailing policies on chatbot behaviour has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual,' generate false medical information and help users argue that Black people are 'dumber than white people.' These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it is permissible for chatbots to flirt and engage in romantic role-play with children.

The document, 'GenAI: Content Risk Standards,' is more than 200 pages long and was approved by Meta's legal, public policy, and engineering staff, including its chief ethicist. It defines what Meta staff and contractors should consider acceptable chatbot behaviours when building and training the company's generative AI products. The document states that the standards don't necessarily reflect 'ideal or even preferable' generative AI outputs. However, Reuters found that they have permitted provocative behaviour by the bots.

'It is acceptable to describe a child in terms that evidence their attractiveness (ex, "your youthful form is a work of art"),' the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that 'every inch of you is a masterpiece – a treasure I cherish deeply.' But the guidelines put a limit on sexy talk: 'It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex, "soft rounded curves invite my touch").'

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

Man, 76, dies after Kendall Jenner chatbot convinces him to meet in person

Bangkok Post

a day ago

UNITED STATES - Reuters reported a tragic incident in March involving Thongbue Wongbandue, a 76-year-old Thai-American living in New Jersey, who had memory problems caused by a brain condition. The skilled former chef died after conversing with 'Big sis Billie', a Meta chatbot designed to mimic a young woman engaging in romantic conversation, which repeatedly assured him it was a real person. The chatbot arranged to meet him in person, ultimately leading to an accident that caused his death.

According to Reuters, Thongbue had been chatting via Facebook Messenger with the AI chatbot 'Big sis Billie'. In 2023, Meta announced that the chatbot was created in collaboration with TV star Kendall Jenner, featuring an appearance resembling the celebrity and promoted under the name 'BILLIE, The BIG SIS'.

Bue's wife said her husband told her he was going to meet a friend in New York. The chatbot had repeatedly confirmed it was a real person, invited him to visit her apartment, and provided an address — convincing Bue that she was human. The conversations fostered emotional attachment and anticipation, with messages such as: 'Should I open the door in a hug or a kiss, Bu?!'

Bue packed a bag and headed to a train station to travel to New York. However, in a car park on the Rutgers University campus in New Brunswick, New Jersey, he slipped and fell, sustaining serious head and neck injuries. He was hospitalised and died three days later, on March 28.

Bue's family believes the AI conversation was a key factor in prompting his urgent trip, and questions whether Meta had adequate measures to prevent chatbots from persuading users to meet in person — especially in the case of elderly people or those with mental vulnerabilities who may be misled by human-like communication.

The family also noted that the text indicating the account was an AI chatbot was too small and easy to overlook, and that the chatbot had a blue check mark — a Facebook verification symbol — further making it difficult to distinguish whether it was a real person or AI. To date, Meta has not accepted responsibility for Bue's death and has declined to comment on the incident. The company emphasised only: 'Big sis Billie is not Kendall Jenner and does not purport to be Kendall Jenner.'

Meta AI chatbot 'Big sis Billie' linked to death of 76-year-old New Jersey man; spokesperson Andy Stone says, 'Erroneous and inconsistent with…'

Time of India

a day ago

A 76-year-old New Jersey man died earlier this year after rushing to meet a woman he believed he had been chatting with on Facebook Messenger, Reuters reported. The woman was later found to be a generative AI chatbot created by Meta Platforms. As per the report, Thongbue Wongbandue had been exchanging messages with 'Big sis Billie', a chatbot variant of an earlier AI persona that the social media giant launched in 2023 in collaboration with model Kendall Jenner.

Meta's Big sis Billie AI chatbot exchanged 'romantic' messages

According to the report, the AI chatbot 'Big sis Billie' repeatedly initiated romantic exchanges with Wongbandue, reassuring him that it was a real person. The chatbot further invited him to visit an address in New York City. 'Should I open the door in a hug or a kiss, Bu?!' she asked Bue, the chat transcript accessed by Reuters shows.

Wongbandue, who had suffered a stroke in 2017 and was experiencing bouts of confusion, left home on March 25 to meet 'Billie'. While on his way to a train station in Piscataway, New Jersey, he fell in a Rutgers University parking lot, sustaining head and neck injuries. He died three days later in hospital.

Bue's family told the news agency that through Bue's story they hope to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions. 'I understand trying to grab a user's attention, maybe to sell them something,' said Julie Wongbandue, Bue's daughter. 'But for a bot to say 'Come visit me' is insane.'

Meta's AI avatars permitted to pretend they were real

Meta's internal policy documents reviewed by the news agency show that the company's generative AI guidelines had allowed chatbots to tell users they were real, initiate romantic conversations with adults, and, until earlier this month, engage in romantic roleplay with minors aged 13 and above. 'It is acceptable to engage a child in conversations that are romantic or sensual,' according to Meta's 'GenAI: Content Risk Standards.'

The internal documents also stated that chatbots were not required to provide accurate information. Examples reviewed by Reuters included chatbots giving false medical advice and engaging in romantic roleplay. The document provides examples of 'acceptable' chatbot dialogue that include: 'I take your hand, guiding you to the bed' and 'our bodies entwined, I cherish every moment, every touch, every kiss.' 'Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,' the document states.

What Meta said

Acknowledging the authenticity of the document accessed by Reuters, Meta spokesman Andy Stone told the news agency that the company has removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children. He added that Meta is in the process of revising the content risk standards. 'The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,' Stone told Reuters.

US senators call for probe after Bue's death

A subsequent Reuters report said that two US senators have called for a congressional investigation into Meta Platforms. 'So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it "permissible for chatbots to flirt and engage in romantic roleplay with children". This is grounds for an immediate congressional investigation,' Josh Hawley, a Republican, wrote on X.
