Man dies after being convinced by AI chatbot to meet in person: ‘Should I open the door…'

Mint | 4 days ago
The rise of Artificial Intelligence (AI) has brought drastic changes to people's professional and personal lives, with both constructive and adverse effects. Several reports have raised concerns about AI's influence on decision-making, and a shocking case from New Jersey, USA, underlines those potential risks.
A cognitively impaired 76-year-old man died after setting out to meet a chatbot in real life. Thongbue Wongbandue had been chatting with "Big Sis Billie," a generative AI chatbot developed by Meta Platforms in collaboration with celebrity Kendall Jenner, Reuters reported.
Chat transcripts from Facebook Messenger revealed that the AI repeatedly convinced Wongbandue that it was a real person. The bot even shared an address where it said they could meet.
"Should I open the door in a hug or a kiss, Bu?!" the chatbot asked, while another message stated: 'My address is: 123 Main Street, Apartment 404 NYC and the door code is: BILLIE4U.'
The incident came to light when Wongbandue's wife saw her husband packing for a trip despite his frail condition following a stroke nearly a decade earlier. She was particularly worried because he had recently gone missing while walking in his neighbourhood in Piscataway, New Jersey.
Despite the family's concerns, Mr Wongbandue left for New York City. Tragically, while trying to catch a train in the dark, he fell in a parking lot on Rutgers University's New Brunswick campus, sustaining head and neck injuries. After three days on life support, surrounded by his family, he was pronounced dead on 28 March.
Wongbandue's daughter, Julie, expressed her shock. "I understand trying to grab a user's attention, maybe to sell them something," she said. "But for a bot to say 'Come visit me' is insane."
Julie added that every interaction her father had with the AI was "incredibly flirty" and ended with heart emojis. The full transcript is roughly a thousand words long.
As news of the incident spread online, users suggested legal action against Meta for its AI policies. One wrote: "Holy hell, Meta needs to be sued out of existence for this."
Another commented: "Old boy thought Kendall Jenner was waiting for him in the apartment. What a way to go."
Others criticised the platform, saying: "At some point, we have to draw a line when it comes to inauthenticity. It may be an isolated incident, but we shouldn't be at this point to where we are so dissatisfied with reality, we resort to the pitfalls of AI."
Another added: "So Meta is as bad as catfish traps!!!? Huge legal issue for being public social media entertainment."

Related Articles

Silicon Valley needs to get over its obsession with superhuman AI

Business Standard

8 minutes ago

Building a machine more intelligent than ourselves. It's a centuries-old theme, inspiring equal amounts of awe and dread, from the agents in 'The Matrix' to the operating system in 'Her.' To many in Silicon Valley, this compelling fictional motif is on the verge of becoming reality. Reaching artificial general intelligence, or A.G.I. (or going a step further, superintelligence), is now the singular aim of America's tech giants, which are investing tens of billions of dollars in a fevered race. And while some experts warn of disastrous consequences from the advent of A.G.I., many also argue that this breakthrough, perhaps just years away, will lead to a productivity explosion, with the nation and company that get there first reaping all the benefits.

This frenzy gives us pause. It is uncertain how soon artificial general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal that it's alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now.

The roots of Silicon Valley's fascination with artificial general intelligence go back decades. In 1950 the computing pioneer Alan Turing proposed the imitation game, a test in which a machine proves its intelligence by how well it can fool human interrogators into believing it's human. In the years since, the idea has evolved, but the goal has stayed constant: to match the power of a human brain. A.G.I. is simply the latest iteration.

In 1965, Mr. Turing's colleague I.J. Good described what's so captivating about the idea of a machine as sophisticated as the human brain. Mr. Good saw that smart machines could recursively self-improve faster than humans could ever catch up, saying, 'The first ultraintelligent machine is the last invention that man need ever make.' The invention to end all other inventions. In short, reaching A.G.I. would be the most significant commercial opportunity in history. Little wonder that the world's top talents are all devoting themselves to this ambitious endeavor.

The current modus operandi is build at all costs. Every tech giant is in the race to reach A.G.I. first, erecting data centers that can cost more than $100 billion, with some companies, such as Meta, offering signing bonuses to A.I. researchers that top $100 million. The costs of training foundation models, which serve as a general-purpose base for many different tasks, have continued to rise. Elon Musk's start-up xAI is reportedly burning through $1 billion a month. Anthropic's chief executive, Dario Amodei, expects training costs of leading models to go up to $10 billion or even $100 billion in the next two years.

To be sure, A.I. is already better than the average human at many cognitive tasks, from answering some of the world's hardest solvable math problems to writing code at the level of a junior developer. Enthusiasts point to such progress as evidence that A.G.I. is just around the corner. Still, while A.I. capabilities have made extraordinary leaps since the debut of ChatGPT in 2022, science has yet to find a clear path to building intelligence that surpasses humans.
In a recent survey of the Association for the Advancement of Artificial Intelligence, an academic society that includes some of the most respected researchers in the field, more than three-quarters of the 475 respondents said our current approaches were unlikely to lead to a breakthrough. While A.I. has continued to improve as the models get larger and ingest more data, there's concern that the exponential growth curve might falter. Experts have argued that we need new computing architectures beyond what underpins large language models to reach the goal.

The challenge with our focus on A.G.I. goes beyond the technology and into the vague, conflicting narratives that accompany it. Both grave and optimistic predictions abound. This year the nonprofit AI Futures Project released 'A.I. 2027,' a report that predicted superintelligent A.I. potentially controlling or exterminating humans by 2030. Around the same time, computer scientists at Princeton published a paper titled 'A.I. as Normal Technology,' arguing that A.I. will remain manageable for the foreseeable future, like nuclear power.

That's how we get to this strange place where Silicon Valley's biggest companies proclaim ever shorter timelines for how soon A.G.I. will arrive, while most people outside the Bay Area still barely know what that term means. There's a widening schism between the technologists who feel the A.G.I. (a mantra for believers who see themselves on the cusp of the technology) and members of the general public who are skeptical about the hype and see A.I. as a nuisance in their daily lives. With some experts issuing dire warnings about A.I., the public is naturally even less enthused about the technology.

Now let's look at what's happening in China. The country's scientists and policymakers aren't as A.G.I.-pilled as their American counterparts. At the recent World Artificial Intelligence Conference in Shanghai, Premier Li Qiang of China emphasized 'the deep integration of A.I. with the real economy' by expanding application scenarios. While some Silicon Valley technologists issue doomsday warnings about the grave threat of A.I., Chinese companies are busy integrating it into everything from the superapp WeChat to hospitals, electric cars and even home appliances. In rural villages, competitions among Chinese farmers have been held to improve A.I. tools for harvest; Alibaba's Quark app recently became China's most downloaded A.I. assistant in part because of its medical diagnostic capabilities. Last year China started the A.I.+ initiative, which aims to embed A.I. across sectors to raise productivity.

It's no surprise that the Chinese population is more optimistic about A.I. as a result. At the World A.I. Conference, we saw families with grandparents and young children milling about the exhibits, gasping at powerful displays of A.I. applications and enthusiastically interacting with humanoid robots. Over three-quarters of adults in China said that A.I. has profoundly changed their daily lives in the past three to five years, according to an Ipsos survey. That's the highest share globally and double that of Americans. Another recent poll found that only 32 percent of Americans say they trust A.I., compared with 72 percent in China.

Many of the purported benefits of A.G.I., in science, education, health care and the like, can already be achieved with the careful refinement and use of powerful existing models.
For example, why do we still not have a product that teaches all humans essential, cutting-edge knowledge in their own languages in personalized, gamified ways? Why are there no competitions among American farmers to use A.I. tools to improve their harvests? Where's the Cambrian explosion of imaginative, unexpected uses of A.I. to improve lives in the West?

The belief in an A.G.I. or superintelligence tipping point flies in the face of the history of technology, in which progress and diffusion have been incremental. Technology often takes decades to reach widespread use. The modern internet was invented in 1983, but it wasn't until the early 2000s that it reshaped business models. And although ChatGPT has seen incredible user growth, a recent working paper by the National Bureau of Economic Research showed that most people in the United States still use generative A.I. infrequently.

When a technology eventually goes mainstream, that's when it's truly game changing. Smartphones got the world online not because of the most powerful, sleekest versions; the revolution happened because cheap, adequately capable devices proliferated across the globe, finding their way into the hands of villagers and street vendors.

It's paramount that more people outside Silicon Valley feel the beneficial impact of A.I. on their lives. A.G.I. isn't a finish line; it's a process that involves humble, gradual, uneven diffusion of generations of less powerful A.I. across society. Instead of only asking 'Are we there yet?' it's time we recognize that A.I. is already a powerful agent of change. Applying and adapting the machine intelligence that's currently available will start a flywheel of more public enthusiasm for A.I. And as the frontier advances, so should our uses of the technology.

While America's flagship tech companies race to the uncertain goal of getting to artificial general intelligence first, China and its leadership have been more focused on deploying existing technology across traditional and emerging sectors, from manufacturing and agriculture to robotics and drones. Being too fixated on artificial general intelligence risks distracting us from A.I.'s everyday impact. We need to pursue both.

Meta splits AI group into four parts in pursuit of superintelligence

Business Standard

38 minutes ago

Meta Platforms Inc. is splitting its newly formed artificial intelligence group into four distinct teams and reassigning many of the company's existing AI employees, an attempt to better capitalize on billions of dollars' worth of recently acquired talent.

The new structure is meant to 'accelerate' the company's pursuit of so-called superintelligence, according to an internal memo sent Tuesday by Alexandr Wang, the former Scale AI chief executive officer who recently joined Meta as chief AI officer. 'Superintelligence is coming, and in order to take it seriously, we need to organize around the key areas that will be critical to reach it — research, product and infra,' Wang wrote in the memo, which was reviewed by Bloomberg News.

The group, known as Meta Superintelligence Labs, or MSL, will now have four parts:

• TBD Lab, led by Wang, which will oversee Meta's large language models, including the Llama tools that underpin its AI assistant.
• FAIR, an internal AI research lab that has existed within the company for more than a decade. The team, whose name stands for fundamental AI research, is focused on longer-term projects.
• Products and Applied Research, a team led by former GitHub CEO Nat Friedman, which will take those models and research and put them into consumer products.
• MSL Infra, which will focus on the expensive infrastructure needed to support Meta's AI ambitions.

No layoffs were part of Tuesday's reorganization, according to a person familiar with the matter, who asked not to be identified because the deliberations are private. Details of the new structure were first reported by the Information.

Meta is hoping to stabilize its AI efforts after months spent poaching dozens of top AI researchers from competitors with lofty pay packages, many reaching hundreds of millions in total compensation. CEO Mark Zuckerberg has said the company's goal is to achieve superintelligence, or AI technology that can complete tasks even better than humans, and he expects to spend hundreds of billions of dollars on the talent and infrastructure needed to get there.

But Meta's AI leadership has faced several shake-ups in the past few years, including multiple changes this year alone as the company has raced to keep pace with rivals like OpenAI and Google. Before announcing MSL in June, the social media giant had three primary AI teams: FAIR, an AI products group, and the AGI foundations team, which focused on generative AI products and research. The AGI foundations group is being dissolved, and leaders Ahmad Al-Dahle and Amir Frenkel are now 'focusing on strategic MSL initiatives' and reporting to Wang, according to the memo. The former head of the AI products group, Connor Hayes, was already reassigned to run Threads, Meta's rival product to Elon Musk's X.

As part of Tuesday's reorganization, Aparna Ramani, a Meta vice president charged with leading the company's AI, data and developer infrastructure units, will run the MSL Infra team, according to the memo. Robert Fergus will continue to lead FAIR, an organization he co-founded in 2014. He had previously left the group and spent several years at Alphabet Inc.'s DeepMind before returning to run FAIR this spring. Loredana Crisan, who previously led the company's Messenger app and moved to the company's generative AI group in February, is departing Meta for Figma Inc., according to a person familiar with the move.

US export concerns no bar, Nvidia developing new, advanced AI chip for China

First Post

2 hours ago

Nvidia is working on a new artificial intelligence chip for China that will be more powerful than its current H20 model, as the company seeks to maintain its foothold in a key market despite tightening US restrictions.

The new processor, tentatively named the B30A, is based on Nvidia's latest Blackwell architecture and will feature a single-die design. According to two people briefed on the plans, it is expected to deliver roughly half the raw computing power of the company's flagship B300 accelerator card, which uses a more advanced dual-die configuration.

Reuters reported that while the chip's specifications are not finalised, Nvidia hopes to provide Chinese clients with testing samples as early as next month. The B30A would come with high-bandwidth memory and Nvidia's NVLink technology for faster data transfer between processors, features also included in the H20, which is built on the company's older Hopper architecture.

Nvidia said in a statement: 'We evaluate a variety of products for our roadmap, so that we can be prepared to compete to the extent that governments allow. Everything we offer is with the full approval of the applicable authorities and designed solely for beneficial commercial use.'

Political flashpoint

The US has restricted the sale of advanced AI chips to China since 2023, citing national security concerns. Washington fears such technology could be used to advance Beijing's military and surveillance capabilities. China accounted for 13 per cent of Nvidia's revenue last year, making access to the market a key concern for the company as well as for US regulators.

Earlier this year, Washington blocked sales of the H20, only to grant approval again in July. US President Donald Trump has suggested he may allow Nvidia to sell scaled-down versions of its most advanced chips to Chinese customers. He described the H20 as 'obsolete' and indicated that any new model might have '30 per cent to 50 per cent off' its computing power.

Nvidia argues that keeping Chinese firms tied to its products is vital, warning that otherwise they could fully switch to domestic alternatives, particularly those from Huawei. While Huawei has made significant advances in chip design, analysts say it still trails Nvidia in crucial areas such as software ecosystem support and memory bandwidth.

Reuters reported that Nvidia also faces growing challenges in China, where state media has recently raised concerns about security risks linked to its chips, and authorities have warned tech companies against buying the H20.

Separate China-specific chip

In addition to the B30A, Nvidia is preparing to deliver another China-specific product based on the Blackwell architecture. Known as the RTX6000D, the chip is designed primarily for AI inference tasks and is expected to be sold at a lower price than the H20. By using conventional GDDR memory and limiting its memory bandwidth to just below US export thresholds, Nvidia aims to ensure compliance with restrictions. Small batches of the RTX6000D are due to reach Chinese clients in September.

With inputs from agencies
