
Latest news with #ProjectDecember

Ozzy Osbourne AI tribute sparks 'digital resurrection' debate

The National

25-07-2025


Fans of Black Sabbath singer Ozzy Osbourne have criticised musician Tom Morello after he shared an AI-generated image of the rock star, who died this week at the age of 76. Osbourne bade farewell to fans earlier this month with a Black Sabbath reunion show in the British city of Birmingham. His death led to tributes from fans and musicians, including Morello's post, which sparked anger among X users. The backlash over the stylised image – which included deceased rock stars Lemmy, Randy Rhoads and Ronnie James Dio – centred on what many saw as an exploitative and unsettling trend, with users questioning the ethics of sharing such visuals so soon after Osbourne's death.

It is the latest flashpoint in a growing debate: when does using AI to recreate someone's likeness cross the line from tribute to invasion of privacy? While the tools behind these hyper-realistic images are evolving rapidly, the ethical frameworks and legal protections have not yet caught up.

Deepfakes and grief in the digital age

Using AI to recreate the dead or the dying, sometimes referred to as "grief tech" or "digital resurrection", is becoming increasingly common, from fan-made tributes of celebrities to "griefbots" that simulate the voice or personality of a lost loved one. In one example of grief tech, Canadian Joshua Barbeau used Project December, a GPT-3-based chatbot service created by Jason Rohrer, in September 2020 to recreate conversations with his fiancée, who had died eight years earlier. The chatbot's responses were so convincing that she "said" things like: "Of course it is me. Who else could it be? I am the girl that you are madly in love with."

Mental health experts warn that such recreations can profoundly affect the grieving process.
"The predictable and comforting responses of AI griefbots can create unrealistic expectations for emotional support, which could impact a person's ability to build healthy relationships in the future," said Carolyn Yaffe, a cognitive behaviour therapist at Medcare Camali Clinic in Dubai. "Some people find comfort and a sense of connection through them. In contrast, others might face negative effects, like prolonged denial, emotional pain, or even feelings of paranoia or psychosis." Interacting with AI likenesses can blur the lines between memory and reality, potentially distorting a person's emotional recovery, Ms Yaffe said. "These tools may delay acceptance and create a space where people stay connected to digital surrogates instead of moving on," she added. "Grief doesn't fit into neat algorithms." Lack of legal safeguards There is limited legal protection against these practices. In the Middle East, specific laws around AI-generated likenesses are still emerging. Countries including the UAE and Saudi Arabia address deepfakes under broader laws related to cyber crimes, defamation, or personal data protection. But there are still no clear regulations dealing with posthumous image rights or the AI-based recreation of people. Most laws focus on intent to harm, rather than on consent or digital legacy after death. In the UK, for example, there are no posthumous personality or image rights. Some states in the US, including California and New York, have begun to introduce limited protections, while others do not offer any. In China, draft legislation has begun to address AI deepfakes. Denmark, however, has been a pioneer on the issue, proposing a law that would grant people copyright-like control over their image, voice and likeness. 
The legislation, expected to pass this year, would allow Danish people to demand the removal of unauthorised deepfake content and seek civil damages, even posthumously – the first time such protections would be implemented in Europe. "Copyright does not protect someone's appearance or voice," said Andres Guadamuz, a reader in intellectual property law at the University of Sussex. "We urgently need to reform image and personality rights to address unauthorised AI depictions, particularly for vulnerable individuals, including the deceased or critically ill, where dignity, consent, and misuse risks are paramount."

Consent, culture and control

Ethical concerns about recreating the image or voice of someone who is critically ill or dead go beyond legal frameworks. Arda Awais, co-founder of UK-based digital rights collective Identity 2.0, believes that even when AI tributes are carried out with good intentions, they carry significant risks. "Even with consent from the deceased, there could be ways a likeness is used which might not be 100 per cent in line with someone's wishes, or how its use evolves," Ms Awais said. She added that a one-size-fits-all approach may not be practical across different cultures, emphasising the need for more inclusive and diverse conversations when establishing ethical standards.

While some families or individuals may welcome AI tributes as a means of preserving memories, others may view them as exploitative or harmful, particularly when they involve celebrities, whose images are frequently recycled without permission. "Grief is such a personal experience," Ms Yaffe said. "For some, griefbots might provide a moment of relief. But they should be seen as a bridge, not the final destination." Experts warn that AI should never replace the emotional labour of mourning or the human connections that aid the healing process.
"AI-generated responses can completely miss the point, not because the technology is harmful, but because it lacks the essential quality that grief requires – humanity," Ms Yaffe said.

From 'grief bots' to 3D avatars: How startups are using AI to simulate the dead

Indian Express

15-06-2025


The latest and possibly most controversial use case for generative AI is here. A new wave of startups is creating so-called 'grief bots' or 'dead bots' that allow people to interact with AI representations of their deceased loved ones. These bots are essentially large language models (LLMs), fine-tuned to generate responses that mimic the speech and personality of the deceased individual. They are, in turn, part of a larger field known as 'grief tech', which includes technology ranging from chatbots to more realistic 3D avatars of people who have died. Project December, StoryFile, and You, Only Virtual are a few of the startups focused on developing AI tools to help users grieve and cope with the loss of a partner, friend, or family member.

While these AI simulations may offer some people a sense of closure, they also raise serious concerns. Despite being trained to resemble real individuals, interactions with AI bots and avatars can still be quite unpredictable and unsettling for many. 'We are talking about a very specific group of users; they are in a very vulnerable state. They are looking for some closure, but the opposite can happen,' Hans Block, a film director, said in an interview that is part of a recent documentary called Eternal You. 'Some of the services are using a lot of private data. For example, the practice of storing all the messages that a person has sent to another person in order to recreate how a person speaks,' Block added.

Justin Harrison, the founder and CEO of You, Only Virtual, offered a different perspective. Harrison's startup creates AI-powered audio versions of people, called Versonas, that users can call and have conversations with. The very first Versona he created was based on his own mother, after she was diagnosed with Stage 4 cancer. 'This is one of the many problems that we are meant to solve. There are moments when only your mom or only your dad can make you feel better. They are the only ones who can say the right thing in the way that they would say it to you. And that's an unquantifiable help,' Harrison told the BBC in an interview. He further envisions Versonas being integrated with realistic robots in the future.

Grief bots are also evolving beyond text and audio to become more lifelike and interactive. StoryFile works with its users to create AI-powered video avatars of deceased loved ones that allow for conversations resembling a Zoom call. One user, whose father was diagnosed with a terminal illness, signed up for StoryFile's service. The company sat down with the father and had him record stock phrases such as 'Hi', 'I love you, too', 'Bye for now', and 'I don't have an answer for that right now' – the last for when the AI avatar is unable to generate a suitable response to the user's query, according to a report by The New York Times. StoryFile also makes interactive AI-generated videos for museums and other art foundations. Going forward, the startup reportedly plans to develop an AI app that lets users create an avatar of a person themselves by uploading their emails, social media posts and other background information.
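The mechanics described above – a model prompted on a person's saved messages, with pre-recorded stock phrases used when no confident reply can be generated – can be sketched roughly as follows. This is purely illustrative: none of the companies mentioned have published their implementations, and every name, function and threshold here is a hypothetical stand-in.

```python
import random

# Stock fallback phrases, as in StoryFile's reported practice of recording
# lines like "I don't have an answer for that right now" in advance.
STOCK_PHRASES = [
    "I don't have an answer for that right now.",
    "Bye for now.",
]

def build_persona_prompt(name: str, sample_messages: list[str]) -> str:
    """Fold a person's saved messages into a system prompt asking the
    model to imitate their voice. A hypothetical sketch, not any
    vendor's actual prompt format."""
    examples = "\n".join(f"- {m}" for m in sample_messages)
    return (
        f"You are speaking in the voice of {name}. "
        f"Match the tone and phrasing of these messages:\n{examples}"
    )

def reply(model_output: str, confidence: float, threshold: float = 0.5) -> str:
    """Return the model's reply, or fall back to a pre-recorded stock
    phrase when the model's self-reported confidence is too low."""
    if confidence < threshold:
        return random.choice(STOCK_PHRASES)
    return model_output
```

In a real system the `model_output` and `confidence` would come from an LLM call; the point of the sketch is only the two-part structure the articles describe: a persona prompt built from private message history, plus a safety net of canned responses.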

AI resurrecting the dead threatens our grasp on reality

Japan Times

09-04-2025


A cruel twist of fate led Jason Gowin to make a novel parenting decision. Days after his wife gave birth to their twin boys in 2019, she had a stroke. The doctors gave her two or three years to live. Gowin and his oldest son were devastated, but worse was to come. Months later, Gowin found out he had stomach cancer. Facing the prospect of leaving three children without parents, he got an idea from watching the Superman movie "Man Of Steel," in which the caped hero walks into the Fortress of Solitude and talks to a simulation of his father. There was something comforting about that possibility, of him and his wife leaving behind talking replicas of themselves for their children. "I thought, I bet someone has already come up with this," he remembers.

A Google search led Gowin, a 47-year-old actor in Pennsylvania, to about 10 different companies offering to train AI models on personal data — text messages, videos and other digital traces — to create virtual likenesses of people. He signed up as a beta tester with a provider called You, Only Virtual, and today his 9-year-old son occasionally talks to a chatbot they call Robo Dad, an AI simulation that sounds eerily like Gowin. Recently, when his wife mentioned something about putting the dishes away, Robo Dad made the same joke moments after Gowin himself did.

Artificial intelligence is beginning to offer a startling new proposition: the chance to keep talking to the dead. While only a small subset of people have tried so-called grief tech tools so far, the technology heralds a profound and disturbing shift in how we process loss. The price of the comfort these tools offer could be a further erosion of our collective grip on what's real and what isn't.

Despite AI's explosive growth, digital resurrections remain rare. You, Only Virtual has about 1,000 users, according to Chief Executive Officer Justin Harrison. A similar firm called Project December reports that 3,664 people have tried its service.
A few thousand people in China have "digitally revived" their loved ones through an AI firm called Super Brain, using as little as 30 seconds of audiovisual data. These numbers pale against ChatGPT's 300 million weekly users, but as AI becomes cheaper and more sophisticated, these early adopters may signal a change in how we deal with death.

The idea isn't totally unprecedented. Millions already seek companionship from chatbots like Replika and Kindroid, drawn by one of generative AI's most surprising capabilities: simulated empathy. These interactions have proven so emotionally compelling that users have fallen in love with their AI companions or, in extreme cases, allegedly been driven to suicide. Others have tried speaking to digital simulations of their older selves to help plan for their future, with more than 60,000 people now using one such tool called Future You. It's easy to see the allure when so much of our communication today is text-based and AI has become so fluent. If Gowin's story doesn't move you, ask yourself: would you chat with a digitized version of a deceased friend or relative if it was trained on their speech? I would struggle to resist the opportunity.

But using generative AI to process grief also encroaches on something inviolate in our values as humans. It's not just the potential of muddying our memories with those of a "fake" loved one: did Grandma really say she loved pumpkin pie, or just her avatar? The risks include consent: what if Grandma would have hated being recreated in this way? And it's not just about impermanence, or the idea that, when we die, we leave space for the next generation to fill the public discourse with their own voices. The core danger is how grief tech could accelerate our growing disconnect from the present, a phenomenon already fueled by social media's quantified metrics of human worth and the rise of fake news and echo chambers.
Now comes an assault on our appreciation of finality, as technology encroaches on yet another corner of our most personal experiences. Grief tech betrays "our fundamental commitment to reality," says Nathan Mladin, a senior researcher at Theos, a London-based think tank. He argues that while humans have always kept relics of the dead — like photos and locks of hair — AI simulations cross an existential boundary because they're interactive and underpinned by data from across the internet. In a 2024 study, Mladin also warned about the exploitation of grieving people for profit: "Some people go on these apps for a while, but others stay hooked and continue interacting like that person is still there."

While grief tech remains fringe, its normalization seems plausible. That means it will need guardrails, like temporal limits that make AI replicas fade over time, mirroring natural grief. Such tools could also benefit from being integrated with human counselors who keep an eye out for unhealthy dependency. Gowin is grappling with these boundaries. Robo Dad can't discuss sex, but questions remain for his family over how it will handle future big-subject conversations about relationships and alcohol, or what happens if his son becomes too attached to the system. For now, Robo Dad is good enough for Gowin, even if it does lead to intermingling recollections of the real and the digital dad. "Honestly, human memory is so patchy anyway," he says. "The important thing to me is that I know that my AI model has got my essence at its core."

But preserving someone's essence also risks something fundamental. The Japanese concept of mono no aware suggests that things are beautiful — like cherry blossoms that bloom for just one week each year — precisely because they don't last forever. Stretching out our presence artificially means we don't just lose our appreciation for impermanence, but something even more essential: our collective anchor to what's real.
In trying to soften the edges of death through technology, we may gradually weaken our ability to face life itself.

Parmy Olson is a Bloomberg Opinion columnist covering technology. She is the author of "Supremacy: AI, ChatGPT and the Race That Will Change the World."

AI resurrecting the dead threatens our grasp on reality

Gulf Today

08-02-2025


