
Latest news with #DanielFaggella

Top AI Researchers Meet to Discuss What Comes After Humanity

Yahoo | Business | 16-06-2025

A group of the top minds in AI gathered over the weekend to discuss the "posthuman transition" — a mind-bending exercise in imagining a future in which humanity willfully hands over power, or perhaps bequeaths existence entirely, to some sort of superhuman intelligence.

As Wired reports, the lavish party was organized by generative AI entrepreneur Daniel Faggella. Attendees included "AI founders from $100 million to $5 billion valuations" and "most of the important philosophical thinkers on AGI," Faggella enthused in a LinkedIn post. He organized the soirée at a $30 million mansion in San Francisco because the "big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it," Faggella told Wired.

The symposium allowed attendees and speakers alike to steep themselves in a largely fantastical vision of a future in which artificial general intelligence (AGI) is a given, rather than some distant dream of tech that isn't even close to existing. AI companies, most notably OpenAI, have talked at length about wanting to realize AGI, though often without clearly defining the term.

The risks of racing toward a superhuman intelligence remain hotly debated, with billionaire Elon Musk once arguing that unregulated AI could be the "biggest risk we face as a civilization." OpenAI CEO Sam Altman has also warned of dangers facing humanity, including increased inequality and population control through mass surveillance, as a result of realizing AGI — which also happens to be his firm's number one priority.

But for now, those are largely moot points made by individuals who are billions of dollars deep in reassuring investors that AGI is mere years away. Given the current state of wildly hallucinating large language models that still fail at the most basic tasks, we are seemingly still a long way from a point at which AI could surpass the intellectual capabilities of humans.
Just last week, researchers at Apple released a damning paper that threw cold water on the "reasoning" capabilities of the latest and most powerful LLMs, arguing they "face a complete accuracy collapse beyond certain complexities." However, to insiders and believers in the tech, AGI is mostly a matter of when, not if.

Speakers at this weekend's event talked about how AI could seek out deeper, universal values that humanity hasn't even been privy to, and how machines should be taught to pursue "the good," lest we risk enslaving an entity capable of suffering. As Wired reports, Faggella similarly invoked philosophers including Baruch Spinoza and Friedrich Nietzsche, calling on humanity to seek out the yet-undiscovered value in the universe.

"This is an advocacy group for the slowing down of AI progress, if anything, to make sure we're going in the right direction," he told the publication.

More on AGI: OpenAI's Top Scientist Wanted to "Build a Bunker Before We Release AGI"

Inside the AI Party at the End of the World

WIRED | Science | 11-06-2025

Jun 11, 2025, 7:00 AM

At a mansion overlooking the Golden Gate Bridge, a group of AI insiders met to debate one unsettling question: If humanity ends, what comes next?

In a $30 million mansion perched on a cliff overlooking the Golden Gate Bridge, a group of AI researchers, philosophers, and technologists gathered to discuss the end of humanity. The Sunday afternoon symposium, called 'Worthy Successor,' revolved around a provocative idea from entrepreneur Daniel Faggella: The 'moral aim' of advanced AI should be to create a form of intelligence so powerful and wise that 'you would gladly prefer that it (not humanity) determine the future path of life itself.'

Faggella made the theme clear in his invitation. 'This event is very much focused on posthuman transition,' he wrote to me via X DMs. 'Not on AGI that eternally serves as a tool for humanity.'

A party filled with futuristic fantasies, where attendees discuss the end of humanity as a logistics problem rather than a metaphorical one, could be described as niche. If you live in San Francisco and work in AI, then this is a typical Sunday.

About 100 guests nursed nonalcoholic cocktails and nibbled on cheese plates near floor-to-ceiling windows facing the Pacific Ocean before gathering to hear three talks on the future of intelligence. One attendee sported a shirt that said 'Kurzweil was right,' seemingly a reference to Ray Kurzweil, the futurist who predicted machines will surpass human intelligence in the coming years. Another wore a shirt that said 'does this help us get to safe AGI?' accompanied by a thinking-face emoji.

Faggella told WIRED that he threw this event because 'the big labs, the people that know that AGI is likely to end humanity, don't talk about it because the incentives don't permit it,' and referenced early comments from tech leaders like Elon Musk, Sam Altman, and Demis Hassabis, who 'were all pretty frank about the possibility of AGI killing us all.'
Now that the incentives are to compete, he says, 'they're all racing full bore to build it.' (To be fair, Musk still talks about the risks associated with advanced AI, though this hasn't stopped him from racing ahead.) On LinkedIn, Faggella boasted a star-studded guest list, with AI founders, researchers from all the top Western AI labs, and 'most of the important philosophical thinkers on AGI.'

The first speaker, Ginevera Davis, a writer based in New York, warned that human values might be impossible to translate to AI. Machines may never understand what it's like to be conscious, she said, and trying to hard-code human preferences into future systems may be shortsighted. Instead, she proposed a lofty-sounding idea called 'cosmic alignment'—building AI that can seek out deeper, more universal values we haven't yet discovered. Her slides often showed a seemingly AI-generated image of a techno-utopia, with a group of humans gathered on a grassy knoll overlooking a futuristic city in the distance.

Critics of machine consciousness will say that large language models are simply stochastic parrots—a metaphor coined by a group of researchers, some of whom worked at Google, who wrote in a famous paper that LLMs do not actually understand language and are only probabilistic machines. But that debate wasn't part of the symposium, where speakers took as a given the idea that superintelligence is coming, and fast.

By the second talk, the room was fully engaged. Attendees sat cross-legged on the wood floor, scribbling notes. A philosopher named Michael Edward Johnson took the mic and argued that we all have an intuition that radical technological change is imminent, but we lack a principled framework for dealing with the shift—especially as it relates to human values. He said that if consciousness is 'the home of value,' then building AI without fully understanding consciousness is a dangerous gamble.
We risk either enslaving something that can suffer or trusting something that can't. (This idea relies on a similar premise to machine consciousness and is also hotly debated.) Rather than forcing AI to follow human commands forever, he proposed a more ambitious goal: teaching both humans and our machines to pursue 'the good.' (He didn't share a precise definition of what 'the good' is, but he insists it isn't mystical and hopes it can be defined scientifically.)

Finally, Faggella took the stage. He believes humanity won't last forever in its current form and that we have a responsibility to design a successor, not just one that survives but one that can create new kinds of meaning and value. He pointed to two traits this successor must have: consciousness and 'autopoiesis,' the ability to evolve and generate new experiences. Citing philosophers like Baruch Spinoza and Friedrich Nietzsche, he argued that most value in the universe is still undiscovered and that our job is not to cling to the old but to build something capable of uncovering what comes next.

This, he said, is the heart of what he calls 'axiological cosmism,' a worldview where the purpose of intelligence is to expand the space of what's possible and valuable rather than merely serve human needs. He warned that the AGI race today is reckless and that humanity may not be ready for what it's building. But if we do it right, he said, AI won't just inherit the Earth—it might inherit the universe's potential for meaning itself.

During a break between panels and the Q&A, clusters of guests debated topics like the AI race between the US and China. I chatted with the CEO of an AI startup who argued that, of course, there are other forms of intelligence in the galaxy. Whatever we're building here is trivial compared to what must already exist beyond the Milky Way.
At the end of the event, some guests poured out of the mansion and into Ubers and Waymos, while many stuck around to continue talking. 'This is not an advocacy group for the destruction of man,' Faggella told me. 'This is an advocacy group for the slowing down of AI progress, if anything, to make sure we're going in the right direction.'
