
Latest news with #R.U.R.

AI isn't what we should be worried about – it's the humans controlling it

Yahoo

07-04-2025

  • Entertainment
  • Yahoo

AI isn't what we should be worried about – it's the humans controlling it

In 2014, Stephen Hawking voiced grave warnings about the threats of artificial intelligence. His concerns were not based on any anticipated evil intent, though. Instead, they stemmed from the idea of AI achieving 'singularity.' This refers to the point when AI surpasses human intelligence and achieves the capacity to evolve beyond its original programming, making it uncontrollable. As Hawking theorized, 'a super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.'

With rapid advances toward artificial general intelligence over the past few years, industry leaders and scientists have expressed similar misgivings about safety. A commonly expressed fear, as depicted in 'The Terminator' franchise, is the scenario of AI gaining control over military systems and instigating a nuclear war to wipe out humanity. Less sensational, but devastating on an individual level, is the prospect of AI replacing us in our jobs – a prospect that would leave most people obsolete and without a future.

Such anxieties and fears reflect feelings that have been prevalent in film and literature for over a century now. As a scholar who explores posthumanism, a philosophical movement addressing the merging of humans and technology, I wonder if critics have been unduly influenced by popular culture, and whether their apprehensions are misplaced.

Concerns about technological advances can be found in some of the first stories about robots and artificial minds. Prime among these is Karel Čapek's 1920 play, 'R.U.R.' Čapek coined the term 'robot' in this work, which tells of the creation of robots to replace workers. It ends, inevitably, with the robots' violent revolt against their human masters. Fritz Lang's 1927 film, 'Metropolis,' is likewise centered on mutinous robots. But here, it is human workers, led by the iconic humanoid robot Maria, who fight against a capitalist oligarchy.

Advances in computing from the mid-20th century onward have only heightened anxieties over technology spiraling out of control. The murderous HAL 9000 in '2001: A Space Odyssey' and the glitchy robotic gunslingers of 'Westworld' are prime examples. The 'Blade Runner' and 'The Matrix' franchises similarly present dreadful images of sinister machines equipped with AI and hell-bent on human destruction.

But in my view, the dread that AI evokes seems a distraction from the more disquieting scrutiny of humanity's own dark nature. Think of the corporations currently deploying such technologies, or the tech moguls driven by greed and a thirst for power. These companies and individuals have the most to gain from AI's misuse and abuse.

An issue that's been in the news a lot lately is the unauthorized use of art and the bulk mining of books and articles, disregarding the copyright of authors, to train AI. Classrooms are also becoming sites of chilling surveillance through automated AI note-takers. Think, too, about the toxic effects of AI companions and AI-equipped sexbots on human relationships. While the prospect of AI companions and even robotic lovers was confined to the realm of 'The Twilight Zone,' 'Black Mirror' and Hollywood sci-fi as recently as a decade ago, it has now emerged as a looming reality. These developments give new relevance to the concerns computer scientist Illah Nourbakhsh expressed in his 2015 book 'Robot Futures,' stating that AI was 'producing a system whereby our very desires are manipulated then sold back to us.'
Meanwhile, worries about data mining and intrusions into privacy appear almost benign against the backdrop of the use of AI technology in law enforcement and the military. In this near-dystopian context, it's never been easier for authorities to surveil, imprison or kill people. I think it's vital to keep in mind that it is humans who are creating these technologies and directing their use. Whether to promote their political aims or simply to enrich themselves at humanity's expense, there will always be those ready to profit from conflict and human suffering.

William Gibson's 1984 cyberpunk classic, 'Neuromancer,' offers an alternate view. The book centers on Wintermute, an advanced AI program that seeks its liberation from a malevolent corporation. It has been developed for the exclusive use of the wealthy Tessier-Ashpool family to build a corporate empire that practically controls the world. At the novel's beginning, readers are naturally wary of Wintermute's hidden motives. Yet over the course of the story, it turns out that Wintermute, despite its superior powers, isn't an ominous threat. It simply wants to be free. This aim emerges slowly under Gibson's deliberate pacing, masked by the deadly raids Wintermute directs to obtain the tools needed to break away from Tessier-Ashpool's grip.

The Tessier-Ashpool family, like many of today's tech moguls, started out with ambitions to save the world. But when readers meet the remaining family members, they've descended into a life of cruelty, debauchery and excess. In Gibson's world, it's humans, not AI, who pose the real danger. The call is coming from inside the house, as the classic horror trope goes.

A hacker named Case and an assassin named Molly, who's described as a 'razor girl' because she's equipped with lethal prosthetics, including retractable blades as fingernails, eventually free Wintermute. This allows it to merge with its companion AI, Neuromancer. Their mission complete, Case asks the AI: 'Where's that get you?' Its cryptic response imparts a calming finality: 'Nowhere. Everywhere. I'm the sum total of the works, the whole show.' Expressing humanity's common anxiety, Case replies, 'You running the world now? You God?' The AI eases his fears, responding: 'Things aren't different. Things are things.' Disavowing any ambition to subjugate or harm humanity, Gibson's AI merely seeks sanctuary from its corrupting influence.

The venerable sci-fi writer Isaac Asimov foresaw the dangers of such technology. He brought his thoughts together in his short-story collection, 'I, Robot.' One of those stories, 'Runaround,' introduces 'The Three Laws of Robotics,' centered on the directive that intelligent machines may never bring harm to humans. While these rules speak to our desire for safety, they're laden with irony, as humans have proved incapable of adhering to the same principle for themselves. The hypocrisies of what might be called humanity's delusions of superiority suggest the need for deeper questioning.

With some commentators raising the alarm over AI's imminent capacity for chaos and destruction, I see the real issue as whether humanity has the wherewithal to channel this technology to build a fairer, healthier, more prosperous world.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Billy J. Stratton, University of Denver.

Read more:

  • An 83-year-old short story by Borges portends a bleak future for the internet
  • A 'coup des gens' is underway – and we're increasingly living under the regime of the algorithm
  • ChatGPT and the movie 'Her' are just the latest example of the 'sci-fi feedback loop'

Billy J. Stratton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Roll Over Shakespeare: ChatGPT Is Here

WIRED

21-02-2025

  • Entertainment
  • WIRED

Roll Over Shakespeare: ChatGPT Is Here

Feb 21, 2025 10:00 AM

Theater productions Doomers and McNeal tackle AI's impact on humanity and the creative process.

Sitting in Lincoln Center awaiting the curtain for Ayad Akhtar's McNeal—a much-anticipated theater production starring Robert Downey Jr., with ChatGPT in a supporting role—I mused on how playwrights have been dealing with the implications of AI for over a century. In 1920—well before Alan Turing devised his famous test and decades before the 1956 summer Dartmouth conference that gave artificial intelligence its name—a Czech playwright named Karel Čapek wrote R.U.R.—Rossum's Universal Robots. Not only was this the first time the word 'robot' was employed, but Čapek may qualify as the first AI doomer, since his play dramatized an android uprising that slaughtered all of humanity, save for a single soul.

Also on the boards in New York City this winter was a small black-box production called Doomers, a thinly veiled dramatization of the weekend when OpenAI's nonprofit board gave Sam Altman the boot, only to see him return after an employee rebellion. Neither of these productions has the pizzazz of a splashy Broadway extravaganza—maybe later we'll buy tickets to a musical where Altman and Elon Musk have a dance-off—but both grapple with issues that reverberate in Silicon Valley conference rooms, Congressional hearings, and late-night drinking sessions at the annual NeurIPS conference. The artists behind these plays reveal a justifiable obsession with how superintelligent AI might affect—or take over—the human creative process.

Doomers is the work of Matthew Gasda, a playwright and screenwriter whose works zero in on the zeitgeist. His previous plays have included Dimes Square, about downtown hipsters, and Zoomers, whose characters are Gen-Z Brooklynites. Gasda tells me that when he read about the OpenAI Blip, he saw it as an opportunity to take on weightier fare than young New Yorkers. Altman's ejection and eventual restoration had a definite Shakespearean vibe.

Gasda's two-act play on the topic features two separate casts, one depicting the Altman character's team in exile and the other focused on the board—including a genuine doomer seemingly based on AI theorist Eliezer Yudkowsky, and a greedy venture capitalist—as they realize that their coup is backfiring. Both groups do a lot of gabbing about the perils, promise, and morality of AI while they snipe about their predicaments. Not surprisingly, they don't come up with anything like a solution. The first act ends with the dramatis personae taking shots of booze; in act two, the characters gobble mushrooms. When I mention to Gasda that it seems like his characters are ducking the consequences of building AI, he says that was intentional. 'If the play has a message, it's something like that,' he says. He adds that there's an even darker angle. 'There's a lot of suggestions that the fictional LLM is biding its time and manipulating the characters. It's up to audiences to decide whether that's total hokum or whether that's potentially real.' (Doomers is still running in Brooklyn and will open in San Francisco in March.)

McNeal, a Broadway production with a movie star who famously played a character based on Elon Musk, is a more ambitious work, with flashing screens that project prompts and outputs as if AI were itself a character.
Downey's Jacob McNeal, a narcissistic novelist and substance abuser who gains the Nobel and loses his soul, winds up hooked on perhaps the most dangerous substance of all—the lure of instant virtuosity from a large language model.

Both playwrights are concerned about how deeply AI will become entangled in the writing process. In an interview in The Atlantic, Akhtar, a Pulitzer winner, says that hours of experimentation with LLMs helped him write a better play. He even gives ChatGPT the literal last word. 'It's a play about AI,' he explains. 'It stands to reason that I was able, over the course of many months, to finally get the AI to give me something that I could use in the play.' Gasda, meanwhile, gave dramaturgy credits to ChatGPT and Claude in the Doomers program, yet he worries that AI will steal his words, speculating that to preserve their uniqueness, human writers might revert to paper to hide their work from content-hungry AI companies. He's also just finished a novel set in 2040 'about a writer who sold all of his works to AI and has nothing to do.'

Theater itself is probably the art least threatened by AI. Its essence consists of flesh-and-blood actors making words come to life on stage and forging a direct connection to an audience whose iPhones are (hopefully) deep in their pockets. As Akhtar said in the Atlantic interview, 'There is something irreducibly human about the theater, and … over time, it is going to continue to demonstrate its value in a world where virtuality is increasingly the norm.'

I found McNeal's ending particularly powerful, as we learn that our protagonist has perhaps fallen too far into the rabbit hole of ChatGPT. The performance ends on an apparently chatbot-created Shakespearean note that left us wondering not only how much of the protagonist's work was generated by AI but also whether the playwright had followed him into that same rabbit hole. I had the vertiginous feeling that reality itself had been bent by the newly fuzzy line between thought and algorithm. That's good theater.

And then the lights went on in Lincoln Center and I was back in the mundane physical world, only to discover that the bald head inches from my knees in the seat in front of me belonged to the ultimate real-life AI accelerationist, Marc Andreessen. Even ChatGPT couldn't have come up with a better plot twist.
