Latest news with #Asimov


CBC
4 days ago
- Science
- CBC
Can AI safeguard us against AI? One of its Canadian pioneers thinks so
When Yoshua Bengio first began his work developing artificial intelligence, he didn't worry about the sci-fi-esque possibility of machines becoming self-aware and acting to preserve their own existence. That was, until ChatGPT came out.

"And then it kind of blew [up] in my face that we were on track to build machines that would be eventually smarter than us, and that we didn't know how to control them," Bengio, a pioneering AI researcher and computer science professor at the Université de Montréal, told As It Happens host Nil Köksal.

The world's most cited AI researcher is launching a new research non-profit organization called LawZero to "look for scientific solutions to how we can design AI that will not turn against us."

"We need to figure this out as soon as possible before we get to machines that are dangerous on their own or with humans behind [them]," he said. "Currently, the forces of market — the competition between companies, between countries — is such that there's not enough research to try to find solutions."

Meet LawZero's conception: Scientist AI

Bengio started LawZero using $40 million of donor funding. Its name references science fiction writer Isaac Asimov's Three Laws of Robotics, a set of guidelines outlining the ethical behaviour of robots, meant to prevent them from harming or opposing humans. In Asimov's 1985 novel Robots and Empire, the author introduced the Zeroth Law: "A robot cannot cause harm to mankind or, by inaction, allow mankind to come to harm."

With this in mind, Bengio said LawZero's goal is to protect people. "Our mission is really to work towards AI that is aligned with the flourishing of humanity," he said.

WATCH | Advocates call for better AI regulation: Why more needs to be done to regulate the use of AI
Several AI technologies in recent months have been reported to undermine, deceive and even manipulate people. For example, a study earlier this year found that some AIs will refuse to admit defeat after a chess match, and will instead hack the computer to alter the results. AI firm Anthropic reported last month that during a systems test, its AI tool Claude Opus 4 tried to blackmail an engineer so that it would not be replaced by a newer update.

These are the kinds of scenarios that drove Bengio to design LawZero's guardian artificial intelligence, Scientist AI. According to a proposal by Bengio and his colleagues, Scientist AI is a "safe" and "trustworthy" artificial intelligence that would function as a gatekeeper and protective system, allowing humans to keep benefiting from the technology's innovation while safety is built in by design.

It's also "non-agentic," which Bengio and his colleagues define as having "no built-in situational awareness and no persistent goals that can drive actions or long-term plans." In other words, what differentiates agentic and non-agentic AI is the capacity to act autonomously in the world.

How would Scientist AI work? Can it work?

Scientist AI, Bengio says, would be paired with other AIs and act as a kind of "guardrail." It would estimate the "probability that an [AI]'s actions will lead to harm," he told U.K. newspaper the Guardian. If that chance is above a certain threshold, Scientist AI would reject its counterpart's suggested action.

WATCH | A 2024 feature interview with Yoshua Bengio at his home in Montreal: Artificial intelligence 'godfather' Yoshua Bengio opens up about his hopes and concerns

But can we guarantee that this guardian AI will also not turn against us?
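The threshold rule Bengio describes (estimate the probability that a proposed action leads to harm, then block anything above a cutoff) can be sketched in a few lines of Python. This is an illustrative toy, not LawZero's actual design: the keyword-based `toy_harm_estimator` and the 0.1 threshold are invented stand-ins for what would, in reality, be a trained predictive model.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class GuardrailVerdict:
    action: str
    harm_probability: float
    approved: bool


def guardrail_filter(
    estimate_harm: Callable[[str], float],  # hypothetical harm-probability model
    proposed_action: str,
    threshold: float = 0.1,
) -> GuardrailVerdict:
    """Approve an agent's proposed action only if the estimated
    probability of harm stays at or below the threshold."""
    p = estimate_harm(proposed_action)
    return GuardrailVerdict(proposed_action, p, approved=(p <= threshold))


# Toy stand-in for the harm estimator: flags actions containing a risky phrase.
def toy_harm_estimator(action: str) -> float:
    return 0.9 if "delete all" in action else 0.01


verdict = guardrail_filter(toy_harm_estimator, "delete all user records")
print(verdict.approved)  # False: harm estimate 0.9 exceeds the 0.1 threshold
```

The design question the article raises is exactly the weak point of such a sketch: the guardrail is only as trustworthy as the harm estimator behind it.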
David Duvenaud, an AI safety researcher who will act as an advisor for LawZero, says it's a rational concern. "If you're skeptical about our ability to control AI with other AI, or really be sure that they're going to be acting in our best interest in the long run, you are absolutely right to be worried," Duvenaud, an assistant professor of computer science and statistics at the University of Toronto, told CBC.

Still, he says, we have to try. "I think Yoshua's plan is less reckless than everyone else's plan," he said.

AI researcher Jeff Clune agrees. "There are many research challenges we need to solve in order to make AI safe. The important thing is that we are trying, including allocating significant resources to this critical issue," Clune, a University of British Columbia computer scientist, said in an email. "That is one reason the creation of LawZero is so important."

According to Bengio's announcement for LawZero, "the Scientist AI is trained to understand, explain and predict, like a selfless idealized and platonic scientist." Resembling the work of a psychologist, Scientist AI "tries to understand us, including what can harm us. The psychologist can study a sociopath without acting like one."

Bengio says he hopes this widespread reckoning with the rapid, yet alarming, evolution of AI will catalyze a political movement to start "putting pressure on governments" worldwide to regulate it.

"I often get the question of whether I'm optimistic or pessimistic," he said. "What I say is that it doesn't really matter. What matters is what each of us can do to move the needle towards a better world."


Geek Girl Authority
07-05-2025
- Entertainment
- Geek Girl Authority
FOUNDATION: Get First Look and Premiere Date for Season 3
Highlights

Apple TV+ has blessed us with an official teaser trailer for Foundation Season 3, which you can watch below. In addition, the streamer has unveiled eight first-look photos for the upcoming season. Expect the critically acclaimed sci-fi series to return this summer with 10 new episodes.

Foundation Season 3

War is upon us, and it's coming sooner than we think. Apple TV+ has unleashed the first teaser, eight new images and, more importantly, the premiere date for the third season of its lauded sci-fi epic, Foundation. The show is based on Isaac Asimov's award-winning series of the same name.

A New Threat

Here's a synopsis for Season 3, per Apple: 'Set 152 years after the events of Season 2, The Foundation has become increasingly established far beyond its humble beginnings, while the Cleonic Dynasty's Empire has dwindled. As both of these galactic powers forge an uneasy alliance, a threat to the entire galaxy appears in the fearsome form of a warlord known as 'The Mule,' whose sights are set on ruling the universe by use of physical and military force as well as mind control. It's anyone's guess who will win, who will lose, who will live and who will die as Hari Seldon, Gaal Dornick, the Cleons and Demerzel play a potentially deadly game of intergalactic chess.'

The Cast and Crew

Foundation stars Jared Harris, Lee Pace, Lou Llobell, Laura Birn, Cassian Bilton, Terrence Mann and Rowena King. New cast additions for Season 3 include Cherry Jones, Brandon P. Bell, Synnøve Karlsen, Cody Fern, Tómas Lemarquis, Alexander Siddig, Troy Kotsur and Pilou Asbæk. The series hails from David S. Goyer, who also serves as executive producer alongside Jane Espenson and Robyn Asimov.

Foundation Season 3 premieres on Friday, July 11, 2025, only on Apple TV+. Before you go, check out the first-look photos and teaser below.


Gizmodo
07-05-2025
- Entertainment
- Gizmodo
Foundation Season 3 Shares a First Look for Lee Pace Fans (and Everyone Else)
Foundation season two wrapped up in September 2023—a thrilling, thoroughly entertaining outing that improved upon season one in many ways. Will that trend continue for the third season of Apple TV+'s Isaac Asimov adaptation? It seems highly likely considering the teaser and first-look images the streamer just shared, along with a premiere date: July 11.

Despite some behind-the-scenes duty-shifting that saw David S. Goyer step back as showrunner (though he's still very much involved with the series), as well as delays related to the 2023 Hollywood strikes and some budget concerns, season three still looks as large-scale and epic as the previous two. Here's the official description for the season:

'Set 152 years after the events of season two, the Foundation has become increasingly established far beyond its humble beginnings while the Cleonic Dynasty's Empire has dwindled. As both of these galactic powers forge an uneasy alliance, a threat to the entire galaxy appears in the fearsome form of a warlord known as 'The Mule' whose sights are set on ruling the universe by use of physical and military force, as well as mind control. It's anyone's guess who will win, who will lose, who will live and who will die as Hari Seldon, Gaal Dornick, the Cleons, and Demerzel play a potentially deadly game of intergalactic chess.'

Jared Harris stars as Hari, with Lou Llobell as Gaal and Lee Pace as the middle-aged (and hottest, duh) Cleon. Laura Birn is the crafty android Demerzel; Cassian Bilton and Terrence Mann play the younger and older Cleons, respectively. Rowena King also returns as the legendary mathematician Kalle. New characters in season three—necessary due to that time jump, as well as the fact that a lot of folks did not survive season two—will be played by Cherry Jones, Brandon P. Bell, Synnøve Karlsen, Cody Fern, Tómas Lemarquis, Alexander Siddig, and Troy Kotsur.
Pilou Asbæk, a fan favorite from Game of Thrones, is playing the terrifying Mule—introduced in season two as more of a shadowy future promise—which feels like perfect casting. Here are some first-look season three images, including our pal the Mule:

Foundation returns Friday, July 11 to Apple TV+. It'll run 10 episodes with a weekly rollout through September 12.


The Verge
07-05-2025
- Entertainment
- The Verge
Apple's sci-fi epic Foundation is back for season 3 in July
One of Apple's biggest series will start streaming again this summer. The company announced that season 3 of Foundation will hit Apple TV Plus on July 11th, with new episodes weekly through September 12th. As part of the announcement, Apple also released the first teaser for the third season.

The show originally premiered in 2021 and is an adaptation of Isaac Asimov's classic series of sci-fi novels, which span multiple centuries and follow the rise and fall of an empire, and a group intent on saving civilization in the aftermath. It stars Jared Harris, Lee Pace, and Lou Llobell, all of whom are returning. In addition to the teaser, Apple also put out a series of new images for the season.

The new episodes will jump forward more than a century from the events of season 2, which streamed in 2023. Here's the official logline:

Set 152 years after the events of season two, The Foundation has become increasingly established far beyond its humble beginnings while the Cleonic Dynasty's Empire has dwindled. As both of these galactic powers forge an uneasy alliance, a threat to the entire galaxy appears in the fearsome form of a warlord known as 'The Mule' whose sights are set on ruling the universe by use of physical and military force, as well as mind control. It's anyone's guess who will win, who will lose, who will live and who will die as Hari Seldon, Gaal Dornick, the Cleons and Demerzel play a potentially deadly game of intergalactic chess.

The show was renewed for a third season in 2023, though a few months later showrunner David S. Goyer stepped down from the role, reportedly over issues with the series' budget. Goyer remains involved with the show, however, and is currently listed as an executive producer.


Forbes
02-04-2025
- Science
- Forbes
Why We Should Expand Asimov's Three Laws Of Robotics With A 4th Law
In 1942, Isaac Asimov introduced a visionary framework — the Three Laws of Robotics — that has influenced both science fiction and real-world ethical debates surrounding artificial intelligence. Yet, more than 80 years later, these laws demand an urgent revisit and revamp to address a fundamentally transformed world, one in which humans coexist intimately with AI-(em)powered robots. Central to this revision is the need for a 4th foundational law rooted in hybrid intelligence — a blend of human natural intelligence and artificial intelligence — aimed explicitly at bringing out the best in and for people and planet.

Asimov's original Three Laws are elegantly concise:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While insightful, these laws presuppose a clear hierarchy and a simplified, somewhat reductionist relationship between humans and robots. Today's reality, however, is distinctly hybrid, characterized by interwoven interactions and mutual dependencies between humans and advanced, learning-capable robots. Consequently, relying solely on Asimov's original triad is insufficient. The essential question we must ask is: Are Asimov's laws still relevant, and if so, how can we adapt them to serve today's intertwined, complex society?

Asimov's laws assume humans are entirely in charge, capable of foresight, wisdom, and ethical consistency. In reality, human decision-makers often grapple with biases, limited perspectives, and inconsistent ethical standards. Thus, robots and AI systems reflect — and amplify — the strengths and weaknesses of their human creators. The world does not exist in binaries of human versus robot but in nuanced hybrid intelligence ecosystems where interactions are reciprocal, dynamic, and adaptive.

AI today is increasingly embedded in our daily lives — from healthcare and education to shopping, environmental sustainability, and governance. Algorithms influence what we buy, write, read, think about, and look at.
Directly and indirectly, they have begun to influence every step of the decision-making process, and hence to shape our behavior. Gradually this is altering societal norms that had been taken for granted. For example, in the past AI-generated artworks were considered less valuable than those made by humans; that perception is now shifting, partly due to the vastly improved quality of AI's output. The integration of AI is also influencing our perception of ethical values: what was considered cheating in 2022 is increasingly accepted as a given. In the near future, multimodal AI-driven agentic robots will not merely execute isolated tasks; they will be present throughout the decision-making process, anticipating human intent and executing, off-screen, what may not yet have matured in the human mind. If these complex interactions continue without careful ethical oversight, the potential for unintended consequences multiplies exponentially. And neither humans nor machines alone are sufficient to address the dynamic that has been set in motion.

Hybrid intelligence arises from the complementarity of natural and artificial intelligences. Hybrid intelligence (HI) is more than natural intelligence (NI) plus artificial intelligence (AI): it brings out the best in both and curates added value that allows us not just to do more of the same, but to do something entirely new. It is the only path to adequately address an ever faster evolving hybrid world and the multifaceted challenges that characterize it. Humans possess creativity, compassion, intuition, and moral reasoning, whereas AI-empowered robots offer consistency, data analysis, speed, and scalability, combined with superhuman stamina and immunity to many of the physiological factors the human organism struggles to cope with, from lack of sleep to the need for love. A synthesis of these strengths constitutes the core of hybrid intelligence.

Consider climate change as a tangible example.
Humans understand and empathize with ecological loss and social impact, while AI systems excel at predictive modeling, data aggregation, and identifying efficient solutions. Merging these distinct yet complementary capabilities can significantly enhance our capacity to tackle global crises, offering solutions that neither humans nor AI alone could devise. To secure a future in which every being has a fair chance to thrive, we need all the assets we can muster, and that encompasses hybrid intelligence.

On this premise, an addition to Asimov's triad is required — a Fourth Law — that may serve as the foundational bedrock for revisiting and applying Asimov's original three in an AI-saturated society. This 4th law goes beyond mere harm reduction; it proactively steers technological advancement toward universally beneficial outcomes. It repositions ethical responsibility squarely onto humans — not just engineers, but policymakers, business leaders, educators, and community stakeholders — to collectively shape the purpose and principles underlying AI development, and by extension AI-empowered robotics.

Historically, technological innovation has often been driven by reductionist self-interest, emphasizing efficiency, profit, and competitive advantage at the expense of broader social and environmental considerations. Hybrid intelligence, underpinned by the proposed fourth law, shifts the narrative from individualistic to collective aspirations. It fosters a world where technological development and ethical stewardship move hand-in-hand, enabling long-term collective flourishing.

This shift requires policymakers and leaders to prioritize systems thinking over isolated problem-solving. It is time to ask: How does a specific AI or robotic implementation affect the broader ecosystem, including human health, social cohesion, environmental resilience, and ethical governance?
Only by integrating these considerations into decision-making processes from the outset can we ensure that technology genuinely benefits humanity and the environment it depends on.

Implementing the 4th law means embedding explicit ethical benchmarks into AI design, development, testing, and deployment. These benchmarks should emphasize transparency, fairness, inclusivity, and environmental sustainability. For example, healthcare robots must be evaluated not merely by efficiency metrics but also by their ability to enhance patient well-being, dignity, and autonomy. Likewise, environmental robots should prioritize regenerative approaches that sustain ecosystems rather than short-term fixes that yield unintended consequences.

Educational institutions and corporate training programs must cultivate double literacy — equipping future designers, users, and policymakers with literacy in both natural and artificial intelligences. Double literacy enables individuals to critically evaluate, ethically engage with, and innovatively apply AI technologies within hybrid intelligence frameworks.

Differently put, the 4th law calls for prosocial AI: AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet. Social benefit is pursued as a priority, rather than as a collateral benefit in the pursuit of commercial success. That requires humans who are fluent in double literacy.

The rapid integration of AI into our social fabric demands immediate and proactive ethical revision. Written over eight decades ago, Asimov's laws provide an essential starting point for today; their adaptation to contemporary reality requires a holistic lens. The 4th law explicitly expands their scope and grounds them in humanity's collective responsibility to design AI systems that nurture our best selves and sustain our shared environment. In a hybrid era, human decision-makers (each of us) do not have the luxury of reductionist self-interest.
Revisiting and revamping Asimov's laws through the lens of hybrid intelligence is not just prudent — it is imperative for our collective survival.