
Why We Should Expand Asimov's Three Laws Of Robotics With A 4th Law
In 1942, Isaac Asimov introduced a visionary framework — the Three Laws of Robotics — that has influenced both science fiction and real-world ethical debates surrounding artificial intelligence. Yet, more than 80 years later, these laws demand an urgent revisit and revamp to address a fundamentally transformed world, one in which humans coexist intimately with AI-empowered robots. Central to this revision is the need for a 4th foundational law rooted in hybrid intelligence — a blend of human natural intelligence and artificial intelligence — aimed explicitly at bringing out the best in and for people and planet.
Asimov's original Three Laws are elegantly concise:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While insightful, these laws presuppose a clear hierarchy and a simplified, somewhat reductionist relationship between humans and robots. Today's reality, however, is distinctly hybrid, characterized by interwoven interactions and mutual dependencies between humans and advanced, learning-capable robots. Consequently, relying solely on Asimov's original triad is insufficient.
The essential question we must ask is: Are Asimov's laws still relevant, and if so, how can we adapt them to serve today's intertwined, complex society?
Asimov's laws assume humans are entirely in charge, capable of foresight, wisdom, and ethical consistency. In reality, human decision-makers often grapple with biases, limited perspectives, and inconsistent ethical standards. Thus, robots and AI systems reflect — and amplify — the strengths and weaknesses of their human creators. The world does not exist in binaries of human versus robot but in nuanced hybrid intelligence ecosystems where interactions are reciprocal, dynamic, and adaptive.
AI today is increasingly embedded in our daily lives, from healthcare and education to shopping, environmental sustainability, and governance. Algorithms influence what we buy, write, read, think about, and look at. Directly and indirectly, they have begun to shape every step of the decision-making process, and hence our behavior. Gradually, this is altering societal norms that had long been taken for granted. In the past, for instance, AI-generated artworks were considered less valuable than those made by humans; that perception is shifting, partly due to AI's vastly improved performance in that regard. The integration of AI is also influencing our perception of ethical values: what was considered cheating in 2022 is increasingly acknowledged as a given.
In the near future, multimodal AI-driven agentic robots will not merely execute isolated tasks; they will be present throughout the decision-making process, anticipating human intent and executing off-screen what may not yet have matured in the human mind.
If these complex interactions continue without careful ethical oversight, the potential for unintended consequences multiplies exponentially. And neither humans nor machines alone are sufficient to address the dynamic that has been set in motion.
Hybrid intelligence (HI) arises from the complementarity of natural intelligence (NI) and artificial intelligence (AI). HI is more than NI + AI: it brings out the best in both and creates added value that allows us not just to do more of the same, but to do something entirely new. It is the only path to adequately address an ever faster-evolving hybrid world and the multifaceted challenges that characterize it.
Humans possess creativity, compassion, intuition, and moral reasoning, whereas AI-empowered robots offer consistency, data analysis, speed, and scalability, combined with superhuman stamina and immunity to many of the physiological factors the human organism struggles to cope with, from lack of sleep to the need for love. A synthesis of these strengths constitutes the core of hybrid intelligence.
Consider climate change as a tangible example. Humans understand and empathize with ecological loss and social impact, while AI systems excel at predictive modeling, data aggregation, and identifying efficient solutions. Merging these distinct yet complementary capabilities can significantly enhance our capacity to tackle global crises, offering solutions that neither humans nor AI alone could devise.
To secure a future in which every being has a fair chance to thrive, we need all the assets that we can muster, which encompasses hybrid intelligence. On this premise an addition to Asimov's triad is required — a Fourth Law — that may serve as the foundational bedrock for revisiting and applying Asimov's original three in an AI-saturated society:
4. A robot, and the hybrid intelligence it is part of, must bring out the best in and for people and planet.
This 4th law goes beyond mere harm reduction; it proactively steers technological advancement toward universally beneficial outcomes. It repositions ethical responsibility squarely onto humans — not just engineers, but policymakers, business leaders, educators, and community stakeholders — to collectively shape the purpose and principles underlying AI development, and by extension AI-empowered robotics.
Historically, technological innovation has often been driven by reductionist self-interest, emphasizing efficiency, profit, and competitive advantage at the expense of broader social and environmental considerations. Hybrid intelligence, underpinned by the proposed fourth law, shifts the narrative from individualistic to collective aspirations. It fosters a world where technological development and ethical stewardship move hand-in-hand, enabling long-term collective flourishing.
This shift requires policymakers and leaders to prioritize systems thinking over isolated problem-solving. It is time to ask: How does a specific AI or robotic implementation affect the broader ecosystem, including human health, social cohesion, environmental resilience, and ethical governance? Only by integrating these considerations into decision-making processes from the outset can we ensure that technology genuinely benefits humanity and the environment it depends on.
Implementing the 4th law means embedding explicit ethical benchmarks into AI design, development, testing, and deployment. These benchmarks should emphasize transparency, fairness, inclusivity, and environmental sustainability. For example, healthcare robots must be evaluated not merely by efficiency metrics but also by their ability to enhance patient well-being, dignity, and autonomy. Likewise, environmental robots should prioritize regenerative approaches that sustain ecosystems rather than short-term fixes that yield unintended consequences.
Educational institutions and corporate training programs must cultivate double literacy — equipping future designers, users, and policymakers with literacy in both natural and artificial intelligences. Double literacy enables individuals to critically evaluate, ethically engage with, and innovatively apply AI technologies within hybrid intelligence frameworks.
Differently put, the 4th law calls for prosocial AI: systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet. Social benefit becomes the priority, rather than a collateral outcome in the pursuit of commercial success. That requires humans who are fluent in double literacy.
The rapid integration of AI into our social fabric demands immediate and proactive ethical revision. Written over eight decades ago, Asimov's laws provide an essential starting point for today; their adaptation to contemporary reality requires a holistic lens. The 4th law explicitly expands their scope and grounds them in humanity's collective responsibility to design AI systems that nurture our best selves and sustain our shared environment.
In a hybrid era, human decision-makers (each of us) do not have the luxury of reductionist self-interest. Revisiting and revamping Asimov's laws through the lens of hybrid intelligence is not just prudent — it is imperative for our collective survival.
