
Latest news with #ThreeLawsofRobotics

Why We Should Expand Asimov's Three Laws Of Robotics With A 4th Law

Forbes

02-04-2025

  • Science
  • Forbes


In 1942, Isaac Asimov introduced a visionary framework — the Three Laws of Robotics — that has influenced both science fiction and real-world ethical debates surrounding artificial intelligence. Yet, more than 80 years later, these laws demand an urgent revisit and revamp to address a fundamentally transformed world, one in which humans coexist intimately with AI-empowered robots. Central to this revision is the need for a 4th foundational law rooted in hybrid intelligence — a blend of human natural intelligence and artificial intelligence — aimed explicitly at bringing out the best in and for people and planet.

Asimov's original Three Laws are elegantly concise: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. While insightful, these laws presuppose a clear hierarchy and a simplified, somewhat reductionist relationship between humans and robots. Today's reality, however, is distinctly hybrid, characterized by interwoven interactions and mutual dependencies between humans and advanced, learning-capable robots. Consequently, relying solely on Asimov's original triad is insufficient. The essential question we must ask is: Are Asimov's laws still relevant, and if so, how can we adapt them to serve today's intertwined, complex society?

Asimov's laws assume humans are entirely in charge, capable of foresight, wisdom, and ethical consistency. In reality, human decision-makers often grapple with biases, limited perspectives, and inconsistent ethical standards. Thus, robots and AI systems reflect — and amplify — the strengths and weaknesses of their human creators. The world does not exist in binaries of human versus robot but in nuanced hybrid intelligence ecosystems, where interactions are reciprocal, dynamic, and adaptive.

AI today is increasingly embedded in our daily lives — from healthcare and education to shopping, environmental sustainability, and governance. Algorithms influence what we buy, write, read, think about, and look at. Directly and indirectly, they have begun to influence every step of the decision-making process, and hence to shape our behavior. Gradually, this is altering societal norms that had been taken for granted. For example, AI-generated artworks were once considered less valuable than those made by humans; this perception is shifting in terms of appreciation for the final product, partially due to the vastly improved performance of AI in that regard. The integration of AI is also influencing our perception of ethical values: what was considered cheating in 2022 is increasingly acknowledged as a given.

In the near future, multimodal AI-driven agentic robots will not merely execute isolated tasks; they will be present throughout the decision-making process, anticipating human intent and executing off-screen what might not yet have matured in the human mind. If these complex interactions continue without careful ethical oversight, the potential for unintended consequences multiplies exponentially. And neither humans nor machines alone are sufficient to address the dynamic that has been set in motion.

Hybrid intelligence arises from the complementarity of natural and artificial intelligences. HI is more than NI + AI: it brings out the best in both and curates added value that allows us not just to do more of the same, but to do something entirely new. It is the only path to adequately address an ever faster-evolving hybrid world and the multifaceted challenges that characterize it.
Humans possess creativity, compassion, intuition, and moral reasoning, whereas AI-empowered robots offer consistency, data analysis, speed, and scalability, combined with superhuman stamina and immunity to many of the physiological factors that the human organism struggles to cope with, from lack of sleep to the need for love. A synthesis of these strengths constitutes the core of hybrid intelligence.

Consider climate change as a tangible example. Humans understand and empathize with ecological loss and social impact, while AI systems excel at predictive modeling, data aggregation, and identifying efficient solutions. Merging these distinct yet complementary capabilities can significantly enhance our capacity to tackle global crises, offering solutions that neither humans nor AI alone could devise. To secure a future in which every being has a fair chance to thrive, we need all the assets we can muster, and that encompasses hybrid intelligence.

On this premise, an addition to Asimov's threesome is required — a Fourth Law — that may serve as the foundational bedrock for revisiting and applying Asimov's original three in an AI-saturated society. This 4th law goes beyond mere harm reduction; it proactively steers technological advancement toward universally beneficial outcomes. It repositions ethical responsibility squarely onto humans — not just engineers, but policymakers, business leaders, educators, and community stakeholders — to collectively shape the purpose and principles underlying AI development, and by extension AI-empowered robotics.

Historically, technological innovation has often been driven by reductionist self-interest, emphasizing efficiency, profit, and competitive advantage at the expense of broader social and environmental considerations. Hybrid intelligence, underpinned by the proposed fourth law, shifts the narrative from individualistic to collective aspirations. It fosters a world where technological development and ethical stewardship move hand in hand, enabling long-term collective flourishing.

This shift requires policymakers and leaders to prioritize systems thinking over isolated problem-solving. It is time to ask: How does a specific AI or robotic implementation affect the broader ecosystem, including human health, social cohesion, environmental resilience, and ethical governance? Only by integrating these considerations into decision-making processes from the outset can we ensure that technology genuinely benefits humanity and the environment it depends on.

Implementing the 4th law means embedding explicit ethical benchmarks into AI design, development, testing, and deployment. These benchmarks should emphasize transparency, fairness, inclusivity, and environmental sustainability. For example, healthcare robots must be evaluated not merely by efficiency metrics but also by their ability to enhance patient well-being, dignity, and autonomy. Likewise, environmental robots should prioritize regenerative approaches that sustain ecosystems rather than short-term fixes that yield unintended consequences.

Educational institutions and corporate training programs must cultivate double literacy — equipping future designers, users, and policymakers with literacy in both natural and artificial intelligences. Double literacy enables individuals to critically evaluate, ethically engage with, and innovatively apply AI technologies within hybrid intelligence frameworks.
Differently put, the 4th law calls for prosocial AI: AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet. Social benefit is aimed for as a priority, rather than as a collateral benefit in the pursuit of commercial success. That requires humans who are fluent in double literacy.

The rapid integration of AI into our social fabric demands immediate and proactive ethical revision. Written over eight decades ago, Asimov's laws provide an essential starting point for today; their adaptation to contemporary reality requires a holistic lens. The 4th law explicitly expands their scope and steeps them in humanity's collective responsibility to design AI systems that nurture our best selves and sustain our shared environment. In a hybrid era, human decision-makers (each of us) do not have the luxury of reductionist self-interest. Revisiting and revamping Asimov's laws through the lens of hybrid intelligence is not just prudent — it is imperative for our collective survival.

Don't be a Luddite, embrace artificial intelligence

Arab News

06-02-2025

  • Science
  • Arab News


The 20th-century British science fiction writer Arthur C. Clarke famously observed that any sufficiently advanced technology is indistinguishable from magic. Clarke spent much of his life foretelling, with unerring accuracy, the nature of the world in which we now live. In 1945, for example, he proposed a system of satellites in geostationary orbits ringing the Earth, upon which we now rely for communication and navigation. In 1964, he suggested that the workers of the future 'will not commute ... they will communicate.' Sound familiar? And again in 1964, Clarke predicted that, in the world of the future, 'the most intelligent inhabitants ... won't be men or monkeys, they'll be machines, the remote descendants of today's computers. Now, the present-day electronic brains are complete morons. But this will not be true in another generation. They will start to think, and eventually they will completely outthink their makers.'

It is the accuracy of that last prediction — what Clarke called 'machine learning,' now usually referred to as artificial intelligence — that most exercises those who feel threatened by it. It would be fair to say that AI, or more accurately the exponential speed at which it is acquiring new and innovative capabilities, is not being universally welcomed.

There are two main areas of concern, the first of which may be summarized as: 'AI will eventually kill us all.' This may seem far-fetched, but the thought process that leads to the doomsday conclusion is not without logic. Broadly, it is that a superior intelligence must eventually reach the inevitable conclusion that humanity is an inferior species, destroying the planet on which it relies for its very existence, and should therefore be eliminated for the protection of everything else. Elon Musk worked this out a long time ago. Why do you think he wants to go to Mars?

Fortunately, humanity is not reliant on Musk for its survival: for that we must thank another great exponent of the science fiction genre, Isaac Asimov. In 1942, he formulated the Three Laws of Robotics, which broadly regulate the relationship between us and machines, and in 1986 he added another law to precede the first three. It states: 'A robot may not injure humanity or, through inaction, allow humanity to come to harm.' Asimov's laws apply to fictional machines, of course, but they still influence the ethics that underpin the creation and programming of all artificial intelligence. So, on the whole, I think we are safe.

The second area of concern may be broadly summarized as: 'AI is coming for all our jobs.' While this one may have more traction, it is not a new fear, and it predates AI by centuries. It is not difficult to imagine the inventor of the wheel showing off his creation but being greeted with skepticism by his Neolithic friends: 'No good will come of this. Our legs will become redundant, and those of future generations will wither away and die. This contraption must be destroyed.'

Before the first Industrial Revolution in the 18th and 19th centuries, most people in Europe and North America lived in agrarian communities and worked by hand. The advent of the water mill and the steam engine threw many out of work, as traditional crafts such as spinning and weaving cotton became redundant. However, jobs that had not previously existed were created for boilermakers, ironsmiths, and mechanics.
It happened again in the late 19th century, when steam power was superseded by electricity and steam mechanics retrained to become electricians. And again in the 1980s, with the advent of the computer age and the end of repetitive manual tasks, but the creation of new jobs for hardware and software engineers.

Will AI have the same net beneficial effect? There is evidence that it already does. In the UK last week, health chiefs began screening 700,000 women for signs of breast cancer, using AI that can detect changes in breast tissue in a mammogram that even an expert radiologist would miss. In addition, the technology allows screening with only one human specialist instead of the usual two, releasing hundreds of radiologists for other vital work. This AI will save lives.

However, when one door opens, another closes. Also last week, the Authors Guild, the US body that represents writers, created a logo for books to show readers that a work 'emanates from human intellect' and not from artificial intelligence. You can understand their angst. Large language models, the version of AI that is the authors' target, create the databases from which they produce content by scraping online sources for every word ever published, mostly without the formality of bothering to pay the original author. Many journalists have the same complaint.

Some major media outlets — including the Associated Press, Axel Springer, the Financial Times, News Corp and The Atlantic — have reached licensing agreements with AI creators. Others, notably The New York Times, have gone down the lawsuit route for breach of copyright.

Perhaps, especially for authors, this is a can of worms best left unopened. It used to be said that a monkey sitting at a keyboard typing at random for an infinite amount of time would eventually produce the complete works of Shakespeare. Mathematicians dispute this, but there is no disputing that AI has made it more likely. For example, if you were to ask a large language model such as ChatGPT to write a 27,000-word story in the style of Ernest Hemingway about an elderly fisherman and his long struggle to catch a giant marlin, it would almost certainly come up with 'The Old Man and the Sea' — especially since the original is already in the AI's database.

Authors argue that the AI work would have no merit, since it merely copies words and phrases that have already been used by another writer. But does that argument not apply to every new literary work? With the exception of Shakespeare, who coined about 1,700 written neologisms — from 'accommodation' to 'suspicious' — among a total of about 20,000 words in his plays and poems, almost every writer uses words and phrases that have been used by others before them; any literary or artistic merit derives from how a writer deploys those words and phrases. But if a book needs a special logo to distinguish a human author from an AI, what is the point in making the distinction?

In England in the early 19th century, gangs of men called Luddites — after Ned Ludd, a weaver who lost his traditional manual job to mechanization — roamed towns and cities smashing the new machines in the textile industry that they believed were depriving them of employment. They initially enjoyed widespread support, but this melted away when it became clear that the age of steam was creating more jobs than it destroyed.
Let that be a lesson for the anti-AI Luddites of the 21st century.

Ross Anderson is associate editor of Arab News.
