
Latest news with #Asimov

Isaac Asimov, Elon Musk and the replacement of humans by robots

LeMonde

07-08-2025

  • Entertainment
  • LeMonde

Isaac Asimov, Elon Musk and the replacement of humans by robots

Surprisingly, Elon Musk and French leftist politician Jean-Luc Mélenchon share common ground: Both are admirers of the work of American novelist Isaac Asimov (1920-1992). Considered one of the founding fathers of science fiction, this tireless writer published an astonishing 500 books between 1939 and his death. Within this vast body of work, these two famous fans are especially fond of the Foundation cycle. This saga, widely regarded as the gold standard of science fiction, was initially launched as short stories in 1942, reworked into novels in the 1950s and then extended at the end of Asimov's life. As early as 1966, the series won the Hugo Award, a literary prize for science fiction, for "Best All-Time Series." It was a remarkable feat in a genre not lacking in masterpieces.

Asimov had a scientific background, considered a prerequisite at the time for being recognized as a credible author in the genre. He was also a versatile, eclectic thinker. According to his autobiography I, Asimov (2000), his precocious brilliance set him apart from other children his age, leading him to retreat into reading. Gifted, by his own account, with a phenomenal memory, he retained everything he read. Musk's entourage said the same thing about him when talking about the American entrepreneur's childhood with his biographers, Ashlee Vance and Walter Isaacson.

Aside from science, Asimov's great passion was history. Foundation tells the story of the fall and then the resurrection of a civilization. In the 13th millennium, the empire Asimov imagined was at the height of its power, yet had already begun an irreversible decline. Asimov drew inspiration from a 1776 classic by Edward Gibbon (1737-1794): The History of the Decline and Fall of the Roman Empire. According to the British historian, whom Asimov read countless times, Rome's fall was due to two causes: one external, the pressure of migration; the other internal, the rise of Christianity.

Asimov's futuristic empire collapses under centrifugal forces, which is logical since it encompasses the entire galaxy: There are no external threats or borders. However, the loss of entrepreneurial spirit and love of innovation proves fatal. The empire suffocates under the weight of its own bureaucracy. Only someone like Musk could appreciate this kind of dramatic tension, despite Asimov's progressive views.

Both Mélenchon and Musk were influenced by the character Hari Seldon, the inventor of a predictive science called psychohistory, who manages to influence humanity's future by wielding the law of large numbers. Asimov, who trained as a biochemist, was inspired by the kinetic theory of gases, as astrophysicist Roland Lehoucq of the French Atomic Energy Commission – a major science fiction enthusiast – wrote in the journal Bifrost in 2012.

Universal reach

As Asimov explained in the preface to his major opus, it is impossible to predict the motion of a single isolated molecule, but you can say with precision what quintillions of molecules will do. He applied this principle to human beings. Foundation effectively translates philosophical debates on historical determinism into science fiction narratives, making them accessible to everyone. Thanks to its universal scope, the series has remained in the science fiction bestsellers' top 10 and continues, generation after generation, to serve as an entry point into the genre.

Due to its internal contradictions and the forces at play, the galactic empire in Foundation is destined to fail.
Just as no one can escape climate change on Earth in the 21st century. However, Seldon uses psychohistory to influence history, though not to reverse fate. With his plan for a foundation that preserves knowledge, he reduces the transition period between the empire's fall and its successor from 30,000 years to 1,000. In the story, this is viewed as a decisive push forward. In Marxist terms, Seldon would be considered a positive hero. Indeed, it was precisely because he found Marxism "a bit dry," in his own words, that Mélenchon became fascinated by psychohistory.

Anticipating the crises that punctuate Foundation's plot, Seldon recorded messages for his distant successors to view as holograms – effectively, oracles. The stage device used by Mélenchon in his 2017 presidential campaign was a nod to this gimmick from the saga. Although Musk has never used holograms for communication, many of his public statements refer to the idea of positively influencing sweeping changes around us. In 2017, he quoted Asimov almost verbatim at a TED conference in front of a captivated audience: "I look at the future from the standpoint of probabilities. It's like a branching stream of probabilities, and there are actions that we can take that affect those probabilities or that accelerate one thing or slow down another thing." "Philosophy underlying my actions. It's pretty simple and mostly influenced by Douglas Adams [another science fiction author] and Isaac Asimov," he wrote in 2018 on X.

If Adams taught Elon Musk the art of questioning with his answer 42 in The Hitchhiker's Guide to the Galaxy, then Asimov taught him that every major turning point in humanity is always driven by a technical innovation. For those who claim to follow in Seldon's footsteps, there is only one way to influence history: through science. Although Asimov considered himself an optimist, his work reexamined the dichotomy between good and bad science established by British author Mary Shelley in Frankenstein (1818). When confronted with scientific progress, as with social and political organization, humanity is invariably torn between individual freedom and collective security. While Foundation remains his seminal work on determinism, his other major science fiction series, I, Robot, delves deeper into this contradiction.

Replacement by robots

In 1942, a particularly productive year for Asimov, he formulated his "Three Laws of Robotics." They were intended to protect humans from a robot uprising, the infamous "Frankenstein syndrome" in which a creation turns against its creator. These laws raise the question: Who counts as "human"? The individual, or humanity as a whole? To resolve this dilemma, Asimov's robots invent a fourth law, as Anne Besson explained on the French podcast C'est plus que de la SF ("It's more than SF"). This law prioritizes the species. One of the final stories in the series describes a revolt by humans that is suppressed "for their own good" by robots.

As a rationalist atheist and staunch believer in evolution, Asimov did not believe that Homo sapiens was the ultimate stage of life. Whether in astrophysics or the laws of evolution, recent scientific advances all indicate that the days of the human species on Earth are numbered. This bleak finitude stands in contrast to the space epics of Star Trek and Star Wars, with their hyperspace and distant galaxies. As always, Asimov was provocative when he predicted that robots would replace humanity.
Far from being a cause for anxiety, he considered it a hopeful prospect, as he stated in an interview with Le Nouvel Observateur in 1985. He argued that robots would be the next step for humanity, more rational and unburdened by emotion. Musk, raised on dystopian visions like Terminator (1984) and The Matrix (1999), where robots and computers enslave humans in apocalyptic futures, has distanced himself – at least in his public statements – from his science fiction idol on this point.

In 2017, essayist Ariel Kyrou recalled Musk clashing with Mark Zuckerberg, the co-founder of Facebook. Musk accused Zuckerberg of being reckless about the risks artificial intelligence poses to humanity. Musk had already directed similar criticisms at Larry Page and Sergey Brin, the co-founders of Google, and he would later target his new nemesis, Sam Altman, his rival at OpenAI. By Musk's logic, his competitors are followers of transhumanist ideals, which claim that technology will allow humans to transcend their limitations, such as mortality and terrestrial life. Musk sees them as distant descendants of Dr. Frankenstein, ready to sacrifice humanity's interests. However, his critics argue that Musk's fratricidal attacks are motivated more by economic self-interest than genuine ethical concern. He often sounds the alarm when others are making advances without him.

As Kyrou pointed out, with Neuralink, Musk has taken a position far closer to transhumanism than he admits. His company's neural implants are designed to eventually serve as an interface between human neurons and computers. Their name is a nod to Iain M. Banks (1954-2013), the Scottish author and creator of the celebrated series The Culture. In that saga, every humanoid is equipped with a "neural lace" that serves as a means of communication and ensures their safety. This lace connects them to intelligent machines that handle production and the distribution of resources. The machines perform the labor while the humanoids enjoy the pleasures of hedonism. "In a way, it is anarchism assisted by artificial intelligence," said Yannick Rumpala, a political science researcher who analyzes the relationship between politics and science fiction.

Backed by these vast computational powers, Banks's utopia resembles a successfully executed "Gosplan" (the Soviet state planning committee), with AI serving as a vast thinking infrastructure and omnipresent planning authority. As Rumpala points out, this vision would have appealed to Asimov, who, in Foundation, imagined another form of collective consciousness – more organic – with the planet Gaia. In 2018, Elon Musk described himself on X as a "utopian anarchist of the kind best described by Iain Banks," an avowed socialist who is no longer here to defend himself. As usual, Musk interpreted Banks's work through his own lens. He said, "The Iain Banks Culture

Writer's Corner: Gautam Bhatia looks back at a path straddling law and science fiction

Indian Express

06-08-2025

  • Entertainment
  • Indian Express

Writer's Corner: Gautam Bhatia looks back at a path straddling law and science fiction

Constitutional law and science fiction don't sound like they belong together – but don't try to tell that to Gautam Bhatia, a lawyer and adjunct faculty member at Jindal Global Law School. His latest sci-fi novel, The Sentence, sits on bookshelves alongside a previous sci-fi duology — The Wall and The Horizon — and six volumes on different aspects of constitutional law. Both sci-fi series feature distinctly constructed worlds, with their own histories and struggles – perhaps it is apt that a constitutional scholar is the one who created them.

A graduate of Bengaluru's National Law School of India University (NLSIU), Bhatia, 36, began his foray into writing science fiction as a law student. He recalls, 'There was a very strong interest in science fiction at NLSIU…it definitely played a big part in my career as a fiction writer. Much of my reading in college was via Blossoms (in Church Street). They had a lot of books affordable for students.'

Bhatia first came into contact with the worlds of fantasy and science fiction as a Class 6 student, when his parents bought him a copy of J R R Tolkien's The Hobbit. Isaac Asimov's Foundation followed the next year. Both are foundational works in their genres, but extraordinarily distinct from each other – Tolkien envisioned a classic hero's journey by an unlikely protagonist, while Asimov's narratives span a galaxy, taking centuries to come to completion. Bhatia says, 'I was immediately immersed and very interested in writing. As a teenager, I wrote tons of fan-fiction…later on, a major influence on The Wall, but a much greater influence on The Horizon, is Ursula Le Guin. Of course, Le Guin's most famous book is The Dispossessed, which goes into the mechanics of an anarchist society. I do something similar in The Sentence.'

He adds, 'In many ways, the genre has passed Asimov by, and he is dated now. But he still informs the way sci-fi writers think – the idea of empire, and how you deal with that. So he continues to exert a pull over the genre, even though it is more diverse and plural now.'

Bhatia's works in the field of law stretch from an academic outlook, such as an analysis of constitutionalism in Kenya, to books designed to explain facets of the Indian Constitution to informed readers who may or may not be familiar with the law, such as his recent book The Indian Constitution: Conversations with Power. Bhatia said, 'Writing fiction and non-fiction are entirely different projects. Your mental state is different…the way you use language is so radically different. In certain ways, the two do inform each other, but the process is completely different. In genre fiction, you have the "gardener" or "architect" approach. I am much more of a gardener. I often end up beginning with an image and an ending. With The Sentence, I began with the image of a person stuck in a cryo-chamber, and I knew how the story would end. The rest of the novel unfurls around that.'

While a new generation of Indian sci-fi writers has begun to make their mark, Bhatia, also a coordinating editor at the sci-fi magazine Strange Horizons, notes that the genre as a whole still has a way to go in India. He says, 'In India, it is only Westland that has a sci-fi publishing imprint. If you look at the West, you have imprints, sci-fi conventions, entire communities of reviewers and critics, magazines, awards etc. There is an entire support system. We do not have that in India at this point of time.'
By way of advice to beginner sci-fi writers, apart from the usual emphasis on reading, Bhatia says, 'Look at what the author is trying to do. Try to get behind their intentions — the more you write, the better you will become. You need friends who will give you constructive advice, who will not mollycoddle or destroy you.' Another novel set in the world of The Sentence is in the works, following its acquisition by the American publishing house Simon & Schuster for an international edition. Interested readers can also meet him at the Champaca Bookstore in Bengaluru in October, when he is scheduled to release a sci-fi anthology.

Former Top Google Researchers Have Made A New Kind of AI Agent

WIRED

16-07-2025

  • Business
  • WIRED

Former Top Google Researchers Have Made A New Kind of AI Agent

Jul 16, 2025 9:02 AM

The mission? Teaching models to better understand how code is built, in the belief that this will lead to superintelligent AI.

A new kind of artificial intelligence agent, trained to understand how software is built by gorging on a company's data and learning how this leads to an end product, could be both a more capable software assistant and a small step towards much smarter AI. The new agent, called Asimov, was developed by Reflection, a small but ambitious startup cofounded by top AI researchers from Google. Asimov reads code as well as emails, Slack messages, project updates and other documentation, with the goal of learning how all this comes together to produce a finished piece of software.

Reflection's ultimate goal is building superintelligent AI—something that other leading AI labs say they are working towards. Meta recently created a new Superintelligence Lab, promising huge sums to researchers interested in joining its new effort. I visited Reflection's headquarters in the Brooklyn neighborhood of Williamsburg, New York, just across the road from a swanky-looking pickleball club, to see how Reflection plans to reach superintelligence ahead of the competition.

The company's CEO, Misha Laskin, says the ideal way to build supersmart AI agents is to have them truly master coding, since this is the simplest, most natural way for them to interact with the world. While other companies are building agents that use human user interfaces and browse the web, Laskin, who previously worked on Gemini and agents at Google DeepMind, says this hardly comes naturally to a large language model. Laskin adds that teaching AI to make sense of software development will also produce much more useful coding assistants.

Laskin says Asimov is designed to spend more time reading code rather than writing it. 'Everyone is really focusing on code generation,' he told me. 'But how to make agents useful in a team setting is really not solved. We are in kind of this semi-autonomous phase where agents are just starting to work.'

Asimov actually consists of several smaller agents inside a trench coat. The agents all work together to understand code and answer users' queries about it. The smaller agents retrieve information, and one larger reasoning agent synthesizes this information into a coherent answer to a query (a rough sketch of this pattern appears below).

Reflection claims that Asimov is already perceived to outperform some leading AI tools by some measures. In a survey conducted by Reflection, developers working on large open source projects preferred Asimov's answers to their questions 82 percent of the time, compared with 63 percent for Anthropic's Claude Code running its Sonnet 4 model.

Daniel Jackson, a computer scientist at the Massachusetts Institute of Technology, says Reflection's approach seems promising given the broader scope of its information gathering. Jackson adds, however, that the benefits of the approach remain to be seen, and the company's survey is not enough to convince him of broad benefits. He notes that the approach could also increase computation costs and potentially create new security issues. 'It would be reading all these private messages,' he says. Reflection says the multiagent approach mitigates computation costs and that it makes use of a secure environment that provides more security than some conventional SaaS tools.
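The retrieve-then-synthesize pattern described above can be sketched in a few lines of Python. This is a minimal illustration under loose assumptions, not Reflection's implementation: the sub-agents are plain functions over toy data, and the synthesis step simply concatenates findings where a real system would hand them to a language model.

```python
# Minimal sketch of a "retrieve-then-synthesize" multi-agent pattern,
# loosely modeled on the description of Asimov above. All names and data
# are hypothetical; Reflection has not published its implementation.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str   # e.g. "code", "slack"
    text: str

# Each "sub-agent" is just a function that searches one knowledge source.
def code_agent(query: str, codebase: dict[str, str]) -> list[Finding]:
    return [Finding("code", f"{path}: {snippet}")
            for path, snippet in codebase.items()
            if query.lower() in snippet.lower()]

def chat_agent(query: str, messages: list[str]) -> list[Finding]:
    return [Finding("slack", m) for m in messages if query.lower() in m.lower()]

# The "reasoning agent" merges the sub-agents' findings into one answer.
# A real system would pass these findings to a large language model;
# here we simply concatenate them.
def reasoning_agent(query: str, findings: list[Finding]) -> str:
    if not findings:
        return f"No context found for: {query}"
    cited = "\n".join(f"[{f.source}] {f.text}" for f in findings)
    return f"Question: {query}\nRelevant context:\n{cited}"

if __name__ == "__main__":
    codebase = {"billing/retry.py": "def retry_invoice(): ..."}
    messages = ["We decided to retry failed invoices three times, then alert."]
    findings = code_agent("retry", codebase) + chat_agent("retry", messages)
    print(reasoning_agent("How do we retry failed invoices?", findings))
```

The point of the pattern is that each sub-agent stays small and specialized, while a single synthesis step owns the final answer.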
In New York, I met with the startup's CTO, Ioannis Antonoglou. His expertise in training AI models to reason and play games is being applied to having them build code and do other useful chores. A founding engineer at Google DeepMind, Antonoglou did groundbreaking research on a technique known as reinforcement learning, which was most famously used to build AlphaGo, a program that learned to play the ancient board game Go at a superhuman level.

Reinforcement learning, which involves training an AI model through practice combined with positive and negative feedback, has come to the fore in the past few years because it provides a way to train a large language model to produce better outputs. Combined with human training, reinforcement learning can train an LLM to provide more coherent and pleasing answers to queries. With additional training, reinforcement learning helps a model learn to perform a kind of simulated reasoning, whereby tricky problems are broken into steps so that they can be tackled more effectively. (A toy illustration of this kind of feedback loop appears at the end of this article.)

Asimov currently uses open source models, but Reflection is using reinforcement learning to post-train custom models that it says perform even better. Rather than learning to win at a game like Go, the model learns how to build a finished piece of software. Tapping into more data from across a company should improve those models further. Reflection uses data from human annotators and also generates its own synthetic data. It does not train on data from customers.

Big AI companies are already using reinforcement learning to tune agents. An OpenAI tool called Deep Research, for instance, uses feedback from expert humans as a reinforcement learning signal that teaches an agent to comb through websites, hunting for information on a topic, before generating a detailed report. 'We've actually built something like Deep Research but for your engineering systems,' Antonoglou says, noting that training on more than just code provides an edge. 'We've seen that in big engineering teams, a lot of the knowledge is actually stored outside of the codebase.'

Stephanie Zhan, a partner at the investment firm Sequoia, which is backing Reflection, says the startup 'punches at the same level as the frontier labs.' With the AI industry now shooting for superintelligence, and deep-pocketed companies like Meta pouring huge sums into hiring and building infrastructure, startups like Reflection may find it more challenging to compete.

I asked Reflection's leaders what the path to more advanced AI might actually look like. They believe an increasingly intelligent agent would go on to become an oracle for companies' institutional and organizational knowledge. It should learn to build and repair software autonomously. Eventually it would invent new algorithms, hardware, and products autonomously. The most immediate next step might be less grand. 'We've actually been talking to customers who've started asking, can our technical sales staff, or our technical support team use this?' Laskin says.
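For readers unfamiliar with reinforcement learning, the feedback loop Antonoglou describes, practice combined with positive and negative signals, can be shown with a toy example. The sketch below is a standard epsilon-greedy bandit in Python; the reward probabilities and parameters are invented for illustration and have nothing to do with Reflection's actual training pipeline.

```python
# Toy illustration of reinforcement learning: an agent improves its choices
# through repeated practice and positive/negative feedback. This is a
# standard epsilon-greedy bandit, not Reflection's training method.
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)    # how often each action was tried
    values = [0.0] * len(reward_probs)  # running estimate of each action's reward
    for _ in range(steps):
        # Explore occasionally, otherwise exploit the best estimate so far.
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))
        else:
            action = max(range(len(reward_probs)), key=lambda a: values[a])
        # The environment gives positive (1) or no (0) feedback.
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

if __name__ == "__main__":
    # Three possible "strategies" with hidden success rates; the agent
    # discovers which one works best purely from feedback.
    print(run_bandit([0.2, 0.5, 0.8]))
```

Even this tiny loop captures the core idea: the agent never sees the hidden success rates, yet it learns to favor the best option purely from the feedback it receives.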

Risks and realities of killer robots

New Indian Express

14-07-2025

  • Politics
  • New Indian Express

Risks and realities of killer robots

In his sci-fi short story "Runaround," Isaac Asimov introduced the Three Laws of Robotics to explore the moral boundaries of machine intelligence. His robots were programmed to preserve human life, obey ethical constraints, and act only within a tightly defined moral architecture. These laws forced readers to grapple with the limits of delegation and the necessity of conscience in decision-making.

This insight is especially relevant today, as warfare increasingly incorporates unmanned systems. In recent conflicts—India's Operation Sindoor, Azerbaijan's use of Turkish drones against Armenian forces, and Ukraine's deep drone strikes into Russian territory—all offensive systems remained human-operated. Humans directed target selection, authorisation and engagement. But now, as the global defence landscape shifts toward lethal autonomous weapon systems (LAWS), Asimov's warning grows more relevant. Unlike the author's fictional robot Speedy, these systems will not hesitate when ethical ambiguities arise. They will not wait for human correction. They will act without the possibility of a moral pause.

LAWS are weapons that can select, track, and engage targets without real-time human control. They rely on AI, sensor fusion, and machine learning algorithms to make independent targeting decisions. This autonomy dramatically accelerates response time and expands operational reach, but at significant ethical and legal cost.

The development of LAWS is already underway in multiple countries. The US, China, Russia, Israel and South Korea have invested heavily in autonomous platforms ranging from loitering munitions to swarming drones and autonomous ground systems. The US military has demonstrated autonomous swarms in exercises like Project Convergence; China is integrating AI into hypersonic systems and naval platforms; and Russia has tested autonomous tanks like the Uran-9. Although fully autonomous systems capable of making unsupervised kill decisions are not yet officially deployed, the technological threshold is narrowing.

China Warns of Rogue Robot Troops Unleashing

Gulf Insider

14-07-2025

  • Science
  • Gulf Insider

China Warns of Rogue Robot Troops Unleashing

Concerns are mounting in China as the Communist superpower advances humanoid robot development to replace human soldiers on the battlefield, prompting calls for 'ethical and legal research' into this Terminator-like technology to 'avoid moral pitfalls.'

An op-ed published by Yuan Yi, Ma Ye and Yue Shiguang in the People's Liberation Army (PLA) Daily warned that faulty robots could lead to 'indiscriminate killings and accidental death,' which would 'inevitably result in legal charges and moral condemnation.' The South China Morning Post reports:

The authors cited American science fiction writer Isaac Asimov's Three Laws of Robotics, a set of principles that have influenced discussions about the ethics of real-world applications in the field. The authors said that militarised humanoid robots 'clearly violate' the first of Asimov's laws, which states that a robot 'may not injure a human being or, through inaction, allow a human being to come to harm'. They added that Asimov's laws needed to be overhauled in the light of these developments. They also highlighted legal implications, saying that humanoid robots in military scenarios should comply with the main principles of the laws of war by 'obeying humans', 'respecting humans' and 'protecting humans'.

The authors emphasized that robots must be designed with constraints to 'suspend and limit excessive use of force in a timely manner and not indiscriminately kill people.' Additionally, the trio cautioned against hastily replacing humans with robots, noting that robots still lack essential capabilities such as speed, dexterity, and the ability to navigate complex terrains. 'Even if humanoid robots become mature and widely used in the future, they will not completely replace other unmanned systems,' the article said.

Concurrently, the U.S. Army is intensifying efforts to integrate robotics, artificial intelligence, and autonomous systems, aiming to enhance human-machine collaboration between soldiers and advanced robots on the battlefield, according to Interesting Engineering. Scientists at the U.S. Army Combat Capabilities Development Command Army Research Laboratory (DEVCOM ARL) are pioneering advancements in ground and aerial autonomous systems, as well as energy solutions, to bolster the mobility and maneuverability of these technologies, the technology website reports. 'We are bridging the gap between humans and robots, making them more intuitive, responsive, and, ultimately, more useful for the Soldier,' said a lead researcher for the Artificial Intelligence for Maneuver and Mobility program. 'ARL researchers have demonstrated an interactive bi-directional communication system that enables real-time exchanges between humans and robots.'
