

Should we give human rights to worms? Don't be absurd

Telegraph

22-03-2025



The Roman emperor Domitian, if you believe Suetonius, spent his younger days killing flies with a sharpened pen. We're given this anecdote as a précis of what will follow: Domitian will become a ruler of 'savage cruelty', 'an object of terror and hatred to all', someone who 'turn[s] the virtues also into vices'. Beware the child who pulls wings from insects. Modern readers may think of Rome's genocidal wars and human bloodsports, its throwing of Christians to lions and its pitiless slave economy, and wonder whether Suetonius missed the point. The empire of the Caesars was great in many ways, but 'virtuous' it was not. Everyone was a Domitian-in-waiting; they just didn't have the pens, or the time. Looking back on our forebears, we may thank our lucky stars that we have moved on to better things.

American philosopher Jeff Sebo comes out of this tradition of thinking about historical moral improvement, and in The Moral Circle, he aims to give the story a new chapter. The 'circle' of his title contains the beings considered worthy of rights, protections, freedoms and concern. Those in the circle matter; those without do not. 'The history of thinking about the moral circle,' Sebo claims, 'has been one of... expansion.' We used to think that various human subjects, like Rome's slaves, gladiators and barbarians, were not worthy of moral consideration, but we 'corrected these mistakes gradually over time'. This sunny view of human progress – suspend your doubts for now – suggests to Sebo that we should keep expanding the circle. He wants us to include non-human creatures, covering not only great apes, dolphins, elephants and domestic pets, but also insects, microbes and – most troublingly of all – hypothesised future beings such as artificial intelligences.

Sebo's argument hinges on the difficulty inherent in identifying what actually distinguishes us from non-humans. Homo sapiens possesses capacities of consciousness, sentience and agency that we generally take to be (as he puts it) 'jointly sufficient for moral standing'. We can experience pain and pleasure, understand ourselves as having lives worth living and act accordingly: this is what grants us the rights and responsibilities we call 'morality'. But this picture is overly simple. Not all human beings fulfil all these criteria at all times; conversely, certain non-humans appear to share some or all of these capacities. Who's to say that some level of consciousness, and thus moral import, isn't available to an amoeba?

Thus, Sebo insists, we should 'proceed with caution and humility' when assigning moral worth to some beings and denying it to others. Even if we can reasonably doubt that, say, an ant's life is of much innate value, or that there'll ever in fact be machines capable of replicating or exceeding human cognition, we ought to take the possibility seriously. The metaphor of the 'moral circle', meanwhile, reminds us that some beings are more central and some more peripheral to our moral concern. While we can never do everything, we can often do something: if you're reading this at home in Britain, you may owe more to your family than you do to a wasp colony in the Amazon, simply because there's more you can do for the former – but you don't necessarily owe the wasps nothing.

The implications are enormous. 'Many beings might matter,' Sebo writes, 'and we might owe them a lot.' Yet that 'might', which is emblematic of The Moral Circle, entails both the book's greatest virtue – its cautious, undogmatic approach – and its most serious philosophical vice. Sebo is in effect asking us to make a bet – in fact, a titanic moral wager – on the basis of what, by his own admission, are very long odds. 'All of the beings discussed,' he writes, 'have at least a non-zero chance of being conscious.' A one-in-10,000 chance that microbes or roundworms might be conscious is 'non-negligible', we're told. I wouldn't risk a tenner on those likelihoods, never mind our entire moral order.

Adding to the suspect nature of this jaunt to the moral bookies is Sebo's insistence that long odds are ameliorated by the multitudes who might be affected. We might have 'a weak duty to sextillions of current and future non-humans', he argues: if so, we should overlook the fact that most of them are likely not moral subjects, because a tiny proportion, which is nonetheless a huge number, might be. If you buy enough lottery tickets, you'll surely win in the end.

And yet it isn't clear what such a winning ticket would buy you, since the practical implications of moral-circle expansion are here so abstractly drawn. The Moral Circle is a kind of romance of big numbers. It repeatedly evokes them as a foil to its highly speculative – in some cases, frankly fantastical – account of the inner nature of non-human creatures. (Questions about the moral status of silicon-based replicants or the labour conditions in hypothetical space-colonies work well for Philip K Dick; they're less compelling as moves in a philosophical argument.) But despite Sebo's frequent appeals to nameless 'experts', his numbers seem ultimately concocted: why is one-in-10,000 a significant threshold, rather than one-in-9,999? It's never explained.

In fairness, Sebo isn't demanding that we immediately give computers rights, or grant toucans a seat at the UN. 'Morality,' he writes, 'is a marathon, not a sprint.' And yes, as we face the loss of species and environmental disintegration, any action is better than none: Sebo is right that the moral status of non-humans is worthy of consideration. No doubt spearing flies with a stylus is a bad way to have fun. But having read The Moral Circle, I still think the Christians mattered more than the lions.

Can A.I. Heal Our Souls?

New York Times

07-02-2025



In a literary flourish long ago, Shantideva, an eighth-century Indian monastic, divulged what he called the 'holy secret' of Buddhism: The key to personal happiness lies in the capacity to reject selfishness and accustom oneself to accepting others. A cornerstone of the Buddhist worldview ever since, Shantideva's verse finds new, albeit unacknowledged, expression in two recent books: Jeff Sebo's provocative, if didactic, 'The Moral Circle' and Webb Keane's captivating 'Animals, Robots, Gods.' Much like Shantideva, both authors make a selfish case for altruism: asking the reader, in Keane's words, 'to broaden — and even deepen — your understanding of moral life and its potential for change.'

Sebo, an associate professor of environmental studies at N.Y.U. and an animal-rights activist, centers his argument on human exceptionalism and our sometimes contradictory desire to live an ethical life. Those within the 'moral circle' — be it ourselves, families, friends, clans or countrymen — matter to us, while those on the outside do not. In asking us to expand our circles, Sebo speeds past pleas to consider other people's humanity, past consideration of chimpanzees, elephants, dolphins, octopuses, cattle or pets, and heads straight to our moral responsibility for insects, microbes and A.I. systems.

A cross between a polemic and that introductory philosophy course you never took, Sebo's tract makes liberal use of italics to emphasize his reasoning. Do A.I. systems have a 'non-negligible' — that is, at least a one in 10,000 — chance of being sentient? he asks. If so (and Sebo isn't clear that there is such a chance), we owe them moral consideration. The feeling in reading his argument, however, is of being talked at rather than to.

That is too bad, because we are in new territory here, and it could be interesting. People are falling in love with their virtual companions, getting advice from their virtual therapists and fearing that A.I. will take over the world. We could use a good introductory humanities course on the overlap of the human and the nonhuman and the ethics therein. Luckily, Webb Keane, a professor in the department of anthropology at the University of Michigan, is here to fill the breach.

Keane explores all kinds of fascinating material in his book, most of it taking place 'at the edge of the human.' His topics range from self-driving cars to humans tethered to life support, animal sacrifice to humanoid robots, A.I. love affairs to shamanic divination. Like Shantideva, he is interested in what happens when we adopt a 'third-person perspective,' when we rise above our usual self-centered identities, expand our moral imaginations and take 'the viewpoint of anyone at all, as if you were not directly involved.' Rather than drawing the boundary of the moral circle crisply, as Sebo would have it, Keane is interested in the circle's permeability. 'What counts as human?' he asks. 'Where do you draw the line?' And, crucially, 'What lies on the other side?'

Several vignettes stand out. Keane cites a colleague, Scott Stonington, a professor of anthropology and practicing physician, who did fieldwork with Thai farmers some two decades ago. End-of-life care for parents in Thailand, he writes, often forces a moral dilemma: Children feel a profound debt to their parents for giving them life, requiring them to seek whatever medical care is available, no matter how expensive or painful. Life, precious in all its forms, is supported to the end, and no objections are made to hospitalization, medical procedures or interventions. But to die in a hospital is to die a 'bad death'; to be able to let go, one should be in one's own bed, surrounded by loved ones and familiar things. To this end, a creative solution was needed: Entrepreneurial hospital workers concocted 'spirit ambulances' with rudimentary life support systems like oxygen to bear dying patients back to their homes.

It is a powerful image — the spirit ambulance, ferrying people from this world to the next. Would that we, in our culture, could be so clear about how to negotiate the imperceptible line between body and soul, the confusion that arises at the edge of the human.

Take Keane's description of the Japanese roboticist Masahiro Mori, who, in the 1970s, likened the development of a humanoid robot to hiking toward a mountain peak across an uneven terrain. 'In climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley,' he wrote. When the robot comes too close to appearing human, people get creeped out — it's real, maybe too real, but something is askew.

What might be called the converse of this, Keane suggests, is the Hindu experience of darshan with an inanimate deity. Gazing into a painted idol's eyes, one is prompted to see oneself as if from the god's perspective — a reciprocal sight — from on high rather than from within that 'uncanny valley.' The glimpse is itself a blessing in that it lifts us out of our egos for a moment.

We need relief from our self-centered subjectivity, Keane suggests — hence the attraction of A.I. boyfriends, girlfriends and therapists. The inscrutability of an A.I. companion, like that of an Indian deity, encourages a surrender, a yielding of control, a relinquishment of personal agency that can feel like the fulfillment of a long-suppressed dream. Of course, something is missing here too: the play of emotion that can only occur between real people.

But A.I. systems, as new as they are, play into a deep human yearning for relief from the boundaries of self. Could A.I. ever function as a spirit ambulance, shuttling us through the uncanny valleys that keep us, as Shantideva knew, from accepting others? As Jeff Sebo would say, there is at least a 'non-negligible' — that is, at least a one in 10,000 — chance that it might.
