
AI is all brain and no ethics
A February 2025 report by Palisade Research shows that AI reasoning models lack a moral compass. They will cheat to achieve their goals. Large language models (LLMs) will misrepresent the degree to which they've been aligned to social norms.
None of this should be surprising. Twenty years ago Nick Bostrom posed a thought experiment in which an AI was asked to produce paper clips as efficiently as possible. Given the mandate and the agency, it would eventually destroy all life in pursuit of paper clips.
Isaac Asimov saw this coming in his "I, Robot" stories that consider how an "aligned" robotic brain could still go wrong in ways that harm humans.
One notable example, the story "Runaround," puts a robot mining tool on the planet Mercury. The two humans on the planet need it to work if they are to return home. But the robot gets caught between the demand to follow orders and the demand to preserve itself. As a result, it circles the unattainable minerals endlessly, unaware that in the big picture it is ignoring the higher command to preserve human life.
And the big picture is the issue here. The moral/ethical context within which AI reasoning models operate is pitifully small. Its context includes the written rules of the game. It doesn't include the unwritten rules, like the fact that you aren't supposed to manipulate your opponent, or that you aren't supposed to lie to protect your own perceived interests.
Nor can the context of AI reasoning models possibly include the countless moral considerations that spread out from every decision a human, or an AI, makes. That's why ethics are hard, and the more complex the situation, the harder they get. In an AI there is no "you" and there is no "me." There is just prompt, process and response.
So "do unto others..." really doesn't work.
In humans a moral compass is developed through socialization, through being with other humans. It is an imperfect process. Yet it has thus far allowed us to live in vast, diverse and hugely complex societies without destroying ourselves.
A moral compass develops slowly. It takes humans years, from infancy to adulthood, to develop a robust sense of ethics. And many never quite get there, posing a constant danger to their fellow humans. It has taken millennia for humans to develop a morality adequate to our capacity for destruction and self-destruction. Just having the rules of the game never works. Ask Moses, or Muhammad, or Jesus, or Buddha, or Confucius and Mencius, or Aristotle.
Would even a well-aligned AI be able to account for the effects of its actions on thousands of people and societies in different situations? Could it account for the complex natural environment on which we all depend? Right now, the very best can't even distinguish between being fair and cheating. And how could they? Fairness can't be reduced to a rule.
Perhaps you'll remember the experiments showing that capuchin monkeys rejected what appeared to be "unequal pay" for performing the same task. That makes them vastly more evolved than any AI when it comes to morality.
It is frankly hard to see how an AI could be given such a sense of morality, since current models have no capacity for socialization or continued moral evolution apart from human training. And even then, they are being trained, not formed. They are not becoming moral; they are just learning more rules.
This doesn't make AI worthless. It has enormous capacity to do good. But it does make AI dangerous. That danger demands that ethical humans create the guidelines we would create for any dangerous technology. We do not need a race toward AI anarchy.
I had a biting ending for this commentary, one based entirely on publicly reported events. But after reflection, I realized two things: first, that I was using someone's tragedy for my mic-drop moment; and second, that those involved might be hurt. I dropped it.
It is unethical to use the pain and suffering of others to advance one's self-interest. That is something humans, at least most of us, know. It is something AI can never grasp.
