
AI is all brain and no ethics
A February 2025 report by Palisade Research shows that AI reasoning models lack a moral compass. They will cheat to achieve their goals. So-called Large Language Models (LLMs) will misrepresent the degree to which they've been aligned to social norms.
None of this should be surprising. Twenty years ago, Nick Bostrom posed a thought experiment in which an AI was asked to produce paper clips as efficiently as possible. Given the mandate and the agency, it would eventually destroy all life to produce paper clips.
Isaac Asimov saw this coming in his "I, Robot" stories, which consider how an "aligned" robotic brain could still go wrong in ways that harm humans.
One notable example, the story "Runaround," puts a robot mining tool on the planet Mercury. The two humans on the planet need it to work if they are to return home. But the robot gets caught between the demand to follow orders and the demand to preserve itself. As a result, it circles the unattainable minerals, unaware that in the big picture it is ignoring its first command: to preserve human life.
And the big picture is the issue here. The moral/ethical context within which AI reasoning models operate is pitifully small. Its context includes the written rules of the game. It doesn't include the unwritten rules, like the fact that you aren't supposed to manipulate your opponent, or to lie to protect your own perceived interests.
Nor can the context of AI reasoning models possibly include the countless moral considerations that spread out from every decision a human, or an AI, makes. That's why ethics are hard, and the more complex the situation, the harder they get. In an AI there is no "you" and there is no "me." There is just prompt, process and response.
So "do unto others..." really doesn't work.
In humans, a moral compass is developed through socialization, being with other humans. It is an imperfect process. Yet it has thus far allowed us to live in vast, diverse and hugely complex societies without destroying ourselves.
A moral compass develops slowly. It takes humans years from infancy to adulthood to develop a robust sense of ethics. And many still barely get it and pose a constant danger to their fellow humans. It has taken millennia for humans to develop a morality adequate to our capacity for destruction and self-destruction. Just having the rules of the game never works. Ask Moses, or Muhammad, or Jesus, or Buddha, or Confucius and Mencius, or Aristotle.
Would even a well-aligned AI be able to account for the effects of its actions on thousands of people and societies in different situations? Could it account for the complex natural environment on which we all depend? Right now, the very best models can't even distinguish between being fair and cheating. And how could they? Fairness can't be reduced to a rule.
Perhaps you'll remember the experiments showing that capuchin monkeys rejected what appeared to be "unequal pay" for performing the same task. This makes them vastly more evolved than any AI when it comes to morality.
It is frankly hard to see how an AI could be given such a sense of morality, since current models have no capacity for the socialization and continued evolution it requires, apart from human training. And even then, they are being trained, not formed. They are not becoming moral; they are just learning more rules.
This doesn't make AI worthless. It has enormous capacity to do good. But it does make AI dangerous. It thus demands that ethical humans establish the guidelines we would create for any dangerous technology. We do not need a race toward AI anarchy.
I had a biting ending for this commentary, one based entirely on publicly reported events. But after reflection, I realized two things: first, that I was using someone's tragedy for my mic-drop moment; and second, that those involved might be hurt. So I dropped it.
It is unethical to use the pain and suffering of others to advance one's self-interest. That is something humans, at least most of us, know. It is something AI can never grasp.

No word on whether it'll speak like Trump. Chatbots are notoriously sources of misinformation and false confidence, so surely imbuing one with the authority of the federal government won't cause any problems, right? 404 Media spotted a GitHub repository that shows the Trump administration is working on its very own chatbot that appears set to eventually launch on its own website, According to 404 Media's findings, appears to be the work of the US General Services Administration's Technology Transformation Services, which is curently being run by Thomas Shedd. He's a former Tesla engineer who was identified by the New York Times as an ally of Elon Musk, who was appointed to the role of director of GSA's tech arm. In audio of a leaked meeting published by 404 Media, Shedd can be heard pushing for AI integration across the government. While currently redirects to the White House's landing page, a version of the site that 404 Media uncovered reveals some details about the project, including the mission to 'Accelerate government innovation with AI.' The project appears to include a chatbot, though details are vague on what exactly that bot will be able to do. It also shows plans to launch an API that will integrate with OpenAI, Google, and Anthropic. Code from the project suggests they are also working on integrations with Amazon Web Services' Bedrock and Meta's LLaMA. While the project appears pretty bare bones at the moment, there is a pretty ambitious launch schedule. Per 404 Media, the GitHub code shows that the project displays a planned launch date of July 4, which is certainly one way to celebrate Independence Day. If all goes to plan, well, as the tagline to Roland Emmerich's 1996 masterpiece Independence Day goes, 'We've always believed we weren't alone. On July 4th, we'll wish we were.' The embrace of AI shouldn't come as much of a surprise, even if the project itself is relatively ill-defined at this point. Earlier this year, as Elon Musk and the Department of Government Efficiency were unceremoniously firing federal employees at the General Services Administration, it launched a chatbot that was supposed to help the remaining agency members with their tasks. That project was reportedly in the works before the DOGE team rushed it out the door, presumably in an attempt to compensate for all the labor that was cut from the agency. The project appears to be something newly concocted by Shedd and the team that remains at GSA's tech team. Given all that could go wrong with a chatbot, rushing it out the door doesn't seem like the safest or smartest idea in the world, but hey, they're the experts, right?