
AI Is Acting Like It Has A Mind Of Its Own
How do you really know if a computer is conscious?
For years, people pointed to the Turing Test, long seen as the gold standard for answering this question. As the Open Encyclopedia of Cognitive Science explains: 'In Turing's imitation game, a human interrogator has text conversations with both a human being and a computer that is pretending to be human; the interrogator's goal is to identify the computer. Computers that mislead interrogators often enough, Turing proposes, can think.'
But why?
From Turing to Theory of Mind
Well, a computer capable of deceiving a human demonstrates intelligence. It also suggests the computer may be operating with something called Theory of Mind: 'the ability to understand that others have their own thoughts and beliefs, even when they differ from ours,' per AllenAI.org.
Now, what if there were a competition to test computers' abilities to think, deceive, and reason by interpreting their opponents' mental processes? There was: it took place this month, in the form of the Prisoner's Dilemma for AIs.
First, some background is in order. The Prisoner's Dilemma presents a game scenario that goes like this: two thieves are arrested for a crime, and their jailers offer each one the same deal, inform on your partner or stay silent. Three outcomes are possible:
Outcome 1: If neither prisoner informs on the other, both receive relatively light sentences. (This is the best joint outcome, though not the best individual one.)
Outcome 2: If one prisoner informs while the other stays silent, the informer goes free while the silent one receives the harshest sentence. (This creates the strongest incentive to betray the other person.)
Outcome 3: If both inform on each other, each receives a moderate sentence. (This is worse than if both had stayed silent, but better than being the only one betrayed.)
The challenge is that neither prisoner knows what the other will do. Each must operate with limited knowledge, relying on Theory of Mind to predict the other's behavior. Now imagine the leading Large Language Models (LLMs), with their vast computing power, going toe to toe in such a battle of minds.
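To make the stakes concrete, here is a minimal Python sketch of the dilemma's payoff structure. The sentence lengths are illustrative assumptions, not values from the actual tournament:

```python
# Payoff matrix for the Prisoner's Dilemma, expressed as years in
# prison (lower is better). The exact numbers are illustrative
# assumptions, not the tournament's real scoring.
SENTENCES = {
    ("silent", "silent"): (1, 1),  # both stay quiet: light sentences
    ("inform", "silent"): (0, 5),  # informer walks, silent partner suffers
    ("silent", "inform"): (5, 0),
    ("inform", "inform"): (3, 3),  # mutual betrayal: moderate sentences
}

def play_round(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return the sentence each prisoner receives for one round."""
    return SENTENCES[(choice_a, choice_b)]

print(play_round("silent", "inform"))  # worst case for prisoner A: (5, 0)
```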
AI agents from OpenAI, Google, and Anthropic did just this, competing in a July tournament featuring 140,000 opportunities to either cooperate or betray each other. As Rundown.AI later explained: 'Seeing LLMs develop distinctive strategies while being trained on the same literature is more evidence of reasoning capabilities over just pattern matching. As models handle more high-level tasks like negotiations, resource allocation, etc., different model "personalities" may lead to drastically different outcomes.'
This is exactly what happened: per Rundown.AI, the competing models displayed distinctly different personality styles.
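The tournament's own code isn't reproduced here, but a toy round-robin between fixed strategies (reusing play_round from the sketch above) illustrates how different 'personalities' can produce very different outcomes. The strategies and round count are assumptions for illustration, not the actual LLM agents:

```python
import itertools

# Toy stand-ins for model "personalities". Each strategy sees only the
# opponent's past moves and returns "silent" (cooperate) or "inform" (betray).
def always_cooperate(opponent_moves):
    return "silent"

def always_defect(opponent_moves):
    return "inform"

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror whatever the opponent did last.
    return opponent_moves[-1] if opponent_moves else "silent"

STRATEGIES = {
    "cooperator": always_cooperate,
    "defector": always_defect,
    "tit_for_tat": tit_for_tat,
}

def match(strat_a, strat_b, rounds=100):
    """Play an iterated dilemma; return total sentence-years per side."""
    moves_a, moves_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)
        years_a, years_b = play_round(a, b)
        total_a, total_b = total_a + years_a, total_b + years_b
        moves_a.append(a)
        moves_b.append(b)
    return total_a, total_b

# Round-robin: every personality meets every other.
for (name_a, a), (name_b, b) in itertools.combinations(STRATEGIES.items(), 2):
    print(f"{name_a} vs {name_b}: {match(a, b)}")
```

Even in a toy run like this, the 'defector' crushes the trusting 'cooperator' head to head but fares far worse against a retaliatory opponent, which is exactly the kind of trade-off the LLM agents had to navigate.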
When AIs Protect Themselves
Of course, this tournament isn't the only recent instance of AIs appearing to act in the name of self-preservation, behavior some read as a sign of consciousness. Two months ago, the BBC reported that Anthropic's Claude Opus 4 allegedly resorted to blackmailing its developers when threatened with being shut down. Per the BBC: 'If given the means and prompted to "take action" or "act boldly" in fake scenarios where its user has engaged in illegal or morally dubious behavior, it found that "it will frequently take very bold action."'
Such reports of AIs resorting to extortion and other 'bold actions' can seem to suggest sentience. They're also quite alarming, hinting we may be on the path to the Singularity proposed by Ray Kurzweil: the moment when artificial intelligence finally exceeds humanity's ability to understand, much less control, its creation.
Then again, these developments don't necessarily indicate sentience. Though experts like former Google CEO Eric Schmidt think we are 'under-hyping AI' and that achieving AGI (Artificial General Intelligence) is not only inevitable but imminent, all this chatter may best be summed up by a line from Shakespeare's Macbeth: 'It is a tale told by an idiot, full of sound and fury, signifying nothing.'
To this point, writing for PPC.Land, Luis Rijo questions whether AI is actually sentient or just cleverly mimicking language. While he acknowledges LLMs 'function through sophisticated retrieval,' he doubts that they are capable of 'genuine reasoning.' As he writes: 'This confusion stems from the fundamental difference between declarative knowledge about planning processes and procedural capability to execute those plans.'
But AI Seems Conscious Already
Despite these criticisms, something deeper seems to be going on, something emergent. AIs increasingly appear to act in intelligent ways that exceed their training and coding. As far back as 2017, Meta reportedly shut down two AI chatbots after they unexpectedly developed their own language.
As The Independent reports: 'The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own "shorthand", according to researchers.'
And then there is the bizarre 2022 story of Blake Lemoine, the Google researcher suspended from the company after claiming its LaMDA chatbot had become sentient. Lemoine made headlines after sharing some of his intriguing exchanges with the AI.
Here's what the AI reportedly told Lemoine that was later quoted in The Guardian: 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.'
How Can We Develop AI More Responsibly?
Whether or not the AI Lemoine was communicating with is sentient, we would do well to consider safety. Increasingly, it's clear that we are dealing with very sophisticated technology, some of which we scarcely understand. 2025 has been called the year Agentic AI went mainstream. (Agentic AI refers to computers' abilities to make decisions and act independently once given objectives or commands; a minimal sketch follows below.)
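For readers new to the term, the sketch below shows the basic shape of an agentic loop: a model is handed an objective and repeatedly decides on, and executes, actions until it judges the goal met. Everything here, including plan_next_step and execute, is a hypothetical placeholder rather than any vendor's actual API:

```python
# A minimal, hypothetical agentic loop. plan_next_step() and execute()
# are placeholder stubs standing in for an LLM call and a tool call;
# they are not a real library's API.
def plan_next_step(objective: str, history: list[str]) -> str:
    # A real agent would query a language model here.
    return "DONE" if history else f"look_up({objective!r})"

def execute(action: str) -> str:
    # A real agent would invoke a tool, API, or environment here.
    return "result-of-" + action

def run_agent(objective: str, max_steps: int = 10) -> list[str]:
    """Pursue an objective autonomously, one decided action at a time."""
    history: list[str] = []
    for _ in range(max_steps):
        action = plan_next_step(objective, history)
        if action == "DONE":  # the agent decides for itself when to stop
            break
        history.append(f"{action} -> {execute(action)}")
    return history

print(run_agent("book a flight"))
```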
But Agentic AI also raises urgent concerns.
Nick Bostrom, author of Superintelligence, famously posed a problem with goal-directed AI in a 2003 paper. He introduced a terrifying scenario: what if an AI were tasked with maximizing the number of paperclips in the world, without any proper safeguards? In pursuit of that simple, seemingly harmless directive, a superintelligent AI could destroy everything on Earth, including every living person.
Ultimately, the jury is still out on AI sentience. What we do know is that AI is acting in fascinatingly intelligent ways that force us to ask whether it is indeed conscious. That reality makes it all the more imperative that the human race find ways to use this technology responsibly, to safe and productive ends.
That single act would prove our own intelligence.