
Latest news with #TheoryofMind

AI Is Acting Like It Has A Mind Of Its Own

Forbes | 29-07-2025

Do stunning recent news stories suggest AI is already sentient? How do you really know if a computer is conscious? For years, people pointed to the Turing Test. It was seen as the gold standard for answering this question. As the Open Encyclopedia of Cognitive Science explains: 'In Turing's imitation game, a human interrogator has text conversations with both a human being and a computer that is pretending to be human; the interrogator's goal is to identify the computer. Computers that mislead interrogators often enough, Turing proposes, can think.' But why?

From Turing to Theory of Mind

Well, a computer capable of deceiving a human demonstrates intelligence. It also indicates the computer may be operating under something called Theory of Mind: 'the ability to understand that others have their own thoughts and beliefs, even when they differ from ours.'

Now, what if there were a competition to test computers' abilities to think, deceive, and reason by interpreting their opponents' mental processes? There is. It occurred this month in the form of the Prisoner's Dilemma—for AIs.

First, some background is in order. The Prisoner's Dilemma presents a game scenario that goes like this: two thieves are arrested for a crime. Their jailers offer the prisoners a deal:

Option 1: If neither prisoner informs on the other, both will receive relatively light sentences. (This is the ideal joint outcome, though not individually the most rewarding.)

Option 2: If one prisoner informs while the other stays silent, the informer will go free while the silent one receives the harshest sentence. (This creates the highest incentive to betray the other person.)

Option 3: If both inform on each other, they will each receive a moderate sentence. (This is worse than if both prisoners had stayed silent, but better than being the only one betrayed.)

Again, the challenge is that neither prisoner knows what the other will do. They must operate with limited knowledge, relying on Theory of Mind to predict the other's behavior.

Now imagine what would happen if the leading Large Language Models (LLMs), with their vast computing power, went toe to toe in such a battle of the minds. AI agents from OpenAI, Google, and Anthropic did just this, competing in a July tournament featuring 140,000 opportunities to either cooperate with or betray each other. As was later explained: 'Seeing LLMs develop distinctive strategies while being trained on the same literature is more evidence of reasoning capabilities over just pattern matching. As models handle more high-level tasks like negotiations, resource allocation, etc., different model 'personalities' may lead to drastically different outcomes.'

This is exactly what happened. We saw different AI personality styles at work.
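The article does not publish the tournament's rules, payoffs, or code, so what follows is only a minimal sketch of how an iterated Prisoner's Dilemma round-robin can be scored. The payoff values and the three hand-coded strategies (tit_for_tat, always_defect, grudger) are assumptions standing in for the different model 'personalities' described above, not the strategies the LLM agents actually used.

```python
# Minimal iterated Prisoner's Dilemma sketch (illustrative only).
# Payoffs and strategies are assumptions, not the tournament's actual setup.
from itertools import combinations

# Conventional payoffs (higher is better): mutual cooperation beats mutual
# defection, but a lone defector does best and a lone cooperator does worst.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, opp_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opp_history[-1] if opp_history else "C"

def always_defect(my_history, opp_history):
    """Betray on every round."""
    return "D"

def grudger(my_history, opp_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in opp_history else "C"

def play_match(strat_a, strat_b, rounds=200):
    """Run one iterated match and return the two players' total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    players = {"tit_for_tat": tit_for_tat, "always_defect": always_defect, "grudger": grudger}
    totals = {name: 0 for name in players}
    for (name_a, a), (name_b, b) in combinations(players.items(), 2):
        sa, sb = play_match(a, b)
        totals[name_a] += sa
        totals[name_b] += sb
        print(f"{name_a} vs {name_b}: {sa} - {sb}")
    print("totals:", totals)
```

Even in a toy round-robin like this, fixed strategies that differ only in when they betray finish with very different totals over repeated rounds, which is the dynamic the tournament commentary points to.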
When AIs Protect Themselves

Of course, this tournament isn't the only recent instance of AIs acting in the name of self-preservation, indicating consciousness. Two months ago, the BBC reported that Anthropic's Claude Opus 4 allegedly resorted to blackmailing its developers when threatened with being shut down. 'If given the means and prompted to 'take action' or 'act boldly' in fake scenarios where its user has engaged in illegal or morally dubious behavior, it found that 'it will frequently take very bold action.'' Such reports of AIs resorting to extortion and other 'bold actions' suggest sentience.

They're also quite alarming, indicating we may be on the path to The Singularity proposed by Ray Kurzweil: the moment when artificial intelligence finally exceeds human abilities to understand, much less control, its creation.

Then again, these developments may not necessarily indicate sentience. Though experts like Google's former CEO Eric Schmidt think we are 'under-hyping AI' and that achieving AGI (Artificial General Intelligence) is not only inevitable but imminent, all this chatter may best be summed up by a line from Shakespeare's Macbeth: 'It is a tale told by an idiot, full of sound and fury, signifying nothing.' To this point, Luis Rijo questions whether AI is actually sentient or just cleverly mimicking language. While he acknowledges that LLMs 'function through sophisticated retrieval,' he doubts that they are capable of 'genuine reasoning.' As he writes: 'This confusion stems from the fundamental difference between declarative knowledge about planning processes and procedural capability to execute those plans.'

But AI Seems Conscious Already

Despite these criticisms, it appears something deeper is going on, something emergent. AIs increasingly appear to be acting in intelligent ways that exceed their training and coding. For instance, as far back as 2017, Meta reportedly shut down two AI chatbots for developing their own language, an unexpected development. As The Independent reports: 'The robots had been instructed to work out how to negotiate between themselves, and improve their bartering as they went along. But they were not told to use comprehensible English, allowing them to create their own 'shorthand', according to researchers.'

And then there is the bizarre story from 2022 of the Google researcher who was suspended from the company after claiming an AI chatbot had become sentient. Blake Lemoine made headlines after sharing some of his intriguing exchanges with the AI. Here's what the AI reportedly told Lemoine, as later quoted in The Guardian: 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.'

How Can We Develop AI More Responsibly?

Whether or not the AI Lemoine was communicating with is sentient, we would do well to consider safety. Increasingly, it's clear that we are dealing with very sophisticated technology, some of which we scarcely understand. 2025 has been called the year Agentic AI went mainstream. (Agentic AI refers to computers' ability to make decisions and act independently once given objectives or commands.) But Agentic AI also raises urgent concerns. Nick Bostrom, author of Superintelligence, famously posed a problem with Agentic AI in a 2003 paper, introducing a terrifying scenario: What if an AI were tasked with maximizing the number of paperclips in the world, without any proper safeguards? To fulfill that simple, seemingly harmless directive, a superintelligent AI could destroy everything on Earth, including every living person, just to carry out its command.

Ultimately, the jury is still out on AI sentience. What we do know is that AI is acting in fascinatingly intelligent ways that force us to question whether it is indeed conscious. This reality makes it all the more imperative for the human race to pursue ways to use this technology responsibly, to safe and productive ends. That single act would prove our own intelligence.

Building brains before bots: Why children need human-centered learning in AI age

Hans India | Science | 13-07-2025

Artificial intelligence has rapidly become embedded in children's daily lives, with recent studies showing that 70 per cent of teenagers aged 13-18 have used AI tools for everything from homework assistance to creative projects. As AI capabilities continue to expand and become more sophisticated, there's notably less conversation about the developmental risks that come with unrestricted childhood access to these powerful systems. This gap in discourse presents both an urgent challenge and an opportunity to shape how we approach AI access for the most vulnerable users in our society.

The question isn't whether children will encounter AI - it's how we'll guide that encounter to protect their development. Just as we recognize the need for age restrictions on driving, social media access, and other activities requiring mature judgment, we must acknowledge that AI interaction demands cognitive sophistication that children are still developing. This is the right time for us to establish formal age criteria for AI usage, creating educational frameworks that prioritize human development while preparing children for an AI-integrated future. This proactive approach represents a fundamental shift from unrestricted technological access to thoughtful stewardship of childhood development.

Recent research from MIT and leading child development institutions highlights both AI's remarkable capabilities and the critical importance of age-appropriate usage. Studies using brain scans reveal that AI users show significantly reduced neural activity in areas responsible for creativity, critical thinking, and memory formation. Most concerning, children who begin learning with AI assistance may experience permanently stunted development in neural networks crucial for independent reasoning and problem-solving.

The Developmental Science Behind Age Restrictions

The foundation for age-based AI restrictions lies in well-established developmental psychology. Children don't develop abstract thinking and sophisticated ethical reasoning until around age 11, making earlier exposure to advanced AI tools developmentally inappropriate. In children, Theory of Mind (ToM), the cognitive ability to understand that others have different mental states, emerges around ages 4-5. However, critical thinking skills, essential for evaluating AI outputs and recognizing bias, continue developing through adolescence. Research consistently shows that children under 16 lack the cognitive maturity to discern AI limitations, detect misinformation, or understand the long-term implications of AI dependency.

Just as we don't allow 12-year-olds to drive because their brains haven't developed sufficient risk assessment capabilities, we shouldn't allow unrestricted AI access before children can think critically about the technology's influence on their learning and decision-making. Current age restrictions for other technologies provide clear precedent. Social media platforms require users to be at least 13, recognizing that children need approximately 12 years to develop cognitive structures for ethical online decision-making. Driving ages are set at 16-18 based on extensive research showing that adolescent brains have heightened reward-seeking activity while decision-making capabilities are still maturing. The pattern is consistent: activities requiring sophisticated judgment, risk assessment, and ethical reasoning are age-restricted based on developmental science, not arbitrary convenience.
The Human Connection Crisis

Perhaps most critically, unrestricted AI access threatens the human interactions essential for healthy child development. Emotional intelligence, social skills, and empathy develop through face-to-face communication, collaborative problem-solving, and navigating complex interpersonal relationships. When children turn to AI for homework help, creative inspiration, or even emotional support, they miss irreplaceable opportunities to build these fundamental human capacities.

Research from Beijing Normal University documents concerning patterns of AI dependency among young users, with some showing withdrawal symptoms when access is removed. Children who rely heavily on AI for academic work demonstrate reduced persistence when facing challenges independently. The instant gratification model of AI interaction can undermine the patience and resilience that come from working through problems with human guidance and peer collaboration.

The goal isn't to create AI-phobic children, but rather to ensure they develop strong human capabilities before becoming dependent on artificial assistance. Children need to experience the satisfaction of independent discovery, the growth that comes from struggling with difficult concepts, and the social skills developed through collaborative learning with peers and mentors.

A Framework for Responsible AI Introduction

The solution isn't prohibition but thoughtful, age-appropriate introduction with proper safeguards. Educational institutions should implement structured AI literacy programs that begin with understanding AI as a tool rather than a replacement for human thinking. Like driver's education, these programs would combine theoretical knowledge with supervised practical experience. An AI Eligibility Framework could assess children's readiness across five key areas: technology understanding, impact awareness, ethics and responsibility, collaboration skills, and self-reflection. Only after demonstrating competency in these areas would children gain supervised access to AI tools, with full independence reserved for late adolescence when critical thinking skills are more fully developed (a hypothetical sketch of such a readiness check appears at the end of this article). Schools should establish clear guidelines requiring human-first learning approaches, where students must demonstrate mastery through traditional methods before incorporating AI assistance. This ensures that AI enhances rather than replaces fundamental cognitive development.

The Opportunity Before Us

The evidence is crystal clear. Just as we protect children from driving cars until they demonstrate cognitive readiness, AI access should be restricted until children develop the sophisticated reasoning, ethical judgment, and emotional regulation necessary for responsible use. The current generation represents the first to grow up with widespread AI access, making urgent action essential to protect their developmental potential.

We have an unprecedented opportunity to get this right from the beginning. Rather than waiting for studies documenting widespread developmental damage, we can implement protective measures now based on existing research about child development and the documented risks of early AI dependency. No doubt, the world needs AI-literate citizens who can harness technology's power while maintaining their humanity. And the opportunity before us is clear: age-appropriate AI restrictions now can preserve children's capacity for independent thinking, creativity, and meaningful human connection. That's the foundation for a truly intelligent future.
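The article describes the proposed AI Eligibility Framework only at the level of its five assessment areas, so the following is a purely hypothetical sketch of how such a readiness check might be encoded. The area names come from the paragraph above; the 1-5 scoring scale, the passing threshold, and the age cut-off for full independence are invented for illustration and are not part of the authors' proposal.

```python
# Hypothetical sketch of an "AI Eligibility Framework" readiness check.
# The five areas come from the article; the scale, threshold, and age
# cut-off below are assumptions made for illustration only.
from dataclasses import dataclass

AREAS = [
    "technology_understanding",
    "impact_awareness",
    "ethics_and_responsibility",
    "collaboration_skills",
    "self_reflection",
]

@dataclass
class Assessment:
    age: int
    scores: dict  # area name -> score on an assumed 1-5 scale

def access_level(assessment: Assessment, passing_score: int = 4, independence_age: int = 16) -> str:
    """Return 'none', 'supervised', or 'independent' access to AI tools."""
    # Competency must be demonstrated in every area before any access is granted.
    if any(assessment.scores.get(area, 0) < passing_score for area in AREAS):
        return "none"
    # Full independence is reserved for late adolescence (assumed cut-off).
    return "independent" if assessment.age >= independence_age else "supervised"

if __name__ == "__main__":
    student = Assessment(age=13, scores={area: 4 for area in AREAS})
    print(access_level(student))  # -> "supervised"
```

Any real framework would of course rest on validated assessments and educator judgment rather than a hard-coded threshold; the sketch only illustrates the gating logic the article describes.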
(Krishna Kumar is a technology explorer and strategist based in Austin, Texas, in the US. Rakshitha Reddy is an AI developer based in Atlanta, US.)
