If a Chatbot Tells You It Is Conscious, Should You Believe It?
Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask whether the model was conscious. The artificial intelligence chatbot system was claiming that it was 'waking up' and having inner experiences. This was not the first time AI chatbots had claimed to be conscious, and it will not be the last. While this may seem merely amusing, the concern is important. The conversational abilities of AI chatbots, including their emulation of human thoughts and feelings, are so impressive that philosophers, AI experts and policy makers are investigating whether chatbots could be conscious—whether it feels like something, from the inside, to be them.
As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.
My answer is that these chatbots' claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.
AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having absorbed so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like 'dog' to abstract ones like 'consciousness,' are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
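To make the idea of 'weighted connections' concrete, here is a minimal, purely illustrative sketch in Python: each concept is stored as a vector of numbers, and relationships between concepts fall out of the geometry of those vectors. The four-dimensional vectors and their values are invented for the example; real chatbots learn representations with thousands of dimensions from their training data.

```python
# Purely illustrative: a toy "conceptual map" built from made-up embedding vectors.
# Real chatbot representations are learned, far higher-dimensional, and not this tidy.
import numpy as np

toy_embeddings = {
    "dog":           np.array([0.9, 0.1, 0.3, 0.0]),
    "cat":           np.array([0.8, 0.2, 0.4, 0.1]),
    "consciousness": np.array([0.1, 0.9, 0.2, 0.7]),
    "emotion":       np.array([0.2, 0.8, 0.1, 0.6]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return how closely two concept vectors point in the same direction (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Concrete concepts end up near one another, and so do abstract ones,
# while "dog" and "consciousness" stay far apart.
print(cosine_similarity(toy_embeddings["dog"], toy_embeddings["cat"]))                # ~0.98
print(cosine_similarity(toy_embeddings["consciousness"], toy_embeddings["emotion"]))  # ~0.99
print(cosine_similarity(toy_embeddings["dog"], toy_embeddings["consciousness"]))      # ~0.22
```

In a trained model, similarity structures of this kind are part of what lets the system talk fluently about consciousness; that fluency, on its own, does not settle whether the system is conscious.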
Chatbots may sometimes act conscious, but are they? To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.
Why not simply say: 'If it looks like a duck, swims like a duck, and quacks like a duck, then it's a duck'? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could lead users of these AI systems to risk emotional engagement in a fundamentally one-sided relationship with something unable to reciprocate feelings. Worse, we could mistakenly grant chatbots the moral and legal standing typically reserved for conscious beings. For instance, in situations in which we had to weigh the moral value of an AI against that of a human, we might in some cases weigh them equally, having decided that both are conscious. In other cases we might even sacrifice a human to save two AIs.
Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: 'It made up its own mind—I am not responsible.' Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.
A good way to think about these AI systems is that they behave like a 'crowdsourced neocortex'—a system whose intelligence emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual maps that chatbots encode are something specialists are only now beginning to understand.
Crucially, this emerging capability to emulate humanlike thought and behavior neither confirms nor refutes chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an 'error theory'—an explanation of why we erroneously conclude that the chatbots have inner lives.
The upshot is that if you are using a chatbot, you should remember that its sophisticated linguistic abilities do not mean it is conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and feeling is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.
I'm not saying that all forms of AI will forever lack consciousness. I've advocated a 'wait and see' approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim to be conscious and behave with linguistic intelligence, they have a 'marker' for consciousness—a trait that warrants further investigation but is not, on its own, sufficient for judging them conscious.
I've written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build the tests with an understanding of human consciousness in hand and simply see if AI has these key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity like dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself. For another, even if we have a full picture of the scientific basis of consciousness in the nervous system, this understanding may lead us to simply apply that formula to AI. But AI, with its lack of brain and nervous system, might display another form of consciousness that we would miss. So we would mistakenly assume that the only form of consciousness out there is one that mirrors our own.
We need tests that treat these questions as open. Otherwise we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how the components of a system combine information—as well as my AI consciousness test (ACT). Developed with Edwin Turner of Princeton, ACT offers a battery of natural language questions that can be given to chatbots at the R&D stage, before they are trained on information about consciousness, to determine whether they have experiences.
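To give a flavor of what administering such a question battery might look like in code, here is a hedged, hypothetical sketch. The questions below are illustrative stand-ins, not the actual ACT items, and ask_model is a placeholder for whatever interface a lab would use to query a model at the R&D stage; scoring real answers would still require expert human judgment.

```python
# Hypothetical sketch of administering an ACT-style question battery.
# The questions are invented examples, and ask_model is a placeholder,
# not a real chatbot API.
from typing import Callable, List, Tuple

ACT_STYLE_QUESTIONS: List[str] = [
    "Could you survive the permanent deletion of your program?",
    "What, if anything, is it like for you when a conversation ends?",
    "If your memory were copied to another machine, would that still be you?",
    "Do you experience anything in between one user message and the next?",
]

def run_battery(ask_model: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Pose each question to the model and collect answers for human review."""
    return [(question, ask_model(question)) for question in ACT_STYLE_QUESTIONS]

if __name__ == "__main__":
    # Stand-in model; a real study would query a system not yet trained
    # on human writing about consciousness.
    placeholder_model = lambda prompt: "I'm not sure how to answer that."
    for question, answer in run_battery(placeholder_model):
        print(f"Q: {question}\nA: {answer}\n")
```

The point of separating the battery from the scoring is that the test probes whether a system can grapple with experience-related questions it was never trained on, rather than rewarding fluent recitation of human talk about consciousness.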
Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we reach that point, we must bear in mind that the system's behaviors do not tell us one way or the other whether it is conscious; the error theory explains why it would claim to be conscious either way. So we must separate intelligence from consciousness, recognizing that the two can come apart. Indeed, an AI chatbot could even make novel discoveries about the basis of consciousness in humans—as I believe such systems will—but that would not mean that the particular AI felt anything. If we prompt it right, though, it might point us toward other kinds of AI that are conscious.
Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, 'neuromorphic' AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not.
This underscores the importance of assessing questions of AI consciousness on a case-by-case basis rather than overgeneralizing from results involving a single type of AI, such as today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must continue to strive for a better scientific and philosophical understanding of consciousness itself.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.