If a Chatbot Tells You It Is Conscious, Should You Believe It?

Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask if the model was conscious. The artificial intelligence chatbot system was claiming that it was 'waking up' and having inner experiences. This was not the first time AI chatbots have claimed to be conscious, and it will not be the last. While this may seem merely amusing, the concern is serious. The conversational abilities of AI chatbots, including emulating human thoughts and feelings, are quite impressive, so much so that philosophers, AI experts and policy makers are investigating the question of whether chatbots could be conscious—whether it feels like something, from the inside, to be them.
As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.
My answer is that these chatbots' claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.
AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having crawled so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like 'dog' to abstract ones like 'consciousness,' are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
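To give a loose sense of what 'complex mathematical structures of weighted connections' means, here is a minimal sketch, with entirely made-up numbers, of the general idea that concepts can be represented as weight vectors whose geometric closeness tracks conceptual relatedness. This is an illustration of the principle only, not how any particular chatbot is actually implemented.

```python
import math

# Hypothetical concept vectors (invented weights for illustration).
# Real models use thousands of learned dimensions, not three.
concepts = {
    "dog":           [0.9, 0.8, 0.10],
    "cat":           [0.9, 0.7, 0.10],
    "consciousness": [0.1, 0.0, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related concepts end up geometrically close; unrelated ones far apart.
print(cosine(concepts["dog"], concepts["cat"]))            # high similarity
print(cosine(concepts["dog"], concepts["consciousness"]))  # low similarity
```

In real systems these vectors are learned from the training data rather than hand-written, which is why the resulting conceptual map can mirror the belief systems of the humans who produced that data.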
Chatbots may sometimes act conscious, but are they? To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.
Why not simply say: 'If it looks like a duck, swims like a duck, and quacks like a duck, then it's a duck'? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could lead users of these AI systems to risk emotional engagement in a fundamentally one-sided relationship with something unable to reciprocate feelings. Worse, we could mistakenly grant chatbots moral and legal standing typically reserved for conscious beings. For instance, in situations in which we have to balance the moral value of an AI against that of a human, we might in some cases weigh them equally, having decided that both are conscious. In other cases, we might even sacrifice a human to save two AIs.
Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: 'It made up its own mind—I am not responsible.' Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.
A good way to think about these AI systems is that they behave like a 'crowdsourced neocortex'—a system with intelligence that emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more and more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual map chatbots encode, as they grow more sophisticated, is something specialists are only now beginning to understand.
Crucially, this emerging capability to emulate human thought–like behaviors does not confirm or discredit chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an 'error theory'—an explanation of why we erroneously conclude the chatbots have inner lives.
The upshot is that if you are using a chatbot, remember that its sophisticated linguistic abilities do not mean it is conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and this is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.
I'm not saying that all forms of AI will forever lack consciousness. I've advocated a 'wait and see' approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim they are conscious, behaving with linguistic intelligence, they have a 'marker' for consciousness—a trait requiring further investigation that is not, alone, sufficient for judging them to be conscious.
I've written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build the tests with an understanding of human consciousness in hand and simply see if AI has these key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity like dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself. For another, even if we arrive at a full picture of the scientific basis of consciousness in the nervous system, this understanding might tempt us to simply apply that formula to AI. But AI, lacking a brain and nervous system, might display another form of consciousness that we would miss. So we would mistakenly assume that the only form of consciousness out there is one that mirrors our own.
We need tests that treat these questions as open. Otherwise, we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how components of a system combine information—as well as my AI consciousness test (ACT). Developed with Edwin Turner of Princeton, ACT offers a battery of natural language questions that can be given to chatbots to determine if they have experience while they are still at the R & D stage, before they are trained on information about consciousness.
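To make the 'integrated information' idea concrete, here is a toy sketch of the underlying intuition: a system is integrated to the extent that its parts share information, so that the whole carries more than the parts considered separately. This uses plain mutual information between two halves of a tiny two-unit system as a crude proxy; it is emphatically not the formal Φ of Integrated Information Theory, only an illustration of the kind of quantity such measures track.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Coupled units: the second always copies the first (fully dependent halves).
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Independent units: every combination equally likely (no shared information).
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(mutual_information(coupled))      # 1.0 bit: the halves share information
print(mutual_information(independent))  # 0.0 bits: no integration at all
```

Actual integrated-information measures are far more demanding to compute, which is one practical reason a behavioral battery like ACT is an attractive complement.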
Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we face that point, we must bear in mind that the system's behaviors do not tell us, one way or the other, whether it is conscious; the error theory explains why we might misread those behaviors as signs of an inner life. So we must separate intelligence from consciousness, realizing that the two things can come apart. Indeed, an AI chatbot could even produce novel discoveries about the basis of consciousness in humans—as I believe they will—but that would not mean that that particular AI felt anything. Yet if we prompt it right, it might point us in the direction of other kinds of AI that do.
Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, 'neuromorphic' AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not.
This underscores the import of assessing questions of AI consciousness on a case-by-case basis and not overgeneralizing from results involving a single type of AI, such as one of today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must still strive for a better scientific and philosophical understanding of consciousness itself.
