
Latest news with #ChatGPT4.0

If a Chatbot Tells You It Is Conscious, Should You Believe It?

Yahoo

03-05-2025

  • Science
  • Yahoo

Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask if the model was conscious. The artificial intelligence chatbot system was claiming that it was 'waking up' and having inner experiences. This was not the first time AI chatbots have claimed to be conscious, and it will not be the last.

While this may merely seem amusing, the concern is important. The conversational abilities of AI chatbots, including emulating human thoughts and feelings, are quite impressive, so much so that philosophers, AI experts and policy makers are investigating the question of whether chatbots could be conscious—whether it feels like something, from the inside, to be them.

As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.

My answer is that these chatbots' claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.

AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having crawled so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like 'dog' to abstract ones like 'consciousness,' are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion. Chatbots may sometimes act conscious, but are they?

To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.

Why not simply say: 'If it looks like a duck, swims like a duck, and quacks like a duck, then it's a duck'? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could cause users of these AI systems to risk emotional engagement in a fundamentally one-sided relationship with something unable to reciprocate feelings. Worse, we could mistakenly grant chatbots moral and legal standing typically reserved for conscious beings. For instance, in situations in which we have to balance the moral value of an AI versus that of a human, we might in some cases balance them equally, for we have decided that they are both conscious. In other cases, we might even sacrifice a human to save two AIs.
Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: 'It made up its own mind—I am not responsible.' Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.

A good way to think about these AI systems is that they behave like a 'crowdsourced neocortex'—a system with intelligence that emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more and more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual map chatbots encode, as they grow more sophisticated, is something specialists are only now beginning to understand.

Crucially, this emerging capability to emulate human thought–like behaviors does not confirm or discredit chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an 'error theory'—an explanation of why we erroneously conclude the chatbots have inner lives.

The upshot is that if you are using a chatbot, remember that their sophisticated linguistic abilities do not mean they are conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and this is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.

I'm not saying that all forms of AI will forever lack consciousness. I've advocated a 'wait and see' approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim they are conscious, behaving with linguistic intelligence, they have a 'marker' for consciousness—a trait requiring further investigation that is not, alone, sufficient for judging them to be conscious.

I've written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build the tests with an understanding of human consciousness in hand and simply see if AI has these key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity like dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself. For another, even if we have a full picture of the scientific basis of consciousness in the nervous system, this understanding may lead us to simply apply that formula to AI. But AI, with its lack of brain and nervous system, might display another form of consciousness that we would miss.
So we would mistakenly assume that the only form of consciousness out there is one that mirrors our own. We need tests that assume these questions are open. Otherwise, we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how components of a system combine information—as well as my AI consciousness test (ACT). Developed with Edwin Turner of Princeton, ACT offers a battery of natural language questions that can be given to chatbots to determine if they have experience when they are at the R & D stage, before they are trained on information about consciousness.

Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we face that point, we must bear in mind that the system's behaviors do not tell us one way or another if it is conscious because it is operating under an 'error theory.' So we must separate intelligence from consciousness, realizing that the two things can come apart. Indeed, an AI chatbot could even exhibit novel discoveries about the basis of consciousness in humans—as I believe they will—but it would not mean that that particular AI felt anything. But if we prompt it right, it might point us in the direction of other kinds of AI that are.

Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, 'neuromorphic' AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not.

This underscores the import of assessing questions of AI consciousness on a case-by-case basis and not overgeneralizing from results involving a single type of AI, such as one of today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must still strive for a better scientific and philosophical understanding of consciousness itself.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Former OpenAI Staffers Implore Courts to Block What It's Trying to Do

Yahoo

26-04-2025

  • Business
  • Yahoo

Nearly ten years ago, tech tycoons Sam Altman and Elon Musk launched OpenAI with a promise to develop AI tech to further the good of humanity. But as the years passed and the market surrounding artificial intelligence grew, so too did the ambitions of OpenAI's executives. (Musk flamed out of the venture in 2019 and now runs his own AI company, xAI.) Now, the venture that began as a transparent tech nonprofit is quickly turning into a typical Silicon Valley startup — complete with whistleblowers speaking out against the company's foray into the private market.

Earlier this week, a group of former OpenAI employees, law professors, activists, and Nobel Prize winners sent a letter to the California and Delaware attorneys general pleading with them to stop OpenAI from transforming itself into a private company. Though a number of OpenAI's operations have pivoted to a commercial model in recent years — like ChatGPT, which charges a subscription for higher-performing versions — the company's current plan is to restructure itself as an entirely for-profit venture.

"OpenAI may one day build technology that could get us all killed," said former OpenAI employee Nisan Stiennon, alluding to the company's pursuit of Artificial General Intelligence (AGI), the hypothetical point at which machine intelligence matches or exceeds human ability. "It is to OpenAI's credit that it's controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control."

At the moment, most of the titans in the AI industry operate as commercial ventures, as opposed to nonprofits. With this move, OpenAI would simply be joining the rat race — but insiders say the restructuring would take the company and its frontrunning tech beyond the humanitarian promise that pushed it to the top in the first place.

As a nonprofit, OpenAI's directors have a legal obligation to follow the company's charter, which currently includes the goal "to ensure that artificial general intelligence benefits all of humanity," according to Todor Markov, a former OpenAI employee who now works for Anthropic, which is overseen by a five-person long-term benefit trust. "Directors of the [public benefit corporation] would have no such fiduciary duty," Markov explains of the OpenAI transition. "They would be *allowed* to balance [that duty] against shareholder interests, but not *required* to do so... as long as they haven't broken any laws, you have no recourse."

Altman was previously ousted from OpenAI after pulling some shady stunts in 2023, including hiding the release of ChatGPT from his board of directors and approving enhancements to ChatGPT 4.0 without running them through the company's jointly organized Safety Board. That "blip," as it came to be known, only lasted five days before Altman was reinstalled, but it remains a black mark on the tech tycoon's reputation.

The question of whether OpenAI is allowed to proceed as a public benefit corporation is up to the two states' attorneys general. Whether it ultimately matters is another question entirely; AGI technology is still a pipe dream at the moment, while a growing body of research suggests it's impossible, at least building on today's technology — which would make the AGI threat of a for-profit OpenAI vastly overblown.

Through a filter, darkly

Express Tribune

20-04-2025

  • Entertainment
  • Express Tribune

Miyazaki rose to global fame with the 12th film produced by Studio Ghibli, the animation studio he co-founded with Isao Takahata. The film Spirited Away gained cult classic status among anime films for the emotional depth and resonance its stunning visuals evoke. If you want to watch what running away with your imagination looks like for two hours, that's probably apt proof.

Recently, Ghibli art was trending again. Not because of a new film release but a photo filter which was on the lips and fingertips of every enthusiastic selfie taker. OpenAI released a Ghibli filter on ChatGPT 4.0, which had anime lovers abuzz with glee. They could capture moments of their lives with loved ones and transform them into fantastical and lush images like those that swept their childhood imagination. I even saw an AI-generated, Ghiblified video of Princess Kate's royal wedding, made by a fan. Such was the craze for the AI tool.

On the heels of its popularity, the Ghibli trend sparked controversy, much like fire in the hands of gods and mortals. The Israel Defence Forces posted four illustrations of its military branches in the Ghibli style because it just wanted to 'hop on the trend'. In response, the IDF was condemned for its utter insensitivity in using the art style of a studio and a man that have made anti-war animation films. The White House posted on its official X account an AI-generated image of a US Immigration and Customs Enforcement (ICE) agent detaining a Hispanic woman on drug dealing charges. It was a Ghiblified photo, and social media was appalled at the disparaging image. Others used the filter to recreate images of tragedies such as 9/11 and the police murder of George Floyd, a Black American. It was just a trend for most users; as for the IDF, it wasn't going to do background research on a photo filter when it literally has bombs to drop on Palestinians.

Meanwhile, Miyazaki himself was disgusted. 'I might as well die,' said the 84-year-old auteur, affronted at the millions of cheap likenesses AI rendered of his hand-drawn art. 'AI is an insult to life itself,' he had said last year when there was heated debate across the world on AI-generated art. So even when Miyazaki fans popped their vacation and wedding photos into photo editor apps to create their own Ghibli moments, while it was exciting for them, it must have been much to his horror and mortification. His fans inadvertently did him a disservice.

Not every Ghibli fan may know that it took Miyazaki six long years, in the twilight of his life, to complete his latest film The Boy and the Heron. Not everyone may know that the man behind the animation classics they love honours the integrity of making art above all else. They may assume but not fully understand that for the animation legend, bringing his handmade drawings to life is the very purpose of his existence. Internet users just want a piece of the latest trend.

What else is there to do? We are not all born artists. We may not be able to paint a rainbow or draw a straight line, yet here is artificial intelligence for us to employ and manipulate for desired results. Whether it is to render a selfie as a Miyazaki portrait or use ChatGPT to write an article like the one you are reading. AI helps to fill a vacuum between fact and fantasy, what is and what could be. You wish you could paint like an artist or that you were part of a film. There are AI tools to help you seem like you were. True creativity resists untruths.
Artists are loath to accept computer-generated images as works of fine art. AI can whip up a Van Gogh-esque impressionist landscape or a dreamy scene a la Monet. But there is only one original Starry Night, only one The Garden at Vetheuil. Celebrated Chinese artist and dissident Ai Weiwei proclaimed that art that can be replicated by AI is 'meaningless'. Even if you have not seen a Van Gogh up close, with a keen eye you can tell it apart from its computer-generated likeness. The human hand is the distinction. It translates the emotions and thoughts produced in the human consciousness.

In the documentary on Miyazaki, there is a scene where the artist is agonising over his drawings of the boy in the semi-autobiographical film. His movement is just not quite lifelike in one second of the film. Miyazaki cannot explain in words why he is dissatisfied. It is a feeling, and an infuriating equation he can't solve. AI does not grapple with this artistic anguish. It follows a bunch of sophisticated codes and numbers to create a visual; whether it is close to being lifelike or not, it doesn't have the cognition.

With airbrushing and photoshopping tools, the result of a person's picture can be deformed if done badly. Similarly, even when you use a filter like the Ghibli one, your anime version might have eight fingers, or your girlfriend next to you might be turned into your boyfriend. Your eyes might be gazing to the left when in fact they are looking straight at the camera in the original photo.

When I used it on my pictures, the result was not at all what I expected. The Ghibli version of me could have been any other woman. It had no distinct feature or look that I could call mine. That is perhaps the most incontrovertible trait of AI-generated art, and the reason that the maestro Miyazaki took to it with indignation. The likenesses of his drawing style have no soul, when his entire career pivots on the very capability of imbuing his drawings with the spirit of life.

Granted, I used the free trial versions and did not have the means to give specific prompts to recreate my images and enhance the filter effect, yet that is what creeped me out about my selfies. I wondered at the joy people got from seeing this version of their faces. When consuming or using AI, we are not so concerned about meaning, just fascinated by the results. It has become another sensory pleasure accessible at our fingertips.

AI companies are only expanding their abilities to astound the average human intelligence. In the race to be as creative and responsive as the next Tom, Dick or real-life Harry, AI technology is closing the gap at lightning speed. While we are intrigued and fascinated with how it can elevate our daily tasks within seconds, we are not concerned by the ethics of its usage. Technology has so overwhelmed us that we don't have time to pause and think about our choice to use it.

The documentary Hayao Miyazaki and the Heron follows the daily work routine of the Studio Ghibli co-founder, where he struggles to find the perfect, the most satisfactory drawing for what he sees in his mind's eye. All the while, the people around him, those closest to him, are dying of old age. After the third death, another colleague, he says, 'I know he died and it's horrible but I don't know what to do.' So he takes to his work desk again and dives into his fantasy world. Film producer, friend and former president of Studio Ghibli Toshio Suzuki says then, 'He even converts the energy of death into work.'
Where art and life coexist with death, only the ken of ephemeral beings can fathom it. Being human is a step beyond artificial intelligence.

Musk's xAI launches Grok 3 AI

Argaam

18-02-2025

  • Business
  • Argaam

Elon Musk's xAI unveiled an updated version of its AI model, Grok 3, which it says outperforms models developed by competing companies such as Google and China's DeepSeek. In a live broadcast on Monday, the company stated that Grok 3 surpasses Google's Gemini models, DeepSeek's V3, Anthropic's Claude, and OpenAI's ChatGPT 4.0, particularly in areas like mathematics, science, and programming.

Musk pointed out that Grok 3's computational power is ten times greater than its predecessor's, and that the model completed its pre-training phase in early January, according to Bloomberg. The world's richest businessman previously described Grok 3 as "the smartest AI on Earth." He also added that his startup is committed to continually improving its models, with progress noticeable in as little as a single day.

In addition to the updated model, xAI introduced a new intelligent search engine called DeepSearch, a reasoning chatbot that shows users how it understands commands and how it plans to respond. The new search engine includes features for scientific research, brainstorming, and data analysis. During the event, the company's team mentioned that they plan to launch a voice-command-based chatbot in the near future.
