It's becoming less taboo to talk about AI being 'conscious' if you work in tech
Three years ago, suggesting AI was "sentient" was one way to get fired in the tech world. Now, tech companies are more open to having that conversation.
This week, AI startup Anthropic launched a new research initiative to explore whether models might one day experience "consciousness," while a scientist at Google DeepMind described today's models as "exotic mind-like entities."
It's a sign of how much AI has advanced since 2022, when Blake Lemoine was fired from his job as a Google engineer after claiming the company's chatbot, LaMDA, had become sentient. Lemoine said the system feared being shut off and described itself as a person. Google called his claims "wholly unfounded," and the AI community moved quickly to shut the conversation down.
Neither Anthropic nor the Google scientist is going so far as Lemoine.
Anthropic, the startup behind Claude, said in a Thursday blog post that it plans to investigate whether models might one day have experiences, preferences, or even distress.
"Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?" the company asked.
Kyle Fish, an alignment scientist at Anthropic who researches AI welfare, said in a video released Thursday that the lab isn't claiming Claude is conscious, but the point is that it's no longer responsible to assume the answer is definitely no.
He said as AI systems become more sophisticated, companies should "take seriously the possibility" that they "may end up with some form of consciousness along the way."
He added: "There are staggeringly complex technical and philosophical questions, and we're at the very early stages of trying to wrap our heads around them."
Fish said researchers at Anthropic estimate Claude 3.7 has between a 0.15% and 15% chance of being conscious. The lab is studying whether the model shows preferences or aversions, and testing opt-out mechanisms that could let it refuse certain tasks.
In March, Anthropic CEO Dario Amodei floated the idea of giving future AI systems an "I quit this job" button — not because they're sentient, he said, but as a way to observe patterns of refusal that might signal discomfort or misalignment.
Meanwhile, at Google DeepMind, principal scientist Murray Shanahan has proposed that we might need to rethink the concept of consciousness altogether.
"Maybe we need to bend or break the vocabulary of consciousness to fit these new systems," Shanahan said on a DeepMind podcast published Thursday. "You can't be in the world with them like you can with a dog or an octopus — but that doesn't mean there's nothing there."
Google appears to be taking the idea seriously. A recent job listing sought a "post-AGI" research scientist, with responsibilities that include studying machine consciousness.
'We might as well give rights to calculators'
Not everyone's convinced, and many researchers acknowledge that AI systems are excellent mimics that could be trained to act conscious even if they aren't.
"We can reward them for saying they have no feelings," said Jared Kaplan, Anthropic's chief science officer, in an interview with The New York Times this week.
Kaplan cautioned that testing AI systems for consciousness is inherently difficult, precisely because they're so good at imitation.
Gary Marcus, a cognitive scientist and longtime critic of hype in the AI industry, told Business Insider he believes the focus on AI consciousness is more about branding than science.
"What a company like Anthropic is really saying is 'look how smart our models are — they're so smart they deserve rights,'" he said. "We might as well give rights to calculators and spreadsheets — which (unlike language models) never make stuff up."
Still, Fish said the topic will only become more relevant as people interact with AI in more ways — at work, online, or even emotionally.
"It'll just become an increasingly salient question whether these models are having experiences of their own — and if so, what kinds," he said.
Anthropic and Google DeepMind did not immediately respond to a request for comment.
