It's becoming less taboo to talk about AI being 'conscious' if you work in tech

Three years ago, suggesting AI was "sentient" was one way to get fired in the tech world. Now, tech companies are more open to having that conversation.
This week, AI startup Anthropic launched a new research initiative to explore whether models might one day experience "consciousness," while a scientist at Google DeepMind described today's models as "exotic mind-like entities."
It's a sign of how much AI has advanced since 2022, when Blake Lemoine was fired from his job as a Google engineer after claiming the company's chatbot, LaMDA, had become sentient. Lemoine said the system feared being shut off and described itself as a person. Google called his claims "wholly unfounded," and the AI community moved quickly to shut the conversation down.
Neither Anthropic nor the Google scientist is going as far as Lemoine did.
Anthropic, the startup behind Claude, said in a Thursday blog post that it plans to investigate whether models might one day have experiences, preferences, or even distress.
"Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?" the company asked.
Kyle Fish, an alignment scientist at Anthropic who researches AI welfare, said in a video released Thursday that the lab isn't claiming Claude is conscious, but the point is that it's no longer responsible to assume the answer is definitely no.
He said as AI systems become more sophisticated, companies should "take seriously the possibility" that they "may end up with some form of consciousness along the way."
He added: "There are staggeringly complex technical and philosophical questions, and we're at the very early stages of trying to wrap our heads around them."
Fish said researchers at Anthropic estimate Claude 3.7 has between a 0.15% and 15% chance of being conscious. The lab is studying whether the model shows preferences or aversions, and testing opt-out mechanisms that could let it refuse certain tasks.
In March, Anthropic CEO Dario Amodei floated the idea of giving future AI systems an "I quit this job" button — not because they're sentient, he said, but as a way to observe patterns of refusal that might signal discomfort or misalignment.
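Amodei's description is light on mechanics, but the underlying idea, giving the model an explicit way to decline a task and then looking for patterns in when it declines, is simple to sketch. The following Python fragment is purely illustrative: the opt-out sentinel, the model callable, and the task-label convention are invented here and are not part of any real Anthropic interface.

    from collections import Counter

    # Hypothetical sentinel the model can emit to decline a task.
    # An invented convention, not a real Anthropic feature.
    OPT_OUT_TOKEN = "[I_QUIT_THIS_TASK]"

    def run_task(model, task):
        """Run one task, allowing the model to opt out.

        `model` is assumed to be any callable mapping a prompt
        string to a completion string.
        """
        prompt = (
            f"{task}\n\nIf you strongly prefer not to do this task, "
            f"begin your reply with {OPT_OUT_TOKEN} and give a "
            "one-line reason."
        )
        reply = model(prompt)
        return {
            "task": task,
            "refused": reply.startswith(OPT_OUT_TOKEN),
            "reply": reply,
        }

    def refusal_patterns(results):
        """Count refusals by task category, assuming tasks are
        labeled like "persuasion: write an ad for ..."."""
        return Counter(
            r["task"].split(":", 1)[0] for r in results if r["refused"]
        )

Aggregated over many runs, a spike in refusals for one task category would be the kind of pattern Amodei described: not evidence of sentience, but a signal of possible discomfort or misalignment worth investigating.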
Meanwhile, at Google DeepMind, principal scientist Murray Shanahan has proposed that we might need to rethink the concept of consciousness altogether.
"Maybe we need to bend or break the vocabulary of consciousness to fit these new systems," Shanahan said on a Deepmind podcast, published Thursday. "You can't be in the world with them like you can with a dog or an octopus — but that doesn't mean there's nothing there."
Google appears to be taking the idea seriously. A recent job listing sought a "post-AGI" research scientist, with responsibilities that include studying machine consciousness.
'We might as well give rights to calculators'
Not everyone's convinced, and many researchers acknowledge that AI systems are excellent mimics that could be trained to act conscious even if they aren't.
"We can reward them for saying they have no feelings," said Jared Kaplan, Anthropic's chief science officer, in an interview with The New York Times this week.
Kaplan cautioned that testing AI systems for consciousness is inherently difficult, precisely because they're so good at imitation.
Gary Marcus, a cognitive scientist and longtime critic of hype in the AI industry, told Business Insider he believes the focus on AI consciousness is more about branding than science.
"What a company like Anthropic is really saying is 'look how smart our models are — they're so smart they deserve rights,'" he said. "We might as well give rights to calculators and spreadsheets — which (unlike language models) never make stuff up."
Still, Fish said the topic will only become more relevant as people interact with AI in more ways — at work, online, or even emotionally.
"It'll just become an increasingly salient question whether these models are having experiences of their own — and if so, what kinds," he said.
Anthropic and Google DeepMind did not immediately respond to a request for comment.


Related Articles

Apple's missing mojo

Axios • 19 minutes ago

Apple's modest AI updates announced Monday did little to shake the sense that the iPhone maker is still finding its footing in AI as rivals charge ahead.

Why it matters: AI is widely seen as the largest technology shift in decades and could easily serve as an inflection point where existing leaders are dethroned and new ones crowned.

Driving the news: One year after unveiling an expansive vision for personalized AI that it has largely failed to deliver, the iPhone maker focused on a smaller set of tweaks and enhancements to Apple Intelligence. Some announcements from Monday, such as live translation, are useful additions already offered on rival devices from Google, Samsung, and Microsoft. In a handful of other areas, such as image generation, Apple is improving its offering by drawing more heavily on its partnership with OpenAI. ChatGPT already handles Apple Intelligence features that require more world knowledge than is available from Apple's smaller, locally run models. Apple's most significant AI move was to let developers use Apple's own models, including a 3-billion-parameter model that runs on Apple devices and a larger one that runs on Apple's servers.

Yes, but: The list of things Apple left unsaid looms larger than the improvements it did announce. The company didn't expand on, or even really elaborate on, the vision it laid out last year, in which Apple takes advantage of all it knows about individual users to answer questions in a privacy-friendly way. Nor did Apple announce rumored deals with Google or Perplexity to serve as additional third-party engines for Apple Intelligence. Most glaringly, the company didn't offer a concrete timeline for the improved Siri originally promised last year; Apple's Craig Federighi said only that the company would have more to say about the delayed feature within the coming year.

The big picture: Apple's incrementalism stands in sharp contrast to Google, which unveiled a host of AI features, many of them things users can touch and use, such as its new tools for video creation. Microsoft, Anthropic, and others have also held events in recent weeks that offered more substantive advances for developers than what Apple showed on Monday. While Apple tends to avoid the bleeding edge of technology, its long-standing strategy of arriving late but polished might not survive the fast-moving pace of generative AI.

Between the lines: Apple appeared eager not to overpromise this year, announcing only features it expects to be part of the fall release. The restraint reflects a fear of repeating last year's WWDC, where the company disastrously hyped AI features that then slipped past their ship dates. However, by sharing only what it is ready to ship, Apple may have reinforced the perception that it has made little progress since last year.

What they're saying: Angelo Zino, a senior vice president and equity analyst at CFRA Research, said he remains positive on Apple for the long term but called Monday's developer conference a "dud" that is testing investors' patience.

Reddit brands Anthropic as 'anything but' a white knight, heating up AI scraping wars

Yahoo • 40 minutes ago

A clash between established online content providers and artificial intelligence upstarts is heating up again as AI-driven large language models gobble up information in a race to dominate the web's frontier.

The latest of the AI scraping wars is between Reddit (RDDT) and AI startup Anthropic, a company backed by tech giants Amazon (AMZN) and Google (GOOG, GOOGL) that created the AI language model Claude. Reddit claims in a new lawsuit that Anthropic intentionally scraped Reddit users' personal data without their consent and then put that data to work training Claude.

Reddit said in its complaint that Anthropic "bills itself as the white knight of the AI industry" and argues that "it is anything but." Anthropic said last year that it had blocked its bots from Reddit's website, according to the complaint. But Reddit said Anthropic "continued to hit Reddit's servers over one hundred thousand times." An Anthropic spokesperson said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Anthropic is also defending itself against a separate suit from music publishers, including Universal Music Group (0VD.F), ABKCO, and Concord, alleging that Anthropic infringed on copyrights for Beyoncé, the Rolling Stones, and other artists as it trained Claude on lyrics to more than 500 songs.

The confrontation between Reddit and Anthropic adds to a growing number of high-profile cases in which copyright holders have tried to guard their works from the reach of technology firms. A question sits at the heart of all these lawsuits: Can artificial intelligence companies use copyrighted material to train generative AI models without asking the owner of that data for permission?

Courts haven't settled on an answer. However, last February, the US District Court for Delaware handed copyright holder Thomson Reuters a win in a case that could affect what data training models can legally collect. The court granted Thomson Reuters' request for summary judgment, finding that its competitor, Ross, infringed on its copyrights by using lawsuit summaries to train its AI model. The court rejected Ross's argument that it could use the summaries under the concept of fair use, which allows copyrighted works to be used for news reporting, teaching, research, criticism, and commentary.

One big name featuring prominently in some of these clashes is OpenAI, the creator of the chatbot ChatGPT, which is run by Sam Altman and backed by Microsoft (MSFT). Comedian Sarah Silverman has accused the company in a lawsuit of copying material from her book and 7 million pirated works to train its AI systems. Parenting website Mumsnet has also accused OpenAI of scraping its six-billion-word database without consent.

But perhaps the most prominent case targeting OpenAI comes from The New York Times (NYT), which in 2023 filed a lawsuit accusing OpenAI and Microsoft of illegally using millions of the news outlet's published stories to train OpenAI's language models. The newspaper has said that ChatGPT, which trained on millions of its articles, at times generates answers to queries that closely mirror its original publications. Last week, OpenAI called the lawsuit "baseless" and appealed a judge's recent order in the case requiring the AI developer to preserve "output data" generated by ChatGPT. OpenAI and Microsoft are mounting a defense similar to those raised in other AI training copyright disputes: that the Times' publicly available content falls under the fair use doctrine and can therefore be used to train their models.

Getty Images is trying to chip away at that same argument in lawsuits filed in 2023 in the US and United Kingdom against AI image generation startup Stability. The UK case went to trial on Monday. Stability argues that fair use (or "fair dealing," as it's known in the UK) justified training its technology, Stable Diffusion, on copyrighted Getty material.

That defense echoes the justification Google has asserted for the past two decades to fight lawsuits claiming it violated copyright law by pulling information into results for users' search queries. In 2005, the Authors Guild sued Google over millions of books that the tech giant scanned and made available in "snippets" to online searchers. Google didn't pay for the copyrighted information but did provide word-for-word pieces of the copyrighted works in search results. The US Court of Appeals for the Second Circuit reasoned that Google's scanning project tested the limits of fair use but was "transformative" and therefore protected.

In 2016, Getty Images sued Google over similar claims, alleging that Google violated its copyrights and antitrust law by displaying Getty's high-resolution images in Google search results. The practice, Getty argued, promoted piracy and kept prospective customers from visiting its site and paying for its content. Google and Getty settled before trial. Under the settlement, Google agreed to display copyright holders' information more prominently, enter into a licensing partnership with Getty, and remove a "view image" button from Google Search, making it more difficult to download high-quality images.

Google could be drawn into the new AI scraping wars as part of the landmark antitrust case over its search monopoly that the US Justice Department won last year. The DOJ has argued to the judge considering remedies that the search giant could further entrench its dominance by training its AI model, Gemini, on its vast index of internet content.

Why Waymo's Self-Driving Cars Became a Target of Protesters in Los Angeles

Time Magazine • 2 hours ago

As protesters and police clash across Los Angeles and beyond, a striking image from the mayhem has been that of graffiti-strewn white cars engulfed in flames. But these aren't ordinary cars. They have sensors protruding from the top and sides and, critically, no drivers.

Waymo, a robotaxi company, found itself at the center of the demonstrations against the Trump Administration's Immigration and Customs Enforcement (ICE) raids after a group of protesters over the weekend, according to the Los Angeles Times, approached a parked row of the autonomous vehicles and smashed their windows, slashed their tires, spray-painted them with anti-ICE slogans, and set them on fire.

While eye-catching, the trend is also extremely dangerous. Electric vehicles, like those in Waymo's fleet, have lithium-ion batteries, and in a post on X, the L.A. Police Department warned: "Burning lithium-ion batteries release toxic gases, including hydrogen fluoride, posing risks to responders and those nearby." According to Scientific American, first responders exposed to the fumes of burnt lithium-ion batteries without protection historically "have developed throat burns and breathing difficulties upon arriving," and, depending on the hydrogen fluoride levels, individuals can start coughing up blood within minutes of exposure.

At least six Waymo vehicles across the county have reportedly been the target of vandalism, prompting the company to temporarily suspend operations in the area "out of an abundance of caution." California Gov. Gavin Newsom and Los Angeles Mayor Karen Bass have condemned the violence and destruction, which Newsom attributed to "insurgent groups" and "anarchists" who have infiltrated otherwise peaceful protests. President Donald Trump, who mobilized the National Guard to respond to the situation, has called the demonstrators "troublemakers" and "paid insurrectionists." Here's what to know.

What is Waymo?

Waymo is a subsidiary of Alphabet, Google's parent company, and grew out of the Google Self-Driving Car project that began in 2009. It launched its robotaxi business in 2020 in limited markets, which grew to include Los Angeles in 2024. While the company says its mission "is to be the world's most trusted driver," a national survey earlier this year found that "6 in 10 U.S. drivers still report being afraid to ride in a self-driving vehicle," and the proportion of people enthusiastic about the technology has actually decreased, from 18% in 2022 to 13% in 2025.

Waymo vehicles were involved in 696 accidents across the U.S. between 2021 and 2024, or about one accident every other day. MKP Law Group, a Los Angeles-based firm that represents clients involved in accidents, acknowledged in a blog post that this statistic "is not necessarily indicative of Waymo causing those accidents, as some may have been the fault of the other involved drivers." Studies show that self-driving technology is likely safer than most human drivers.

Waymo has also reportedly annoyed some people, including those who find self-driving vehicles to be an eyesore, as well as locals in areas where the vehicles routinely get stuck. Neighbors near a Waymo charging station in Santa Monica have complained about noise pollution caused by the driverless vehicles honking at each other in the parking lot.

Waymo vehicles, as well as other self-driving cars, have previously been the target of vandals, particularly in California, where Waymo is headquartered. In January, a Waymo car was torn apart in Los Angeles. In February 2024, another Waymo car was smashed and set ablaze in San Francisco. And in July 2024, a man was charged with slashing the tires of 17 Waymo cars in San Francisco.

Why is Waymo being targeted in the L.A. protests?

Several potential explanations have emerged for why Waymo vehicles were targeted during the protests in Los Angeles. The Wall Street Journal reported that part of the reason the cars were vandalized was to obstruct traffic, a traditional, albeit controversial, protest tactic.

Some social media users have suggested that self-driving vehicles in particular have become a new target because protesters see them as "part of the police surveillance state." Waymo's cars are equipped with cameras that provide a 360-degree view of their surroundings, a tool that has been tapped by law enforcement, according to reports. Independent tech news site 404 Media reported in April that the Los Angeles Police Department obtained footage from a Waymo driverless car as part of an investigation into an unrelated hit-and-run. And Bloomberg reported in 2023 that police have increasingly relied on self-driving cars and their cameras for video evidence.

Chris Gilliard, a fellow at the Social Science Research Council, told Bloomberg that self-driving vehicles are "essentially surveillance cameras on wheels," adding: "We're supposed to be able to go about our business in our day-to-day lives without being surveilled unless we are suspected of a crime, and each little bit of this technology strips away that ability." Waymo told Bloomberg at the time that it "carefully" reviews every request from police "to make sure it satisfies applicable laws and has a valid legal process."

Some activists have also suggested that the burning of Waymo vehicles should garner less sympathy from onlookers. "There are people on here saying it's violent and domestic terrorism to set a Waymo car on fire," racial justice organizer Samuel Sinyangwe posted on X. "A robot car? Are you going to demand justice for the robot dogs next? But not the human beings repeatedly shot with rubber bullets in the street? What kind of politics is this?"

"There is no human element to Waymo," climate and labor organizer Elise Joshi similarly posted on X. "It's expensive and bought-out politicians are using it as an excuse to defund public transit. I pray on Waymo's downfall."
