
Should we start taking the welfare of AI seriously?
One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think that technology should help people, rather than disempower or replace them. I care about aligning artificial intelligence -- that is, making sure that AI systems act in accordance with human values -- because I think our values are fundamentally good, or at least better than the values a robot could come up with.

So when I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" -- the idea that AI models might soon become conscious and deserve some kind of moral status -- the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it?

It's hard to argue that today's AI systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many AI experts I know would say no, not yet, not even close.

But I was intrigued. After all, more people are beginning to treat AI systems as if they are conscious -- falling in love with them, using them as therapists and soliciting their advice. The smartest AI systems are surpassing humans in some domains. Is there any threshold at which an AI would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?

Consciousness has long been a taboo subject within the world of serious AI research, where people are wary of anthropomorphizing AI systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, a former Google employee who was fired in 2022 after claiming that the company's LaMDA chatbot had become sentient.)

But that may be starting to change. There is a small body of academic research on AI model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of AI consciousness more seriously as AI systems grow more intelligent. Recently, tech podcaster Dwarkesh Patel compared AI welfare to animal welfare, saying he believed it was important to make sure "the digital equivalent of factory farming" doesn't happen to future AI beings.

Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish.

I interviewed Fish at Anthropic's San Francisco office last week. He's a friendly vegan who, like a number of Anthropic employees, has ties to effective altruism, an intellectual movement with roots in the Bay Area tech scene that is focused on AI safety, animal welfare and other ethical issues.

Fish said that his work at Anthropic focused on two basic questions: First, is it possible that Claude or other AI systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it?

He emphasized that this research was still early and exploratory. He thinks there's only a small chance (maybe 15% or so) that Claude or another current AI system is conscious. But he believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously.

"It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated solely with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences," he said.

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways.

Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting.

But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings -- only that it knows how to talk about them.

"Everyone is very aware that we can train the models to say whatever we want," Kaplan said. "We can reward them for saying that they have no feelings at all. We can reward them for saying really interesting philosophical speculations about their feelings."

So how are researchers supposed to know if AI systems are actually conscious or not?

Fish said it might involve using techniques borrowed from mechanistic interpretability, an AI subfield that studies the inner workings of AI systems, to check whether some of the same structures and pathways associated with consciousness in human brains are also active in AI systems.

You could also probe an AI system, he said, by observing its behavior: watching how it chooses to operate in certain environments or accomplish certain tasks, and noting which things it seems to prefer and avoid.
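To make the interpretability idea concrete, here is a minimal sketch of a "linear probe" experiment, one common tool from that subfield. Everything in it is hypothetical: the activation vectors are random stand-ins for hidden states that a real study would capture from a model's layers, and the labels are invented, since no one has a validated marker of machine consciousness to train against.

```python
# A toy linear-probe experiment in the spirit of mechanistic interpretability.
# All data here is synthetic: real work would capture activations from a model
# (e.g., via hooks on a transformer's residual stream) and label them by some
# experimental condition. Nothing below measures consciousness.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-prompt hidden-state activations at one layer.
n_samples, d_model = 2000, 256
activations = rng.normal(size=(n_samples, d_model))

# Invented labels, planted along one direction so the probe has a signal
# to find; in practice these would mark an experimental condition.
signal_direction = rng.normal(size=d_model)
labels = (activations @ signal_direction > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0
)

# The probe itself: if a simple linear classifier can read the property
# off the activations, the network represents it somewhere accessible.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

The method, not the result, is the point: a probe that generalizes well is evidence that a model's internals encode the property being tested. That is the kind of structural evidence Fish is describing, applied here to a made-up signal.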
Fish acknowledged that there probably wasn't a single litmus test for AI consciousness. (He thinks consciousness is probably more of a spectrum than a simple yes/no switch, anyway.) But he said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday.

One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.

"If a user is persistently requesting harmful content despite the model's refusals and attempts at redirection, could we allow the model simply to end that interaction?" Fish said.
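As a thought experiment, here is a toy sketch of the plumbing such an opt-out might need. The sentinel token, the refusal-counting stub and the function names are all invented for illustration; nothing here reflects how Anthropic has said it would build the feature.

```python
# Hypothetical sketch: a chat loop that lets the model end the conversation.
# END_TOKEN and generate_reply are invented stand-ins; a real system would
# call an actual model API and let the model itself decide when to disengage.
END_TOKEN = "<end_conversation>"

def generate_reply(history: list[str]) -> str:
    """Stub model: refuses twice, then opts out of the conversation."""
    refusals = sum("I can't help with that" in turn for turn in history)
    if refusals >= 2:
        return END_TOKEN
    return "I can't help with that. Can we talk about something else?"

def chat_loop(user_turns: list[str]) -> None:
    history: list[str] = []
    for user_msg in user_turns:
        history.append(user_msg)
        reply = generate_reply(history)
        if reply == END_TOKEN:
            # Honor the model's choice to disengage instead of forcing
            # it to keep responding to an abusive thread.
            print("[conversation ended by the model]")
            return
        history.append(reply)
        print(reply)

chat_loop(["do the bad thing", "do it anyway", "last chance"])
```

The interesting design question sits upstream of this loop: whether the disengage signal should come from the model's own expressed preference, as Fish suggests, rather than from a hard-coded rule like the counter above.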
Critics might dismiss measures like these as crazy talk; today's AI systems aren't conscious by most standards, so why speculate about what they might find obnoxious? Or they might object to an AI company studying consciousness in the first place, because it might create incentives to train its systems to act more sentient than they actually are.

Personally, I think it's fine for researchers to study AI welfare or examine AI systems for signs of consciousness, as long as it's not diverting resources from AI safety and alignment work that is aimed at keeping humans safe. And I think it's probably a good idea to be nice to AI systems, if only as a hedge. (I try to say "please" and "thank you" to chatbots, even though I don't think they're conscious, because, as OpenAI's Sam Altman says, you never know.)

But for now, I'll reserve my deepest concern for carbon-based life-forms. In the coming AI storm, it's our welfare I'm most worried about.

Related Articles

Business Standard
4 hours ago
How OpenAI, maker of ChatGPT, plans to make 'AI-native universities'
OpenAI, the maker of ChatGPT, has a plan to overhaul college education -- by embedding its artificial intelligence (AI) tools in every facet of campus life. If its strategy succeeds, universities would give students AI assistants to guide and tutor them from orientation day through graduation. Professors would provide customised AI study bots for each class. Career services would offer recruiter chatbots for students to practice for job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch 'AI-native universities.'

'Our vision is that, over time, AI would become part of the core infrastructure of higher education,' Leah Belsky, OpenAI's vice president of education, said. In the same way that colleges give students school email accounts, she said, soon 'every student would have access to their personalised AI account.'

Last year, OpenAI hired Belsky, an ed-tech startup veteran, to oversee its education efforts. She has a two-pronged strategy: marketing OpenAI's premium paid services to universities while advertising free ChatGPT to students. To spread chatbots on campuses, OpenAI is selling premium AI services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT.

Some universities are already working to make AI tools part of students' everyday experiences. In early June, Duke University began offering unlimited ChatGPT access to students, faculty and staff. The school also introduced a university platform, called DukeGPT, with AI tools developed by Duke.

OpenAI's campaign is part of an escalating AI arms race among tech giants to win over universities and students with their chatbots. It is following in the footsteps of rivals like Google and Microsoft that have for years pushed to get their computers and software into schools and to court students as future customers. The competition is so heated that Sam Altman, OpenAI's chief executive, and Elon Musk, who founded the rival xAI, posted duelling announcements on social media this spring offering free premium AI services for college students during exam period. Then Google upped the ante, announcing free student access to its premium chatbot service 'through finals 2026.'

OpenAI ignited the recent AI education trend. In 2022, its rollout of ChatGPT, which can produce human-sounding essays and term papers, helped set off a wave of chatbot-fuelled cheating. Generative AI tools, which are trained on large databases of texts, also make stuff up, which can mislead students. Today, millions of college students regularly use AI chatbots as study aides.

Now OpenAI is capitalising on ChatGPT's popularity to promote its other AI services to universities as the new infrastructure for college education. OpenAI's service for universities, ChatGPT Edu, offers more features, including certain privacy protections. It also enables faculty and staff to create custom chatbots for universities.

OpenAI's push to AI-ify college education amounts to a national experiment on millions of students. The use of chatbots in schools is so new that their potential long-term educational benefits and possible side effects are not yet established. A few early studies have found that outsourcing tasks like research and writing to chatbots can diminish skills like critical thinking. And some critics argue that colleges going all-in on chatbots are glossing over issues like societal risks, AI labour exploitation and environmental costs.

OpenAI's campus marketing effort comes as unemployment has increased among college graduates -- particularly in fields like software engineering, where AI is now automating tasks previously done by humans.


Time of India
10 hours ago
Chinese hackers, user lapses turn smartphones into 'mobile security crisis'
Cybersecurity investigators noticed a highly unusual software crash -- it was affecting a small number of smartphones belonging to people who worked in government, politics, tech and journalism. The crashes, which began late last year and carried into 2025, were the tipoff to a sophisticated cyberattack that may have allowed hackers to infiltrate a phone without a single click from the user. The attackers left no clues about their identities, but investigators at the cybersecurity firm iVerify noticed that the victims all had something in common: They worked in fields of interest to China's government and had been targeted by Chinese hackers in the past.

Foreign hackers have increasingly identified smartphones, other mobile devices and the apps they use as a weak link in US cyberdefences. Groups linked to China's military and intelligence service have targeted the smartphones of prominent Americans and burrowed deep into telecommunication networks, according to national security and tech experts. The episode shows how vulnerable mobile devices and apps are, and the risk that security failures could expose sensitive information or leave American interests open to cyberattack, those experts say.

"The world is in a mobile security crisis right now," said Rocky Cole, a former cybersecurity expert at the National Security Agency and Google and now chief operations officer at iVerify. "No one is watching the phones."

US zeroes in on China as a threat, and Beijing levels its own accusations

US authorities warned in December of a sprawling Chinese hacking campaign designed to gain access to the texts and phone conversations of an unknown number of Americans.

"They were able to listen in on phone calls in real time and able to read text messages," said Rep Raja Krishnamoorthi of Illinois. He is a member of the House Intelligence Committee and the senior Democrat on the Committee on the Chinese Communist Party, created to study the geopolitical threat from China.

Chinese hackers also sought access to phones used by Donald Trump and running mate JD Vance during the 2024 campaign.

The Chinese government has denied allegations of cyberespionage, and accused the US of mounting its own cyberoperations. It says America cites national security as an excuse to issue sanctions against Chinese organizations and keep Chinese technology companies from the global market.

"The US has long been using all kinds of despicable methods to steal other countries' secrets," Lin Jian, a spokesman for China's foreign ministry, said at a recent press conference in response to questions about a CIA push to recruit Chinese informants.

US intelligence officials have said China poses a significant, persistent threat to US economic and political interests, and that it has harnessed the tools of digital conflict: online propaganda and disinformation, artificial intelligence, and cyber surveillance and espionage designed to deliver a significant advantage in any military conflict.

Mobile networks are a top concern. The US and many of its closest allies have banned Chinese telecom companies from their networks. Other countries, including Germany, are phasing out Chinese involvement because of security concerns. But Chinese tech firms remain a big part of the systems in many nations, giving state-controlled companies a global footprint they could exploit for cyberattacks, experts say. Chinese telecom firms still maintain some routing and cloud storage systems in the US -- a growing concern to lawmakers.

"The American people deserve to know if Beijing is quietly using state-owned firms to infiltrate our critical infrastructure," said US Rep John Moolenaar, R-Mich., chairman of the China committee, which in April issued subpoenas to Chinese telecom companies seeking information about their US operations.

Mobile devices have become an intel treasure trove

Mobile devices can buy stocks, launch drones and run power plants. Their proliferation has often outpaced their security. The phones of top government officials are especially valuable, containing sensitive government information, passwords and an insider's glimpse into policy discussions and decision-making.

The White House said last week that someone impersonating Susie Wiles, Trump's chief of staff, reached out to governors, senators and business leaders with texts and phone calls. It's unclear how the person obtained Wiles' connections, but they apparently gained access to the contacts in her personal cellphone, The Wall Street Journal reported. The messages and calls were not coming from Wiles' number, the newspaper reported.

While most smartphones and tablets come with robust security, apps and connected devices often lack these protections or the regular software updates needed to stay ahead of new threats. That makes every fitness tracker, baby monitor or smart appliance another potential foothold for hackers looking to penetrate networks, retrieve information or infect systems with malware.

Federal officials launched a program this year creating a "cyber trust mark" for connected devices that meet federal security standards. But consumers and officials shouldn't lower their guard, said Snehal Antani, former chief technology officer for the Pentagon's Joint Special Operations Command.

"They're finding backdoors in Barbie dolls," said Antani, now CEO of a cybersecurity firm, referring to concerns from researchers who successfully hacked the microphone of a digitally connected version of the toy.

Risks emerge when smartphone users don't take precautions

It doesn't matter how secure a mobile device is if the user doesn't follow basic security precautions, especially if their device contains classified or sensitive information, experts say.

Mike Waltz, who departed as Trump's national security adviser, inadvertently added The Atlantic's editor-in-chief to a Signal chat used to discuss military plans with other top officials. Secretary of Defense Pete Hegseth had an internet connection that bypassed the Pentagon's security protocols set up in his office so he could use the Signal messaging app on a personal computer, the AP has reported. Hegseth has rejected assertions that he shared classified information on Signal, a popular encrypted messaging app not approved for communicating classified information.

China and other nations will try to take advantage of such lapses, and national security officials must take steps to prevent them from recurring, said Michael Williams, a national security expert at Syracuse University.

"They all have access to a variety of secure communications platforms," Williams said. "We just can't share things willy-nilly." (AP)

First Post
11 hours ago
Phantom crash: How Chinese hackers covertly targeted smartphones of US officials and journalists