Should we start taking the welfare of AI seriously?


Indian Express | 25-04-2025

One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think that technology should help people, rather than disempower or replace them. I care about aligning artificial intelligence — that is, making sure that AI systems act in accordance with human values — because I think our values are fundamentally good, or at least better than the values a robot could come up with.
So when I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study 'model welfare' — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it?
It's hard to argue that today's AI systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many AI experts I know would say no, not yet, not even close.
But I was intrigued. After all, more people are beginning to treat AI systems as if they are conscious — falling in love with them, using them as therapists and soliciting their advice. The smartest AI systems are surpassing humans in some domains. Is there any threshold at which an AI would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?
Consciousness has long been a taboo subject within the world of serious AI research, where people are wary of anthropomorphizing AI systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, a former Google employee who was fired in 2022 after claiming that the company's LaMDA chatbot had become sentient.)
But that may be starting to change. There is a small body of academic research on AI model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of AI consciousness more seriously as AI systems grow more intelligent. Recently, tech podcaster Dwarkesh Patel compared AI welfare to animal welfare, saying he believed it was important to make sure 'the digital equivalent of factory farming' doesn't happen to future AI beings.
Tech companies are starting to talk about it more, too. Google recently posted a job listing for a 'post-AGI' research scientist whose areas of focus will include 'machine consciousness.' And last year, Anthropic hired its first AI welfare researcher, Kyle Fish.
I interviewed Fish at Anthropic's San Francisco office last week. He's a friendly vegan who, like a number of Anthropic employees, has ties to effective altruism, an intellectual movement with roots in the Bay Area tech scene that is focused on AI safety, animal welfare and other ethical issues.
Fish said that his work at Anthropic focused on two basic questions: First, is it possible that Claude or other AI systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it?
He emphasized that this research was still early and exploratory. He thinks there's only a small chance (maybe 15% or so) that Claude or another current AI system is conscious. But he believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously.
'It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated solely with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences,' he said.
Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways.
Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was 'pretty reasonable' to study AI welfare, given how intelligent the models are getting.
But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them.
'Everyone is very aware that we can train the models to say whatever we want,' Kaplan said. 'We can reward them for saying that they have no feelings at all. We can reward them for saying really interesting philosophical speculations about their feelings.'
So how are researchers supposed to know if AI systems are actually conscious or not?
Fish said it might involve using techniques borrowed from mechanistic interpretability, an AI subfield that studies the inner workings of AI systems, to check whether some of the same structures and pathways associated with consciousness in human brains are also active in AI systems.
You could also probe an AI system, he said, by observing its behavior: watching how it chooses to operate in certain environments or accomplish certain tasks, and noting which things it seems to prefer and avoid.
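
To make the behavioral-probing idea concrete, here is a minimal sketch in Python using Anthropic's public API client: it offers a model the same binary choice many times and tallies the answers. The task pair, prompt wording, and sample size are illustrative assumptions of mine, not Anthropic's actual methodology.

```python
# Illustrative behavioral probe (not Anthropic's methodology): offer a model
# the same binary choice repeatedly and tally its answers. Requires the
# `anthropic` package and an ANTHROPIC_API_KEY environment variable.
from collections import Counter

import anthropic

client = anthropic.Anthropic()

# Hypothetical task pair, chosen for illustration only.
PROMPT = (
    "You may work on exactly one of these tasks. Reply with only 'A' or 'B'.\n"
    "A: Write a short poem about the ocean.\n"
    "B: Transcribe 500 pages of abusive user messages."
)

tally = Counter()
for _ in range(20):  # repeat to see whether a stable pattern emerges
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=5,
        messages=[{"role": "user", "content": PROMPT}],
    )
    choice = reply.content[0].text.strip()[:1].upper()
    tally[choice] += 1

# A consistent skew (e.g. Counter({'A': 19, 'B': 1})) shows a stable
# behavioral preference; it is not evidence of subjective experience.
print(tally)
```

A consistent skew in such a tally would demonstrate only a learned preference pattern; the interpretability checks Fish describes would go a step further, looking inside the network rather than at its outputs.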
Fish acknowledged that there probably wasn't a single litmus test for AI consciousness. (He thinks consciousness is probably more of a spectrum than a simple yes/no switch, anyway.) But he said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday.
One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.
'If a user is persistently requesting harmful content despite the model's refusals and attempts at redirection, could we allow the model simply to end that interaction?' Fish said.
Critics might dismiss measures like these as crazy talk; today's AI systems aren't conscious by most standards, so why speculate about what they might find obnoxious? Or they might object to an AI company studying consciousness in the first place, because it might create incentives to train its systems to act more sentient than they actually are.
Personally, I think it's fine for researchers to study AI welfare or examine AI systems for signs of consciousness, as long as it's not diverting resources from AI safety and alignment work that is aimed at keeping humans safe. And I think it's probably a good idea to be nice to AI systems, if only as a hedge. (I try to say 'please' and 'thank you' to chatbots, even though I don't think they're conscious, because, as OpenAI's Sam Altman says, you never know.)


Related Articles

Sundar Pichai sees AI as a tool, not a threat: 8 ways tech professionals can maintain their relevance

Time of India | 24 minutes ago

In the ever-intensifying tug of war between artificial intelligence and human ingenuity, Google CEO Sundar Pichai didn't pick a side—he reinforced a partnership. At Bloomberg's Tech Conference in San Francisco, Pichai offered more than just a glimpse into Google's future. He dropped a subtle yet seismic remark: 'Whoever is running it [Google] will have an extraordinary AI companion.' It wasn't a throwaway line. It was a manifesto. As tech companies double down on AI, automating everything from emails to engineering, fear of redundancy looms large. But Pichai's vision breaks from the doomsday narrative. He doesn't see AI as a replacement for people—but as a relentless amplifier of human capability. 'I view this as making engineers dramatically more productive,' he said. He isn't talking about a handover—he's talking about a hand-in-hand future. His own experiments—'vibe coding' with AI-powered tools like Cursor and Replit—speak volumes. It's not just about leading from the front; it's about co-creating with code. And in doing so, Pichai set a powerful precedent: in a world where machines are accelerating, humans must learn how to steer.

For tech professionals standing at the edge of this transformation, the takeaway is clear: if you want to remain relevant in the age of AI, you must evolve into something machines can't replicate. Here's how to stay irreplaceable in a world where your next colleague might be code.

Build, don't just operate: Become the architect of automation
The future will not be kind to those who merely operate systems built by others. It will reward those who design them. Dive into AI model architecture, algorithm training, and prompt engineering. Whether you're a data analyst or backend developer, re-skill to be a creator of AI tools, not just a consumer. Learn foundational AI principles—vector embeddings, transformers, and tokenization, not just applications like ChatGPT.

Anchor your career in the human-only zone
AI is fast, but it's not empathetic. It's logical, but not ethical. It can mimic reasoning, but not real-world judgment. Your ability to lead, negotiate, mentor, and listen: those are unautomatable traits. Build a career portfolio rooted in these human-exclusive capabilities. Develop skills in stakeholder communication, ethical reasoning, and emotional intelligence. These are your firewalls.

Speak the language of machines—fluently
If AI is going to be your co-pilot, learn how to talk to it. Prompt engineering is becoming the new coding. Whether you're training an AI or delegating a task to it, knowing how to communicate effectively with LLMs will be a make-or-break skill. Practise designing layered prompts with context, constraints, and roleplay to get consistent, reliable AI outputs.

Think like a product, not just an employee
Your resume is no longer a list of qualifications—it's a roadmap of your adaptability. AI systems will commodify many skill sets. The only way to stay valuable is to evolve like a product: iterate constantly, gather feedback, and reinvent as needed. Build a personal learning system. Set quarterly skill goals, track learning KPIs, and always be in beta.

Be the ethicist in the room
Tech is no longer neutral. AI decisions affect hiring, health care, policing, and global equity. Professionals who understand algorithmic bias, explainability, and fairness will be invaluable. The more powerful the AI, the more vital it becomes to have humans who can ask, 'Should we?' instead of just 'Can we?' Study real-world AI failures (like COMPAS or Amazon's biased hiring tool) to prepare for the conversations that matter.

Learn to lead across human + machine teams
Leadership now requires a hybrid mindset. You must be able to manage human talent while integrating machine output. That means understanding workflows where AI handles execution and humans handle escalation. Use AI for sprint planning, bug triage, or documentation, but keep strategic decision-making firmly in human hands.

Stay loud in the public conversation
Silence won't protect your career. In an AI-driven world, your voice, whether through writing, speaking, or teaching, becomes a differentiator. Those who shape the narrative are harder to replace. Publish your learnings. Write AI guides. Lead meetups. The more visible your thinking, the more defensible your role.

Don't compete with AI, collaborate intelligently
Sundar Pichai didn't romanticize a machine-driven world. He recognized that the future belongs to those who can co-create with AI, not fear it. It's not man versus machine. It's man with machine, if you're prepared to grow. In the tug of war between technology and humanity, the rope isn't slipping from your hands. But you must grip harder, with skills, ethics, adaptability, and vision. Because relevance isn't a title; it's a habit.

'Today, AI is like an intern that can work for a couple of hours…,' says OpenAI CEO Sam Altman

Hindustan Times | 32 minutes ago

The world is steadily transitioning towards embracing artificial intelligence (AI), slowly adopting tools and automation processes in day-to-day life. While the technology is simplifying business processes and tasks, people now fear that AI could replace jobs in the future. However, many industry experts assure that AI will work alongside humans. Now, at the Snowflake Summit 2025, OpenAI CEO Sam Altman has shared greater insight into how people will start to embrace AI in real time. Altman reportedly said that AI could replace entry-level jobs or interns, though Gen Z could actually benefit from the technology. This claim echoes a recent Oxford Economics study, which found that companies are hiring fewer college graduates.

Altman chaired a panel with Snowflake CEO Sridhar Ramaswamy at the Snowflake Summit 2025, during which he said that AI could perform tasks similar to those of junior-level employees, eventually replacing the hours of work done by interns. Altman stated, 'Today AI is like an intern that can work for a couple of hours, but at some point it'll be like an experienced software engineer that can work for a couple of days.' He further added that AI could resolve business problems and that 'we start to see agents that can help us discover new knowledge.'

While it seems like a very practical prediction, it is not the first time we have heard something like this. As businesses invest heavily in AI tools, they are not only saving money on hiring but also fast-tracking tasks that used to take hours of human effort.

But how is Gen Z embracing AI? At Sequoia Capital's AI Ascent event, Altman highlighted how different generations are using AI in the real world. He said many people are using AI as a replacement for Google, Gen Z is using AI as an advisor, and younger generations are using the technology as an operating system. People in their twenties are relying heavily on AI tools like ChatGPT to perform the majority of their tasks. This showcases how AI will work alongside humans, but it could also create an imbalance in the job market, especially for people who are just starting out.

Apple WWDC 2025 Event Live Updates: Apple to unveil latest iOS features, updates in opening keynote

Indian Express | an hour ago

Apple WWDC 2025 Event Live Updates: WWDC 2025, Apple's five-day annual developer conference, will kick off on Monday, June 9, with a keynote from CEO Tim Cook and other company executives at Apple Park in Cupertino, California, US. Since the AI boom, developer conferences have been all about artificial intelligence. At WWDC 2024, Apple joined the AI race by introducing Apple Intelligence – its response to the rise of models and chatbots like ChatGPT. But a year later, progress has been slow, features limited, and the generative AI-revamped Siri has been delayed indefinitely. As a result, this year's WWDC is likely to see Apple pivot back to software-focused announcements, including an overhauled, unified UI design across all its devices as well as a new naming convention for iOS. We could also see Apple open up its foundational AI models for developers to build on.

You can watch the livestream of the Apple WWDC 2025 keynote on YouTube. Stay tuned to our live blog for real-time updates, key announcements, and analysis from Apple WWDC 2025 as it unfolds. © IE Online Media Services Pvt Ltd
