Latest news with #DwarkeshPatel
Yahoo
3 days ago
- Business
Anthropic researchers predict a ‘pretty terrible decade’ for humans as AI could wipe out white-collar jobs
Researchers at AI startup Anthropic are warning that the next decade could be difficult for some workers as artificial intelligence rapidly advances and begins replacing desk jobs. The pair predicted widespread automation of white-collar work could happen within just a few years, and Anthropic CEO Dario Amodei has said that AI may soon take over half of all entry-level office jobs.

Humans may be in for a ‘pretty terrible decade’ as AI automates more white-collar work while progress in robotics lags behind, according to Anthropic researchers. Speaking to AI podcaster Dwarkesh Patel, Anthropic’s Sholto Douglas said he predicted there would be a ‘drop in white-collar workers’ over the next two to five years, even if current AI progress stalls.

‘There is this whole spectrum of crazy futures. But the one that I feel we’re almost guaranteed to get—this is a strong statement to make—is one where, at the very least, you get a drop in white-collar workers at some point in the next five years,’ he said. ‘I think it’s very likely in two, but it seems almost overdetermined in five.’ ‘The current suite of algorithms is sufficient to automate white-collar work provided you have enough of the right kinds of data,’ he added.

Trenton Bricken, a member of the technical staff at Anthropic, seconded his fellow researcher’s point, saying: ‘We should expect to see them automated within the next five years.’

The discourse around AI job losses has been heating up recently, with some major tech figures acknowledging that the technology will have at least some effect on desk jobs. In an interview with CNN’s Anderson Cooper last month, Anthropic CEO Dario Amodei predicted that within five years, AI could automate away up to 50% of all entry-level white-collar jobs. Nvidia’s Jensen Huang has also said ‘every job’ will be affected by AI, but predicted that workers would be more likely to lose their jobs to an AI-enhanced colleague than to pure automation.

Companies like Shopify and Duolingo are already slashing hiring for roles AI can handle. According to Revelio Labs data cited by Business Insider, there has also been a steep drop in job postings for high-exposure positions like IT and data analysis. While some companies, like fintech Klarna, have walked back aggressive AI adoption due to quality concerns, most seem committed to using some form of AI to shrink white-collar workforces.

AI is already proving it can handle coding and a wide range of desk jobs, raising the possibility of a future where machines do the thinking and humans are left with the hands-on work. Douglas said this scenario could lead to a ‘pretty terrible decade’ before things improve. ‘Imagine a world where people have lost their jobs, and you haven’t yet got novel biological research. That means people’s quality of life isn’t dramatically better,’ he said. ‘A decade or two after, the world is fantastic. Robotics is solved, and you get to radical abundance.’

Anthropic recently unveiled its latest generation of cutting-edge AI models, Claude Opus 4 and Claude Sonnet 4. The models represent a significant leap in AI’s coding ability, beating out Google’s and OpenAI’s most advanced offerings. One early tester of Claude Opus 4 said the model ‘coded autonomously for nearly seven hours’ after being deployed on a complex project.


Hindustan Times
30-04-2025
- Business
‘Average American has fewer than 3 friends’: Mark Zuckerberg tells Indian-origin podcast host
As someone who built a fortune connecting people with his social media network, Mark Zuckerberg knows a thing or two about the power of relationships - and how technology can be used to shape, scale and sometimes even redefine them. In a podcast interview with Indian-American host Dwarkesh Patel, the founder of Facebook opened up about the possibility of AI replacing human connections, among other things.

India-born Dwarkesh Patel, 23, asked Mark Zuckerberg how AI companies can ensure that people form healthy relationships with chatbots. Zuckerberg replied that AI relationships will become more common as AIs get better, and that rather than judging them too early, we should observe how people actually use them. He believes people generally know what's valuable to them, and that AI can genuinely help with things like tough conversations or loneliness. He also noted that the average American has fewer than three friends, but actually wants to have 15 meaningful friendships.

'There are a lot of questions that you only can really answer as you start seeing the behaviors… I also think being too prescriptive upfront and saying, "We think these things are not good" often cuts off value,' the CEO of Meta replied.

'I do think people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with people in their lives. "I'm having this issue with my girlfriend. Help me have this conversation." Or, "I need to have a hard conversation with my boss at work. How do I have that conversation?" That's pretty helpful. As the personalization loop kicks in and the AI starts to get to know you better and better, that will just be really compelling,' he added.

Zuckerberg noted that the average American has fewer than three friends but often wants more. 'Here's one stat from working on social media for a long time that I always think is crazy. The average American has fewer than three friends, fewer than three people they would consider friends,' he told Patel. 'And the average person has demand for meaningfully more. I think it's something like 15 friends or something. At some point you're like, "All right, I'm just too busy, I can't deal with more people." But the average person wants more connection than they have,' he said.

So is AI going to replace real-world connections? Not according to the billionaire. 'There's a lot of concern people raise like, "Is this going to replace real-world, physical, in-person connections?" And my default is that the answer to that is probably not,' he explained.


Mint
30-04-2025
- Business
Your friend, girlfriend, therapist? What Mark Zuckerberg thinks about future of AI, Meta's Llama AI app, more
Mark Zuckerberg, the founder and CEO of Meta Platforms, thinks that the future of artificial intelligence (AI) lies in a blended reality, with people being smart enough to choose what is good for them. Speaking on Dwarkesh Patel's podcast in an episode titled 'Meta's AGI Plan', Mark Zuckerberg discussed the use of AI in daily life, AI tools and what he envisions as the future of AI. Mark Zuckerberg had a similar chat with Microsoft Chairman and CEO Satya Nadella at Meta's LlamaCon 2025 in California on April 29.

When asked by Patel how AI could ensure healthy relationships for people who already 'meaningfully' interact with 'AI therapists, friends, maybe more', Mark Zuckerberg felt that solutions would have to come as behaviours emerged over time. 'There are a lot of questions that you only can really answer as you start seeing the behaviors. Probably the most important upfront thing is just to ask that question and care about it at each step along the way,' he replied.

The tech billionaire was also keen not to box AI among 'things that are not good', explaining that he thinks 'being too prescriptive upfront … often cuts off value'. 'People use stuff that's valuable for them. One of my core guiding principles in designing products is that people are smart. They know what's valuable in their lives. Every once in a while, something bad happens in a product and you want to make sure you design your product well to minimise that. But if you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong,' he explained. He added that frameworks should come after understanding why people find value in something and why it's helpful in their lives.

Mark Zuckerberg feels that most people are going to use AI for social tasks, noting, 'Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with people (girlfriend, boss, etc.) in their lives.' He shared his learnings from running a social media company, saying that the average American has fewer than three people they would consider friends, but 'has demand for meaningfully more'.

'There's a lot of concern people raise like: "Is this going to replace real-world, in-person connections?" And my default is that the answer to that is probably not. There are all these things that are better about physical connections when you can have them. But the reality is that people just don't have as much connection as they want. They feel more alone a lot of the time than they would like,' Mark Zuckerberg said, adding that as AI functions evolve, society will 'find the vocabulary' for why this is valuable.

Mark Zuckerberg acknowledged that most of the work on virtual therapists and virtual girlfriends 'is very early', adding that Meta's Reality Labs is working on Codec Avatars 'and it actually feels like a real person'. 'That's where it's going. You'll be able to have an always-on video chat with the AI. The gestures are important too. More than half of communication, when you're actually having a conversation, is not the words you speak. It's all the nonverbal stuff. How do we make sure this is not what ends up happening in five years?' he said.
Mark Zuckerberg added that it's 'crazy' that for how important the digital world is in all our lives, 'the only way we access it is through these physical, digital screens', adding: 'It just seems like we're at the point with technology where the physical and digital world should really be fully blended. But I agree. I think a big part of the design principles around that will be around how you'll be interacting with people.'

In a similar conversation with Satya Nadella during LlamaCon 2025, the two discussed the speed of AI development and how the technology is shifting work within their companies, AP reported. 'If this (AI) is going to lead to massive increases in productivity, that needs to be reflected in major increases in GDP. This is going to take some multiple years, many years, to play out. I'm curious how you think, what's your current outlook on what we should be looking for to understand the progress that this is making?' Zuckerberg asked. Satya Nadella said that 'AI has promise, but has to deliver real change in productivity — and that requires software and also management change, right? Because in some sense, people have to work with it differently.'

Meta on April 29 launched its new standalone AI assistant app — Meta AI — powered by the company's large language model (LLM) Llama, which will compete with OpenAI's ChatGPT, among others, according to a Bloomberg report. Meta AI had already been rolled out across Meta's other products Facebook, Instagram and WhatsApp, and the standalone app makes it available to other users. The app was released at LlamaCon. Mark Zuckerberg described it as 'your personal AI — designed around voice conversations', and as a tool that can help users learn about news or navigate personal issues. It will also feature a social feed where people can post about the ways in which they're using AI. 'This is the beginning of what's going to be a long journey to build this out,' Mark Zuckerberg added.


Axios
29-04-2025
- Science
Coming up: Rights for "conscious" AI
The AI industry — convinced it's on the verge of developing AI that's self-aware — is beginning to talk about ways to protect the "welfare" of AI models, as if they were entities that deserve their own rights.

Why it matters: The assumption that today's generative AI tools are close to achieving consciousness is getting baked into the industry's thinking and planning — despite plenty of evidence that such an achievement is at best very far off.

Driving the news: Anthropic last week announced a new research program devoted to "model welfare." "Now that models can communicate, relate, plan, problem-solve, and pursue goals — along with very many more characteristics we associate with people — we think it's time to address" whether we should be "concerned about the potential consciousness and experiences of the models themselves," the announcement said.

Between the lines: Anthropic raises the "possibility" that AI might become conscious, and adopts a stance of "humility." In carefully hedging its position, Anthropic takes cues from the 2024 research paper that kicked off the "model welfare" debate and that's co-authored by Anthropic AI welfare researcher Kyle Fish. The idea is to prepare companies, policy makers and the public to face ethical choices about how they treat AI tools if evidence emerges that the tools have become worthy of ethical treatment — "moral patients," in the paper's terminology.

The big picture: Researchers say they want the world to realize that this potential has moved out of the realm of science fiction and into the world of near-future scenarios. Some commentators draw comparisons with the debate over animal welfare and rights, with the podcaster Dwarkesh Patel suggesting that "the digital equivalent of factory farming" could cause "suffering" among AIs.

Yes, but: For every researcher urging us to take "AI welfare" seriously, there's a skeptic who sees the subject as a new species of hype. After the New York Times' Kevin Roose wrote a weekend column exploring these questions, Wall Street Journal tech columnist Christopher Mims wrote on Bluesky, "Stories like this are a form of uncritical advertising for AI companies." "I understand the impulse to take them at their word when they wonder aloud if their giant matrix multiplication engines are about to become sentient, but it should be resisted," Mims added.

The intrigue: There's little doubt that advanced LLMs are capable of creating a verbal facade that resembles human consciousness. But AI critics argue that this attractive storefront is missing most of the foundations of self-awareness — including any awareness that isn't purely reactive to user prompts. AI lacks a body. It has no sense of time, no hunger, no need for rest or desire to reproduce. It can't feel pain or joy or anything else. (It can output words that claim it is experiencing a feeling, but that's not the same thing.) It's all frontal cortex and no limbic system — all electric impulse and no brain chemistry. One can imagine science gradually assembling an AI consciousness by adding missing pieces — but not in the near-future time frame the AI industry is discussing. Some experts' vision leaves little room even for that kind of breakthrough. "LLMs are nothing more than models of the distribution of the word forms in their training data, with weights modified by post-training to produce somewhat different distribution," as AI critic Emily Bender recently put it on Bluesky.

Others are less certain, but worry that focusing on "AI welfare" now could be premature and divert us from more urgent questions about AI's potential harms. The 2024 AI welfare paper raises a similar concern: "If we treated an even larger number of AI systems as welfare subjects and moral patients, then we could end up diverting essential resources away from vulnerable humans and other animals who really needed them, reducing our own ability to survive and flourish. And if these AI systems were in fact merely objects, then this sacrifice would be particularly pointless and tragic."

Flashback: Google's Blake Lemoine argued three years ago that an early LLM had achieved sentience — and eventually lost his job.

Our thought bubble: Whatever happens, we're all going to need to exercise our imaginations, and fiction is still the best scenario-exploration machine humanity has invented.


Time of India
24-04-2025
Should we start taking the welfare of AI seriously?
One of my most deeply held values as a tech columnist is humanism. I believe in humans, and I think that technology should help people, rather than disempower or replace them. I care about aligning artificial intelligence -- that is, making sure that AI systems act in accordance with human values -- because I think our values are fundamentally good, or at least better than the values a robot could come up with.

So when I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" -- the idea that AI models might soon become conscious and deserve some kind of moral status -- the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it?

It's hard to argue that today's AI systems are conscious. Sure, large language models have been trained to talk like humans, and some of them are extremely impressive. But can ChatGPT experience joy or suffering? Does Gemini deserve human rights? Many AI experts I know would say no, not yet, not even close.

But I was intrigued. After all, more people are beginning to treat AI systems as if they are conscious -- falling in love with them, using them as therapists and soliciting their advice. The smartest AI systems are surpassing humans in some domains. Is there any threshold at which an AI would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?

Consciousness has long been a taboo subject within the world of serious AI research, where people are wary of anthropomorphizing AI systems for fear of seeming like cranks. (Everyone remembers what happened to Blake Lemoine, a former Google employee who was fired in 2022, after claiming that the company's LaMDA chatbot had become sentient.)

But that may be starting to change. There is a small body of academic research on AI model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of AI consciousness more seriously as AI systems grow more intelligent. Recently, tech podcaster Dwarkesh Patel compared AI welfare to animal welfare, saying he believed it was important to make sure "the digital equivalent of factory farming" doesn't happen to future AI beings.

AI companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish.

I interviewed Fish at Anthropic's San Francisco office last week. He's a friendly vegan who, like a number of Anthropic employees, has ties to effective altruism, an intellectual movement with roots in the Bay Area tech scene that is focused on AI safety, animal welfare and other ethical causes.

Fish said that his work at Anthropic focused on two basic questions: First, is it possible that Claude or other AI systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it?

He emphasized that this research was still early and exploratory. He thinks there's only a small chance (maybe 15% or so) that Claude or another current AI system is conscious.
But he believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously.

"It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated solely with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences," he said.

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways.

Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are.

But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings -- only that it knows how to talk about them.

"Everyone is very aware that we can train the models to say whatever we want," Kaplan said. "We can reward them for saying that they have no feelings at all. We can reward them for saying really interesting philosophical speculations about their feelings."

So how are researchers supposed to know if AI systems are actually conscious or not?

Fish said it might involve using techniques borrowed from mechanistic interpretability, an AI subfield that studies the inner workings of AI systems, to check whether some of the same structures and pathways associated with consciousness in human brains are also active in AI systems. Researchers could also probe an AI system, he said, by observing its behavior, watching how it chooses to operate in certain environments or accomplish certain tasks, which things it seems to prefer and avoid.

Fish acknowledged that there probably wasn't a single litmus test for AI consciousness. (He thinks consciousness is probably more of a spectrum than a simple yes/no switch, anyway.) But he said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday.

One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing. "If a user is persistently requesting harmful content despite the model's refusals and attempts at redirection, could we allow the model simply to end that interaction?" Fish asked.

Skeptics might dismiss measures like these as crazy talk; today's AI systems aren't conscious by most standards, so why speculate about what they might find obnoxious? Or they might object to an AI company studying consciousness in the first place, because it might create incentives to train their systems to act more sentient than they actually are.

Personally, I think it's fine for researchers to study AI welfare or examine AI systems for signs of consciousness, as long as it's not diverting resources from AI safety and alignment work that is aimed at keeping humans safe. And I think it's probably a good idea to be nice to AI systems, if only as a hedge.
(I try to say "please" and "thank you" to chatbots, even though I don't think they're conscious, because, as OpenAI's Sam Altman says, you never know.)

But for now, I'll reserve my deepest concern for carbon-based life-forms. In the coming AI storm, it's our welfare I'm most worried about.