Latest news with #Krieger
Yahoo
a day ago
- Business
- Yahoo
Meta wants to replace its human workers with AI to review privacy and societal risks
Meta plans to replace its human staffers with AI to review the platform's privacy and societal risks, according to internal documents reviewed by NPR. Up to 90% of all assessments previously done by people could be automated. Other companies like Klarna, Salesforce, and Duolingo have toyed with the idea of shedding staffers as AI becomes a business goliath. CEOs are adamant that AI will work alongside humans, and not usher in a jobs Armageddon. But technology is already taking over many duties from people at some of the biggest Fortune 500 companies.

It's been revealed that Meta plans to replace its human staffers with AI in reviewing the platform's privacy and societal risks. According to the company's internal documents obtained by NPR, the algorithm could automate up to 90% of all risk assessments previously done by people. This means that essential updates to Meta's safety features, programming, and content-sharing capabilities will be mainly optimized by AI. This spells trouble for the humans at Meta who have been doing the work from the get-go. And they're not the only employees facing the harsh realities of an AI-driven business world; Klarna, Salesforce, and Duolingo have all toyed with the idea of eliminating roles in leveraging their companies with technology.

'As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people's experience,' a Meta spokesperson told Fortune in a statement. The company didn't confirm or deny the details from NPR's reporting. 'We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues. Our commitment is to deliver innovative products for people while meeting regulatory obligations.'

From the start, humans have conducted nearly all of Meta's privacy and integrity reviews.
But algorithms could soon be in charge of handling incredibly sensitive issues. The $1.46 trillion technology company told Fortune that it still relies on 'human expertise for rigorous assessments and oversight of novel or complex issues,' and that AI will only take over 'low-risk decisions.' But internal documents procured by NPR show that technology is slated to evaluate cases like AI safety, youth risk, violent content, and the spread of falsehoods, which have historically been handled by Meta's employees. Those human risk assessors needed sign-off from others to send out updates; now, AI will make its own evaluations of dangers.

Zvika Krieger, the director of responsible innovation at Meta from 2020 to 2022, told NPR that these human job duties could benefit from some optimization. But there's a line companies shouldn't cross with AI doing people's jobs; past a certain point, it's simply not better. 'If you push that too far, inevitably the quality of review and the outcomes are going to suffer,' Krieger said.

Klarna and its CEO Sebastian Siemiatkowski aren't shy about seeing the promise of AI over humans at work. The financial services company stopped hiring in late 2023, letting natural attrition run its course and whittling its 4,500-person staff down to 3,500 in 2024. The business said it saved $10 million annually by using AI for marketing, cutting back in-house lawyer time, and optimizing its communications roles. Its chatbot even does the work of 800 customer service agents, solving cases nine minutes faster than humans. 'Look, a lot of the jobs are going to be threatened. And what are the jobs that people like the least? It's lawyers, CEOs, and bankers, and I happen to be both CEO and banker,' Siemiatkowski told Bloomberg. 'So I said, 'Let's replace our jobs first.''

Advanced technology is also cutting jobs in another way; earlier this year, $258 billion giant Salesforce announced it would cut 1,000 roles as it looks to hire more AI sales agents.
And in late April, Duolingo CEO Luis von Ahn said the language-learning app would be 'AI-first.' That meant phasing out any contract work that could be handled by AI, and only allowing new hires when teams prove they can't use algorithms for the job. The chief executive walked back his statement shortly after. 'To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before),' von Ahn wrote on LinkedIn. 'I don't know exactly what's going to happen with AI, but I do know it's going to fundamentally change the way we work, and we have to get ahead of it.'

Business Insider
3 days ago
- Business
- Business Insider
AI is upending the job market, even at AI companies
Anthropic CPO Mike Krieger, who also cofounded Instagram, says the job market is going to be tough for new grads. Krieger told The New York Times' "Hard Fork" podcast on Friday that Anthropic is focused instead on hiring experienced engineers. He said he still has "some hesitancy" with entry-level workers. To some extent, that's a reflection of Anthropic's internal structure, which doesn't yet support a "really good internship program," Krieger said. Internships have long been the golden ticket to lucrative entry-level tech jobs. But it also shows how AI is upending the labor market, even at AI companies.

As AI continues to evolve, Krieger said that the role of entry-level engineers is going to shift. On a recent episode of the 20VC podcast, Krieger said software engineers could see their job evolve in the next three years as coders outsource more of their work to AI. Humans will focus on "coming up with the right ideas, doing the right user interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale — and that's probably some combination of maybe a comeback of some static analysis or maybe AI-driven analysis tools of what was actually produced."

There is an exception, however. "If somebody was... extremely good at using Claude to do their work and map it out, of course, we would bring them on as well," Steve Mnich, a spokesperson for Anthropic, told Business Insider by email. Claude, Anthropic's flagship chatbot, has become known among users as a coding wizard with a manipulative streak. "So there is, I think, a continued role for people that have embraced these tools to make themselves, in many ways, as productive as a senior engineer."

On its careers page, Anthropic is hiring for 200 roles across categories from AI research and engineering to communications and brand to software engineering infrastructure.
BI reviewed the job descriptions for each of these roles and found that the majority require five or more years of experience, while a handful of jobs, particularly in sales, require between 1 and 2 years of experience. Anthropic CEO Dario Amodei has also warned about the threat AI poses to entry-level jobs, both inside and outside the AI industry. In an interview with Axios, Amodei said the technology could wipe out as much as 50% of entry-level jobs. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," he told the outlet. "I don't think this is on people's radar." On Thursday, he told CNN that "AI is starting to get better than humans at almost all intellectual tasks, and we're going to collectively, as a society, grapple with it." David Hsu, the CEO of Retool, an AI application company with over 10,000 customers, including Boston Consulting Group, AWS, and Databricks, is also warning of changes on the horizon. He told BI that "workers have a lot of leverage over CEOs" in the current labor market. "I think CEOs are kind of tired of that. They're like, 'We need to get to the point where we can go replace labor with AI.'"


Hindustan Times
27-05-2025
- Business
- Hindustan Times
AI might let one or two people run billion-dollar companies by 2026, says top CEO
Artificial intelligence could soon give rise to 'solopreneurs': individuals, or teams of just one or two people, running billion-dollar companies as early as 2026, Dario Amodei, the co-founder and CEO of Anthropic, said. At Anthropic's Code with Claude developer conference, Amodei claimed that new AI models are so advanced that they could help single-person businesses grow like never before.

Instagram co-founder Mike Krieger, who is also Anthropic's chief product officer, asked Amodei if a single person could create such a business using AI; Amodei said it could happen as early as 2026. 'I think it'll be in an area where you don't need a lot of human-institution-centric stuff to make money,' Amodei added, suggesting that proprietary trading would be among the first to be automated like that. He also suggested that single-person companies creating AI-integrated tools for software developers could be prime candidates, since such businesses don't require many salespeople and can automate customer service.

'It's not that crazy. I built a billion-dollar company with 13 people. I think now you'd be able to do a better job than we did with AI,' Krieger said, adding that Instagram had to scale up because of content moderation. In 2012, Facebook purchased Instagram for $1 billion. Could he have built Instagram solo with Claude 4? Not quite, said Krieger. He'd still need his original co-founder, Kevin Systrom; but with Claude's help, the two of them could probably pull it off.

At the same event, Anthropic launched Claude 4, its latest line of advanced AI models. The lineup includes Claude 4 Opus, a powerful but pricey model described as 'the world's best coding model', and Claude 4 Sonnet, a more affordable, mid-sized option designed for broader use.


India Today
24-05-2025
- Business
- India Today
Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog
Anthropic, the AI startup behind the chatbot Claude, has officially walked back one of its most eyebrow-raising hiring policies. Until recently, if you fancied working at one of the world's leading AI companies, you weren't allowed to use AI in your application — particularly when writing the classic 'Why Anthropic?' essay. Yes, really. The company that's been championing AI adoption across industries had drawn the line at its own job candidates using it.

But now, Anthropic has had a change of heart. On Friday, Mike Krieger, Anthropic's chief product officer, confirmed to CNBC that the rule is being scrapped. 'We're having to evolve, even as the company at the forefront of a lot of this technology, around how we evaluate candidates,' he said. 'So our future interview loops will have much more of this ability to co-use AI.'

'Are you able to use these tools effectively to solve problems?' Krieger said. He compared it to how teachers are rethinking assignments in the age of ChatGPT and Claude. The focus now is on how candidates interact with AI: what they ask it, what they do with the output, how they tweak it, and how aware they are of the tech's blind spots. This means you can now bring AI along for the ride, but just be ready to explain how you used it.

Krieger made a solid point: if AI is going to be part of the job, especially in software engineering, then it makes sense to see how well candidates can use it, not ban it entirely. Another AI company, Cluely, abides by the same rule.

Despite the policy shift, job postings on Anthropic's website were still clinging to the old rule as of Friday, as reported by Business Insider.
One listing read: 'While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process.'

Anthropic's new hiring approach also sits oddly alongside the ethical-AI ethos of its latest Claude 4 Opus system. The model has been highlighted as a snitch: it's built to be super honest, even if that means ratting you out when you've tried something shady. Sam Bowman, an AI alignment researcher at Anthropic, recently shared on X (formerly Twitter) that the company's AI model, Claude, is programmed to take serious action if it detects highly unethical behaviour. 'If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial,' Bowman wrote, 'it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.'

This kind of vigilant behaviour reflects Anthropic's wider mission to build what it calls 'ethical' AI. According to the company's official system card, the latest version, Claude 4 Opus, has been trained to avoid contributing to any form of harm. It's reportedly grown so capable in internal tests that Anthropic has triggered 'AI Safety Level 3 Protections'. These safeguards are designed to block the model from responding to dangerous queries, such as how to build a biological weapon or engineer a lethal virus. The system has also been hardened to prevent exploitation by malicious actors, including terrorist groups. The whistleblowing feature appears to be a key part of this protective framework. While this type of behaviour isn't entirely new for Anthropic's models, Claude 4 Opus seems to take the initiative more readily than its predecessors, proactively flagging and responding to threats with a new level of assertiveness.


eNCA
23-05-2025
- Business
- eNCA
Anthropic touts improved Claude AI models
SAN FRANCISCO - Anthropic unveiled its latest Claude generative artificial intelligence (GenAI) models, claiming to set new standards for reasoning, coding, and digital agent capabilities. The launch came as the San Francisco-based startup held its first developers conference.

"Claude Opus 4 is our most powerful model yet, and the best coding model in the world," Anthropic co-founder and chief executive Dario Amodei said as he opened the event. Opus 4 and Sonnet 4 were described as "hybrid" models capable of quick responses as well as more thoughtful results that take a little longer to produce.

Anthropic's gathering came on the heels of annual developers conferences from Google and Microsoft at which the tech giants showcased their latest AI innovations. Since OpenAI's ChatGPT burst onto the scene in late 2022, various GenAI models have been vying for supremacy. GenAI tools answer questions or tend to tasks based on simple, conversational prompts. The current focus in Silicon Valley is on AI "agents" tailored to independently handle computer or online tasks. Anthropic was early to that trend, adding a "computer use" capability to its model late last year.

"Agents can actually turn human imagination into tangible reality at unprecedented scale," said Anthropic chief product officer Mike Krieger, a co-founder of Instagram. AI agents can boost what engineers at small startups can accomplish when it comes to coding, helping them build products faster, Krieger told the gathering. "I think back to Instagram's early days," Krieger said. "Our famously small team had to make a bunch of very painful either/or decisions." GenAI can also provide startup founders with business strategy insights on par with those of veteran chief financial officers, according to Krieger.

Anthropic, founded by former OpenAI engineers, launched Claude in March 2023.