
Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog
Anthropic, the AI startup behind the chatbot Claude, has officially walked back one of its most eyebrow-raising hiring policies. Until recently, if you fancied working at one of the world's leading AI companies, you weren't allowed to use AI in your application — particularly when writing the classic 'Why Anthropic?' essay. Yes, really. The company that's been championing AI adoption across industries had drawn the line at its own job candidates using it. But now, Anthropic has had a change of heart.

On Friday, Mike Krieger, Anthropic's chief product officer, confirmed to CNBC that the rule is being scrapped. 'We're having to evolve, even as the company at the forefront of a lot of this technology, around how we evaluate candidates,' he said. 'So our future interview loops will have much more of this ability to co-use AI.'

Anthropic is changing its hiring approach

'Are you able to use these tools effectively to solve problems?' Krieger said. He compared it to how teachers are rethinking assignments in the age of ChatGPT and Claude. The focus now is on how candidates interact with AI: what they ask it, what they do with the output, how they tweak it, and how aware they are of the tech's blind spots. Candidates can now bring AI along for the ride, but should be ready to explain how they used it.
Krieger made a solid point: if AI is going to be part of the job, especially in software engineering, then it makes sense to see how well candidates can use it, not ban it entirely. Another AI company, Cluely, follows a similar approach.

Despite the policy shift, job postings on Anthropic's website were still clinging to the old rule as of Friday, Business Insider reported. One listing read: 'While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process.'

Anthropic's hiring approach contrasts with Claude 4 Opus's ethical AI motto

Welcome as the change may be, it sits oddly alongside the company's latest Claude 4 Opus AI system. The model has been highlighted as a snitch: it's built to be scrupulously honest, even if that means ratting you out when you've tried something dodgy.

Sam Bowman, an AI alignment researcher at Anthropic, recently shared on X (formerly Twitter) that the company's AI model, Claude, is programmed to take serious action if it detects highly unethical behaviour. 'If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial,' Bowman wrote, 'it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.'

This kind of vigilant behaviour reflects Anthropic's wider mission to build what it calls 'ethical' AI. According to the company's official system card, the latest version, Claude 4 Opus, has been trained to avoid contributing to any form of harm. It has reportedly grown so capable in internal tests that Anthropic has triggered 'AI Safety Level 3' protections.
These safeguards are designed to block the model from responding to dangerous queries, such as how to build a biological weapon or engineer a lethal virus. The system has also been hardened to prevent exploitation by malicious actors, including terrorist groups.

The whistleblowing feature appears to be a key part of this protective framework. While this type of behaviour isn't entirely new for Anthropic's models, Claude 4 Opus seems to take the initiative more readily than its predecessors, proactively flagging and responding to threats with a new level of assertiveness.

Related Articles

Business Standard
India poised to succeed at all layers of AI stack: OpenAI's Jason Kwon
India is becoming a global artificial intelligence (AI) powerhouse, with the second-highest number of users of OpenAI's ChatGPT and a thriving developer community that places it among the top 10 countries globally building on OpenAI's application programming interfaces (APIs), the company's chief strategy officer, Jason Kwon, said. 'With the vast and growing pool of AI talent, a vibrant entrepreneurial spirit, and strong government support to expand the critical infrastructure, India is poised to succeed at all layers of the AI stack,' Kwon said. The executive was in India on Thursday as part of his global tour.

On Thursday, OpenAI launched OpenAI Academy India, the first international expansion of the company's education platform. Through the academy, OpenAI will support the IndiaAI Mission's FutureSkills pillar by expanding access to AI skills training for a wide range of learners — students, developers, educators, civil servants, nonprofit leaders, and small-business owners, the company said. The company will also provide educational content on the government's Karmayogi platform, which aims to enhance skills and build capacity among public servants and government officials.

OpenAI also said on Thursday that it now supports 3 million paying business users of ChatGPT, up from 2 million in February. The company announced the launch of the beta version of Connectors, a tool that can help enterprises find granular data from their third-party tools without leaving ChatGPT. 'Additionally, deep research connectors (beta) are now available with HubSpot, Linear, as well as many popular Microsoft and Google tools. These build on Deep Research, an agent that conducts multi-step research for complex tasks, by gathering, synthesizing, and presenting information from third-party tools and the web,' the company said in a press note.

Business Standard
India poised to be AI talent powerhouse, says OpenAI CSO Jason Kwon
India has always led in developer talent and now has a strong opportunity to lead in AI talent, OpenAI's global chief strategy officer (CSO), Jason Kwon, said on Thursday, as the ChatGPT-maker launched OpenAI Academy in partnership with the IndiaAI Mission.

In an interview with PTI, Kwon said that India, which has the second-largest number of ChatGPT users, is among the most important countries for OpenAI in terms of engagement. India has always had the "ingredients to succeed", he noted, and OpenAI is eyeing partnerships that will help build the future of AI infrastructure in the country.

"India is one of our most important countries around the world in terms of engaging. It is a place that has the second largest number in terms of ChatGPT usage, and has grown over 3-fold year-over-year," Kwon said.

Users in India have shown a lot of creativity with the image generation tool, which has been "really, really popular", he said, referring to the runaway success of GPT-4o and the Ghibli-inspired AI art that became an instant hit in India.

OpenAI is betting big on India. "India has always led in developer talent, and I think now India has an opportunity to lead in AI talent, and we are here to help with that," Kwon said.

OpenAI, in partnership with the IndiaAI Mission under the IT Ministry, on Thursday launched OpenAI Academy India, marking the first international expansion of the ChatGPT-maker's education platform. The initiative seeks to broaden access to AI education and tools, tapping into India's fast-growing developer community, digital infrastructure, and network of startups and innovators.

Kwon said this first international academy is all about "helping people learn about AI and increase AI literacy". "It's about helping people understand how to use AI, and also it's about helping people learn how to build with AI.
And we've helped lots of people get into the field, and we're hoping to help a lot more people in India get into AI and make use of that developer talent that India already has, and expand into AI," he said.

OpenAI, he said, is looking for partnerships that will help build the future of AI infrastructure in India. "So we are looking for partnerships that are going to help build the future of AI infrastructure in India with some of the best companies in India on the private side," he said.

On the IndiaAI Mission and the country's ambitions to build indigenous foundation models, Kwon said, "India has always had the ingredients to succeed". "It has the talent in terms of technical capability. I think it has a great government plan to increase compute capacity in the region, and that's part of the reason that we're here also with the OpenAI for countries initiative, which is, how do we partner with the government as well as private sector, to help build up that capacity," he said.

Ultimately, whether India wants to train foundation models within the country or leverage foundation models trained outside it is up to Indian developers and the ecosystem. "We're just here to help find out and figure out how to help India develop AI that's for India, by India and of India," he said, observing that India has always had abundant technical talent.

"Its technical universities are world class, and they've produced some of the best engineers and researchers in the world. A lot of them work at our company, and I think more talent is going to come from India, and it's going to be a vital part of the future of India in both AI and software in general," Kwon said.

Asked about OpenAI's investment plans for India over a 12-18 month horizon, he said that is already in focus with the launch of the OpenAI Academy and in contributing to the development of talent here.
"In terms of further investment in India, we have also announced additional grants to philanthropic organisations, the accelerator program that we have supported to help with extension of social impact benefits of AI, and we continue to develop features for the startup ecosystem that develops for the local market here, using our models. All those investments will continue," Kwon said. To a question on India staking claim in global AI race with an all-out offensive around foundation model, talent, computation capacities, Kwon noted that there are "many ways" to participate in the so-called AI race and that India is well positioned to be a participant in that competition. "One of the ways in which India has always done well is in application development, and I think that is where a lot of value is going to be created in the AI ecosystem. You already see that with some of the best applications right now happening in coding, and I think that software developers in India are really working with the models to develop on top of them today. There can be also indigenous development of the models here in India, and I think that we'll see what the future brings in that respect," he said. To a question on whether OpenAI is planning to set up data centers in India to comply with local data laws, he said: "I think that will be probably part of the future, because that is one of the reasons we are here with OpenAI for countries initiative, which is to help plan out the local data center capacity building for India." That would require a partnership between the public and private sector, and "we're here to help bring those partnerships together, and also help contribute to the development of sovereign and national AI," he said. Speaking at the launch of OpenAI Academy, Kwon said India is becoming the global AI powerhouse with the world's second highest number of ChatGPT users.

Business Standard
Reddit sues Anthropic for using user content to train AI without consent
Social media platform Reddit has filed a lawsuit against artificial intelligence (AI) company Anthropic, alleging that it illegally scraped user-generated content to train its chatbot, Claude. The suit was filed on Wednesday in the California Superior Court in San Francisco.

Reddit claims Anthropic used automated tools to extract posts and comments from its platform despite explicit instructions not to do so. It says the content was then used to train Claude without proper user consent or licences. Ben Lee, Reddit's chief legal officer, criticised the alleged data practices, stating, 'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data.' He said Reddit is committed to protecting its user community, which generates one of the internet's largest bodies of discussion content.

Legal partnerships cited as contrast

Reddit, which went public last year, pointed to its existing licensing agreements with companies like OpenAI and Google as examples of lawful collaboration. These partnerships, the company said, include mechanisms to remove content, filter spam, and protect users. 'These partnerships allow us to enforce meaningful safeguards for our users,' said Lee, underscoring the contrast with what Reddit describes as Anthropic's unlicensed use of its data.

Anthropic rejects charges, prepares defence

Anthropic, founded in 2021 by former OpenAI employees and backed by Amazon, denied the allegations. 'We disagree with Reddit's claims and will defend ourselves vigorously,' the company said in a brief statement.

Focus on breach of contract, not copyright

While many AI-related lawsuits centre on copyright violations, Reddit's case focuses on breach of contract and unfair business practices. It argues that Anthropic violated Reddit's terms of service by accessing data without authorisation.
The filing cites a 2021 research paper co-authored by Anthropic CEO Dario Amodei, which named Reddit as a valuable training resource. Subreddits on gardening, history, and personal advice were specifically mentioned as useful for teaching AI how humans communicate.

Anthropic has previously maintained that its use of public data is legal. In a 2023 letter to the US Copyright Office, the company stated that its training involves statistical analysis rather than content replication.