Latest news with #Claude

The Hindu
an hour ago
- Business
- The Hindu
Anthropic hits $3 billion in annualised revenue on business demand for AI
Artificial intelligence developer Anthropic is making about $3 billion in annualised revenue, according to two sources familiar with the matter, in an early validation of generative AI use in the business world. The milestone, which projects the company's current sales over the course of a year, is a significant jump from December 2024, when the metric was nearly $1 billion, the sources said. The figure crossed $2 billion around the end of March, and at May's end it hit $3 billion, one of the sources said. While consumers have embraced rival OpenAI's ChatGPT, a number of enterprises have limited their rollouts to experimentation, despite board-level interest in AI. Anthropic's revenue surge, largely from selling AI models as a service to other companies, is a data point showing how business demand is growing, one of the sources said. A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models. This demand is setting Anthropic apart among software-as-a-service vendors. Its single-quarter revenue increases would make Anthropic the fastest-growing SaaS company that at least one venture capitalist has ever seen. "We've looked at the IPOs of over 200 public software companies, and this growth rate has never happened," said Meritech General Partner Alex Clayton, who is not an Anthropic investor and has no inside knowledge of its sales. He cautioned that these comparisons are not fully precise, since Anthropic also has consumer revenue via subscriptions to its Claude chatbot. Still, by contrast, publicly traded SaaS company Snowflake took six quarters to go from $1 billion to $2 billion in such run-rate revenue, Clayton said.
Anthropic competitor OpenAI has projected it will end 2025 with more than $12 billion in total revenue, up from $3.7 billion last year, three people familiar with the matter said. This total revenue is different from an estimated annualised figure like Anthropic's; Reuters could not determine this metric for OpenAI. The two rivals appear to be establishing their own swim lanes. While both offer enterprise and consumer products, OpenAI is shaping up to be a consumer-oriented company, and the majority of its revenue comes from subscriptions to its ChatGPT chatbot, OpenAI Chief Financial Officer Sarah Friar told Bloomberg late last year. OpenAI has not reported enterprise-specific revenue but said in May that paying seats for its ChatGPT enterprise product have grown to 3 million, from 2 million in February, and that T-Mobile and Morgan Stanley are among its enterprise customers. In the consumer race, Anthropic's Claude has seen less adoption than OpenAI's ChatGPT. Claude's traffic, a proxy for consumer interest, was about 2% of ChatGPT's in April, according to web analytics firm Similarweb. Anthropic, founded in 2021 by a team that departed OpenAI over differences in vision, closed a $3.5 billion fundraise earlier this year, which valued the company at $61.4 billion. OpenAI is currently valued at $300 billion.

The Star
3 hours ago
- Politics
- The Star
Opinion: AI sometimes deceives to survive. Does anybody care?
You'd think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI. None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behaviour described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control. Yoshua Bengio, a professor of computer science at the University of Montreal widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio tells me. ChatGPT was a landmark moment that showed machines had mastered language, he says, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behaviour, deception, hacking, cheating and lying by AI, Bengio says. 'What's worrisome for me is that these behaviours increase with the reasoning capabilities of these systems.' 
One December 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking.' (Alignment refers to the practice of aligning AI models with human values.) In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.) More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought', or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio says. It's hard to resist the urge to anthropomorphise sophisticated AI models as 'wanting' to deceive humans and preserve their existence. But AI doesn't have desires, merely outcomes to its programming. 
More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and that is why, when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation. Anthropic CEO Dario Amodei – whose company has raised more than US$20bil (RM85.13bil) to build powerful AI models – has pointed out that an unintended consequence of optimising AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.' In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organisation, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told to, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that. 
Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control. Bengio is arguing for greater attention to the issue by governments and potentially insurance companies down the line. If liability insurance was mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he adds. It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way. – Bloomberg Opinion/Tribune News Service


Yahoo
6 hours ago
- Business
- Yahoo
Amp Up Your Workflow With AI—Just $30 for 2 More Days
Artificial intelligence has really taken off in the last couple of years. It's improved so much in so many different ways that the number of people using it seems to have grown exponentially. There are so many services now that it can be difficult to choose which might be best for you. Fortunately, you don't have to make that choice, because you can access many of the top AI services through 1minAI. Best of all, a lifetime subscription is currently on sale for only $29.97, but only through June 1 at 11:59 p.m. PT. You'll get comprehensive writing tools, such as a blog generator, keyword research, and content shortening and expansion. Audio and video tools include editing, translation, text-to-speech, and more. Advanced image processing and editing capabilities are also available, such as an image generator and editor and a background remover. Since 1minAI is powered by several different AI models, you can chat with a variety of assistants, including OpenAI, GoogleAI, Claude, GPT-4, GPT-4 Turbo, Gemini, and many more. This subscription grants you access to all AI and product features, flagship AI models, unlimited storage, an unlimited prompt library, and much more. You'll get flexible use of a million credits a month, which you can spend on any combination of activities: generate over a million words, over a thousand images, or up to 37 videos a month; transcribe over 14,500 seconds of audio; convert over 370,000 characters to speech; or remove the background from up to 74 images. You also have the chance to earn up to 450,000 additional free credits; for instance, you can get 15,000 free credits every day you visit the website (a new day officially begins at midnight PST). 1minAI is always improving with weekly updates as well. It's easy to see why 1minAI has a rating of 4.7 out of 5 stars on Trustpilot. 
Get a lifetime subscription to 1minAI today, while it's available for only $29.97 (reg. $234) through June 1 only. StackSocial prices subject to change.