
AI valuations are verging on the unhinged
Vibe coding, or the ability to spin up a piece of software using generative artificial intelligence (AI) rather than old-school programming skills, is all the rage in Silicon Valley. But it has a step-sibling. Call it vibe valuing. This is the ability of venture capitalists to conjure up vast valuations for AI startups with scant regard for old-school spreadsheet measures.
Exhibit A is Mira Murati, formerly the chief technology officer of OpenAI, who has vaulted almost overnight into the plutocracy. Her AI startup, Thinking Machines Lab, has reportedly raised $2bn at a $10bn valuation in its first fundraising round, before it has much of a strategy, let alone revenue.
Ms Murati's success can be explained by her firm's roster of ex-OpenAI researchers. Tech giants like Meta are offering megabucks for such AI superstars. Yet venture-capital (VC) grandees say that even for less exalted startups, traditional valuation measures such as projected revenue growth, customer churn and cash burn are less sacrosanct than they used to be.
This is partly because AI is advancing so quickly, making it hard to produce reliable forecasts. But it is also a result of the gusher of investment flowing into generative AI.
The once-reliable measure most at risk of debasement is annual recurring revenue (ARR), central to many startup valuations. For companies selling software as a service, as most AI firms do, it used to be easy to measure. Take a typical month of subscriptions, based on the number of users, and multiply by 12. It was complemented by strong retention rates.
Churn among customers was often less than 5% a year. As marginal costs were low, startups could burn relatively little cash before profits started to roll in. It was, by and large, a stable foundation for valuations.
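A back-of-the-envelope sketch of that traditional arithmetic, using made-up figures rather than those of any company mentioned here:

```python
# Traditional SaaS arithmetic (all figures hypothetical).
monthly_subscription_revenue = 2_000_000  # a typical month of seat-based subscriptions, in dollars
annual_churn_rate = 0.05                  # customers lost per year, historically under 5%

# ARR: take a typical month of subscription revenue and multiply by 12.
arr = monthly_subscription_revenue * 12

# Revenue still recurring a year later, assuming no new customers are signed.
retained = arr * (1 - annual_churn_rate)

print(f"ARR: ${arr:,.0f}")                          # ARR: $24,000,000
print(f"Retained after a year: ${retained:,.0f}")   # Retained after a year: $22,800,000
```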
Not so for AI startups. The revenue growth of some has been unusually rapid. Anysphere, which owns Cursor, a hit coding tool, saw its ARR surge to $500m this month, five times the level in January. Windsurf, another software-writing tool, also saw blistering growth before OpenAI agreed to buy it in May for $3bn.
But how sustainable is such growth? Jamin Ball of Altimeter Capital, a VC firm, notes that companies experiment with many AI applications, which suggests they are enthusiastic but not committed to any one product. He quips that this "easy-come, easy-go" approach from customers produces ERR, or "experimental run rate", rather than ARR. Others say churn is often upwards of 20%. It doesn't help that, in some cases, AI startups are charging based on usage rather than users (or "seats"), which is less predictable.
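The same sketch shows why the distinction matters: at the churn rates quoted above, far less of today's run rate survives the year (the starting figure is again hypothetical):

```python
# How much of a starting run rate survives one year of churn, at different rates.
starting_run_rate = 24_000_000  # dollars, hypothetical

scenarios = {
    "traditional SaaS (~5% churn)": 0.05,
    "experimental AI usage (20%+ churn)": 0.20,
}

for label, churn in scenarios.items():
    surviving = starting_run_rate * (1 - churn)
    print(f"{label}: ${surviving:,.0f} still recurring")
# traditional SaaS (~5% churn): $22,800,000 still recurring
# experimental AI usage (20%+ churn): $19,200,000 still recurring
```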
Add to this the fact that competition is ferocious, and getting more so. However fast an AI startup is growing, it has no guarantee of longevity. Many create applications on top of models built by big AI labs such as OpenAI or Anthropic. Yet these labs are increasingly offering applications of their own. Generative AI has also made it easier than ever to start a firm with just a few employees, meaning there are many more new entrants, says Max Alderman of FE International, an advisory firm.
Even well-known AI firms are far from turning a profit. Perplexity, which has sought to disrupt a search business long dominated by Google, reportedly generated revenue of $34m last year, but burned around $65m of cash. That has been no hurdle to a punchy valuation. Perplexity's latest fundraising round reportedly valued it at close to $14bn, a multiple of more than 400 times last year's revenue (compared with about 6.5 times for stocks traded on the Nasdaq exchange).
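The multiple is straightforward arithmetic on the reported figures:

```python
# Reported figures: roughly a $14bn valuation on $34m of last year's revenue.
valuation = 14_000_000_000
trailing_revenue = 34_000_000

print(f"Perplexity: ~{valuation / trailing_revenue:.0f}x revenue")   # ~412x
print("Nasdaq-listed stocks: ~6.5x revenue, per the comparison above")
```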
OpenAI, which torched about $5bn of cash last year, is worth $300bn. The willingness of venture investors to look past the losses reflects their belief that the potential market for AI is enormous and that costs will continue to plummet. In Perplexity's case, the startup may be a takeover target, too.
In time, trusty old approaches to valuations may come back into vogue, and cooler heads prevail. "I'm the old-fashioned person who still believes I need [traditional measures] to feel comfortable," says Umesh Padval of Thomvest, another VC firm. For now, just feel the vibes.
© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
