
Latest news with #SaoudRizwan

AI Coding Tool Cline Has Raised $27 Million To Help Developers Control Their AI Spend

Forbes

31-07-2025



Cline CEO Saoud Rizwan said his open source AI coding tool started off as a side project for Anthropic's "Build with Claude" hackathon.

Software developers love using AI. So much so that they're emptying their wallets on AI coding software like Cursor and Anthropic's Claude Code. But the tools' ever-evolving pricing plans are driving them nuts. Earlier this month, coders using Cursor were vexed by sudden and unexpected charges after the company changed its $20-per-month subscription plan to cap previously unlimited model usage at $20, with additional fees incurred for anything more. Others complained about maxing out rate limits before being able to enter more than three prompts, calling Cursor's pricing switch 'shady' and 'vague.' (CEO Michael Truell later apologized for how the pricing changes were rolled out.) When Anthropic quietly added weekly usage limits to Claude Code, power users were left befuddled, claiming the company's calculation of usage was inaccurate.

Programmers frustrated by the opaque pricing plans of the AI coding software they use are a fast-growing market, said Saoud Rizwan, CEO and founder of open source AI coding tool Cline. Many end up locked into $200 monthly subscriptions, making it difficult for them to afford testing new models from other AI providers. In October 2024, Rizwan launched Cline hoping to bring more transparency to AI service billing and help developers afford access to a variety of AI models.

Cline plugs into code editors like VSCode and Cursor and gives developers access to AI models of their choice without arbitrary limits. Developers pay AI model providers like Anthropic, Google or OpenAI directly for what's called 'inference,' the cost of running AI models, and Cline shows them a full breakdown of the cost of each request. Because it is open source, users can see how Cline works and how it is built, ensuring they understand exactly how and why they are being billed.
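The per-request breakdown described above comes down to simple arithmetic over token counts and a provider's published per-million-token rates. A minimal sketch of that calculation, assuming hypothetical provider names, rates, and request sizes (this is illustrative, not Cline's actual code or any provider's real pricing):

```python
# Illustrative sketch: computing a per-request inference cost from token
# counts and per-million-token rates. All names and numbers are hypothetical.

RATES = {  # USD per 1M tokens: (input rate, output rate) -- example values
    "provider-a": (3.00, 15.00),
    "provider-b": (0.60, 1.20),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Return the inference cost of a single request, in USD."""
    in_rate, out_rate = RATES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A request that sends 50K tokens of codebase context and gets 4K tokens back:
cost = request_cost("provider-b", input_tokens=50_000, output_tokens=4_000)
print(f"${cost:.4f}")  # prints $0.0348
```

Because input context (files, codebase analysis) usually dwarfs the generated output, the input rate tends to dominate per-request cost for coding agents.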
'They're able to see what's happening under the hood, unlike other AI coding agents which are closed source,' said Nick Baumann, Cline's product marketing lead.

The system itself is similar to other AI coding tools: developers prompt Cline in plain English, describing the code they need and the AI model to use, and the system reads files, analyzes codebases and writes the code. The value-add is that developers know exactly what they're paying for and can choose whichever model they want for specific coding tasks.

Cline has racked up 2.7 million installs since its launch in October. The company announced Thursday that it has raised $27 million in a Series A funding round led by Emergence with participation from Pace Capital and 1984 Ventures, valuing it at $110 million. Rizwan plans to use the fresh capital to commercialize the company's open source product by adding paid features for enterprise customers like Samsung and German software company SAP, which have already started using it.

Cline is up against companies like Cognition, which Forbes reported is in talks to raise more than $300 million at a $10 billion valuation, and Cursor, which claims more than $500 million in annualized revenue from subscriptions. Rizwan, 28, said his startup's biggest differentiator in the fiercely competitive AI coding space is its business model. Companies like Cursor make money through heavily subsidized $20 monthly subscriptions, managing high costs by routing queries to cheaper AI models, he claims. Cline is 'sitting that game out altogether,' he said. 'We capture zero margin on AI usage. We're purely just directing the inference.'

That tactic helped convince Emergence partner Yaz El-Baba to lead the round. El-Baba told Forbes that because Cline doesn't make any money on inference, it has no incentive to degrade the quality of its product.
'What other players have done is raise hundreds of millions of dollars and try to subsidize their way to ubiquity so that they become the tool of choice for developers. And the way that they've chosen to do that is by bundling inference into a subscription price that is far lower than the actual cost to provide that service,' he said. 'It's just an absolutely unsustainable business model.'

But with Cline, users know what they're paying for and can choose which models to use and where to send sensitive enterprise data like proprietary code.

Cline started off as a side project for Anthropic's 'Build with Claude' hackathon in June 2024. Although Rizwan lost the hackathon, people saw promise in the AI coding agent he built and it started to gain popularity online. In November, he raised $5 million in seed funding and moved from Indiana to San Francisco to build the startup. 'I realized I opened up this can of worms,' he said.

Now, as its rivals reckon with the realities of pricing their AI coding software, Cline has found a new opening to sell its product to large enterprises, Rizwan said. And he's betting that open source is the way to go. 'Cline is open source, so you can kind of peek into the guts of the harness and kind of see how the product is interacting with the model, which is incredibly important for having control over price transparency.'

Cerebras Launches Qwen3-235B: World's Fastest Frontier AI Model with Full 131K Context Support

Business Wire

08-07-2025



PARIS--(BUSINESS WIRE)--Cerebras Systems today announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform. This milestone represents a breakthrough in AI model performance, combining frontier-level intelligence with unprecedented speed at one-tenth the cost of closed-source models, fundamentally transforming enterprise AI deployment.

Frontier Intelligence on Cerebras

Alibaba's Qwen3-235B delivers model intelligence that rivals frontier models such as Claude 4 Sonnet, Gemini 2.5 Flash, and DeepSeek R1 across a range of science, coding, and general knowledge benchmarks, according to independent tests by Artificial Analysis. Qwen3-235B uses an efficient mixture-of-experts architecture that delivers exceptional compute efficiency, enabling Cerebras to offer the model at $0.60 per million input tokens and $1.20 per million output tokens—less than one-tenth the cost of comparable closed-source models.

Cut Reasoning Time from Minutes to Seconds

Reasoning models are notoriously slow, often taking minutes to answer a simple question. By leveraging the Wafer Scale Engine, Cerebras accelerates Qwen3-235B to an unprecedented 1,500 tokens per second, reducing response times from 1-2 minutes to 0.6 seconds and making coding, reasoning, and deep-RAG workflows nearly instantaneous. Based on Artificial Analysis measurements, Cerebras is the only company globally offering a frontier AI model capable of generating output at over 1,000 tokens per second, setting a new standard for real-time AI performance.

131K Context Enables Production-grade Code Generation

Concurrent with this launch, Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B.
This expansion directly impacts the model's ability to reason over large codebases and complex documents. While 32K context is sufficient for simple code generation use cases, 131K context allows the model to process dozens of files and tens of thousands of lines of code simultaneously, enabling production-grade application development. With this enhanced context length, Cerebras now directly addresses the enterprise code generation market, one of the largest and fastest-growing segments for generative AI.

Strategic Partnership with Cline

To showcase these new capabilities, Cerebras has partnered with Cline, the leading agentic coding tool for Microsoft VS Code with over 1.8 million installations. Cline users can now access Cerebras Qwen models directly within the editor—starting with Qwen3-32B at 64K context on the free tier. This rollout will expand to include Qwen3-235B with 131K context, delivering 10-20x faster code generation speeds compared to alternatives like DeepSeek R1.

'With Cerebras' inference, developers using Cline are getting a glimpse of the future, as Cline reasons through problems, reads codebases, and writes code in near real-time. Everything happens so fast that developers stay in flow, iterating at the speed of thought. This kind of fast inference isn't just nice to have -- it shows us what's possible when AI truly keeps pace with developers,' said Saoud Rizwan, CEO of Cline.

Frontier Intelligence at 30x the Speed and 1/10th the Cost

With today's launch, Cerebras has significantly expanded its inference offering, providing developers looking for an open alternative to OpenAI and Anthropic with comparable levels of model intelligence and code generation capabilities. Moreover, Cerebras delivers something that no other AI provider in the world—closed or open—can: instant reasoning speed at over 1,500 tokens per second, increasing developer productivity by an order of magnitude vs. GPU solutions.
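The latency claim above is easy to sanity-check with back-of-the-envelope arithmetic: generation time is output length divided by output speed. A quick sketch, where the 1,500 tokens-per-second figure comes from the announcement but the 900-token response size and the 10 tokens-per-second slow-baseline are illustrative assumptions:

```python
# Back-of-the-envelope check of the announced latency improvement.
# 1,500 tok/s is the announced Cerebras speed; the 900-token response
# and the 10 tok/s baseline are assumptions for illustration only.

def generation_seconds(output_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to stream a response of the given length."""
    return output_tokens / tokens_per_second

response_tokens = 900  # hypothetical reasoning-model answer length
print(generation_seconds(response_tokens, 1500.0))  # 0.6 seconds
print(generation_seconds(response_tokens, 10.0))    # 90.0 seconds, i.e. minutes
```

Under these assumptions the same 900-token answer drops from a minute and a half to well under a second, which is the order-of-magnitude gap the release describes.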
All of this is delivered at one-tenth the token cost of leading closed-source models.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions for the development of pathbreaking proprietary models, and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on-premises. For further information, visit or follow us on LinkedIn, X and/or Threads.
