Building AI with old software is like trying to make a car out of a horse-drawn carriage, says YC general partner

AI may be powering a new generation of apps, but most developers are still stuck in the past, said Y Combinator general partner Pete Koomen.
On an episode of the "Y Combinator" podcast published Friday, Koomen likened some AI tools to the earliest versions of automobiles — when inventors simply bolted engines onto wooden carriages without rethinking the design of the vehicle itself. Just like early car builders, developers are taking a revolutionary technology and retrofitting it into legacy designs.
He called this approach the "AI horseless carriage."
"There are all sorts of problems with that design," said Koomen, who is also the founder of software company Optimizely. "Inventing the motor was only a small part of what was needed to produce a vehicle that could take advantage of the enormous power," he added.
The system needs to be redesigned for AI to become useful, he said.
"We're using old software development mentality, techniques to build these features, and we're not actually taking full advantage of what AI can do," Koomen said.
Koomen's comments come as the industry grapples with a rapid, AI-driven upheaval in once-hot tech jobs.
Some developers and engineers may be coding themselves out of a job. As AI gets better at writing code, some product managers have speculated that it will increasingly take on technical coding tasks, reducing their need for engineers. Job postings for software engineers on Indeed have hit a five-year low.
At some companies, the engineer's role is transitioning from just writing code to using creative thinking to solve problems, BI's Amanda Hoover wrote in a report in February.
"If a developer is not creative, then you can replace them very easily," said James Stanger, the chief technology evangelist at CompTIA, a nonprofit trade association for the US IT industry.
Gmail's AI is a textbook example of the problem
Koomen pointed to Gmail's AI as a case in point — he called it a chatbot bolted onto an old interface instead of being rebuilt from scratch.
"When the Gmail team set out to build this, they kind of asked, 'How can we slot AI into the Gmail application?'" he said.
That approach misses the mark, Koomen said. The AI-generated drafts don't sound like what the user would write, and the prompts required to get the right tone often end up being as long as the email itself.
This makes using AI frustrating and adds more work, he said. Instead, Gmail and other AI tools should give users control over editing the system prompt itself — a set of instructions given to an AI model that guides how it should behave or respond to users.
"By editing this system prompt, I'm able to explain to the AI model how I write emails in general so that I don't have to do it every single time," he added.
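The idea can be sketched in code: instead of burying the system prompt behind the interface, the app exposes it as a setting the user can read and edit. The sketch below is hypothetical, not Gmail's actual implementation; the `build_request` helper and the prompt text are illustrative. It only shows the message payload shape that most chat-style AI APIs accept, with the system prompt surfaced as a user-owned value.

```python
# Hypothetical sketch: a user-editable system prompt for an email assistant.
# The default prompt is visible and overridable, so a user describes their
# writing style once instead of re-explaining it in every request.

DEFAULT_SYSTEM_PROMPT = (
    "You draft emails on the user's behalf. Keep replies brief, "
    "informal, and sign off with the user's first name."
)

def build_request(user_instruction: str,
                  system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> list[dict]:
    """Assemble the messages payload used by most chat-completion APIs."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_instruction},
    ]

# A user who prefers a different voice edits the prompt once, up front:
my_prompt = "Write terse, two-sentence replies. Never use exclamation marks."
request = build_request("Reply to Garry saying Friday works.",
                        system_prompt=my_prompt)
```

The design choice Koomen is pointing at is simply which side of the interface the system prompt lives on: here it is an ordinary parameter the user controls, rather than a constant hidden in the developer's code.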
The problem, Koomen said, is that developers still treat AI prompts the same way they have treated source code for decades — hidden behind interfaces, tightly controlled, and inaccessible to the user.
"For as long as we've had a software industry, there's been a division of labor between me, the user, and you, the developer," he said.
That model needs to change, Koomen said, adding that this presents a massive opportunity for founders.
"Almost every tool that we've been using for decades can be rethought from the ground up with AI," he said.


Related Articles

MongoDB Soars 14.2% After Crushing Q1 -- $1B Buyback, AI Push, and Customer Surge Spark Rally

Yahoo • 31 minutes ago

MongoDB (NASDAQ:MDB) is off to a fast start in fiscal 2026, and investors might want to take a closer look. The company reported $549 million in Q1 revenue, up 22% from last year, with its cloud product, Atlas, growing 26% and now making up 72% of total sales. Management added 2,600 new customers, marking the biggest quarterly gain in six years. Shares are up 14.2% as of 12:09 p.m. today. CEO Dev Ittycheria pointed to strong traction from both enterprises and startups as AI workloads and modern app development continue to drive demand for flexible, cloud-native databases.

Behind the scenes, MongoDB is becoming a cash machine. The company more than doubled non-GAAP operating income to $87.4 million and posted $105.9 million in free cash flow, up 74% year-over-year. With $2.5 billion in cash and short-term investments on hand, it just authorized another $800 million in share repurchases, taking its total buyback program to $1 billion. That kind of financial firepower could give MongoDB more room to support long-term growth while returning capital to shareholders.

On the AI front, MongoDB isn't just playing defense. It rolled out new retrieval models, Voyage 3.5 and 3.5 Lite, that improve accuracy and efficiency for building AI-powered apps. It also debuted its Model Context Protocol Server, which connects MongoDB to tools like GitHub Copilot and Anthropic's Claude, letting developers use natural language to interact with their data. With FY2026 revenue guidance raised to as much as $2.29 billion and full-year non-GAAP EPS projected to hit as high as $3.12, MongoDB could be shaping up as a quiet leader in the AI infrastructure race. This article first appeared on GuruFocus.

Anthropic launches new Claude service for military and intelligence use

The Verge • 42 minutes ago

Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information. The company said the models it's announcing 'are already deployed by agencies at the highest level of U.S. national security,' and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use. Claude Gov models are specifically designed to uniquely handle government needs, like threat assessment and intelligence analysis, per Anthropic's blog post. And although the company said they 'underwent the same rigorous safety testing as all of our Claude models,' the models have certain specifications for national security work. For example, they 'refuse less when engaging with classified information' that's fed into them, something consumer-facing Claude is trained to flag and avoid. Claude Gov's models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security. Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There's been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there's also been an industry-wide controversy over large tech companies like Microsoft, Google and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement. 
Anthropic's usage policy specifically dictates that any user must 'Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,' including using Anthropic's products or services to 'produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.' At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are 'carefully calibrated to enable beneficial uses by carefully selected government agencies.' Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to 'tailor use restrictions to the mission and legal authorities of a government entity,' although it will aim to 'balance enabling beneficial uses of our products and services with mitigating potential harms.' Claude Gov is Anthropic's answer to ChatGPT Gov, OpenAI's product for U.S. government agencies, which it launched in January. It's also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape. When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir's FedStart program, a SaaS offering for companies that want to deploy federal government-facing software. Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it's expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.

No, AI Robots Won't Take All Our Jobs

Wall Street Journal • 43 minutes ago

Anthropic CEO Dario Amodei said last week that artificial intelligence could eliminate half of all entry-level white-collar jobs within five years and cause unemployment to skyrocket to as high as 20%. He should know better—as should many other serious academics, who have been warning for years that AI will mean the end of employment as we know it. In 2013 Carl Benedikt Frey and Michael A. Osborne of Oxford University produced a research paper estimating that 47% of U.S. employment was at risk of being eliminated by new technologies.
