Anthropic's cofounder says 'dumb questions' are the key to unlocking breakthroughs in AI

Anthropic's cofounder said the key to advancing AI isn't rocket science — it's asking the obvious stuff nobody wants to say out loud.
"It's really asking very naive, dumb questions that get you very far," said Jared Kaplan at a Y Combinator event last month.
The chief science officer at Anthropic said in the video, published by Y Combinator on Tuesday, that AI is an "incredibly new field" where "a lot of the most basic questions haven't been answered."
For instance, Kaplan recalled how in the 2010s, everyone in tech kept saying that "big data" was the future. He asked: How big does the data need to be? How much does it actually help?
That line of thinking eventually led him and his team to study whether AI performance could be predicted based on the size of the model and the amount of compute used — a breakthrough that became known as scaling laws.
"We got really lucky. We found that there's actually something very, very, very precise and surprising underlying AI training," he said. "This was something that came about because I was just sort of asking the dumbest possible question."
Kaplan added that as a physicist, that was exactly what he was trained to do. "You sort of look at the big picture and you ask really dumb things."
Simple questions can make big trends "as precise as possible," and that can "give you a lot of tools," Kaplan said.
"It allows you to ask: What does it really mean to move the needle?" he added.
Kaplan and Anthropic did not respond to a request for comment from Business Insider.
Anthropic's AI breakthroughs
Anthropic has emerged as a powerhouse in AI‑assisted coding, especially after the release of its Claude 3.5 Sonnet model in June 2024.
"Anthropic changed everything," Sourcegraph's Quinn Slack said in a BI report published last week.
"We immediately said, 'This model is better than anything else out there in terms of its ability to write code at length' — high-quality code that a human would be proud to write," he added.
"And as a startup, if you're not moving at that speed, you're gonna die."
Anthropic cofounder Ben Mann said in a recent episode of the "No Priors" podcast that figuring out how to make AI write code better and faster has been driven largely by trial and error and measurable feedback.
"Sometimes you just won't know and you have to try stuff — and with code that's easy because we can just do it in a loop," Mann said.
Elad Gil, a top AI investor and No Priors host, concurred, saying the clear signals from deploying code and seeing if it works make this process fruitful.
"With coding, you actually have like a direct output that you can measure: You can run the code, you can test the code," he said. "There's sort of a baked-in utility function you can optimize against."
BI's Alistair Barr wrote in an exclusive report last week about how the startup might have achieved its AI coding breakthrough, crediting approaches like Reinforcement Learning from Human Feedback, or RLHF, and Constitutional AI.
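RLHF reward models are typically trained on pairwise human preferences. A minimal, generic sketch of that objective, not Anthropic's implementation, looks like this:

```python
import numpy as np

# Generic sketch of the pairwise-preference (Bradley-Terry) loss commonly
# used to train RLHF reward models. The scalar scores stand in for a learned
# reward model's outputs on two candidate responses.
def preference_loss(r_chosen: float, r_rejected: float) -> float:
    # -log sigmoid(r_chosen - r_rejected): small when the human-preferred
    # response is scored higher than the rejected one.
    margin = r_chosen - r_rejected
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))

print(preference_loss(2.0, 0.5))  # ~0.20: ranking respected, small loss
print(preference_loss(0.5, 2.0))  # ~1.70: ranking violated, large loss
```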
Anthropic may soon be worth $100 billion, as the startup pulls in billions of dollars from companies paying for access to its models, Barr wrote.

Related Articles

Anthropic Revokes OpenAI's Access to Claude

WIRED · 7 hours ago

Aug 1, 2025, 5:41 PM

OpenAI lost access to the Claude API this week after Anthropic claimed the company was violating its terms of service.

Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service.

"Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service."

According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. The change in OpenAI's access to Claude comes as the ChatGPT maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was plugging Claude into its own internal tools using special developer access (APIs) rather than the regular chat interface, according to sources. This allowed the company to run tests evaluating Claude's capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed.

"It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer, Hannah Wong, said in a statement to WIRED.

Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry." The company did not respond to WIRED's request for clarification on whether and how OpenAI's current Claude API restriction would affect this work.

Top tech companies yanking API access from competitors has been a tactic in the industry for years. Facebook did the same to Twitter-owned Vine (which led to allegations of anticompetitive behavior), and last month Salesforce restricted competitors from accessing certain data through the Slack API. This isn't even a first for Anthropic: last month, the company restricted the AI coding startup Windsurf's direct access to its models after it was rumored that OpenAI was set to acquire it. (That deal fell through.) Anthropic's chief science officer, Jared Kaplan, spoke to TechCrunch at the time about revoking Windsurf's access to Claude, saying, "I think it would be odd for us to be selling Claude to OpenAI."

A day before cutting off OpenAI's access to the Claude API, Anthropic announced new rate limits on Claude Code, its AI-powered coding tool, citing explosive usage and, in some cases, violations of its terms of service.
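For context, the "special developer access (APIs)" described above means programmatic calls rather than the chat interface. A minimal sketch of such a benchmarking call using Anthropic's Python SDK follows; the model name, prompt, and single-prompt harness are illustrative assumptions, not what OpenAI actually ran:

```python
import anthropic  # official SDK; reads ANTHROPIC_API_KEY from the environment

# Illustrative benchmarking call: send a fixed prompt to Claude and record
# the reply for side-by-side comparison with another model's answer.
client = anthropic.Anthropic()

prompt = "Write a Python function that reverses a singly linked list."

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model name for the sketch
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # store alongside the competing model's output
```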

Here's Why Anthropic Refuses to Offer 9-Figure Pay Like Meta

Entrepreneur · 8 hours ago

Anthropic CEO Dario Amodei laid out his rationale on a recent podcast for why he will not play the competing-offer game despite Meta CEO Mark Zuckerberg's attempts to poach AI talent.

While Meta poaches talent from Apple, OpenAI, and Google, AI startup Anthropic is refusing to play the game by matching competing offers. Amodei explained his reasoning on an episode of the "Big Technology Podcast" released earlier this week.

Amodei said that he recently sent a Slack message to all Anthropic staff informing them that the company was not willing to "compromise our compensation principles" or its "principles of fairness" when individual employees receive outside offers. He said that Meta's efforts to poach staff were a "unifying moment" for the company, citing his decision not to match offers because of the potential unfairness to other staff members.

Amodei also acknowledged on the podcast that fewer Anthropic employees had been lured away by Meta's compensation offers compared with other companies, though "not for lack of trying." Some Anthropic staff "wouldn't even talk" to Zuckerberg, according to Amodei. Meta is reportedly offering more than $200 million in compensation to one AI researcher on its superintelligence team who worked at Apple. The tech giant did manage to poach Anthropic software engineer Joel Pobar, according to a June 30 memo.

"If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean that you should be paid 10 times more than the guy next to you who's just as skilled, just as talented," Amodei said on the podcast.

Anthropic's compensation is tied to a level-based system. Amodei explained that when staff join the company, they are classified into one of many levels, each of which corresponds to a set compensation. "We don't negotiate that level because we think it's unfair," Amodei said. "We want to have a systematic way."

Amodei said that Anthropic's mission of safely creating reliable, cutting-edge AI systems inspired many employees to stay, and asserted that Zuckerberg was "trying to buy something that can't be bought": alignment with a company's mission.

Zuckerberg, meanwhile, recently outlined his mission for his superintelligence team, a group working on creating AI that surpasses human intelligence. In a blog post on Meta's website published on Wednesday, Zuckerberg said that Meta's goal was to bring superintelligence to every individual and allow people to reap the creative, economic, and personal benefits of the technology. Meta's mission, he wrote, is to empower individuals with AI; he contrasted that effort with the intentions of "others in the industry" who want to use AI to automate the workforce before giving it to individuals.

Since its start in 2021, Anthropic has raised close to $20 billion from companies including Google and Amazon. According to a Bloomberg report from earlier this week, the startup is nearing a deal to raise funds at a $170 billion valuation.

Tesla must pay over $242M in damages after being found partly at fault for deadly Autopilot crash

Business Insider · 9 hours ago

In a major blow to Tesla, a Florida federal jury on Friday found Elon Musk's electric car company partly to blame for a 2019 crash that left a 22-year-old woman dead and her boyfriend seriously injured.

The jury sided with the plaintiffs, awarding the family of Naibel Benavides Leon and her boyfriend, Dillon Angulo, a combined $329 million in total damages: $129 million in compensatory damages and $200 million in punitive damages. Jurors awarded $59 million in compensatory damages to Benavides Leon's family and $70 million to Angulo, who suffered a traumatic brain injury and broken bones, among other injuries.

The verdict marks a substantial setback for Tesla and its Autopilot driver-assistance feature, which the attorneys for the plaintiffs said was engaged at the time of the deadly collision and had design flaws.

Tesla, in a statement, called the verdict "wrong" and said it plans to appeal "given the substantial errors of law and irregularities at trial." "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology," said Tesla. The company added, "This was never about Autopilot; it was a fiction concocted by plaintiffs' lawyers blaming the car when the driver — from day one — admitted and accepted responsibility."

The plaintiffs' attorney, Brett Schreiber, said the verdict "represents justice for Naibel's tragic death and Dillon's lifelong injuries, holding Tesla and Musk accountable for propping up the company's trillion-dollar valuation with self-driving hype at the expense of human lives."

The verdict follows a three-week civil trial that included testimony from Angulo, Benavides Leon's family members, and the driver of the Tesla that plowed into a parked SUV and struck the couple as they were stargazing outside the vehicle alongside a Key Largo road.

The jury found Tesla 33% responsible for the crash, with the driver responsible for the rest. Tesla will have to pay the full punitive damages amount plus a third of the compensatory damages, which equals about $42.5 million.

The case stems from a wrongful-death lawsuit that the plaintiffs brought against Tesla. The lawsuit argued that the carmaker's vehicles were "defective and unsafe for their intended use." Tesla, the lawsuit said, programmed Autopilot "to allow it to be used on roadways that Tesla knew were not suitable for its use and knew this would result in collisions causing injuries and deaths of innocent people who did not choose to be a part of Tesla's experiments, such as Plaintiffs."

"Despite knowing of Autopilot's deficiencies, Tesla advertised Autopilot in a way that greatly exaggerated its capabilities and hid its deficiencies," said the lawsuit, which pointed to multiple comments from Musk touting the safety and reliability of the software.

Tesla driver George McGee had Autopilot on when his 2019 Model S blew past a stop sign and a flashing red light at a three-way intersection and plowed into Angulo's mother's Chevrolet Tahoe at more than 60 miles per hour, the lawsuit said.

McGee, who previously settled a separate lawsuit with the plaintiffs for an undisclosed amount, said he had dropped his cellphone during a call and bent down to pick it up moments before his Tesla, without warning, T-boned the Tahoe. He testified during the trial that he thought of Autopilot, which allows the vehicle to steer itself, switch lanes, brake, and accelerate on its own, as a "copilot." "My concept was it would assist me should I have a failure" or "should I make a mistake," McGee said in testimony, adding, "I do feel like it failed me." "I believe it didn't warn me of the car and the individuals, nor did it apply brakes," McGee testified.

Attorneys for Tesla have argued that McGee was solely responsible for the April 25, 2019, crash. In the trial's opening statements, Tesla attorney Joel Smith said the case was about a driver, not a "defective vehicle," and had "nothing to do with Autopilot." "It's about an aggressive driver, not a complacent driver, a distracted driver who was fumbling around for his cellphone," Smith said. "It's about a driver pressing an accelerator pedal and driving straight through an intersection."

Tesla's attorneys said that just before the crash, McGee hit the accelerator, overriding the vehicle's set cruising speed of 45 miles per hour and its ability to brake on its own. Autopilot mode, Tesla says on its website, is "intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment."
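For reference, the headline's "over $242M" follows from the jury's split of the awards; a quick check of the reported numbers:

```python
# Quick check of the damages split reported above.
compensatory = 129_000_000  # $59M to Benavides Leon's family + $70M to Angulo
punitive = 200_000_000      # Tesla pays the punitive award in full
tesla_fault = 0.33          # jury assigned Tesla 33% of the responsibility

tesla_total = tesla_fault * compensatory + punitive
print(f"Tesla owes about ${tesla_total / 1e6:.1f}M")  # ~ $242.6M
```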
