Latest news with #Codewith


News18
27-05-2025
- Business
- News18
OpenAI's Former Top Exec Claims Bots Can Build A Billion-Dollar Startup Solo
Speaking at recent tech events, Anthropic CEO Dario Amodei discussed the development of artificial intelligence and where it is headed. Debate over how far AI's autonomy and capabilities will advance, and whether it could surpass humans in certain ways, continues to gain momentum. Against that backdrop, Amodei claimed that, thanks to advances in modern AI models, it is very likely that within a few years a single person will run a company with a $1 billion valuation.

It was at Anthropic's recent Code with Claude developer conference in San Francisco that Amodei responded to a question about the possibility of such a scenario. Acknowledging the chance of it happening, the CEO put the odds of his prediction coming true at 70 to 80 per cent. It remains a bold bet on where the world is headed. "It will be an area where you don't need a lot of human institution-centric stuff to make money," Amodei said, suggesting that proprietary trading could be among the first fields to be automated in this way. He also pointed to software development, where AI is already creating tools for developers, as a prime candidate for such single-person businesses.

Amodei emphasised the importance of addressing the challenges that would come with such a shift. He noted that widespread adoption of this model could affect job markets and traditional corporate structures, requiring new frameworks for taxation, employment, and governance. The CEO made another thought-provoking claim: that AI models may hallucinate less frequently than humans, thus offering better performance. The commentary comes at a time when AI models have been making strides in factual accuracy.

However, with both human and machine intelligence having significant flaws, understanding and integrating those flaws should be the next frontier.

First Published: May 27, 2025, 17:54 IST

Yahoo
23-05-2025
- Business
- Yahoo
Anthropic CEO claims AI models hallucinate less than humans
Anthropic CEO Dario Amodei believes today's AI models hallucinate, or make things up and present them as if they're true, at a lower rate than humans do, he said during a press briefing at Anthropic's first developer event, Code with Claude, in San Francisco on Thursday.

Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic's path to AGI — AI systems with human-level intelligence or better. "It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways," Amodei said, responding to TechCrunch's question.

Anthropic's CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday's press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that "the water is rising everywhere." "Everyone's always looking for these hard blocks on what [AI] can do," said Amodei. "They're nowhere to be seen. There's no such thing."

Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many "holes," and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after they used Claude to create citations in a court filing, and the AI chatbot hallucinated and got names and titles wrong.

It's difficult to verify Amodei's claim, largely because most hallucination benchmarks pit AI models against each other; they don't compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search.
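The measurement problem Amodei alludes to can be made concrete. A minimal sketch (with made-up answer data, not drawn from any real benchmark) of what a model-versus-human comparison would require: score both sets of answers against the same ground truth and compare the fraction of unsupported claims:

```python
def hallucination_rate(answers, ground_truth):
    """Fraction of answers that contradict the ground truth.

    `answers` maps question -> given answer; `ground_truth` maps
    question -> correct answer. Toy scoring: any exact mismatch
    counts as a hallucination.
    """
    wrong = sum(1 for q, a in answers.items() if a != ground_truth[q])
    return wrong / len(answers)

# Hypothetical data: four questions answered by a model and a human.
truth = {"q1": "Paris", "q2": "1969", "q3": "H2O", "q4": "Everest"}
model = {"q1": "Paris", "q2": "1969", "q3": "H2O", "q4": "K2"}  # 1 of 4 wrong
human = {"q1": "Paris", "q2": "1968", "q3": "H2O", "q4": "K2"}  # 2 of 4 wrong

print(hallucination_rate(model, truth))  # 0.25
print(hallucination_rate(human, truth))  # 0.5
```

Existing benchmarks compute the first number for many models but rarely collect the second, which is why Amodei's claim is hard to check.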
Separately, some AI models, such as OpenAI's GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems. However, there's also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI's o3 and o4-mini models have higher hallucination rates than OpenAI's previous-gen reasoning models, and the company doesn't really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic's CEO acknowledged that the confidence with which AI models present untrue things as facts might be a problem.

In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company's recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn't have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.

Amodei's comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people's definition, though.

This article originally appeared on TechCrunch.

Engadget
22-05-2025
- Business
- Engadget
Anthropic's Claude Opus 4 model can work autonomously for nearly a full workday
Anthropic kicked off its first-ever Code with Claude conference today with the announcement of a new frontier AI system. The company is calling Claude Opus 4 the best coding model in the world.

According to Anthropic, Opus 4 is dramatically better at tasks that require it to complete thousands of separate steps, giving it the ability to work continuously for several hours in one go. Additionally, the new model can use multiple software tools in parallel, and it follows instructions more precisely. In combination, Anthropic says those capabilities make Opus 4 ideal for powering upcoming AI agents. For the unfamiliar, agentic systems are AIs that are designed to plan and carry out complicated tasks without human supervision. They represent an important step towards the promise of artificial general intelligence (AGI).

In customer testing, Anthropic saw Opus 4 work on its own for seven hours, or nearly a full workday. That's an important milestone for the type of agentic systems the company wants to build. Another reason Anthropic thinks Opus 4 is ready to enable the creation of better AI agents is that the model is 65 percent less likely to use a shortcut or loophole when completing tasks. The company says the system also demonstrates significantly better "memory capabilities," particularly when developers grant Claude local file access.

To encourage devs to try Opus 4, Anthropic is making Claude Code, its AI coding agent, widely available. It has also added new integrations with Visual Studio Code and JetBrains.

Even if you're not a coder, Anthropic might have something for you. That's because alongside Opus 4, the company announced a new version of its Sonnet model. Like Claude 3.7 Sonnet before it and Opus 4, the new system is a hybrid reasoning model, meaning it can either execute prompts nearly instantaneously or engage in extended thinking.
As a user, this gives you a best-of-both-worlds chatbot that's better equipped to tackle complex problems when needed. It also incorporates many of the same improvements found in Opus 4, including the ability to use tools in parallel and follow instructions more faithfully. Sonnet 3.7 was so popular among users that Anthropic ended up introducing a Max plan in response, which starts at $100 per month. The good news is you won't need to pay anywhere near that much to use Sonnet 4, as Anthropic is making it available to free users.

For those who want to use Sonnet 4 for a project, API pricing is staying at $3 per one million input tokens and $15 for the same number of output tokens. Notably, outside of all the usual places you'll find Anthropic's models, including Amazon Bedrock and Google Vertex AI, Microsoft is making Sonnet 4 the default model for the new coding agent it's offering through GitHub Copilot. Both Opus 4 and Sonnet 4 are available to use today.

Today's announcement comes during what's already been a busy week in the AI industry. On Tuesday, Google kicked off its I/O 2025 conference, announcing, among other things, that it was rolling out AI Mode to all Search users in the US. A day later, OpenAI said it was spending $6.5 billion to buy Jony Ive's hardware startup.
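To put the quoted API rates in perspective, here is a small sketch of the cost arithmetic. Only the per-million-token prices come from the article; the request sizes are hypothetical:

```python
# Sonnet 4 API pricing cited above: $3 per one million input tokens,
# $15 per one million output tokens.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: a 2,000-token prompt with a 500-token reply.
cost = request_cost(2_000, 500)
print(f"${cost:.4f}")  # 0.006 + 0.0075 = $0.0135
```

At these rates, output tokens cost five times as much as input tokens, so long replies dominate the bill for most workloads.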