
Latest news with #GPT-o3

OpenAI's latest step towards advanced artificial intelligence

Hindustan Times

2 days ago



ALMOST TWO decades after the birth of the iPhone, Steve Jobs remains the model for any tech founder seeking to wow the world with their latest product. The launch events he pioneered at Apple, with their mix of showmanship and glamour, seized the world's attention and gave prospective customers the feeling that the future had finally arrived. It was in these glittering footsteps that Sam Altman, the boss of OpenAI, attempted to follow on August 7th, when the artificial-intelligence (AI) firm launched GPT-5, its latest model. The hour-long launch, with its tech specs and live demos, wore Apple's influence proudly.

The firm, which is seeking a fresh round of funding at a valuation of $500bn, made much of AI as a consumer technology. Until now, users have had to contend with an alphabet soup of models. For tasks that prioritised speed, there was the dainty 4o; for elegant prose, 4.5; for heavy-duty coding work, the juggernaut that was o3-pro. All these are now incorporated into GPT-5, a so-called 'unified' model that can decide for itself how best to approach any question it is asked. As a consequence, some casual users could be exposed to frontier AI for the first time.

Once the curtain fell and the spotlight went out, though, experts who heard Mr Altman's presentation were asking the same question as those who used to tune in to Jobs: just how good is the technology? GPT-5 looks to be the best in the world across various domains, excelling in areas including software engineering and scientific reasoning. According to OpenAI it also comes closest yet to beating human experts on an internal benchmark measuring 'complex, economically valuable knowledge work'. But the model is world-beating only by a slim margin: it fares slightly better than OpenAI's o3, released in April, which was itself just a modest improvement over last year's o1. In other words, GPT-5 is not the transformational leap that some were hoping for. But a few more years of steady progress like this could yield AI systems of transformative power.

The incremental improvement should not be a surprise. GPT-5 comes less than two months after OpenAI's last release, o3-pro, and the update represents about two months of progress in the fast-moving AI space. Moreover, according to METR, a research lab, GPT-5 is almost exactly where you might expect the frontier of AI capability to be in the summer of 2025. In 2019 GPT-2 could achieve 50% accuracy on the sorts of tasks that took software engineers two seconds to complete correctly. By 2020, GPT-3 could rival those engineers on tasks that took eight seconds; by 2023, GPT-4 could reliably tackle ones that took four minutes. The data, METR says, suggest a doubling every 200-odd days. More than 800 days later, GPT-5, right on trend, can handle tasks that would take a human a little over two hours.

What does this mean for the achievement of 'artificial general intelligence' (AGI)? Boosters have said that within a couple of years models could reach AGI, the point at which they do so much of the labour currently performed by white-collar workers that they reshape the global economy. GPT-5 suggests the technology could still be on track towards such a goal. Within two years, METR's trend suggests, a model will be able to complete an entire working day's worth of labour. Superintelligent models, as those with capabilities beyond AGI are known, may take only a few more years.

As a consequence, GPT-5 has some safety experts worried. Gaia Marcus, director of the Ada Lovelace Institute, a British think-tank which monitors AI progress, warned that the release of GPT-5 makes it 'even more urgent' to comprehensively regulate how models can be used. The Future of Life Institute, a safety group which once called for a six-month pause on all AI development, warns that GPT-5's software-development abilities show OpenAI is engaged in a reckless pursuit of 'recursive self-improvement': building AI systems that can improve themselves. The trends suggest that, if current progress continues, world-changing AI systems could emerge within a few years. GPT-5 does nothing to dispel that idea.
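
To make the arithmetic behind METR's trend concrete, here is a minimal back-of-envelope sketch in Python. It uses only the article's rounded figures (a roughly 200-day doubling period and a four-minute task horizon for GPT-4 in 2023); the GPT-4 base date is an assumption, so the output is an order-of-magnitude illustration, not METR's actual methodology.

```python
# Back-of-envelope extrapolation of the task-horizon trend METR describes.
# Inputs are the article's rounded numbers ("doubling every 200-odd days",
# GPT-4 handling ~4-minute tasks in 2023), so treat the output as
# order-of-magnitude only.

from datetime import date, timedelta
from math import log2

DOUBLING_DAYS = 200            # approximate doubling period cited by METR
BASE_DATE = date(2023, 3, 1)   # assumed GPT-4 release date
BASE_HORIZON_MIN = 4.0         # GPT-4: tasks of roughly 4 human-minutes

def horizon_minutes(on: date) -> float:
    """Task length (in human-minutes) the trend predicts for a given date."""
    doublings = (on - BASE_DATE).days / DOUBLING_DAYS
    return BASE_HORIZON_MIN * 2 ** doublings

# At GPT-5's launch the trend predicts a horizon of an hour or two;
# the article reports "a little over two hours" for GPT-5 itself, and the
# rounded inputs here land in the same ballpark (~1.5 hours).
print(f"2025-08-07: ~{horizon_minutes(date(2025, 8, 7)) / 60:.1f} hours")

# When does the trend cross a full 8-hour working day?
doublings_needed = log2(8 * 60 / BASE_HORIZON_MIN)
crossing = BASE_DATE + timedelta(days=DOUBLING_DAYS * doublings_needed)
print(f"8-hour horizon reached around {crossing}")  # ~late 2026
```

On these assumptions the trend crosses a full working day's worth of labour in late 2026, within two years of GPT-5's launch, which is the extrapolation the article attributes to METR.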

Who is Shengjia Zhao? The brain behind ChatGPT, now Chief Scientist of..., his salary is...

India.com

29-07-2025



Former OpenAI researcher Shengjia Zhao has been appointed Chief Scientist of Meta's newly launched Superintelligence Lab, CEO Mark Zuckerberg announced last Friday. The appointment comes after Zuckerberg assembled top talent from across the industry for his prized AI team. The Superintelligence Lab, his dream project, has drawn attention from around the world for poaching leading AI researchers with astronomical salary offers. Zuckerberg has now confirmed Zhao, a co-creator of ChatGPT, as Chief Scientist of Meta Superintelligence Labs (MSL).

What did Mark Zuckerberg say?

In a post on Threads, Zuckerberg introduced the scientist's addition to the team: 'I'm excited to share that Shengjia Zhao will be the Chief Scientist of Meta Superintelligence Labs. In this role, Shengjia will set the research agenda and scientific direction for our new lab working directly with me and Alex,' he wrote. 'Shengjia co-founded the new lab and has been our lead scientist from day one. Now that our recruiting is going well and our team is coming together, we have decided to formalize his leadership role. Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field. I'm looking forward to working closely with him to advance his scientific vision,' he added.

Who is this genius?

Zhao is a much-decorated researcher with a PhD in Computer Science from Stanford and an impressive track record in the AI industry: he co-created not only ChatGPT but also GPT-4 and several other OpenAI models, including GPT-4.1 and o3. His recruitment is part of a broader wave of researchers moving from OpenAI to Meta in recent weeks, as Meta reportedly offers astronomical compensation packages to attract top AI researchers. According to industry experts, the strategy is aimed at closing the gap with leading AI models from rivals such as OpenAI and Google, after Meta's Llama 4 model, which had been expected to take the battle to OpenAI, was greeted with an underwhelming reception.

All about Meta's new lab

Meta launched the Superintelligence Lab not only to enhance its Llama AI models but also to advance its long-term ambitions in the field. According to Zuckerberg's post, Zhao is a co-founder of the new lab, which sits alongside Meta's established FAIR research division, led by deep-learning pioneer Yann LeCun; MSL operates independently from FAIR. Meta has poured billions into hiring top talent from rivals including Google, OpenAI, Apple, and Anthropic. The company also struck a $14 billion deal with Scale AI, bringing its CEO, Alexandr Wang, on board as Meta's Chief AI Officer. The spending underscores Zuckerberg's commitment to investing hundreds of billions more in building vast AI data centres across the US.

OpenAI's Latest ChatGPT AI Models Are Smarter, But They Hallucinate More Than Ever

Int'l Business Times

07-05-2025



Artificial intelligence is evolving fast, but not always in the right direction. OpenAI's latest models, o3 and o4-mini, were built to mimic human reasoning more closely than ever before. However, a recent internal investigation reveals an alarming downside: these models may be more intelligent, but they are also more prone to making things up.

Hallucination in AI Is a Growing Problem

Since the birth of chatbots, hallucinations, fabricated or imaginary facts presented as real, have been a persistent issue. With each model iteration, the hope was that these AI hallucinations would decline. But OpenAI's latest findings suggest otherwise, according to The New York Times. In a benchmark test focused on public figures, o3 hallucinated in 33% of responses, twice the error rate of its predecessor, o1. Meanwhile, the more compact o4-mini performed even worse, hallucinating nearly half the time (48%).

Reasoning vs. Reliability: Is AI Thinking Too Hard?

Unlike previous models, which were good at generating fluent text, o3 and o4-mini were designed to reason step by step, much like human logic. Ironically, this new "reasoning" technique might be the problem. AI researchers say that the more reasoning a model does, the more likely it is to go astray. Unlike simpler systems that stick to safe, high-confidence responses, these newer systems attempt to bridge between complicated concepts, which can lead to bizarre and incorrect conclusions. On the SimpleQA test, which measures general knowledge, performance was even worse: o3 hallucinated on 51% of responses, while o4-mini shot to an astonishing 79%. These are not small errors; these are huge credibility gaps.

Why More Sophisticated AI Models May Be Less Credible

OpenAI suggests the rise in hallucinations may not be the result of the reasoning itself but of the models' verbosity and boldness. In attempting to be useful and comprehensive, the AI begins to guess, sometimes mixing theory with fact. The output can sound very convincing while being entirely incorrect. According to TechRadar, this becomes especially risky when AI is employed in high-stakes environments such as law, medicine, education, or government service. A single hallucinated fact in a legal brief or medical report could have disastrous repercussions.

The Real-World Risks of AI Hallucinations

We already know attorneys have been sanctioned for submitting fabricated court citations produced by ChatGPT. But what about minor mistakes in a business report, school essay, or government policy memo? The more integrated AI becomes in our everyday routines, the less room there is for error. The paradox is simple: the more helpful AI is, the more perilous its mistakes become. You can't save people time if they still need to fact-check everything.

Treat AI Like a Confident Intern

Though o3 and o4-mini demonstrate stunning skills in coding, logic, and analysis, their propensity to hallucinate means users can't rely on them when they require rock-solid facts. Until OpenAI and its rivals manage to minimise these hallucinations, users need to take AI output with a grain of salt. Consider it this way: these chatbots are like the overconfident co-worker who always has an answer, yet you still fact-check everything they say.

Originally published on Tech Times
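
The percentages above are simple error rates: the share of graded responses containing at least one fabricated claim. The following minimal Python sketch shows how such a tally and comparison works; the grading function and toy data are illustrative assumptions, not OpenAI's actual benchmark harness, and the o1 figure is derived from the article's "twice the error rate" phrasing.

```python
# Illustrative tally of hallucination rates; not OpenAI's evaluation code.

def hallucination_rate(graded: list[bool]) -> float:
    """graded[i] is True if response i contained a fabricated claim."""
    return sum(graded) / len(graded)

# Toy data: 2 hallucinations out of 6 graded responses.
toy = [False, True, False, False, True, False]
print(f"toy rate: {hallucination_rate(toy):.0%}")  # 33%

# Rates reported in the article for the public-figures benchmark:
o3_rate = 0.33
o1_rate = o3_rate / 2   # article: "twice the error rate of its predecessor"
o4_mini_rate = 0.48     # article: "nearly half the time"
print(f"o3 vs o1: {o3_rate / o1_rate:.1f}x more hallucinations")
```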
