OpenAI forum reveals how deep research transforms inquiry

Techday NZ, 14 May 2025
OpenAI has introduced a new agentic AI system called 'deep research,' designed to handle complex, time-consuming research tasks by simulating the work of a human analyst. Presented by researchers Isa Fulford and Edward Sun during an OpenAI forum event, the new tool is powered by a fine-tuned version of OpenAI's o3 model and leverages advanced reasoning and browsing capabilities.
"Deep research is an agent in ChatGPT that can do work for you independently," Fulford explained.
"You give it a prompt, and it will find, analyse, and synthesise hundreds of online sources to create a comprehensive report at the level of a research analyst."
The system is intended to help users across a range of sectors—from academia and medicine to business and software development. "Members are finding that deep research asks clarifying questions to refine research before it even starts," said Fulford. "We think that deep research can accomplish in tens of minutes what would take a human many hours."
The model represents a major step forward in OpenAI's work with reasoning systems, building on reinforcement learning techniques introduced in its earlier models. Fulford explained how the company developed the tool: "We launched o1 in September of last year. This was the first model that we released in this new paradigm of training where models are trained to think before answering… and we called this text where the model is thinking, 'chain of thought'."
This method of structured, internal reasoning proved effective not only in tasks such as maths and coding, but also in navigating complex real-world information environments. "Around a year ago internally, we were seeing really great success… and we wondered if we could apply these same methods but for tasks that are more similar to what a large number of users do in their daily lives and jobs," Fulford said.
Sun detailed how the tool works by combining reasoning with specialised capabilities like web browsing and code execution. "The browser tool helps the model to aggregate or synthesise real-time data, and the Python tool is helping the model to process this data," he explained. The system dynamically alternates between reasoning and action, using reinforcement learning to improve over time.
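OpenAI has not published deep research's internal control flow, but the pattern Sun describes, alternating model reasoning with tool calls until the task is done, can be sketched in a few lines. The Python sketch below is a minimal illustration under that assumption, not OpenAI's implementation; llm_step, web_search and run_python are hypothetical stand-ins for the model and its two tools.

    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str    # "search", "python", or "answer"
        argument: str

    def llm_step(context: list) -> Step:
        # Stub for a model call; the real agent would reason over
        # `context` and choose its next action here.
        return Step(action="answer", argument="(final report)")

    def web_search(query: str) -> str:
        return f"(results for {query!r})"   # stub browser tool

    def run_python(code: str) -> str:
        return "(execution output)"         # stub code-execution tool

    def research(prompt: str, max_steps: int = 20) -> str:
        """Alternate reasoning with tool calls until the model answers."""
        context = [{"role": "user", "content": prompt}]
        for _ in range(max_steps):
            step = llm_step(context)
            if step.action == "search":
                observation = web_search(step.argument)
            elif step.action == "python":
                observation = run_python(step.argument)
            else:
                return step.argument        # model chose to answer
            context.append({"role": "tool", "content": observation})
        return "(step budget exhausted)"

In the real system it is the reinforcement-trained model, not a stub, that decides at each step whether to search, compute, or answer.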
One striking example involved analysing medal data from the 2020 Tokyo Olympics. "You can see how the model interleaved reasoning with actual tool calls to search for information, refine the data, and process it programmatically," Sun said.
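As a rough illustration of that final, programmatic step, a Python-tool call over gathered medal data might look like the following; the rows are illustrative stand-ins for whatever the browser tool actually scraped.

    import pandas as pd

    # Illustrative rows standing in for medal data gathered by browsing.
    rows = [
        {"country": "USA",   "gold": 39, "silver": 41, "bronze": 33},
        {"country": "China", "gold": 38, "silver": 32, "bronze": 18},
        {"country": "Japan", "gold": 27, "silver": 14, "bronze": 17},
    ]
    medals = pd.DataFrame(rows)
    medals["total"] = medals[["gold", "silver", "bronze"]].sum(axis=1)
    print(medals.sort_values("total", ascending=False))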
Unlike older approaches that rely on a single-pass search or instruction-following, deep research iteratively refines its answers. "We train the model with end-to-end reinforcement learning," Sun added. "We directly optimise the model to actively learn from the feedback, both positive and negative."
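OpenAI has not released its training recipe, but the core idea of end-to-end reinforcement learning over whole trajectories can be shown schematically. The REINFORCE-style update below is a textbook sketch, not deep research's actual objective:

    import torch

    def reinforce_update(optimizer: torch.optim.Optimizer,
                         log_probs: list[torch.Tensor],
                         reward: float) -> None:
        """One policy-gradient step over a complete research trajectory.
        `log_probs` holds the log-probability of every action taken
        (reasoning steps and tool calls alike); `reward` scores the final
        report, so positive and negative feedback both reshape the whole
        trajectory rather than any single step."""
        loss = -reward * torch.stack(log_probs).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()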
OpenAI tested the model extensively against both public and internal benchmarks. According to Fulford, "the model powering deep research scored a new high of 26.6%" on Humanity's Last Exam, an expert-level evaluation spanning over 100 subjects.
On another benchmark, GAIA, the tool also achieved a state-of-the-art result for multi-step web browsing and reasoning.
The model also underwent safety evaluations prior to release. "We did extensive red teaming with external testers, and then also went through preparedness and governance reviews that we always do at OpenAI," Fulford said.
Despite strong results, the researchers acknowledged current limitations. "It still may hallucinate facts or infer things incorrectly," Fulford said.
"Sometimes it struggles to distinguish between authoritative sources and rumours."
Use cases continue to emerge in unexpected domains. "People might be using the model a lot for coding. And that's been a really big use case," Fulford observed. Other domains include scientific and medical research, where professionals have begun verifying the model's output against their own expertise.
Users are also adapting their behaviour to suit the model. "We've seen interesting user behaviour where people put a lot of effort into refining their prompts using O1 or another model," Fulford said. "And then only after really refining that instruction, they'll send it to deep research… which makes sense if you're going to wait a long time for an output."
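In code, that two-step habit might look like the sketch below, which uses the OpenAI Python SDK's chat completions call. The model name is an assumption, and the hand-off to deep research itself stays manual because, at the time of the talk, it was a ChatGPT feature rather than an API endpoint.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    draft = "Compare battery chemistries for grid-scale storage."

    # Step 1: have a cheaper model tighten the brief first
    # (model name is an assumption; use any model you have access to).
    refined = client.chat.completions.create(
        model="o1-mini",
        messages=[{
            "role": "user",
            "content": "Rewrite this research brief so it is specific and "
                       "unambiguous, listing required sources and output "
                       "format:\n" + draft,
        }],
    ).choices[0].message.content

    # Step 2: paste the refined brief into deep research in ChatGPT.
    print(refined)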
Currently, deep research is available to users on the Plus, Pro, Team, Enterprise and Edu plans.
"We're very excited to release a smaller, cheaper model to the free tier," Fulford confirmed. The team also plans to improve personalisation and explore ways to let users incorporate subscription services or private data into the research process.
"This showcases how the model can effectively break down a complex task, gather information from various sources, and structure the response coherently for the user," Sun said in closing.
OpenAI's forum audience, composed of members across academia, government, and business, left the event with a clear sense that deep research marks a meaningful step toward AI systems capable of handling work currently done by skilled analysts.

Related Articles

Sensitive data exposure rises with employee use of GenAI tools

Techday NZ, a day ago

Harmonic Security has released its quarterly analysis finding that a significant proportion of data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools

The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform

Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks

The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.

Less visible sources of risk

GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs), often without clear labelling as GenAI, are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit, among other tools, was involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure

Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% were related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
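The headline counts in the opening paragraph follow directly from the reported sample sizes and percentages, as a quick check confirms:

    files, prompts = 20_000, 1_000_000
    print(f"sensitive files:   {files * 0.22:,.0f}")      # 4,400
    print(f"sensitive prompts: {prompts * 0.0437:,.0f}")  # 43,700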
Preventative measures

"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data," said Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson, referencing the protections offered to the company's customers and the wider risks posed by embedded AI in workplace tools.

Harmonic Security advises enterprises to seek visibility into all tool usage, including tools available on free tiers and those with embedded AI, to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level. The analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used.

Statement On AI In Universities From Aotearoa Communication & Media Scholars Network

Scoop, 2 days ago

We speak as a network of Aotearoa academics working in the inter-disciplines of Communication and Media Studies across our universities. Among us we have shared expertise in the political, social and economic impacts of commercially distributed and circulated generative artificial intelligence ('AI') in our university workplaces. While there is a tendency in our universities to be resigned to AI as an unstoppable and unquestionable technological force, our aim is to level the playing field to promote open, critical and democratic debate. With this in mind, we make the following points:

For universities…

· AI is not an inevitable technological development which must be incorporated into higher education; rather, it is the result of particular techno-capitalist ventures, a context which needs to be recognised and considered;

· AI, as a corporate product of private companies such as OpenAI, Google, etc., encroaches on the public role of the university and its role as critic and conscience, and marginalises voices which might critique business interests;

For researchers…

· AI impedes rather than supports productive intellectual work because it erodes important critical thinking skills; instead, it devolves human scholarly work and critical engagement with ideas (elements vital to our cultural and social life) to software that produces 'ready-made', formulaic and backward-looking 'results' that do not advance knowledge;

· AI promotes an unethical, reckless approach to research which can promote 'hallucinations' and over-valorise disruption for its own sake rather than support quality research;

· AI normalises industrial-scale theft of intellectual property as our written work is fed into AI datasets largely without citation or compensation;

· AI limits the productivity of academic staff by requiring them to invent new forms of assessment which subvert AI, police students and their use of AI, or assess lengthy 'chat logs', rather than engage with students in activities and assessments that require deep, critical thinking and sharing, questioning and articulating ideas with peers;

For students…

· AI tools create anxiety for students; some are falsely accused of using generative AI when they haven't, or are very stressed that it could happen to them;

· AI tools such as ChatGPT are contributing to mental-health crises and delusions in various ways; promoting the use of generative AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care;

· AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust;

For Aotearoa New Zealand…

· AI clashes with Te Tiriti obligations around data sovereignty and threatens the possibility of data colonialism regarding te reo itself;

· AI is devastating for the environment in terms of energy and water use and the extraction of natural resources needed for the processors that AI requires.
Signed by:
Rosemary Overell, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Olivier Jutel, Lecturer, Media, Film & Communications Programme, The University of Otago
Emma Tennent, Senior Lecturer, Media & Communication, Te Herenga Waka Victoria University of Wellington
Rachel Billington, Lecturer, Media, Film & Communications Programme, The University of Otago
Brett Nicholls, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Yuki Watanabe, Lecturer, Media, Film & Communications Programme, The University of Otago
Sy Taffel, Senior Lecturer, Media Studies Programme, Massey University
Leon Salter, Senior Lecturer, Communications Programme, University of Auckland
Angela Feekery, Senior Lecturer, Communications Programme, Massey University
Ian Huffer, Senior Lecturer, Media Studies Programme, Massey University
Pansy Duncan, Senior Lecturer, Media Studies Programme, Massey University
Kevin Veale, Senior Lecturer, Media Studies Programme, Massey University
Peter A. Thompson, Associate Professor, Media & Communication Programme, Te Herenga Waka/Victoria University of Wellington
Nicholas Holm, Associate Professor, Media Studies Programme, Massey University
Sean Phelan, Associate Professor, Massey University
Yuan Gong, Senior Lecturer, Media Studies Programme, Massey University
Chris McMillan, Teaching Fellow, Sociology Programme, University of Auckland
Cherie Lacey, Researcher, Centre for Addiction Research, University of Auckland
Thierry Jutel, Associate Professor, Film, Te Herenga Waka, Victoria University of Wellington
Max Soar, Teaching Fellow, Political Communication, Te Herenga Waka Victoria University of Wellington
Lewis Rarm, Lecturer, Media and Communication, Te Herenga Waka | Victoria University of Wellington
Tim Groves, Senior Lecturer, Film, Te Herenga Waka, Victoria University of Wellington
Valerie Cooper, Lecturer, Media and Communication, Te Herenga Waka | Victoria University of Wellington
Wayne Hope, Professor, Faculty of Design & Creative Technologies, Auckland University of Technology
Greg Treadwell, Senior Lecturer in Journalism, School of Communication Studies, Auckland University of Technology
Christina Vogels, Senior Lecturer, Critical Media Studies, School of Communication Studies, Auckland University of Technology

Association formed to boost connection in sciences

Otago Daily Times, 2 days ago

University of Otago students attending the new Otago Biochemistry and Genetics Student Association launch party had reason to pay special attention to the health and safety briefing from co-president Liam Young (right) after a tsunami alert rang out through their mobiles in the Terrace Lounge at the University of Otago on Wednesday evening. PHOTO: GERARD O'BRIEN

A lack of social cohesion among tertiary science majors, and nobody being "mates", has inspired two University of Otago students to kick-start an association. Otago Biochemistry and Genetics Student Association (OBiGSA) co-presidents Liam Young and Cushla Bridges came up with the idea after realising people studying the two "closely related" disciplines did not really know each other very well outside the lecture theatre.

Miss Bridges said "basically none of us were mates outside of class". "We thought that we would make this club so that everyone could come together and have events like this, but also fun things. We've had a quiz night, and we're going to have a movie night, a karaoke night, that sort of thing as well."

The demand for a student association turned out to be very high; the room at their official launch on Wednesday was packed to the rafters and the event sold out. Miss Bridges said they had sold about 120 tickets. "We didn't expect it, but also we pushed quite hard for it, so I don't know what we were expecting."

Mr Young said they celebrated the launch by getting to "pick the brains" of industry experts at a scientific panel. "We're basically asking them questions that you just can't find on Google ... Google and ChatGPT, they tell us everything and they seem to know everything — we're asking the humans something a bit more thought-provoking." He said they wanted some specialists to share their opinionated and controversial thoughts about various science topics.

At the event were students from a range of majors, including anatomy, physiology, microbiology, immunology, biochemistry, genetics and zoology. Mr Young and Miss Bridges hoped the association would last long after they left the university. "You've got to give the dream to someone else eventually," Mr Young said.
