OpenAI updates its new Responses API rapidly with MCP support, GPT-4o native image gen, and more enterprise features


Business Mayor | 21-05-2025

OpenAI is rolling out a set of significant updates to its relatively new Responses API, aiming to make it easier for developers and enterprises to build intelligent, action-oriented agentic applications.
These enhancements include support for remote Model Context Protocol (MCP) servers, integration of image generation and Code Interpreter tools, and upgrades to file search capabilities—all available as of today, May 21.
First launched in March 2025, the Responses API serves as OpenAI's toolbox for third-party developers building agentic applications atop the core functionality that powers its hit service ChatGPT and its first-party AI agents, Deep Research and Operator.
In the months since its debut, it has processed trillions of tokens and supported a broad range of use cases, from market research and education to software development and financial analysis.
Popular applications built with the API include Zencoder's coding agent, Revi's market intelligence assistant, and MagicSchool's educational platform.
The Responses API debuted alongside OpenAI's open-source Agents SDK in March 2025, as part of an initiative to provide third-party developer access to the same technologies powering OpenAI's own AI agents like Deep Research and Operator.
This way, startups and companies outside of OpenAI can integrate the same technology that powers ChatGPT into their own products and services, whether internal tools for employees or external offerings for customers and partners.
Initially, the API combined elements of the Chat Completions and Assistants APIs, delivering built-in tools for web and file search as well as computer use, enabling developers to build autonomous workflows without complex orchestration logic. OpenAI said at the time that the Assistants API would be deprecated by mid-2026.
The Responses API provides visibility into model decisions, access to real-time data, and integration capabilities that allow agents to retrieve, reason over, and act on information.
This launch marked a shift toward giving developers a unified toolkit for creating production-ready, domain-specific AI agents with minimal friction.
A key addition in this update is support for remote MCP servers. Developers can now connect OpenAI's models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability enables the creation of agents that can take actions and interact with systems users already depend on. To support this evolving ecosystem, OpenAI has joined the MCP steering committee.
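The call pattern is compact. Below is a minimal sketch following the tool shape OpenAI documents for remote MCP servers; the server label and URL here are placeholders, not a real endpoint:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical remote MCP server; in practice this would be a real
# endpoint such as a Stripe, Shopify, or Twilio MCP URL.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{
        "type": "mcp",
        "server_label": "storefront",
        "server_url": "https://example.com/mcp",
        "require_approval": "never",  # or require approval per tool call
    }],
    input="Add the blue water bottle to my cart and show the cart total.",
)
print(response.output_text)
```

The model discovers the server's tools at request time and decides when to call them, so the developer writes no per-tool glue code.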
The update brings new built-in tools to the Responses API that enhance what agents can do within a single API call.
A variant of OpenAI's hit GPT-4o native image generation model, which inspired a wave of 'Studio Ghibli'-style anime memes around the web and buckled OpenAI's servers with its popularity (though it can create many other image styles), is now available through the API under the model name 'gpt-image-1.' It includes genuinely useful new features such as real-time streaming previews and multi-turn refinement.
This enables developers to build applications that can produce and edit images dynamically in response to user input.
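As a rough sketch of how that looks in practice, assuming the built-in image_generation tool type OpenAI describes for the Responses API, a request can ask the model to produce an image and return it as base64 data:

```python
import base64
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    input="Generate a watercolor illustration of a lighthouse at dawn.",
    tools=[{"type": "image_generation"}],
)

# Image results come back as base64-encoded items in the output list.
for item in response.output:
    if item.type == "image_generation_call":
        with open("lighthouse.png", "wb") as f:
            f.write(base64.b64decode(item.result))
```

A follow-up turn in the same conversation can then refine the image ("make the sky stormier") rather than starting from scratch.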
Additionally, the Code Interpreter tool is now integrated into the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning processes.
The tool helps improve model performance across various technical benchmarks and allows for more sophisticated agent behavior.
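A minimal sketch, assuming Code Interpreter is enabled the same way as the other built-in tools, with a sandboxed container the API provisions automatically:

```python
from openai import OpenAI

client = OpenAI()

# Let the model write and run Python in a sandboxed container to
# compute an exact answer instead of estimating it in-token.
response = client.responses.create(
    model="o4-mini",
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="What is the standard deviation of [12, 7, 3, 21, 9]?",
)
print(response.output_text)
```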
Improved file search and context handling
The file search functionality has also been upgraded. Developers can now perform searches across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content.
This improves the precision of information agents use, enhancing their ability to answer complex questions and operate within large knowledge domains.
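A sketch of the updated call shape, using hypothetical vector store IDs and a made-up metadata attribute named "region":

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    input="Summarize our refund policy for EU customers.",
    tools=[{
        "type": "file_search",
        # Search more than one vector store in a single call.
        "vector_store_ids": ["vs_policies", "vs_support_docs"],
        # Attribute filter: only retrieve chunks whose metadata matches.
        "filters": {"type": "eq", "key": "region", "value": "EU"},
    }],
)
print(response.output_text)
```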
Several features are designed specifically to meet enterprise needs. Background mode allows for long-running asynchronous tasks, addressing issues of timeouts or network interruptions during intensive reasoning.
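In practice, background mode amounts to one extra request parameter plus polling. A minimal sketch, assuming the background flag and status fields described in OpenAI's documentation:

```python
import time
from openai import OpenAI

client = OpenAI()

# Kick off a long-running reasoning task without holding a connection open.
response = client.responses.create(
    model="o3",
    input="Write a detailed competitive analysis of the top five CRM vendors.",
    background=True,
)

# Poll until the run leaves the queued/in-progress states.
while response.status in ("queued", "in_progress"):
    time.sleep(5)
    response = client.responses.retrieve(response.id)

print(response.status)
print(response.output_text)
```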
Reasoning summaries, a new addition, offer natural-language explanations of the model's internal thought process, helping with debugging and transparency.
Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers.
These allow models to reuse previous reasoning steps without storing any data on OpenAI servers, improving both security and efficiency.
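Both features are opt-in request parameters. The sketch below, following the parameter names in OpenAI's documentation, requests a natural-language reasoning summary and, with storage disabled, an encrypted reasoning item that can be passed back on a later turn:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",
    input="Plan a three-step rollout for migrating a monolith to microservices.",
    reasoning={"effort": "medium", "summary": "auto"},  # natural-language summary
    store=False,                                        # nothing retained server-side
    include=["reasoning.encrypted_content"],            # reusable encrypted reasoning
)

# Summaries (and encrypted reasoning) arrive as items in the output list;
# the encrypted items can be sent back on the next request to reuse the
# model's prior reasoning without OpenAI storing it.
for item in response.output:
    if item.type == "reasoning":
        print(item.summary)
```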
The latest capabilities are supported across OpenAI's GPT-4o series, GPT-4.1 series, and the o-series models, including o3 and o4-mini. These models now maintain reasoning state across multiple tool calls and requests, which leads to more accurate responses at lower cost and latency.
Despite the expanded feature set, OpenAI has confirmed that pricing for the new tools and capabilities within the Responses API will remain consistent with existing rates.
For example, the Code Interpreter tool is priced at $0.03 per session, and file search usage is billed at $2.50 per 1,000 calls, with storage costs of $0.10 per GB per day after the first free gigabyte.
Web search pricing varies based on the model and search context size, ranging from $25 to $50 per 1,000 calls. Image generation through the gpt-image-1 tool is also charged according to resolution and quality tier, starting at $0.011 per image.
All tool usage is billed at the chosen model's per-token rates, with no additional markup for the newly added capabilities.
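To make those rates concrete with a hypothetical workload: an agent that makes 2,000 file search calls in a month and keeps 10 GB in vector storage for 30 days would pay 2 × $2.50 = $5.00 for the calls, plus (10 GB minus the 1 free GB) × $0.10 × 30 days = $27.00 for storage, for $32.00 in tool fees before per-token model charges.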
With these updates, OpenAI continues to expand what is possible with the Responses API. Developers gain access to a richer set of tools and enterprise-ready features, while enterprises can now build more integrated, capable, and secure AI-driven applications.
All features are live as of May 21, with pricing and implementation details available through OpenAI's documentation.


Related Articles

I'm using Gemini now for my Gmail and there's one major discovery that's surprising

Tom's Guide | an hour ago

Not everything Google does or touches turns to gold. Case in point: the new Gemini AI addition to Gmail is not all it's cracked up to be. While it works amazingly well when it comes to composing emails and summarizing a thread, the great irony is that most search-related prompts are not even remotely helpful. (Note: I asked Google reps about my test results and they have not responded.)

I noticed the Gemini icon for the first time about a week ago and started diving in right away. The AI bot is starting to roll out for many users with an update that includes new search functions, enhanced smart replies, and a few inbox cleanup prompts. Before I cover what didn't work for me, let me just say: I can see where this is all heading, and I'm mostly pleased with the basic functions, like smart replies and summaries.

I'm used to AI providing some basic help with my email replies, since I've used ChatGPT many times to help me compose and revise emails. Gemini does an exemplary job. When you want help, you can open a sidebar and enter prompts. On my phone or in the browser, I could also ask Gemini to 'polish' my own email, adding more details and context in seconds.

I also really liked the summaries. At the top of the screen, there's a button called 'Summarize this email' and the little star icon for Gemini. You'll see a summary with action steps, and in all of my testing, Gemini was accurate and helpful. I found I didn't have to read back on a thread as much and used Gemini to catch me up on the conversation.

I wasn't here for the smart replies and summaries, though. I've been able to do that with other AI bots for the last three years. I want an AI that goes much, much further than that with my email: tools for helping me understand more than just one email thread, for example. I have around 650,000 emails in my Gmail, and it's a treasure trove that Gemini could easily explore.

I wanted to be able to find out who emailed me the most in one particular month, which topics I discussed most often this year, and create a mass email to let the people I interact with the most know that I will be out a couple of days in June. Unfortunately, Gemini seems woefully inadequate and returns incorrect results. When I asked the bot to find the people I emailed the most this year and also in May, the results were not correct. Gemini only listed two people, and I had barely interacted with them. It's possible Gemini just found the most recent interactions, but I had asked for results from 2025 and all of May.

When I asked Gemini about topics I had discussed most often, the AI was blissfully unaware of which emails were just spam sent to me. My prompt was 'Which topics did I discuss and reply to the most in 2025,' and Gemini listed a bunch of email newsletters. That was an error, because Gemini was only looking at the emails sent to me the most, not those where I interacted.

I also asked Gemini to compose an email to the people I interact with the most, explaining that I will be out June 5-6. Once again, Gemini only found the people that emailed me the most. While the email the bot composed was helpful, what I wanted was for the bot to do the heavy lifting: compose an email with each person in a blind copy. I just wanted to click send.

Gemini is also supposed to help with inbox cleanup duties, but this was mostly a miss. I asked Gemini for Gmail on my iPhone to look for old emails with large attachments, and the bot showed me every email with an attachment, not the ones with the biggest attachments. And they were not old emails; they were all from the current month. I also asked Gemini to show me the emails with the largest attachments. For some reason, that prompt didn't work: 'I can't help with that' was the response. This prompt did work, though: 'Show me all emails with an attachment from May 2024.' I was able to then delete all of those messages quickly, which was helpful.

The problem is that Gemini seemed to work about 25% of the time when I was trying to clean up my inbox. It is hit or miss. I really wanted the bot to understand my goals. Inbox cleanup is fine, although anyone who has used Gmail for a while knows we've been able to tame our inboxes using searches for many years. For example, I can type 'larger:5M after:2024/05/24 before:2025/05/25' to find messages over 5MB from the past year. There's also a filter to help guide you through that process. Instead, I wanted Gemini to be more like a smart assistant.

More than anything, Gemini seemed to only search recent emails. In one query, I asked which emails seemed urgent, and the bot only mentioned two from the last week. I asked which emails had a shipping label attached, and the bot only found four, even though there are several dozen from the last two months.

Gemini in Gmail is in more of a testing phase. Google is adding new features and enhancing the AI as time goes on, likely based on feedback or data it collects. For now, the AI is not really worth it for me, since the results are so unpredictable or outright incorrect. I expect the technology will improve, but I'll probably be leery of diving in again until it becomes obvious that Gemini will work as expected. I want the bot to make me more productive and to work reliably every time I type in a prompt. We're obviously not there yet.

Apple Developer Event Will Show It's Still Far From Being an AI Leader

Bloomberg | 2 hours ago

Apple, a year after debuting its AI platform, will do little at WWDC to show it's catching up to leaders like OpenAI and Google. Also: The latest macOS gets its new California theme; a look at why the company is moving to an iOS 26 and macOS 26 naming system; and details on Apple's dedicated gaming app. Last week in Power On: Jony Ive's deal with OpenAI ups the pressure on Apple to find its next breakthrough product.

WWDC 2025 is make or break for Apple Intelligence — here's why

Tom's Guide | 2 hours ago

WWDC 2025 is going to be a big deal for Apple users. Not only are we expecting to see a big redesign for iOS 19 (or iOS 26), but it also marks one year since Apple went all in on AI and announced Apple Intelligence.

Of course, Apple Intelligence hasn't really been the resounding success Apple probably hoped for. It's not been a disaster, but WWDC 2024 turned out to be the one thing Apple typically tries to avoid: overpromising and underdelivering. Nearly a year later, many of the promised Siri features are still missing in action. Considering Apple was already late to the party with AI, and the troubles it's had, the pressure is on at WWDC 2025. It's make or break, and if Apple doesn't ease the biggest concerns about Apple Intelligence, it risks the platform ending up like Siri did 10 years ago.

The biggest issue with Apple Intelligence is that Apple realized AI was going to be a big deal much later than everyone else. Apple wasn't ignoring AI, but in the years before ChatGPT exploded in popularity, the company wasn't that interested in investing large amounts of money into AI development, especially with no clear end goal. According to a report from Bloomberg, it wasn't until after ChatGPT arrived that Apple's software chief Craig Federighi used generative AI for himself and realized how useful a tool it could be. But by that point Apple was seriously far behind its rivals, and wouldn't be able to catch up easily.

This is apparently where the main problems with Siri come in, since Apple attempted to catch up by tacking the new LLM-powered Siri onto the older voice assistant. This hasn't worked out, not only because of the delays but also because it apparently caused a bunch of problems that have been described as "whack-a-mole." All that inevitably made the controversial rollout of Apple Intelligence even more problematic. That's not because the features that were released were bad, though things like news summaries proved too problematic to keep around.

Apple Intelligence itself didn't land until iOS 18.1 arrived in late October, a month after iOS 18 and the iPhone 16 were released. iOS 18.2 was where the real improvements came into play, and that didn't arrive until late December. iOS 18.3 and 18.4 landed throughout the first few months of 2025, but by that point the number of useful new features had dropped dramatically. The problem wasn't so much the state of Apple Intelligence as how Apple handled it. Simply put, it looked like Apple didn't want to be seen lagging behind its rivals, then overestimated what it could accomplish.

WWDC is where Apple tells us what's going on with all its software, and it would be a mistake not to give Apple Intelligence the attention it needs. This is the first anniversary of its reveal, and despite all the problems, Apple can't afford to be seen ignoring it. I'm not saying that WWDC needs to be an all-Apple Intelligence show. Google I/O did that, and it was far too much AI for any normal person to handle. But that doesn't mean Apple can brush AI to the wayside and treat it like Siri was treated for so many years. If that happens, Apple might as well be throwing in the towel on the AI race.

We all know that the company is behind the likes of Google and OpenAI, but that doesn't mean its AI ambitions are dead. There's plenty of time to improve, and potentially catch up. In a best-case scenario, Apple would admit that it dropped the ball with Apple Intelligence and pledge to do better going forward. I don't see that happening. Apple is not known for willingly admitting its mistakes.

But I also don't see Apple spending a great deal of time on AI either, not just because it has a bunch of major design revamps to get us through in a keynote that can only be so long, but also because I'm sure Apple doesn't want to risk making the same mistakes as last year. No doubt we'll be hearing a lot of impressive stats about Apple Intelligence and its adoption, and maybe some reveals of smaller features that may be on the way. And that should be enough. AI isn't the focus of this year's releases based on what we've heard, and it shouldn't dominate the show. But it does still need attention and improvements so it can continue to grow.

Apple has already made plenty of mistakes with AI, from jumping on the bandwagon late to screwing up the launch of features when they were ready. So it's imperative that the company get itself into gear and come up with an adequate strategy for future updates and AI features. WWDC is going to be the starting point for all of that, and the attention Apple Intelligence gets at the show is going to lay the groundwork for the next few years of Apple AI rollouts. And while we can't expect Apple to roll out another wave of announcements like the ones we saw last year, it needs to avoid ignoring the topic completely. Otherwise, if AI is just going to get tossed to the side because of some early hurdles, then Apple probably shouldn't have bothered investing in it in the first place.
