OpenAI is rolling out a set of significant updates to its newish Responses API, aiming to make it easier for developers and enterprises to build intelligent, action-oriented agentic applications. The Responses API provides visibility into model decisions, access to real-time data, and integration capabilities that allow agents to retrieve, reason over, and act on information.

A key addition in this update is support for remote MCP servers. Developers can now connect OpenAI’s models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability enables the creation of agents that can take actions and interact with systems users already depend on. To support this evolving ecosystem, OpenAI has joined the MCP steering committee.

The update also brings new built-in tools to the Responses API that expand what agents can do within a single API call. A variant of OpenAI’s hit GPT-4o native image generation model is now available through the API under the model name “gpt-image-1.” It includes potentially helpful and fairly impressive new features such as real-time streaming previews and multi-turn refinement, enabling developers to build applications that produce and edit images dynamically in response to user input.

Additionally, the Code Interpreter tool is now integrated into the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning process. The tool improves model performance across various technical benchmarks and allows for more sophisticated agent behavior.

The file search functionality has also been upgraded. Developers can now search across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content. This improves the precision of the information agents work with, enhancing their ability to answer complex questions and operate within large knowledge domains.

Several new platform features round out the release. Background mode allows long-running asynchronous tasks, addressing timeouts and network interruptions during intensive reasoning. Reasoning summaries, a new addition, offer natural-language explanations of the model’s internal thought process, helping with debugging and transparency. Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers: they allow models to reuse previous reasoning steps without storing any data on OpenAI’s servers, improving both security and efficiency.
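
To make the remote MCP integration concrete, here is a minimal sketch using the official Python SDK. The tool shape (`"type": "mcp"`, `server_label`, `server_url`, `require_approval`) follows OpenAI’s announcement, and the Shopify-style server URL is a placeholder, so treat the details as assumptions rather than a verified contract.

```python
# Sketch: pointing the Responses API at a remote MCP server.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "shopify",
            # Hypothetical MCP endpoint for an example storefront.
            "server_url": "https://example-store.myshopify.com/api/mcp",
            "require_approval": "never",
        }
    ],
    input="Add the blue hoodie in size M to my cart.",
)

print(response.output_text)
```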
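
For image generation, a basic call to “gpt-image-1” looks roughly like the sketch below. It shows only a single generation that writes the base64-encoded result to disk; the streaming previews and multi-turn refinement mentioned above are not illustrated here.

```python
# Sketch: generating an image with the gpt-image-1 model via the Images API.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor illustration of a lighthouse at dusk",
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data.
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```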
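
The Code Interpreter tool can be attached to a request in the same way as other built-in tools. The sketch below assumes the hosted-container configuration described at launch (`"container": {"type": "auto"}`); the exact shape may differ in the final API.

```python
# Sketch: letting the model run Python for a math/data task via the hosted
# Code Interpreter tool inside a single Responses API call.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o4-mini",
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input=(
        "Fit a simple linear regression to these points and report the slope: "
        "(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)"
    ),
)

print(response.output_text)
```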
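
Upgraded file search can be sketched as follows. The two vector store IDs and the “region” attribute are hypothetical, and the filter syntax mirrors the attribute filtering described in the announcement, so take it as illustrative rather than definitive.

```python
# Sketch: searching across two vector stores and filtering by a file attribute.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "file_search",
            # Placeholder vector store IDs.
            "vector_store_ids": ["vs_policies_eu", "vs_policies_us"],
            # Only retrieve chunks from files tagged with region = "EU".
            "filters": {"type": "eq", "key": "region", "value": "EU"},
        }
    ],
    input="What does our EU policy say about data retention periods?",
)

print(response.output_text)
```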
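
Finally, background mode and reasoning summaries might be used together roughly as shown below. The `background` flag, the `reasoning` summary option, and the status values being polled are assumptions based on the announcement; a production client would also add timeouts and error handling.

```python
# Sketch: starting a long-running request in background mode, then polling
# until it finishes, with a natural-language reasoning summary requested.
import time
from openai import OpenAI

client = OpenAI()

job = client.responses.create(
    model="o3",
    input="Draft a migration plan for moving our billing system to usage-based pricing.",
    background=True,  # run asynchronously instead of holding the connection open
    reasoning={"effort": "high", "summary": "auto"},  # assumed summary option
)

# Poll for completion (simplified; no timeout or failure handling).
while job.status in ("queued", "in_progress"):
    time.sleep(5)
    job = client.responses.retrieve(job.id)

print(job.status)
print(job.output_text)
```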