

OpenAI’s API platform lets developers express intent rather than configure model flows, with built-in capabilities for knowledge retrieval, web search, and function calling to support real-world agent workflows

July 1, 2025 // by Finnovate

Olivier Godement, Head of Product for OpenAI’s API platform, provided a behind-the-scenes look at how enterprise teams are adopting and deploying AI agents at scale. According to Godement, 2025 marks a real shift in how AI is being deployed: with over a million monthly active developers now using OpenAI’s API platform globally, and token usage up 700% year over year, AI is moving beyond experimentation. Godement emphasized that current demand isn’t just about chatbots anymore. “AI use cases are moving from simple Q&A to actually use cases where the application, the agent, can do stuff for you.” This shift prompted OpenAI to launch two major developer-facing tools in March: the Responses API and the Agents SDK.

Godement positioned the Responses API as a foundational evolution in developer tooling. Previously, developers manually orchestrated sequences of model calls; now, that orchestration is handled internally. “The Responses API is probably the biggest new layer of abstraction we introduced since pretty much GPT-3.” It allows developers to express intent, not just configure model flows. “You care about returning a really good response to the customer… the Response API essentially handles that loop.” It also includes built-in capabilities for knowledge retrieval, web search, and function calling, the tools enterprises need for real-world agent workflows.

Some enterprise use cases are already delivering measurable gains: Stripe, which uses agents to accelerate invoice handling, reports “35% faster invoice resolution,” and Box has launched knowledge assistants that enable “zero-touch ticket triage.” Other high-value use cases include customer support (including voice), internal governance, and knowledge assistants for navigating dense documentation.

Godement also offered a glimpse into the roadmap. OpenAI is actively working on multimodal agents that can interact via text, voice, images, and structured data; long-term memory for retaining knowledge across sessions; and cross-cloud orchestration to support complex, distributed IT environments. What matters now is building a focused use case, empowering cross-functional teams, and being ready to iterate. The next phase of value creation lies not in novel demos but in durable systems, shaped by real-world needs and the operational discipline to make them reliable.
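To make the abstraction concrete, here is a minimal sketch of what expressing intent through the Responses API can look like, using the openai Python SDK. The model name, the web_search_preview tool type, the flattened function-tool schema, and the get_invoice_status function are illustrative assumptions based on the publicly documented API around its launch, not details from the interview; exact tool names and schema requirements may differ in current releases.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# One call states the goal and the tools the agent may use; the Responses API
# handles the orchestration loop instead of the developer chaining model calls.
response = client.responses.create(
    model="gpt-4.1",  # assumed model name, for illustration only
    input=(
        "Summarise the latest guidance on EU e-invoicing rules and "
        "check the status of invoice INV-1042."
    ),
    tools=[
        # Built-in web search tool, executed server-side by the platform.
        {"type": "web_search_preview"},
        # Hypothetical function exposed by the host application; the model can
        # request it, and the application executes it and returns the result.
        {
            "type": "function",
            "name": "get_invoice_status",
            "description": "Look up the processing status of an invoice by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        },
    ],
)

# output_text is the SDK's convenience accessor for the final text output;
# any requested function calls appear as items in response.output.
print(response.output_text)
```

In this sketch, the built-in web search runs inside the platform’s own loop, while any get_invoice_status call the model requests comes back as an output item for the host application to execute and feed back, which is the division of labor the “express intent, not configure model flows” framing describes.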


Category: AI & Machine Economy, Innovation Topics


