
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.

June 26, 2025 //  by Finnovate

Armand Ruiz, VP of AI Platform at IBM, detailed how Big Blue is thinking about generative AI and how its enterprise users are actually deploying the technology. A key theme Ruiz emphasized is that, at this point, it is not about choosing a single LLM provider or technology. IBM has its own open-source AI models in the Granite family, but it is not positioning that technology as the only choice, or even the right choice, for all workloads. This enterprise behavior is driving IBM to position itself not as a foundation-model competitor but as what Ruiz called a control tower for AI workloads.

IBM's response to this market reality is a newly released model gateway that gives enterprises a single API to switch between different LLMs while maintaining observability and governance across all deployments. The architecture lets customers run open-source models on their own inference stack for sensitive use cases while simultaneously accessing public APIs such as AWS Bedrock or Google Cloud's Gemini for less critical applications. "That gateway is providing our customers a single layer with a single API to switch from one LLM to another LLM and add observability and governance all throughout," Ruiz said.

IBM has also developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google's Agent2Agent (A2A) protocol, which Google likewise contributed to the Linux Foundation. These agent-orchestration protocols provide standardized ways for AI systems to interact across different platforms and vendors.
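The gateway pattern the article describes can be sketched in a few lines: one entry point that routes a prompt to whichever backend is named, with a logging hook standing in for observability and governance. This is a minimal illustration of the pattern, not IBM's actual gateway API; all class, model, and backend names here are hypothetical.

```python
# Hypothetical sketch of a "model gateway": one client API that routes
# requests to different LLM backends and logs every call for observability.
# All names are illustrative assumptions, not IBM's published interface.
import logging
from dataclasses import dataclass, field
from typing import Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gateway")

@dataclass
class ModelGateway:
    # backend name -> callable taking a prompt and returning completion text
    backends: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self.backends[name] = handler

    def complete(self, model: str, prompt: str) -> str:
        """Single API: switching LLMs is just a parameter change."""
        if model not in self.backends:
            raise KeyError(f"unknown model: {model}")
        log.info("routing prompt to %s", model)  # observability/governance hook
        return self.backends[model](prompt)

gw = ModelGateway()
gw.register("granite-onprem", lambda p: f"[granite] {p}")  # sensitive workloads
gw.register("gemini-public", lambda p: f"[gemini] {p}")    # less critical ones
print(gw.complete("granite-onprem", "summarize this contract"))
```

Because callers only name a model string, swapping a self-hosted model for a public API (or back) never touches application code, which is the vendor-flexibility point Ruiz is making.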
IBM's real-world deployment data suggests several critical shifts for enterprise AI strategy:

  • Abandon chatbot-first thinking: Organizations should identify complete workflows for transformation rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not improve human-computer interaction.
  • Architect for multi-model flexibility: Rather than committing to a single AI provider, enterprises need integration platforms that enable switching between models based on use-case requirements while maintaining governance standards.
  • Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols like MCP, ACP, and A2A rather than proprietary integration approaches that create vendor lock-in.
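The multi-model-flexibility point above amounts to keeping the model choice in a routing policy rather than in application code. A minimal sketch, assuming a simple two-tier sensitivity scheme (the tier names and model identifiers are invented for illustration):

```python
# Hypothetical use-case-based routing: sensitive workloads stay on a
# self-hosted model, less critical ones may use a public cloud API.
# Tier names and model identifiers are illustrative assumptions.
from enum import Enum

class Sensitivity(Enum):
    SENSITIVE = "sensitive"  # e.g. regulated or confidential data
    GENERAL = "general"      # less critical applications

ROUTING_POLICY = {
    Sensitivity.SENSITIVE: "granite-onprem",  # own inference stack
    Sensitivity.GENERAL: "gemini-public",     # public API backend
}

def pick_model(tier: Sensitivity) -> str:
    """Resolve a workload tier to a model, keeping policy in one place."""
    return ROUTING_POLICY[tier]

print(pick_model(Sensitivity.SENSITIVE))
```

Centralizing the mapping this way means governance reviews change one table, not every call site, which is what makes switching providers per use case practical.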


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188
