
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Model Context Protocol, an open-standard architecture of servers and clients, will be key to building secure, two-way connections between AI agents’ data sources and tools as AI systems mature and begin to maintain context

May 12, 2025 //  by Finnovate

AI agents have been all the rage over the last several months, creating a need for a standard governing how they communicate with tools and data. That need led Anthropic to create the Model Context Protocol (MCP). MCP is “an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools,” Anthropic wrote in the blog post announcing it was open-sourcing the protocol.

MCP can do for AI agents what USB does for computers, explained Lin Sun, senior director of open source at cloud-native connectivity company Solo.io. According to Keith Pijanowski, AI solutions engineer at object storage company MinIO, one example use case for MCP is a travel agent that books a vacation within someone’s budget and schedule. Using MCP, the agent could check the user’s bank account to see how much money they have to spend on a vacation, consult their calendar to ensure it books travel when they have time off, and even query their company’s HR system to confirm they have PTO left.

MCP consists of servers and clients. The MCP server is how an application or data source exposes its data, while the MCP client is how AI applications connect to those data sources. MinIO has developed its own MCP server, which lets users ask an AI agent questions about their MinIO installation, such as how many buckets they have or what a bucket contains, along with other administrative questions. The agent can also pass questions off to another LLM and come back with an answer.

“Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today’s fragmented integrations with a more sustainable architecture,” Anthropic wrote in its blog post.
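The server/client split described above can be illustrated with a toy sketch: a "server" exposes a data source's operations as named tools, and a "client" invokes them through a uniform message format, so the agent builds against one protocol rather than a bespoke connector per source. This is a simplified illustration, not the real MCP SDK; all names here (`ToyMCPServer`, `list_buckets`, the bucket data) are hypothetical, and the example mirrors the MinIO bucket-query use case from the article.

```python
import json

class ToyMCPServer:
    """Hypothetical sketch: exposes a data source through named tools."""

    def __init__(self):
        self._tools = {}

    def tool(self, name):
        """Decorator that registers a function as a callable tool."""
        def register(fn):
            self._tools[name] = fn
            return fn
        return register

    def handle(self, message: str) -> str:
        """Dispatch one JSON request {'tool': ..., 'args': {...}}."""
        req = json.loads(message)
        fn = self._tools.get(req["tool"])
        if fn is None:
            return json.dumps({"error": f"unknown tool: {req['tool']}"})
        return json.dumps({"result": fn(**req.get("args", {}))})

# A server for a hypothetical object store (made-up sample data).
server = ToyMCPServer()
buckets = {"photos": ["a.png", "b.png"], "logs": ["app.log"]}

@server.tool("list_buckets")
def list_buckets():
    return sorted(buckets)

@server.tool("bucket_contents")
def bucket_contents(bucket):
    return buckets.get(bucket, [])

# Client side: an AI agent uses the same message format regardless of
# which data source sits behind the server.
def call(tool, **args):
    reply = server.handle(json.dumps({"tool": tool, "args": args}))
    return json.loads(reply)["result"]

print(call("list_buckets"))                      # ['logs', 'photos']
print(call("bucket_contents", bucket="photos"))  # ['a.png', 'b.png']
```

The design point is the one Anthropic makes: the client never imports bucket-specific code, so swapping in a calendar or HR server changes only the registered tools, not the agent.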


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188
