Visa has released new developer tools that allow AI agents to connect directly to Visa's payment infrastructure, enabling what the company calls "agentic commerce": a system in which AI bots handle everything from product discovery to checkout completion based on consumer preferences and spending limits. Rather than browsing websites and completing purchases manually, consumers would set parameters for AI agents that then autonomously find, evaluate, and buy products across multiple merchants. "These agents will need to be trusted with payments, not only by users, but by banks and sellers as well," said Rubail Birwadker, Visa's Global Head of Growth.

Visa's new offering centers on two key products: a Model Context Protocol (MCP) Server that provides secure access to Visa's payment APIs, and the Visa Acceptance Agent Toolkit, which allows both technical and non-technical users to deploy AI-powered payment workflows using plain-language commands. Visa frames the MCP Server as the bigger technical step: it gives AI agents a standardized way to communicate with Visa's network without requiring a custom integration for each application. Developers can now move "from idea to functional prototype in hours instead of days or weeks," according to the company.

Visa has implemented multiple layers of protection, including immediate tokenization of card credentials, device-specific authentication, and what Birwadker calls "payment signals" and "payment instructions" that verify an AI agent's actions align with the consumer's original intent. "Your PII or your PAN is never going to be exposed," Birwadker said, referring to personally identifiable information and primary account numbers. "We almost immediately take that PAN, we convert it into a token, and we authenticate that token and tie it to a specific device for a specific application."

The company has also developed a matching process that blocks transaction completion until it confirms that an AI agent's intended purchase matches what the consumer originally requested. This addresses concerns about AI "hallucinations," instances where language models generate incorrect or nonsensical outputs.
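Visa has not published implementation details for the credential flow Birwadker describes, but the shape of it can be sketched in a few lines. The Python below is purely illustrative: the names (tokenize_pan, DeviceBoundToken, token_is_valid_for) are invented for the example and are not part of any Visa API. The point is that the PAN is swapped for an opaque token immediately, and the token is only usable from the device and application it was issued to.

```python
import secrets
from dataclasses import dataclass


@dataclass(frozen=True)
class DeviceBoundToken:
    """Opaque stand-in for the card credential, tied to one device and one app."""
    value: str
    device_id: str
    app_id: str


def tokenize_pan(pan: str, device_id: str, app_id: str) -> DeviceBoundToken:
    """Swap the PAN for a random token as early as possible.

    The PAN is never stored or logged; only the token, bound to a specific
    device and application, moves through the rest of the agent's code.
    """
    token_value = secrets.token_urlsafe(24)  # random value, unrelated to the PAN
    del pan                                  # drop the raw credential immediately
    return DeviceBoundToken(token_value, device_id, app_id)


def token_is_valid_for(token: DeviceBoundToken, device_id: str, app_id: str) -> bool:
    """A charge attempt proceeds only on the device and app the token was issued to."""
    return token.device_id == device_id and token.app_id == app_id
```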
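The intent-matching step can likewise be read as a pre-authorization check along the following lines. Again, this is a hypothetical sketch rather than Visa's implementation; the ConsumerIntent and AgentPurchase types and the matches_intent function are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class ConsumerIntent:
    """What the consumer actually asked for, plus a spending cap."""
    requested_item: str
    max_amount: float


@dataclass
class AgentPurchase:
    """What the AI agent is about to buy on the consumer's behalf."""
    item: str
    amount: float
    merchant: str


def matches_intent(intent: ConsumerIntent, purchase: AgentPurchase) -> bool:
    """Allow completion only if the purchase lines up with the original request.

    A hallucinated purchase, such as the wrong item or a price above the
    consumer's cap, fails this check and the transaction never completes.
    """
    within_budget = purchase.amount <= intent.max_amount
    right_item = intent.requested_item.lower() in purchase.item.lower()
    return within_budget and right_item


# The consumer asked for running shoes under $120.
intent = ConsumerIntent(requested_item="running shoes", max_amount=120.00)
matches_intent(intent, AgentPurchase("trail running shoes", 95.00, "ExampleShop"))  # True
matches_intent(intent, AgentPurchase("espresso machine", 450.00, "ExampleShop"))    # False
```

In practice such a check would run on the network side rather than inside the agent, but the basic comparison, validating the agent's output against the consumer's original instruction before any money moves, is the mechanism the company is describing.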