A group of researchers from Carnegie Mellon University has proposed a new interoperability protocol governing autonomous AI agents’ identity, accountability, and ethics. Layered Orchestration for Knowledgeful Agents, or LOKA, could join other proposed standards like Google’s Agent2Agent (A2A) and Anthropic’s Model Context Protocol (MCP).

The open-source LOKA would enable agents to prove their identity, “exchange semantically rich, ethically annotated messages,” add accountability, and establish ethical governance throughout the agent’s decision-making process. LOKA builds on what the researchers call a Universal Agent Identity Layer, a framework that assigns each agent a unique and verifiable identity.

The researchers said LOKA stands out because it establishes the crucial information agents need to communicate with other agents and operate autonomously across different systems.

LOKA could help enterprises ensure the safety of the agents they deploy and provide a traceable way to understand how an agent made its decisions. A fear many enterprises have is that an agent will tap into another system, access private data, and make a mistake.

LOKA will have to compete with other agentic protocols and standards now emerging. Protocols like MCP and A2A have found a large audience not just because of the technical solutions they provide, but because they are backed by organizations people know. Anthropic started MCP, while Google backs A2A, and both protocols have gathered many companies open to using, and improving, these standards.
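The LOKA paper's actual message format isn't reproduced here, but the two ideas the researchers describe, a verifiable agent identity plus ethically annotated messages, can be sketched in a few lines. Everything below is a hypothetical illustration: the field names, the `did:example` identifier, and the hash-based "signature" are assumptions standing in for whatever the real spec defines, not LOKA's API.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class LokaStyleMessage:
    """Hypothetical envelope; field names are illustrative, not from the LOKA spec."""

    sender_did: str    # decentralized-style identifier for the sending agent
    intent: str        # what the agent is trying to do
    payload: dict      # task content
    ethics_tags: list  # ethical annotations attached to the message
    signature: str = ""

    def _digest(self, secret: str) -> str:
        # Toy keyed hash standing in for a real cryptographic signature.
        body = json.dumps(
            {
                "sender": self.sender_did,
                "intent": self.intent,
                "payload": self.payload,
                "ethics": self.ethics_tags,
            },
            sort_keys=True,
        )
        return hashlib.sha256((secret + body).encode()).hexdigest()

    def sign(self, secret: str) -> None:
        self.signature = self._digest(secret)

    def verify(self, secret: str) -> bool:
        # A receiver can check both sender identity and message integrity.
        return self.signature == self._digest(secret)


msg = LokaStyleMessage(
    sender_did="did:example:agent-42",
    intent="book_travel",
    payload={"destination": "PIT"},
    ethics_tags=["no_pii_sharing", "budget_cap_respected"],
)
msg.sign("shared-secret")
assert msg.verify("shared-secret")      # identity/integrity check passes
assert not msg.verify("wrong-secret")   # unauthenticated sender is rejected
```

The point of the sketch is the traceability the article describes: because the ethical annotations travel inside the signed body, a receiving agent (or an auditor) can later tie a decision back to the constraints the sender declared at the time.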