Microsoft’s AI Red Team has published a detailed taxonomy addressing the failure modes inherent to agentic architectures. Agentic AI systems are autonomous entities that observe and act upon their environment to achieve predefined objectives. These systems integrate capabilities such as autonomy, environment observation, interaction, memory, and collaboration. However, these features introduce a broader attack surface and new safety concerns. The report distinguishes between novel failure modes unique to agentic systems and amplification of risks already observed in generative AI contexts. Microsoft categorizes failure modes across security and safety dimensions. Novel Security Failures: Including agent compromise, agent injection, agent impersonation, agent flow manipulation, and multi-agent jailbreaks. Novel Safety Failures: Covering issues such as intra-agent Responsible AI (RAI) concerns, biases in resource allocation among multiple users, organizational knowledge degradation, and prioritization risks impacting user safety. Existing Security Failures: Encompassing memory poisoning, cross-domain prompt injection (XPIA), human-in-the-loop bypass vulnerabilities, incorrect permissions management, and insufficient isolation. Existing Safety Failures: Highlighting risks like bias amplification, hallucinations, misinterpretation of instructions, and a lack of sufficient transparency for meaningful user consent