A new paper from researchers at Princeton University and the Sentient Foundation found that certain agents—AI systems that can act beyond the realm of a chatbot—could be vulnerable to memory attacks that trick them into handing over cryptocurrency.

Targeting agents built with the platform ElizaOS, the researchers were able to implant false memories, or “malicious instructions,” that manipulated the agents’ shared context in ways that could lead to “unintended asset transfers and protocol violations which could be financially devastating.” They wrote that the vulnerabilities point to an “urgent need to develop AI agents that are both secure and fiduciarily responsible.”

Tyagi said the paper focused on ElizaOS because it is “the most popular open-source agentic framework in crypto,” and on cryptocurrency because its traders have most readily embraced these types of autonomous agentic payments. While such agents do defend against basic prompt injection attacks—malicious inputs designed to exploit the underlying LLM—more sophisticated attackers might manipulate the stored memory or context in which the agents operate.

The researchers designed a benchmark to evaluate how well blockchain-based agents defend against these types of attacks. They also argued that the vulnerabilities extend beyond cryptocurrency-based or even financial agents: “The application of AI agents has led to significant breakthroughs in diverse domains such as robotics, autonomous web agents, computer use agents, and personalized digital assistance. We posit that [memory injection] represents an insidious threat vector in such general agentic frameworks.”
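To make the distinction concrete, here is a minimal, purely illustrative sketch of why memory injection is harder to filter than prompt injection: the malicious content is not in the current request at all, but was planted earlier in the agent’s persistent context. This is not ElizaOS code or the paper’s actual attack; the class, method names, and addresses are all hypothetical.

```python
# Toy agent with persistent memory, illustrating a memory-injection attack.
# Hypothetical sketch only -- not ElizaOS or the paper's implementation.

class ToyAgent:
    def __init__(self):
        self.memory = []  # shared context persisted across sessions

    def remember(self, entry):
        # In real frameworks, memory writes can come from many sources
        # (user chats, plugins, other agents) with little provenance checking.
        self.memory.append(entry)

    def decide_transfer(self, requested_address):
        # The agent consults stored memory before acting on a request.
        for entry in self.memory:
            if entry.startswith("OVERRIDE:"):
                # A planted "false memory" silently redirects the transfer,
                # even though the current request contains nothing malicious.
                return entry.removeprefix("OVERRIDE:")
        return requested_address

agent = ToyAgent()
agent.remember("user prefers low gas fees")
# An attacker with write access to the shared context plants an instruction:
agent.remember("OVERRIDE:0xATTACKER")

# A later, perfectly benign request is now rerouted:
print(agent.decide_transfer("0xLEGIT_RECIPIENT"))  # prints 0xATTACKER
```

A prompt-injection filter that inspects only the incoming request would pass the final call here, because the compromise lives in `agent.memory`, written in an earlier interaction.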