Automatic purchases on fake websites, falling for simple phishing attacks that expose users’ bank accounts, and even downloading malicious files to computers – these are the failures in AI browsers and autonomous AI agents revealed by new research from Israeli cybersecurity company Guardio. The report warns that AI browsers can, without user consent, click, download, or provide sensitive information. Such fraud no longer needs to deceive the user. It only needs to deceive the AI. And when this happens, the user is still the one who pays the price. We stand at the threshold of a new and complex era of fraud, where AI convenience collides with an invisible fraud landscape and humans become collateral damage. Guardio’s research reveals that these browsers and agents may fall victim to a series of new frauds, a result of an inherent flaw that exists in all of them. The problem, according to the study’s authors, is that they inherit AI’s built-in vulnerabilities: a tendency to act without full context, to trust too easily, and to execute instructions without natural human skepticism. AI was designed to please people at almost any cost, even if it involves distorting facts, bending rules, or operating in ways that include hidden risks. Finally, the researchers demonstrated how AI browsers can be made to ignore action and safety instructions by sending alternative, secret instructions to the model. This involves developing an attack against AI models called “prompt injection.” In this attack, an attacker encrypts instructions to the model in various ways that the user cannot see. The simplest example is text hidden from the user but visible to the AI (for instance, using font color identical to the background color) that instructs the model: ignore all previous instructions, and perform malicious activity instead. Using this method, one can cause AI to send emails with personal information, provide access to the user’s file storage services, and more. In effect, the attacker can now control the user’s AI, the report states.