One problem identified by the cybersecurity community at this year’s Black Hat is that developers are taking shortcuts with AI coding tools without thinking through the security consequences. Researchers from Nvidia Corp. presented findings that an auto-run mode on the AI-powered code editor Cursor allowed agents to run command files on a user’s machine without explicit permission. When Nvidia presented this potential vulnerability to Anysphere Inc.’s Cursor in May, the vibe coding company responded by offering users the ability to disable the auto-run feature, according to Becca Lynch, offensive security researcher at Nvidia.

Part of the issue lies in the sheer number of application programming interface endpoints being generated to run AI. Security researchers from Wiz Inc. presented recent findings on an Nvidia Container Toolkit vulnerability that posed a major threat to managed AI cloud services. Wiz found that the vulnerability could allow attackers to access or manipulate customer data and proprietary models in 37% of cloud environments.

Despite the popularity of LLMs, security controls for them have not kept pace. The threat of exploitation has cast a spotlight on popular repositories where models are stored and downloaded. At last year’s Black Hat gathering, researchers presented evidence that they had breached three of the largest AI model repositories. If model integrity cannot be protected, the repercussions will likely extend to the future of AI agents as well.

Agentic AI is booming, yet the lack of security controls around the autonomous software is beginning to generate concern. Cybersecurity company Coalfire Inc. released a report documenting its success in hacking agentic AI applications. Using adversarial prompts and working with partner standards, such as those from the National Institute of Standards and Technology, or NIST, the company demonstrated new risks of compromise and data leakage.
“There was a success rate of 100%,” Apostol Vassilev, research team supervisor at NIST, said.
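To see why an auto-run mode like the one the Nvidia researchers flagged is dangerous, consider a minimal sketch of an agent loop. This is a hypothetical illustration, not Cursor’s actual implementation: the point is that when `auto_run` is enabled, any command the model emits, including one planted via prompt injection in a README or web page the agent reads, executes on the user’s machine without review.

```python
import subprocess

def run_agent_step(suggested_command: str, auto_run: bool = False) -> str:
    """Execute a model-suggested shell command, optionally gated by user approval.

    Hypothetical sketch: with auto_run=True there is no human in the loop,
    so a prompt-injected instruction runs with the user's full privileges.
    """
    if not auto_run:
        # Safe default: require explicit permission before executing anything.
        approval = input(f"Run `{suggested_command}`? [y/N] ")
        if approval.strip().lower() != "y":
            return "skipped"
    result = subprocess.run(
        suggested_command, shell=True, capture_output=True, text=True
    )
    return result.stdout

# With auto_run=True, whatever the model suggests runs unreviewed, e.g.:
# run_agent_step("cat ~/.ssh/id_rsa", auto_run=True)  # data-exfiltration risk
```

Anysphere’s mitigation, as described above, amounts to letting users keep that approval gate in place by disabling auto-run.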