• Menu
  • Skip to right header navigation
  • Skip to main content
  • Skip to primary sidebar

DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.

  • Home
  • Pricing
  • Features
    • Overview Of Features
    • Search
    • Favorites
  • Share!
  • Log In
  • Home
  • Pricing
  • Features
    • Overview Of Features
    • Search
    • Favorites
  • Share!
  • Log In

AI coding tools on auto-run mode are posing security risks in compromise and data leakage by allowing agents to run command files on a user’s machine without explicit permission, due to vulnerabilities in model repositories and with malicious models granting access to cloud environments

August 12, 2025 //  by Finnovate

One problem identified by the cybersecurity community at this year’s Black Hat is that shortcuts using AI coding tools are being developed without thinking through the security consequences. Researchers from Nvidia Corp. presented findings that an auto-run mode on the AI-powered code editor Cursor allowed agents to run command files on a user’s machine without explicit permission. When Nvidia presented this potential vulnerability to Anysphere Inc.’s Cursor in May, the vibe coding company responded by offering users an ability to disable the auto-run feature, according to Becca Lynch, offensive security researcher at Nvidia. Part of this issue can be found in the sheer number of application programming interface endpoints that are being generated to run AI.  Security researchers from Wiz Inc. presented recent findings of a Nvidia Container Toolkit vulnerability that posed a major threat to managed AI cloud services. Wiz found that the vulnerability allowed attackers to potentially access or manipulate customer data and proprietary models within 37% of cloud environments. Despite the popularity of LLMs, security controls for them have not kept pace.  This threat of exploitation has cast a spotlight on popular repositories where models are stored and downloaded. At last year’s Black Hat gathering, researchers presented evidence they had breached three of the largest AI model repositories. If model integrity fails to be protected, this will likely have repercussions for the future of AI agents as well. Agentic AI is booming, yet the lack of security controls around the autonomous software is also beginning to generate concern. Cybersecurity company Coalfire Inc. released a report which documented its success in hacking agentic AI applications. Using adversarial prompts and working with partner standards such as those from the National Institute of Standards and Technology or NIST, the company was able to demonstrate new risks in compromise and data leakage. “There was a success rate of 100%,” Apostol Vassilev, research team supervisor at NIST, said.

Read Article

Category: Additional Reading

Previous Post: « CIOs can adopt a 4-stage framework for scaling agentic capabilities that starts with information retrieval agents for fetching data and progressing to simple orchestration in a single domain, complex orchestration with multi-step workflows and multi-agent orchestration across domains and systems
Next Post: New hires at PwC to be like managers, reviewing and supervising AI perform routine, repetitive audit tasks like data gathering and processing and focusing on “more advanced and value-added work” »

Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188

We use cookies to provide the best website experience for you. If you continue to use this site we will assume that you are happy with it.