• Menu
  • Skip to right header navigation
  • Skip to main content
  • Skip to primary sidebar

DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.

  • Home
  • Pricing
  • Features
    • Overview Of Features
    • Search
    • Favorites
  • Share!
  • Log In
  • Home
  • Pricing
  • Features
    • Overview Of Features
    • Search
    • Favorites
  • Share!
  • Log In

A hacker was able to infiltrate a plugin for an Amazon generative AI assistant after obtaining stolen credentials and making unauthorized changes, including secretly instructing it to delete files

August 5, 2025 //  by Finnovate

Coders who use artificial intelligence to help them write software are facing a growing problem, and Amazon.com Inc. is the latest company to fall victim. A hacker was recently able to infiltrate a plugin for an Amazon generative AI assistant1 after obtaining stolen credentials and making unauthorized changes, including secretly instructing it to delete files from the computers it was used on. The incident points to a gaping hole in the security practices of AI coding tools that has gone largely unnoticed in the race to capitalize on the technology. The hacker effectively showed how easy it could be to manipulate artificial intelligence tools — through a public repository like Github — with the the right prompt. Amazon ended up shipping a tampered version of the plugin to its users, and any company that used it risked having their files deleted. Fortunately for Amazon, the hacker deliberately kept the risk for end users low in order to highlight the vulnerability, and the company said it “quickly mitigated” the problem. But this won’t be the last time hackers try to manipulate an AI coding tool for their own purposes, thanks to what seems to be a broad lack of concern about the hazards. More than two-thirds of organizations are now using AI models to help them develop software, but 46% of them are using those AI models in risky ways, according to the 2025 State of Application Risk Report by Israeli cyber security firm Legit Security. “Artificial intelligence has rapidly become a double-edged sword,” the report says, adding that while AI tools can make coding faster, they “introduce new vulnerabilities.” It points to a so-called visibility gap, where those overseeing cyber security at a company don’t know where AI is in use, and often find out it’s being applied in IT systems that aren’t secured properly. The risks are higher with companies using “low-reputation” models that aren’t well known, including open-source AI systems from China. Dive into the shadow world of hackers and cyber-espionage. The flaw was discovered by the Swedish startup’s competitor, Replit; Lovable responded on Twitter by saying, “We’re not yet where we want to be in terms of security.” One temporary fix is — believe it or not — for coders to simply tell AI models to prioritize security in the code they generate. Another solution is to make sure all AI-generated code is audited by a human before it’s deployed. That might hamper the hoped-for efficiencies, but AI’s move-fast dynamic is outpacing efforts to keep its newfangled coding tools secure, posing a new, uncharted risk to software development. The vibe coding revolution has promised a future where anyone can build software, but it comes with a host of potential security problems too.

Read Article

Category: Essential Guidance

Previous Post: « Embedded payments are seeing rising adoption in the parking sector through AI-recognition tech that lets customers just drive in and scan a QR code to enter their credit card information the first time they park, with automatic vehicle identification and charges applied on subsequent trips

Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188

We use cookies to provide the best website experience for you. If you continue to use this site we will assume that you are happy with it.