A new report from application security posture management company Apiiro Ltd. details a tenfold increase in security findings among Copilot users, peaking in mid-2025. Two primary factors were found to be driving the surge: open-source dependency risks and secure-coding issues. AI-assisted developers were found to be more prone to design-level flaws, whereas conventional developers were more likely to introduce logic mistakes. Such architectural weaknesses are harder to catch and more costly to remediate later on, creating a structural challenge for organizations trying to balance speed with security.

Secrets exposure was also found to diverge between the two groups. Developers working with Copilot leaked higher volumes of cloud credentials, while non-Copilot users were more likely to expose generic application programming interface tokens. The key takeaway is that AI assistance may inadvertently amplify risks related to cloud identity and credential management.

The report also finds that developers using AI tools generate three to four times more commits on average, but consolidate those contributions into fewer, larger pull requests, or proposed code changes. The increased throughput was found to accelerate delivery but also to add complexity for application security teams, since traditional review processes struggle to keep up with the scale and intricacy of AI-assisted code.

Average pull request sizes and commit volumes have sharply increased as AI coding assistance has been adopted, with AI-assisted developers producing more code while opening fewer pull requests. Larger, more complex code submissions are noted as elevating the risk of shallow reviews and missed vulnerabilities. Apiiro's researchers warn that though AI code assistants can drive dramatic improvements in developer productivity, they also introduce new categories of risk that organizations must address.
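To make the secrets-exposure finding concrete, the sketch below shows what pattern-based distinction between cloud credentials and generic API tokens can look like. This is a minimal illustration, not Apiiro's detection method: the regexes cover two well-known cloud key formats (AWS access key IDs and Google API keys) plus a heuristic for generically named tokens, and the sample strings are invented.

```python
import re

# Illustrative patterns only; production secret scanners use far richer rule sets.
# Cloud credentials often have fixed, recognizable formats, while "generic" API
# tokens must be matched heuristically by variable name, inviting false positives.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "gcp_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "generic_api_token": re.compile(
        r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan(text: str) -> dict[str, list[str]]:
    """Return candidate secrets found in text, grouped by category."""
    findings: dict[str, list[str]] = {}
    for name, pattern in PATTERNS.items():
        hits = [m.group(0) for m in pattern.finditer(text)]
        if hits:
            findings[name] = hits
    return findings

if __name__ == "__main__":
    # Both sample values are fabricated for demonstration.
    sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key: "sk_live_abcdef0123456789"'
    for category, hits in scan(sample).items():
        print(category, "->", hits)
```

The divergence the report describes would show up here as the two groups' commits tripping different categories: Copilot users on the fixed-format cloud patterns, non-users on the generic-token heuristic.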
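The report does not prescribe tooling, but one lightweight guardrail against the oversized pull requests it describes is a CI check that fails when a change exceeds a reviewable size. The following is a hypothetical sketch, assuming a CI environment with `git` on the path; the 500-line threshold and the ref arguments are arbitrary assumptions to be tuned per team.

```python
import re
import subprocess
import sys

# Illustrative threshold; tune to your reviewers' actual capacity.
MAX_CHANGED_LINES = 500

def changed_lines(base: str, head: str) -> int:
    """Count lines added plus deleted between two git refs."""
    out = subprocess.check_output(
        ["git", "diff", "--shortstat", f"{base}...{head}"], text=True
    )
    added = re.search(r"(\d+) insertion", out)
    deleted = re.search(r"(\d+) deletion", out)
    return sum(int(m.group(1)) for m in (added, deleted) if m)

if __name__ == "__main__":
    # Usage: python pr_size_check.py <base-ref> <head-ref>
    base, head = sys.argv[1], sys.argv[2]
    total = changed_lines(base, head)
    if total > MAX_CHANGED_LINES:
        print(f"PR touches {total} lines (limit {MAX_CHANGED_LINES}); "
              "consider splitting it so reviewers can inspect it properly.")
        sys.exit(1)
    print(f"PR size OK: {total} changed lines.")
```

A check like this does not make reviews deeper on its own, but it pushes back against the consolidation pattern the report measured, where three to four times the commit volume arrives in fewer, larger submissions.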