Microsoft’s AI agent can analyze and classify malware in the wild at scale without human intervention, reverse engineering suspect software files with forensic tools such as decompilers and binary analysis to deconstruct the code, achieving a precision of 0.98

August 8, 2025 //  by Finnovate

Microsoft Corp. introduced a new AI agent that can analyze and classify malware in the wild at scale, without human intervention. The newly minted AI model, named Project Ire, can reverse engineer suspect software files, using forensic tools such as decompilers and binary analysis to deconstruct the code and determine whether a file is hostile or safe. “It was the first reverse engineer at Microsoft, human or machine, to author a conviction case — a detection strong enough to justify automatic blocking — for a specific advanced persistent threat malware sample, which has since been identified and blocked by Microsoft Defender,” the Ire research team said.

According to the company, when tested against a public dataset of Windows drivers, Project Ire achieved a precision of 0.98 and a recall of 0.83. In terms of pattern recognition and detection, this is very good: the precision figure means that when the system flags a file as malicious, it is right about 98% of the time, while the recall figure means it finds about 83% of the malware that is actually present. It catches most threats, but it can miss some.

Project Ire uses advanced reasoning models to strip away a sample's defenses, wielding specialized tools much as a human engineer would and autonomously evaluating their outputs as it iteratively attempts to classify the software's behavior. In a real-world scenario involving 4,000 “hard target” files that had not been classified by automated systems and were pending expert review, the AI agent achieved a precision of 0.89, meaning roughly nine out of every ten files it flagged as malicious actually were. Its recall was 0.26, meaning the system detected around a quarter of all the actual malware that passed through its dragnet. Its false positive rate, the rate at which the software claims a safe file is malware, was only about 4%.
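
To make those figures concrete, here is a minimal sketch of how precision, recall, and false positive rate are computed. This is not Microsoft's code, and the confusion-matrix counts below are hypothetical, chosen only so the ratios land near the reported Windows-driver benchmark numbers.

def precision(tp: int, fp: int) -> float:
    """Of the files flagged as malicious, what fraction really were malicious?"""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of the files that really were malicious, what fraction did the agent flag?"""
    return tp / (tp + fn)

def false_positive_rate(fp: int, tn: int) -> float:
    """Of the benign files, what fraction were wrongly flagged as malicious?"""
    return fp / (fp + tn)

# Hypothetical counts (not Microsoft's actual test data), picked so the ratios
# roughly match the reported figures of 0.98 precision and 0.83 recall.
tp, fp, fn, tn = 830, 17, 170, 983

print(f"precision           = {precision(tp, fp):.2f}")            # -> 0.98
print(f"recall              = {recall(tp, fn):.2f}")                # -> 0.83
print(f"false positive rate = {false_positive_rate(fp, tn):.3f}")   # -> 0.017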

Category: Cybersecurity, Innovation Topics
