Microsoft Corp. introduced a new AI agent that can analyze and classify malware in the wild at scale, without human intervention. The newly minted AI model, named Project Ire, can reverse engineer suspect software files, using forensic tools such as decompilers and binary analysis to deconstruct the code and determine whether a file is hostile or safe. "It was the first reverse engineer at Microsoft, human or machine, to author a conviction case — a detection strong enough to justify automatic blocking — for a specific advanced persistent threat malware sample, which has since been identified and blocked by Microsoft Defender," the Ire research team said.

According to the company, when tested against a public dataset of Windows drivers, Project Ire achieved a precision of 0.98 and a recall of 0.83. In terms of pattern recognition and detection, this is very good. The precision figure means that when the system flags a file as malicious, it is correct about 98% of the time. The recall figure means it catches about 83% of the actual malware in the dataset. So it catches most threats, but it might miss a few.

Project Ire uses advanced reasoning models and specialized tools, much as a human reverse engineer would, to strip away a sample's protective layers, and it autonomously evaluates the tools' outputs as it iteratively attempts to classify the software's behavior.

In a real-world scenario involving 4,000 "hard target" files that had not been classified by automated systems and were pending expert review, the AI agent achieved a precision of 0.89, meaning roughly nine out of every 10 files it flagged as malicious actually were. Its recall was 0.26, meaning the system detected around a quarter of all the actual malware that passed through its dragnet. It also had only a 4% false positive rate — cases in which the software claims a safe file is malware.
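For readers unfamiliar with these metrics, precision, recall, and false positive rate all fall out of a simple confusion matrix. The sketch below shows the arithmetic; the counts are purely illustrative, not Microsoft's actual test data:

```python
def precision(tp: int, fp: int) -> float:
    # Of everything flagged as malware, what fraction really was malware?
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Of all actual malware, what fraction did the system catch?
    return tp / (tp + fn)

def false_positive_rate(fp: int, tn: int) -> float:
    # Of all benign files, what fraction was wrongly flagged as malware?
    return fp / (fp + tn)

# Illustrative example: 100 files flagged as malicious, 98 of them
# correctly (tp=98, fp=2) -> precision 0.98. If 17 real malware
# samples slipped through (fn=17) against 83 caught, recall is 0.83.
print(precision(tp=98, fp=2))            # 0.98
print(recall(tp=83, fn=17))              # 0.83
print(false_positive_rate(fp=4, tn=96))  # 0.04
```

This also illustrates why the two benchmarks are not directly comparable: the 4,000 "hard target" files were, by definition, samples other automated systems could not classify, so lower recall there is expected.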