CrowdStrike Holdings and Microsoft have announced a strategic collaboration to address confusion in identifying and tracking cyberthreat actors across security platforms. The partnership aims to create a shared mapping system that aligns adversary attribution across both companies’ threat intelligence ecosystems, eliminating ambiguity caused by inconsistent naming. The “Rosetta Stone” for cyber threat intelligence links adversary identifiers across vendor ecosystems without mandating a single naming standard. This enables defenders to make faster, more confident decisions, correlate threat intelligence across sources, and better disrupt threat actor activity before it causes harm. The collaboration will start with a shared analyst-led effort to harmonize adversary naming between CrowdStrike and Microsoft’s threat research teams. Microsoft and CrowdStrike aim to continue working together to expand this effort and maintain a shared threat actor mapping resource for the global cybersecurity community.
New malware campaign exploits Open WebUI plugin system used for making enhancements to large LLMs, to deploy AI-generated payloads targeting both Linux and Windows systems
A new report from cloud-native application security firm Sysdig Inc. details one of the first instances of a LLM being weaponized in an active malware campaign. Discovered by Sysdig’s Threat Research Team, the malware campaign involved exploiting misconfigured instances of Open WebUI, a widely used self-hosted artificial intelligence interface, to deploy malicious, AI-generated payloads targeting both Linux and Windows systems. The attack began when a training system using Open WebUI deployed by one of Sysdig’s customers was mistakenly exposed to the internet with administrative privileges and no authentication. The exposure to the internet allowed anyone to execute commands on the system, dangerous mistake attackers are well aware of and actively scanning for. Open WebUI, which has more than 95,000 stars on GitHub, allows extensible enhancements for large LLMs via custom Python scripts. The attacker exploited the feature by uploading a malicious, obfuscated Python script through Open WebUI’s plugin system. The system’s internet exposure and lack of safeguards provided an easy entry point for the attacker to execute commands and deploy further malicious payloads. The uploaded Python script was obfuscated using PyObfuscator and also contained a distinctive style indicative of AI-generated code. The script, which underwent multiple decoding layers, downloaded and executed crypto miners targeting Monero and Ravencoin networks, while establishing persistence via a systemd service masquerading as “ptorch_updater.” Notably, the use of inline format string variables, a common feature in AI-generated code, was prevalent throughout the malicious script. Sysdig’s researchers confirmed that parts of the code were likely AI-generated or heavily AI-assisted, a trend that could signify a shift towards the rapid development of malware using generative AI tools. The good news, as much as there can be in malware cases, Sysdig’s runtime threat detection was able to identify the threat in real time. Using a combination of YARA rules, behavioral detections and threat intelligence, Sysdig detected the suspicious activity, including unauthorized code compilation, domain lookups, and the use of known miner communication protocols.
BlueVoyant’s Software Bill of Materials (SBOM) solution automates the ingestion, analysis, and tracking of open-source software components from both first and third-party vendors to help reduce supply chain risks
BlueVoyant has introduced a Software Bill of Materials (SBOM) management offering to help organizations manage and reduce third-party software risks. The new feature automates the ingestion, analysis, and tracking of software components from third-party vendors, enhancing BlueVoyant’s Supply Chain Defense. The collaboration with cybersecurity company Manifest aims to provide security teams with insights into software risk exposure and dependencies that may impact business operations. Key benefits include automated vendor risk management, allowing organizations to request SBOMs from vendors, view risk levels for products, and integrate this data into wider risk management activities. The solution assembles an enterprise-wide inventory of open source software components across both first and third-party products, allowing scanning of OSS repositories to assess risk before implementation. BlueVoyant’s Supply Chain Defense solution has been recognized with industry awards, including winning the Cybersecurity Excellence Awards for Supply Chain and being featured in the 2025 Gartner Market Guide for Third-Party Risk Management Technology Solutions.
Zero Networks’s agentless microsegmentation solution creates secure micro-perimeters around every device on a network, including OT and IoT devices, and enforces least-privilege access controls by dynamically learning network behavior
Israeli cybersecurity startup Zero Networks Ltd. raised $55 million in new funding to expand its team and scale up go-to-market efforts worldwide. Zero Networks offers automated, agentless microsegmentation solutions that prevent lateral movement and block ransomware attacks. The company’s platform simplifies the implementation of zero-trust security by dynamically learning network behavior and enforcing least-privilege access controls without the need for manual configurations or additional agents. The company offers a Saas solution that learns network traffic and creates security policies that restrict user and machine access to only necessary assets. The platform also deploys multifactor authentication for access to sensitive protocols to stop hackers from moving laterally through an organization. Microsegmentation is core to the company’s offering. The platform creates secure micro-perimeters around every device on a network, including operational technology and internet of things devices. The platform ensures that only necessary network permissions are maintained by monitoring network traffic and applying deterministic rule creation, thereby reducing the risk of unauthorized access. Zero Networks also offers identity segmentation, which restricts admin and service account access strictly to operational needs. The approach prevents privileged account abuse by enforcing multifactor authentication and just-in-time access, to enhance an organization’s security posture against credential-based attacks.
Mind Security’s tech can autonomously detect sensitive data at “machine speed” by combining data security posture and data loss prevention in one unified platform and employing a multilayer classification system
Mind Security Inc., a provider of AI automated data loss prevention solutions to help businesses avoid costly breaches, has raised $30 million in early-stage funding led by Paladin Capital Group and Crosspoint Capital Partners. Mind provides AI-native data loss prevention solutions and insider risk management programs that will autonomously detect sensitive data, assist with risk issues and stop them before they get out the door. The company says it can provide this solution at “machine speed” by providing a “unique approach by combining a data security posture and data loss prevention in one unified platform.” Mind said its platform employs a multilayer classification system to identify sensitive data and minimize false positives. This is something that traditional systems make difficult for teams due to excessive alerts and manual overviews, which can overburden data loss prevention teams. According to Mind, prevention starts with real-time detection and blocking of attempts, either malicious or inadvertent, to remove sensitive data from within company firewalls. When such data is on the move, the company says, its AI-native platform leaps into action and makes an autonomous decision based on business context to determine normal user behaviors versus high-risk behaviors that should be stopped.
Google warns of social engineering scheme targeting Salesforce users that steals data on a large scale and then try to extort the targeted company
Google Threat Intelligence Group warned that an organization specializing in voice phishing (vishing) is targeting Salesforce users. The attackers, dubbed UNC6040, have repeatedly been successful in recent months in breaching networks through social engineering schemes. UNC6040’s operators contact companies by telephone, impersonate IT support personnel, and trick employees into granting the attackers access or sharing credentials that can be used to steal the organization’s Salesforce data. In all observed cases, attackers relied on manipulating end users, not exploiting any vulnerability inherent to Salesforce. Once they have compromised the Salesforce instance, the attackers steal data on a large scale and then try to extort the targeted company. In some instances, extortion activities haven’t been observed until several months after the initial UNC6040 intrusion activity, which could suggest that UNC6040 has partnered with a second threat actor that monetizes access to the stolen data. GTIG suggested in its blog post that companies defend against social engineering threats by adhering to the principle of least privilege, managing access to connected applications rigorously, enforcing IP-based access restrictions, leveraging advanced security monitoring and policy enforcement with Salesforce Shield, and enforcing multifactor authentication universally.
Mind prevents data leaks using AI, combining a data security posture and data loss prevention in one unified platform.” Mind said its platform employs a multilayer classification system to identify sensitive data I
Mind Security Inc., a provider of AI automated data loss prevention solutions to help businesses avoid costly breaches, has raised $30 million in early-stage funding led by Paladin Capital Group and Crosspoint Capital Partners. Mind provides AI-native data loss prevention solutions and insider risk management programs that will autonomously detect sensitive data, assist with risk issues and stop them before they get out the door. The company says it can provide this solution at “machine speed” by providing a “unique approach by combining a data security posture and data loss prevention in one unified platform.” Mind said its platform employs a multilayer classification system to identify sensitive data and minimize false positives. This is something that traditional systems make difficult for teams due to excessive alerts and manual overviews, which can overburden data loss prevention teams. According to Mind, prevention starts with real-time detection and blocking of attempts, either malicious or inadvertent, to remove sensitive data from within company firewalls. When such data is on the move, the company says, its AI-native platform leaps into action and makes an autonomous decision based on business context to determine normal user behaviors versus high-risk behaviors that should be stopped.
American iPhones maybe targeted in spyware attacks with a pattern of deleting potential evidence, mirroring techniques where attackers ‘clean up’ after themselves
A new report from the team at iVerify warns that a “previously unknown” vulnerability in iOS maybe enabled a highly targeted attack on iPhones in the U.S. as well as Europe. This flaw was not in the core messaging architecture itself, but in its nickname feature. “Any increase in the size of a codebase is going to introduce attack opportunities,” iVerify told. And that’s the case here. When a user updates their profile, “nickname, photo, or wallpaper,” this triggers “a ‘Nickname Update’ on a recipient’s device.” Trivial though it might seem, that nickname update process is a data transmission from one device to another, it’s implicitly trusted data and it’s within the secure enclave. “This vulnerability was present in iOS versions up to 18.1.1 and fixed in iOS 18.3.1.” While there’s no doubting the flaw and the fix, there is no concrete proof it was exploited in the wild. “We analyzed crash data from nearly 50,000 devices,” iVerify says, “and found that the imagent crashes related to Nickname Updates are exceedingly rare, comprising less than 0.001% of all crash logs collected.” But those rare instances appeared only on “devices belonging to individuals likely to be targeted by sophisticated threat actors.” iVerify reports that forensic examination of one affected device “provided evidence suggesting exploitation: several directories related to SMS attachments and message metadata were modified and then emptied just 20 seconds after the imagent crash occurred. This pattern of deleting potential evidence mirrors techniques observed in confirmed spyware attacks where attackers ‘clean up’ after themselves.”
Amazon’s AI security agents can simulate both attackers and defenders, identify the attacks, extract the signatures from the attacks and build news in minutes for proactive threat detection
Amazon is leveraging AI not just to automate tasks, but to actively defend systems, with AI agents harnessed to simulate both attackers and defenders. Defensive agents train protocols for proactive threat detection, generate digital signatures and respond in minutes — far faster than traditional methods, according to Steve Schmidt, senior vice president and chief security officer at Amazon. Amazon.com Inc. is redefining security at the intersection of AI, physical safety and public-private collaboration. Its evolving strategy blends digital resilience with real-world safeguards to counter the speed and complexity of today’s threat landscape. “We also build tools that are defenders — their job is to identify the attacks and to extract from the attacks the signatures, which our systems can use to prevent access in the future,” he said. “What we measure now is, instead of situations where it used to take days, weeks or months to build new signatures for attacks, these agents can do it in minutes — and it’s really transformative for our ability to defend systems.” While AI unlocks accuracy and expediency, a human in the loop is still critical for validating actions before deployment. Eventually, some agents could become fully autonomous in low-risk environments, but high-stakes systems will continue to require human oversight, Schmidt added.
Princeton University study says agents may be vulnerable to memory attacks that trick them into handing over cryptocurrency
A new paper from researchers at Princeton University and the Sentient Foundation found that certain agents—AI systems that can act beyond the realm of a chatbot—could be vulnerable to memory attacks that trick them into handing over cryptocurrency. Targeting agents created with the platform ElizaOS, the researchers were able to implant false memories or “malicious instructions” that manipulated shared context in a way that could lead to “unintended asset transfers and protocol violations which could be financially devastating.” They wrote that the vulnerabilities point to an “urgent need to develop AI agents that are both secure and fiduciarily responsible.” Tyagi said the paper focused on ElizaOS because it’s “the most popular open-source agentic framework in crypto,” and on cryptocurrency because its traders have most readily embraced these types of autonomous agentic payments. While these agents do protect against basic prompt injection attacks—inputs designed to exploit the LLM—more sophisticated actors might be able to manipulate the stored memory or contexts in which these agents operate. The researchers designed a benchmark to evaluate the defenses of blockchain-based agents against these types of attacks. They also argued that the vulnerabilities extend beyond just cryptocurrency-based or even financial agents: “The application of AI agents has led to significant breakthroughs in diverse domains such as robotics, autonomous web agents, computer use agents, and personalized digital assistance. We posit that [memory injection] represents an insidious threat vector in such general agentic frameworks.”