Researchers have published the most comprehensive survey to date of so-called “OS Agents” — artificial intelligence systems that can autonomously control computers, mobile phones and web browsers by directly interacting with their interfaces. “OS Agents can complete tasks autonomously and have the potential to significantly enhance the lives of billions of users worldwide,” the researchers note. The attack methods they document read like a cybersecurity nightmare. “Web Indirect Prompt Injection” allows malicious actors to embed hidden instructions in web pages that can hijack an AI agent’s behavior. Even more concerning are “environmental injection attacks” where seemingly innocuous web content can trick agents into stealing user data or performing unauthorized actions. Consider the implications: an AI agent with access to your corporate email, financial systems, and customer databases could be manipulated by a carefully crafted web page to exfiltrate sensitive information. Traditional security models, built around human users who can spot obvious phishing attempts, break down when the “user” is an AI system that processes information differently. The survey reveals a concerning gap in preparedness. While general security frameworks exist for AI agents, “studies on defenses specific to OS Agents remain limited.” This isn’t just an academic concern — it’s an immediate challenge for any organization considering deployment of these systems. Some commercial systems achieve success rates above 50% on certain benchmarks — impressive for a nascent technology — but struggle with others. The researchers categorize evaluation tasks into three types: basic “GUI grounding” (understanding interface elements), “information retrieval” (finding and extracting data), and complex “agentic tasks” (multi-step autonomous operations). The pattern is telling: current systems excel at simple, well-defined tasks but falter when faced with the kind of complex, context-dependent workflows that define much of modern knowledge work. They can reliably click a specific button or fill out a standard form, but struggle with tasks that require sustained reasoning or adaptation to unexpected interface changes. This performance gap explains why early deployments focus on narrow, high-volume tasks rather than general-purpose automation.
Synack’s premier Penetration Testing as a Service (PTaaS) platform to deliver proactive, risk-based security validation featuring a human-in-the-loop approach, by fusing autonomous AI capabilities and agentic AI architecture
Synack unveiled its agentic AI architecture, Sara (Synack Autonomous Red Agent). Sara enhances Synack’s premier Penetration Testing as a Service (PTaaS) platform to deliver proactive, risk-based security validation featuring a human-in-the-loop approach. By fusing autonomous AI capabilities with the expert human analysis of the Synack Red Team, organizations can autonomously reduce risk across their attack surface. This next-generation platform embodies an AI-versus-AI model, where AI-powered validation—supervised and guided by human judgment—counters machine-driven reconnaissance and attacks. The result is a powerful, adaptive solution that mirrors real-world adversary behavior while minimizing risk and false positives. The Sara agentic AI architecture delivers scalable, adaptable assessment of attack surface risk. Sara Triage, a core component of Synack’s new Active Offense product, is available immediately to provide autonomous triage of discovered vulnerabilities, validating those that are truly exploitable. Sara Pentest will follow later this year to conduct full-scope, objective-based penetration tests in concert with the Synack Red Team. Sara’s human-in-the-loop architecture ensures discovery of logic flaws, chained exploits and nuanced vulnerabilities, bridging the gap between automated detection and human intuition. The model’s other benefits in the Synack platform include: Integrated Management of Human and Agent Testing: Human researchers and agents collaborate to reduce attack risk in one centralized interface; Scalable Human-in-the-Loop Analysis: 1,500+ security researchers are available on-demand for human analysis of AI-discovered findings; Agent Thinking Visibility: Easily review agentic AI decisions, including detailed ‘proof of exploitability’ information; Rapid Attack Surface Coverage: Flexibly deploy agent and human testing across the managed attack surface; Reporting and Analytics: Access real-time and historic analysis of agentic and human-led testing results to understand vulnerability root cause and drive corrective action.
SonicWall launches Generation 8 firewalls with unified cloud management, embedded zero trust access, real-time co-managed security, and industry-first cyber warranty for MSP scalability
Cybersecurity firm SonicWall launched new firewalls as part of its Generation 8 portfolio, positioning the company as a go-to platform for managed service providers and managed security service providers. SonicWall’s platform is anchored by its Unified Management system, a single cloud console that streamlines the management of firewalls, network policies, access controls and accounts to reduce operational complexity. Every firewall in the portfolio includes built-in Zero Trust Network Access licenses, enabling secure remote access that is easy to deploy and well-suited for modern cloud environments. To further support partners, SonicWall offers SonicSentry Co-Managed Security, providing optional 24/7 monitoring, patching and monthly reporting from its team of experts. Each managed firewall also comes with an industry-first embedded cyber warranty, delivering $200,000 in coverage through the Managed Protection Security Suite for added peace of mind. SonicWall’s Generation 8 release features eight new firewall models, ranging from the ultra-compact TZ280 to the high-performance NSa 5800. Each is engineered, the company claims, to deliver best-in-class security, performance, and scalability for small offices, distributed environments and midsized enterprises. Each model includes cloud-native management that has been built for service providers through SonicWall Unified Management, built-in zero trust capabilities, the latest SonicOS enhancements and is protected by SonicWall’s embedded cyber warranty. The entire Generation 8 lineup can also be purchased with MPSS, enabling co-managed security services delivered by the SonicSentry team of security professionals. SonicWall’s platform is built to address real-world MSP challenges, supporting everything from cloud-first organizations and remote workforces to distributed enterprises. The platform delivers small to medium-sized businesses and midmarket security firms with embedded zero trust, allows for centralized oversight in multitenant environments, and offers compliance-friendly co-management with built-in monthly health reports.
New wave of injection attacks exploits input manipulation, bypassing traditional checks; as a result liveness detection is no longer a value-added feature — it is now an essential component of security
Jumio warns about the rise of injection attacks as one of the most sophisticated and difficult-to-detect threats in identity verification processes. Unlike conventional identity spoofing methods —injection attacks bypass traditional fraud detection methods by manipulating the input channel itself. Instead of presenting an image or video in front of the camera, attackers alter the system at its source, compromising the integrity of the digital process. Their effectiveness can result in financial fraud, creation of fake identities, evasion of regulatory controls, and loss of user trust and strategic partner confidence. Given this scenario, liveness detection is no longer a value-added feature — it is now an essential component of security. To combat injection attacks, systems must be able to distinguish between a real person in front of a camera and a manipulated video source. Effective identity verification technologies against injection attacks should: Differentiate between a legitimate source and an emulated one, identifying whether the video comes from a real camera or software emulator. Accurately match the presented face with the ID document, ensuring biometric consistency. Detect invisible clues such as synthetic artifacts, repetitions, or inconsistencies in lighting, textures, and depth. Recognize suspicious patterns, like reused backgrounds in multiple attempts or pre-recorded videos presented as live input.
New Federal Reserve toolkits provide foundational knowledge and practical resources on scam and check fraud tactics, empowering payments professionals to recognize, prevent, and collaborate on defense
Federal Reserve has released two new toolkits: the Scams Mitigation Toolkit (Off-site) and Check Fraud Mitigation Toolkit (Off-site). The toolkits are intended to support education and increase awareness about scams and check fraud, enable the payments industry to better identify and fight them, and foster industry collaboration on fraud and scams mitigation. The initial releases of the Scams Mitigation Toolkit and Check Fraud Mitigation Toolkit focus on building foundational knowledge about different types of scams and check fraud; the tactics and human vulnerabilities that often enable these to succeed; and common scenarios that financial institutions, service providers, other businesses and individuals may encounter. In the fourth quarter of 2025, second releases of these two toolkits will offer additional insights and resources. These toolkits were developed by the Federal Reserve to help educate the industry about scams and check fraud. Insights for these toolkits were provided through interviews with industry experts, publicly available research and tea m member expertise. The toolkits are not intended to result in any regulatory or reporting requirements, imply any liabilities for fraud loss, or confer any legal status, legal definitions, or legal rights or responsibilities. While use of these toolkits throughout the industry is encouraged, their utilization is voluntary at the discretion of each individual entity. Absent written consent, the toolkits may not be used in a manner that suggests the Federal Reserve endorses a third-party product or service.
Quantum-Safe 360 Alliance publishes white paper, guiding enterprises through PQC migration with best practices, crypto-agile strategies, and expertise from Keyfactor, IBM, Thales, and Quantinuum
The Quantum-Safe 360 Alliance, including members Keyfactor, IBM Consulting, Thales, and Quantinuum, unveiled its first comprehensive guide to help organizations navigate the global transition to post-quantum cryptography (PQC). The white paper marks the formal debut of the Quantum-Safe 360 Alliance, an evolving collective of industry leaders with unparalleled expertise spanning cryptographic design and deployment, public key infrastructure (PKI) and certificate lifecycle management, crypto-agile development practices, and quantum-safe cryptography. Collaborating to help enterprises tackle the challenges of PQC transitions, the Alliance’s white paper signals a coordinated, public effort to provide clear guidance and accelerate preparedness for the quantum era. Drawing upon each Alliance member’s deep proficiency and diverse capabilities, the white paper highlights the urgency of quantum-safe preparedness and the risks of inaction and provides actionable guidance on building stronger crypto-agility and starting PQC transitions. Formed to promote a unified, cross-industry approach, the Alliance aims to provide coordinated expertise and interoperable solutions to help enterprises safeguard data in the quantum era. By pooling resources and knowledge, the Alliance aims to help enterprises navigate the quantum era, including supplying organizations with cybersecurity best practices and interoperable solutions designed to work cohesively across platforms and industries. Key topics the white paper addresses include: The necessity of cryptographic agility to adapt to evolving threats; The challenges enterprises face in securing internal buy-in for PQC and strategies to overcome them; Case studies highlighting the value of holistic post-quantum preparation guided by the expertise and skills of Alliance members; A strategic roadmap for enterprises to adopt cryptographic agility; and, Best practices and tools for implementing a quantum-safe infrastructure, including PKI management, key lifecycle strategies, and quantum-generated randomness for enhanced security.
New Gmail phishing wave exploits fake security warnings; Google urges users to check account activity directly, never via email links, to prevent hijacking
Google has confirmed that Gmail attacks are surging, as hackers steal passwords to gain access to accounts. This also means a surge in “suspicious sign in prevented” emails, Google’s warning that “it recently blocked an attempt to access your account.” Attackers know this — that Gmail user concerns are heightened by security warnings, and they use this to frame their attacks. “Sometimes hackers try to copy the ‘suspicious sign in prevented’ email,” Google warns, “to steal other people’s account information,” which then gives those hackers access to user accounts. If you receive this Google email warning, do not click on any link or button within the email itself. Instead, “go to your Google Account, on the left navigation panel, click security, and on the recent security events panel, click to review security events.” If any of the events raise concerns — times or locations or devices you do not recognize — then “on the top of the page click secure your account” to change your password. If you do click a link from within this email or any other email purporting to come from Google, you will be taken to a sign-in page that will be a malicious fake. If you enter your user name and password into that page, you risk them being stolen by hackers to hijack your account. And that will give them access to everything.
Fraudsters in New Zealand deploy AI-generated deepfake videos on Facebook, mimicking prominent finance experts to entice victims into WhatsApp groups, malware installs, and ultimately steal their funds
New Zealand’s financial watchdog has issued warnings about sophisticated scammers using deepfake technology to impersonate well-known local finance experts and commentators on Facebook. The Financial Markets Authority (FMA) identified fake Facebook pages targeting investors by mimicking prominent figures from the local media. These fraudulent accounts use artificially generated videos of the personalities to promote what appear to be legitimate WhatsApp investment advisory groups. The scam begins when victims encounter Facebook or Instagram advertisements featuring deepfake videos of these respected financial voices. The AI-generated content shows the impersonated figures discussing their supposed investment successes and encouraging viewers to join exclusive WhatsApp groups for free trading advice. Once victims join these WhatsApp groups, they encounter what appears to be a thriving community of successful investors. However, most group members are fake accounts controlled by the scammers, all praising their supposed mentor’s investment guidance and sharing fabricated success stories. The scammers frequently ask victims to install software on their devices, which turns out to be malware or remote access tools. This gives fraudsters access to sensitive personal and financial information stored on victims’ computers and phones. When investors attempt to withdraw their funds, they hit the final trap. Scammers claim victims must pay additional fees before accessing their money. Even after paying these bogus charges, victims never receive their investments back.
IVIX plans to enhance its government-used platform leveraging LLM scrapers, graph-based link analysis, and multimodal AI to uncover connected illicit financial activities with high accuracy
IVIX Tech Inc., a startup that helps government agencies detect money laundering and other financial crimes, has raised $60 million in funding. IVIX provides a software platform that scans the web for signs of illicit financial activity. The platform collects that data using scrapers powered by large language models. IVIX’s scrapers aggregate information from blockchains, short-term rental platforms, e-commerce marketplaces and other sources. The platform automatically organizes the raw data it collects to ease analysis. As part of the process, IVIX removes duplicate records and turns the remaining information into a form that lends itself better to processing. Text-based addresses associated with illicit financial activity are translated into coordinates, while data embedded in images is extracted using multimodal neural networks. After organizing the data points it ingests, the platform groups together items that are tied to the same entity. This allows users to identify transactions that are seemingly unrelated but were in fact carried out by the same bad actor. IVIX links together records by organizing them in graphs, a data structure that highlights connections between different pieces of information. The graphs make it easier to spot unusual fund transfers. IVIX looks for such activity by analyzing the frequency, size and timing of transactions. It also evaluates other factors, such as whether transactions made on different platforms appear to follow the same patterns. After identifying suspicious financial activity, IVIX can find the entity behind it. The company claims that its platform performs the task with 99% accuracy. That removes the need for law enforcement officials to perform the task manually, which speeds up financial investigations.
IBM reveals 97% of AI-related breaches stem from improper access controls, with shadow AI usage adding $670K to breach costs amid rising governance gaps
Recent IBM research finds an “AI oversight gap” among organizations that had experienced data breaches. “Consider this: a staggering 97% of breached organizations that experienced an AI-related security incident say they lacked proper AI access controls,” the company said in promoting findings from its Cost of a Data Breach Report. In addition, 63% of the surveyed organizations said they had no AI governance policies in place to manage AI or keep workers from using “shadow AI,” IBM said. “This AI oversight gap is carrying heavy financial and operational costs,” the company’s announcement added. “The report shows that having a high level of shadow AI—where workers download or use unapproved internet-based AI tools—added an extra $670,000 to the global average breach cost.” In addition, AI-related breaches carried a ripple effect, leading to “broad data compromise and operational disruption,” which can keep organizations from processing sales orders, delivering customer service and managing supply chains. The report also contains some positive news: average global data breach costs have declined for the first time in five years, from $4.88 million to $4.44 million, a 9% decrease. Faster breach containment driven by AI-powered defenses,” the company said, with organizations able to identify and contain a breach within a mean time of 241 days, the lowest that figure has been in nine years.