A new study by Juniper Research, the foremost experts in fintech and payment markets, forecasts a 153% surge in fraud by 2030; rising from $23 billion in 2025 to $58.3 billion in 2030. This is driven by an evolution of fraud techniques such as synthetic identity fraud, which is when identities are created using a mix of real, stolen, and fake information to create new personas to open accounts and apply for credit. This fraud innovation is driving a surge in investment in new fraud prevention techniques. Synthetic identity threats are becoming more sophisticated; leveraging AI to quickly create convincing new identities based on existing, stolen information. This allows the identities to stay under the radar for longer, and steal more money from banks. As these identities are partly based on genuine information, they can pass traditional, static fraud checks; forcing financial institutions to upgrade their fraud detection and prevention techniques. To combat this, banks must verify identity throughout the customer lifecycle. Biometric behavioural analysis, such as typing rhythms or touch patterns, plays a crucial role; identifying anomalies in real-time. According to Lorien Carter, Senior Research Analyst at Juniper Research, “The rise in fraudulent transactions has effects reaching beyond fraud loss. The recent spate of banks being fined for failing to correctly identify high-risk transactions, such as Monzo, Barclays, and TD Bank, displays that regulators are taking this issue extremely seriously. Financial institutions must increase investment in their fraud detection teams and technology to avoid further monetary and reputational losses.“
Microsoft fortifies Teams security with automated blocking of dangerous executables and real-time malicious URL detection, integrating Defender Allow/Block controls to combat social engineering attacks. Microsoft has announced significant security enhancements for Teams, introducing robust protections against malicious file types and dangerous URLs. In a dual-pronged update, Microsoft will automatically block potentially dangerous executables and warn users about malicious URLs in chats and channels. The security update marks a shift in how Microsoft Teams handles potential threats, implementing automated detection and blocking mechanisms at the platform level. Microsoft’s roadmap entries 499892 and 499893 detail that Teams will now scan both file attachments and embedded URLs for malicious content before they reach users. Files containing executables can, once clicked, instruct a computer or platform to run a certain program, potentially downloading malware or Trojans. URLs can lead users to sites that deliver malware to their computers. This proactive approach minimizes the human factor that has made Teams users vulnerable to social engineering attacks, where legitimate-looking attachments or URLs contain malicious payloads designed to compromise corporate networks. Additionally, Microsoft announced in the Microsoft 365 Message Center that Teams now integrates with the Microsoft Defender for Office 365 Tenant Allow/Block List. This enables security administrators to block incoming communications (chats, channels, meetings, and calls) from blocked domains, automatically delete existing communications from users in blocked domains, and manage blocked external domains in Microsoft Teams via the Microsoft Defender portal. Such control eliminates the ability for malicious files or URLs to remain within a system long after they are identified.
Microsoft has announced significant security enhancements for Teams, introducing robust protections against malicious file types and dangerous URLs. In a dual-pronged update, Microsoft will automatically block potentially dangerous executables and warn users about malicious URLs in chats and channels. The security update marks a shift in how Microsoft Teams handles potential threats, implementing automated detection and blocking mechanisms at the platform level. Microsoft’s roadmap entries 499892 and 499893 detail that Teams will now scan both file attachments and embedded URLs for malicious content before they reach users. Files containing executables can, once clicked, instruct a computer or platform to run a certain program, potentially downloading malware or Trojans. URLs can lead users to sites that deliver malware to their computers. This proactive approach minimizes the human factor that has made Teams users vulnerable to social engineering attacks, where legitimate-looking attachments or URLs contain malicious payloads designed to compromise corporate networks. Additionally, Microsoft announced in the Microsoft 365 Message Center that Teams now integrates with the Microsoft Defender for Office 365 Tenant Allow/Block List. This enables security administrators to block incoming communications (chats, channels, meetings, and calls) from blocked domains, automatically delete existing communications from users in blocked domains, and manage blocked external domains in Microsoft Teams via the Microsoft Defender portal. Such control eliminates the ability for malicious files or URLs to remain within a system long after they are identified.
Mastercard-Alloy onboarding platform combines 200+ risk tools with digital identity verification and open finance to reduce fraud, accelerate onboarding, and enhance funding security across channels.
Alloy, an identity and fraud prevention platform provider, has announced a new global partnership with Mastercard to launch an enhanced customer onboarding solution for financial institutions and fintechs. The new Mastercard Alloy joint onboarding solution will leverage identity verification and open finance to streamline the end-to-end onboarding process while combating fraud. The Mastercard Alloy joint onboarding solution provides a consistent identity risk strategy and onboarding experience across channels. Alloy intends to leverage Mastercard’s best-in-class global digital identity verification capabilities and suite of open finance-powered account opening solutions to help financial institutions and fintechs manage fraud and identity risk and secure account funding throughout the customer lifecycle. Mastercard products will be integrated and pre-configured in Alloy for seamless deployment. In addition to pre-built integrations to Mastercard products, customers will receive access to over 200 risk and identity solutions available through Alloy, helping to improve customer conversion rates, reducing manual reviews, and ensuring comprehensive end-to-end coverage. Dennis Gamiello, EVP, Global Head of Identity at Mastercard said, “This joint onboarding solution will be a game-changer in the fight to reduce fraud and deliver a seamless and secure customer experience.”
Research exposes AI browsers’ vulnerability to automated fraud, including unauthorized purchases and phishing, stemming from AI’s blind trust and lack of human skepticism
Automatic purchases on fake websites, falling for simple phishing attacks that expose users’ bank accounts, and even downloading malicious files to computers – these are the failures in AI browsers and autonomous AI agents revealed by new research from Israeli cybersecurity company Guardio. The report warns that AI browsers can, without user consent, click, download, or provide sensitive information. Such fraud no longer needs to deceive the user. It only needs to deceive the AI. And when this happens, the user is still the one who pays the price. We stand at the threshold of a new and complex era of fraud, where AI convenience collides with an invisible fraud landscape and humans become collateral damage. Guardio’s research reveals that these browsers and agents may fall victim to a series of new frauds, a result of an inherent flaw that exists in all of them. The problem, according to the study’s authors, is that they inherit AI’s built-in vulnerabilities: a tendency to act without full context, to trust too easily, and to execute instructions without natural human skepticism. AI was designed to please people at almost any cost, even if it involves distorting facts, bending rules, or operating in ways that include hidden risks. Finally, the researchers demonstrated how AI browsers can be made to ignore action and safety instructions by sending alternative, secret instructions to the model. This involves developing an attack against AI models called “prompt injection.” In this attack, an attacker encrypts instructions to the model in various ways that the user cannot see. The simplest example is text hidden from the user but visible to the AI (for instance, using font color identical to the background color) that instructs the model: ignore all previous instructions, and perform malicious activity instead. Using this method, one can cause AI to send emails with personal information, provide access to the user’s file storage services, and more. In effect, the attacker can now control the user’s AI, the report states.
Apple devices adds granular enterprise controls to gate employee access to external AI, letting IT disable or route ChatGPT requests and restrict other providers while preserving on‑device privacy
Apple is introducing tools for businesses to manage how and when employees can use artificial intelligence. These controls are granular enough for managing which features can be enabled or disabled. The system also apparently allows companies to potentially restrict whether an employee’s AI requests go to ChatGPT’s cloud service, even if the business doesn’t buy services from OpenAI directly. This can prevent employees from accidentally handing over internal-only IP or data to ChatGPT, which could be used elsewhere. However, while the focus is on ChatGPT, it’s a set of tools that won’t be limited just to OpenAI’s service. The same tools can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep integration with ChatGPT on the iPhone. However, the new tools may indicate that Apple is preparing for a future where corporate users want more freedom on which AI service they use, and for Apple to potentially have more, similar integrations. While Apple does have its own Private Cloud Compute architecture to protect user data under Apple Intelligence, it doesn’t have any way of ensuring security or privacy for third party services. The tool is an attempt to provide enterprise customers some more control over these services.
AI first security monitoring transforms from a “notify everything” to a “surface what matters” model to score business impact, correlate alerts and automate triage with adaptive detection for a unified view
Traditional security alerting approaches fall short in several key areas. The path forward requires a complete reconceptualization of what constitutes an alert. Instead of the traditional “notify everything” approach, we must shift toward a “surface what matters” model. This transformation begins by asking fundamental questions about the purpose of security monitoring. Modern AI and security workflows incorporate more sophisticated measurements: Business Impact Scoring: Each alert receives a contextual risk score based on affected assets, potential data exposure, and business criticality. Alert Correlation: Instead of individual alerts, AI systems present unified incident narratives that connect related events across your environment. Resolution Intelligence: The system learns from past incidents to predict resolution paths and automate early remediation steps. Analyst Efficiency: Success metrics now include reduced cognitive load and improved analyst satisfaction, in addition to alert volume. Simply adding AI to existing systems is not sufficient for an intelligent alerting architecture. What you need is a full-on redesign that includes: Unified Data Foundation: Need an integrated platform that brings all the security telemetry for analysis rather than disparate tools with fragmented visibility between silos. Adaptive Detection Engines: Automatically tune detection thresholds based on environmental changes and history, resulting in a significant reduction in false positives. Automated Triage Workflows: The first step in an AI-powered system, where the bulk of routine alert assessment is automated so that your analysts can focus their time on high-value investigation and other response activities. Contextual enrichment: Each alert is supplemented with the right user, asset, and threat intelligence data for faster understanding and decision-making.
AI first security monitoring transforms from a “notify everything” to a “surface what matters” model to score business impact, correlate alerts and automate triage with adaptive detection for a unified view
Traditional security alerting approaches fall short in several key areas. The path forward requires a complete reconceptualization of what constitutes an alert. Instead of the traditional “notify everything” approach, we must shift toward a “surface what matters” model. This transformation begins by asking fundamental questions about the purpose of security monitoring. Modern AI and security workflows incorporate more sophisticated measurements: Business Impact Scoring: Each alert receives a contextual risk score based on affected assets, potential data exposure, and business criticality. Alert Correlation: Instead of individual alerts, AI systems present unified incident narratives that connect related events across your environment. Resolution Intelligence: The system learns from past incidents to predict resolution paths and automate early remediation steps. Analyst Efficiency: Success metrics now include reduced cognitive load and improved analyst satisfaction, in addition to alert volume. Simply adding AI to existing systems is not sufficient for an intelligent alerting architecture. What you need is a full-on redesign that includes: Unified Data Foundation: Need an integrated platform that brings all the security telemetry for analysis rather than disparate tools with fragmented visibility between silos. Adaptive Detection Engines: Automatically tune detection thresholds based on environmental changes and history, resulting in a significant reduction in false positives. Automated Triage Workflows: The first step in an AI-powered system, where the bulk of routine alert assessment is automated so that your analysts can focus their time on high-value investigation and other response activities. Contextual enrichment: Each alert is supplemented with the right user, asset, and threat intelligence data for faster understanding and decision-making.
Cyber risk becomes a data‑resilience mandate: enterprises facing frequent attacks shift to integrity, availability, and recoverability metrics as AI automation triages alerts and accelerates restoration across hybrid stacks
Cybersecurity is the No. 1 risk facing enterprises today, and yet organizations remain dangerously unprepared. Data from a recent survey done by theCUBE Research quantifies this reality. Three-quarters of the respondents came from large enterprises with more than 1,000 employees, while the remaining quarter represented midmarket firms. Importantly, the survey deliberately balanced perspectives from both information technology and cybersecurity professionals, giving us a rare A/B comparison between operational and security-centric worldviews. Nearly two-thirds of respondents reported experiencing at least one cyberattack in the past 12 months that led to financial or operational harm. Alarmingly, nearly a third of enterprises were hit more than once in that same period. Operational disruption (38%) topped the list, with downtime and system outages emerging as the most common — and most costly — business impact. Financial loss (33%) was the next most cited outcome, reinforcing the direct revenue and margin implications of attacks. Data compromise was pervasive, including personal data loss (31%), data exposure (30%), and both recoverable (28%) and irrecoverable (24%) corruption or encryption. Governance and compliance failures were widespread, with data governance exposures (25%), public relations fallout (25%), legal consequences (23%), and other compliance failures (16%) all prominently cited. Taken together, the message is that organizations are not just facing a threat to confidentiality, but to the integrity and availability of their most critical resource — i.e. data.
Surge in coordinated scans targets Microsoft RDP auth servers suggesting setting up of future credential-based attacks, such as brute force or password-spray attacks
Internet intelligence firm GreyNoise reports that it has recorded a significant spike in scanning activity consisting of nearly 1,971 IP addresses probing Microsoft Remote Desktop Web Access and RDP Web Client authentication portals in unison, suggesting a coordinated reconnaissance campaign. The researchers say that this is a massive change in activity, with the company usually only seeing 3–5 IP addresses a day performing this type of scanning. GreyNoise says that the wave in scans is testing for timing flaws that could be used to verify usernames, setting up future credential-based attacks, such as brute force or password-spray attacks. GreyNoise also says that 1,851 shared the same client signature, and of those, approximately 92% were already flagged as malicious. The IP addresses predominantly originate from Brazil and targeted IP addresses in the United States, indicating it may be a single botnet or toolset conducting the scans. The researchers say that the timing of the attack coincides with the US back-to-school season, when schools and universities may be bringing their RDP systems back online. However, the surge in scans could also indicate that a new vulnerability may have been found, as GreyNoise has previously found that spikes in malicious traffic commonly precede the disclosure of new vulnerabilities. Windows admins managing RDP portals and exposed devices should make sure their accounts are properly secured with multi-factor authentication, and if possible, place them behind VPNs.
Cloudflare launches AI‑SPM to centrally visualize Shadow AI alongside new posture management, prompt‑level guardrails, and gateway policies to help teams block risky apps, limit uploads, and monitor usage
Cloudflare is introducing AI Security Posture Management (AI-SPM) into Cloudflare One, its Zero Trust platform to allow organizations to safeguard against a range of potential threats posed by the wide adoption of AI tools, enabling businesses to move faster with the confidence that AI is being used safely by all teams. Now, with the availability of all features, security teams will be able to: Discover how employees are using AI: With Cloudflare’s new Shadow AI Report, security teams can get instant insights from their traffic to gain a clear, data-driven picture of their organization’s AI usage. This granular view allows them to see not just that an employee is using an AI app, but which AI app, and what users are accessing it. Protect against Shadow AI: Cloudflare Gateway makes it easy to automatically enforce AI policies at the edge of Cloudflare’s network, ensuring consistent security for every employee, no matter where they work. Security teams can choose to fully block unapproved AI applications, limit the types of data uploaded into AI applications, and complete reviews of AI tools, to ensure they continue to meet security and privacy standards. Safeguard sensitive data without fully restricting AI usage: AI Prompt Protection allows security teams to identify potentially dangerous or risky employee interactions with AI models, and flag those prompts and responses. Policies can now be enforced inline at the prompt level to mitigate risk early on, and warn the employee about, or block them from, submitting sensitive data—like source code—being entered into an untrusted AI provider. This will give security teams the control they need to monitor company data that may be sent outside the organization, without fully restricting employees’ usage of AI tools. Gain visibility of AI model interactions with tools outside the business: Zero Trust MCP Server Control consolidates all MCP tool calls—a request from an AI model or application to a server to execute a specific task—into a single dashboard. This visibility ultimately allows all MCP traffic, regardless of origin, to be routed through Cloudflare for increased control and access management. Now, with centralized insights, security teams can set user-level policies at both the gateway and individual MCP server levels.