Alloy, an identity and fraud prevention platform provider, has announced a new global partnership with Mastercard to launch an enhanced customer onboarding solution for financial institutions and fintechs. The new Mastercard Alloy joint onboarding solution will leverage identity verification and open finance to streamline the end-to-end onboarding process while combating fraud. The Mastercard Alloy joint onboarding solution provides a consistent identity risk strategy and onboarding experience across channels. Alloy intends to leverage Mastercard’s best-in-class global digital identity verification capabilities and suite of open finance-powered account opening solutions to help financial institutions and fintechs manage fraud and identity risk and secure account funding throughout the customer lifecycle. Mastercard products will be integrated and pre-configured in Alloy for seamless deployment. In addition to pre-built integrations to Mastercard products, customers will receive access to over 200 risk and identity solutions available through Alloy, helping to improve customer conversion rates, reducing manual reviews, and ensuring comprehensive end-to-end coverage. Dennis Gamiello, EVP, Global Head of Identity at Mastercard said, “This joint onboarding solution will be a game-changer in the fight to reduce fraud and deliver a seamless and secure customer experience.”
Darktrace acquires Mira Security to boost encrypted traffic visibility- with policy control and compliance capabilities that allow administrators to decrypt traffic based on predefined rules
Machine learning cybersecurity firm Darktrace PLC has acquired network traffic visibility solutions company Mira Security Inc. for an undisclosed price. Mira Security specializes in encrypted traffic orchestration with solutions that allow organizations to detect, decrypt and analyze encrypted network traffic at scale. The company’s offerings are purpose-built to provide full traffic visibility without compromising privacy, performance, or compliance mandates. Mira Security’s main offering, its Encrypted Traffic Orchestration platform, includes support for both physical appliances and virtual deployments. ETO can intercept SSL/TLS and SSH traffic across any port, decrypting it for analysis and re-encrypting it before forwarding, without the need for complex re-architecting or performance degradation. Mira also offers granular policy control and compliance capabilities that allow administrators to decrypt traffic based on predefined rules while enforcing blocking of outdated or insecure encryption protocols and managing what data is visible to different tools to ensure sensitive information remains protected. The platform additionally supports full visibility into TLS 1.3 traffic, a major challenge for many existing cybersecurity tools due to the protocol’s stricter encryption practices. The combination of Darktrace and Mira Security is said by Darktrace to close the encrypted data blind spot without impacting network performance or requiring complex re-architecting. The closer integration of Mira Security’s in-line decryption capabilities with Darktrace’s existing analysis and understanding of encrypted traffic will also provide organizations with more in-depth visibility across on-premises, cloud and hybrid environments.
IBM taps Infosec’s platform to offer discovery, classification, and lifecycle management of cryptographic assets across hybrid and distributed environments, supporting creation of scalable quantum-safe architecture
IBM Consulting and InfoSec Global are partnering to deliver advanced cryptographic discovery and inventory solutions across all industries and geographies. The rapid advancement of quantum computing poses a growing threat to cryptographic security, as quantum computers can break traditional cryptography, exposing vulnerabilities across digital operations. Organizations worldwide are requiring cryptographic assets to be inventoried, assessed, and modernized. IBM Consulting’s global delivery network and quantum safe security expertise will be combined with InfoSec Global’s AgileSec platform to accelerate customers’ transition to post-quantum cryptography and enable a risk-driven transformation to enterprise-wide cryptographic agility and compliance. The AgileSec platform enables the discovery, classification, and lifecycle management of cryptographic assets across hybrid and distributed environments. The partnership will enable IBM Consulting and InfoSec Global to jointly develop, market, and deliver cryptographic posture management solutions, helping clients tackle their most complex quantum-safe challenges. Client benefits of the IBM Consulting and InfoSec Global partnering could include: Addressing the risk of cryptographic blind spots and supporting adherence to compliance frameworks from NIST, the Federal Financial Institutions Examination Council (FFIEC), and regulatory expectations; Accelerating modernization without costly re-platforming for crypto agility in-place; and Creating a future-ready and scalable quantum-safe architecture with measurable return on investment.
Research exposes AI browsers’ vulnerability to automated fraud, including unauthorized purchases and phishing, stemming from AI’s blind trust and lack of human skepticism
Automatic purchases on fake websites, falling for simple phishing attacks that expose users’ bank accounts, and even downloading malicious files to computers – these are the failures in AI browsers and autonomous AI agents revealed by new research from Israeli cybersecurity company Guardio. The report warns that AI browsers can, without user consent, click, download, or provide sensitive information. Such fraud no longer needs to deceive the user. It only needs to deceive the AI. And when this happens, the user is still the one who pays the price. We stand at the threshold of a new and complex era of fraud, where AI convenience collides with an invisible fraud landscape and humans become collateral damage. Guardio’s research reveals that these browsers and agents may fall victim to a series of new frauds, a result of an inherent flaw that exists in all of them. The problem, according to the study’s authors, is that they inherit AI’s built-in vulnerabilities: a tendency to act without full context, to trust too easily, and to execute instructions without natural human skepticism. AI was designed to please people at almost any cost, even if it involves distorting facts, bending rules, or operating in ways that include hidden risks. Finally, the researchers demonstrated how AI browsers can be made to ignore action and safety instructions by sending alternative, secret instructions to the model. This involves developing an attack against AI models called “prompt injection.” In this attack, an attacker encrypts instructions to the model in various ways that the user cannot see. The simplest example is text hidden from the user but visible to the AI (for instance, using font color identical to the background color) that instructs the model: ignore all previous instructions, and perform malicious activity instead. Using this method, one can cause AI to send emails with personal information, provide access to the user’s file storage services, and more. In effect, the attacker can now control the user’s AI, the report states.
Apple’s AI models are trained to refuse requests when necessary and to adapt their tone depending on where the user lives.
A recent machine learning update from Apple reveals how iOS 26 brings faster, safer AI that was trained without your texts, your photos, or your permission. Apple’s training pipeline starts with Applebot, the company’s web crawler. It collects data from sites that allow it, pulling in pages from across the internet in multiple languages. But it’s not scraping everything it finds. Applebot prioritizes clean, structured web pages and uses signals like language detection and topic analysis to filter out junk. It also handles complex websites by simulating full-page loading and running JavaScript. That allows it to gather content from modern pages that rely on interactive design. The goal is to collect useful, high-quality material without ever touching your private information. Instead of gathering more data at any cost, the company is focused on building smarter datasets from cleaner, publicly available sources. Once the data is collected, Apple trains the models in stages. It starts with supervised examples that show the model how to respond in different situations. Then it uses reinforcement learning, with real people rating model responses, to fine-tune the results. Apple also built a safety system that identifies categories like hate speech, misinformation, and stereotypes. The models are trained to refuse requests when necessary and to adapt their tone depending on where the user lives. Features powered by Apple Intelligence now respond faster, support more languages, and stay on track when given complex prompts. The Writing Tools can follow specific instructions without drifting off-topic. The image parser can turn a photo of a flyer into a calendar event, even if the design is cluttered. And all of that happens without Apple seeing what you type or share. If the model needs help from the cloud, Private Cloud Compute handles the request in encrypted memory, on servers Apple cannot access. For users, the big shift is that Apple Intelligence feels more useful without giving up control. For developers, the new Foundation Models framework offers structured outputs, safer tool integration, and Swift-native design. Developers can now use its on-device foundation model through the new Foundation Models framework. That gives third-party apps direct access to the same model that powers Apple Intelligence across iOS 26. Apple isn’t just matching competitors in model size. Its 3 billion-parameter model is optimized for Apple Silicon using 2-bit quantization and KV-cache sharing. That gives it a performance and efficiency edge without relying on the cloud. Developers get faster results, lower costs, and tighter user privacy. Instead of relying on external APIs or background network calls, apps can now integrate powerful AI locally and privately.
FTC data reveals the number of older adults (60 and above) who said they lost $10,000 or more to impersonation scams quadrupled between 2020 and 2024, while the number who lost more than $100,000 increased eightfold
A growing number of older adults are losing large sums of money to impersonation scams, according to the Federal Trade Commission (FTC). The number of people 60 and over who said they lost $10,000 or more to this form of fraud quadrupled between 2020 and 2024, while the number who lost more than $100,000 increased eightfold, the FTC said. In this form of fraud, scammers impersonate government agencies or businesses, contact consumers to alert them to a fake problem involving their accounts or their identity, and try to persuade them to transfer money to “keep it safe.” The scammers try to create a sense of urgency by telling consumers that their accounts are being used by someone else, that their Social Security number or other information is being used to commit crimes, or that their online accounts have been hacked. After persuading consumers to transfer their money, deposit cash into bitcoin ATMs, or hand cash or gold to couriers, the scammers steal those assets. “While younger people report losing money to these imposters too, reports of losses in the tens and hundreds of thousands of dollars are much more likely to be filed by older adults, and those numbers have soared,” the FTC said in a Consumer Protection Data Spotlight.
Apple devices adds granular enterprise controls to gate employee access to external AI, letting IT disable or route ChatGPT requests and restrict other providers while preserving on‑device privacy
Apple is introducing tools for businesses to manage how and when employees can use artificial intelligence. These controls are granular enough for managing which features can be enabled or disabled. The system also apparently allows companies to potentially restrict whether an employee’s AI requests go to ChatGPT’s cloud service, even if the business doesn’t buy services from OpenAI directly. This can prevent employees from accidentally handing over internal-only IP or data to ChatGPT, which could be used elsewhere. However, while the focus is on ChatGPT, it’s a set of tools that won’t be limited just to OpenAI’s service. The same tools can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep integration with ChatGPT on the iPhone. However, the new tools may indicate that Apple is preparing for a future where corporate users want more freedom on which AI service they use, and for Apple to potentially have more, similar integrations. While Apple does have its own Private Cloud Compute architecture to protect user data under Apple Intelligence, it doesn’t have any way of ensuring security or privacy for third party services. The tool is an attempt to provide enterprise customers some more control over these services.
Unit21’s integration of Fingerprint’s device intelligence, which collects and analyzes over 100 signals from the browser, device, and network with its AML platform to help detect complex fraud types such as credential stuffing and geolocation spoofing in real-time
Unit21 announced its new device intelligence capabilities designed to help fintechs combat the ongoing threat of fraud. The company’s fraud-fighting platform now incorporates Fingerprint’s device intelligence, which collects and analyzes over 100 signals from the browser, device, and network to flag potential fraud patterns, such as repeated login attempts across multiple user accounts, in real time. Unit21 is the most flexible real-time fraud and AML platform that empowers fintechs to build and adapt faster than fraudsters without the need for complex coding, cumbersome reporting processes, or lengthy analyses. With access to persistent, highly accurate device IDs and real-time Smart Signals, such as Bot Detection, VPN Detection, and more, fintechs using Fingerprint and Unit21 can expand their arsenal of insights to combat bad actors. These newly added capabilities help tackle complex fraud types, including: Credential stuffing: Detects bot activity and repeated login attempts across multiple accounts from the same device. Elder & emergency scams: Identifies potentially suspicious activity such as new or unrecognized devices accessing an account and IP geolocation mismatches, which can signal scammers attempting to exploit vulnerable users. Tech support scams: Detects use of virtual machines, developer tools and abnormal device behavior, such as unusual spikes in activity, as well as new logins from unfamiliar devices or locations. Geolocation spoofing: Detects mismatched time zones, use of proxies, and other methods fraudsters use to evade detection.
Picus Security report reveals decline in defensive performance, with overall prevention effectiveness dropping from 69% in 2024 to 62% this year and data exfiltration prevention rates falling to just 3%, down from 9% last year
New report from cybersecurity validation startup Picus Security reveals that nearly half of enterprise environments had at least one password cracked during testing, a dramatic increase from last year and that attacks using valid credentials succeeded 98% of the time. The report details a worrying decline in defensive performance, with overall prevention effectiveness dropping from 69% in 2024 to 62% this year. Data exfiltration prevention rates were also found to have fallen to just 3%, down from 9% last year, making it the least prevented attack vector for the third year in a row. On the ransomware front, BlackByte was found to remain the hardest ransomware to stop, with only a 26% prevention rate, followed by BabLock at 34% and Maori at 41%. Discovery tactics such as System Network Configuration Discovery and Process Discovery were blocked less than 12% of the time, underscoring persistent blind spots in early detection. Detection performance was found to remain a weak link as while log coverage held steady at 54%, only 14% of simulated attacks generated alerts. 50% of detection rule failures stemmed from logging issues, with other problems tied to configuration errors and performance bottlenecks. Domain administrator compromises fell from 24% to 19% and access to domain admin accounts dropping from 40% to 22%, reflecting stronger lateral movement defenses and better network segmentation. MacOS endpoint security was also saw rapid improvement, jumping from 23% to 76% prevention effectiveness, outpacing Linux at 69% and closing in on Windows at 79%.
AI first security monitoring transforms from a “notify everything” to a “surface what matters” model to score business impact, correlate alerts and automate triage with adaptive detection for a unified view
Traditional security alerting approaches fall short in several key areas. The path forward requires a complete reconceptualization of what constitutes an alert. Instead of the traditional “notify everything” approach, we must shift toward a “surface what matters” model. This transformation begins by asking fundamental questions about the purpose of security monitoring. Modern AI and security workflows incorporate more sophisticated measurements: Business Impact Scoring: Each alert receives a contextual risk score based on affected assets, potential data exposure, and business criticality. Alert Correlation: Instead of individual alerts, AI systems present unified incident narratives that connect related events across your environment. Resolution Intelligence: The system learns from past incidents to predict resolution paths and automate early remediation steps. Analyst Efficiency: Success metrics now include reduced cognitive load and improved analyst satisfaction, in addition to alert volume. Simply adding AI to existing systems is not sufficient for an intelligent alerting architecture. What you need is a full-on redesign that includes: Unified Data Foundation: Need an integrated platform that brings all the security telemetry for analysis rather than disparate tools with fragmented visibility between silos. Adaptive Detection Engines: Automatically tune detection thresholds based on environmental changes and history, resulting in a significant reduction in false positives. Automated Triage Workflows: The first step in an AI-powered system, where the bulk of routine alert assessment is automated so that your analysts can focus their time on high-value investigation and other response activities. Contextual enrichment: Each alert is supplemented with the right user, asset, and threat intelligence data for faster understanding and decision-making.