Alloy, an identity and fraud prevention platform provider, has announced a new global partnership with Mastercard to launch an enhanced customer onboarding solution for financial institutions and fintechs. The new Mastercard Alloy joint onboarding solution will leverage identity verification and open finance to streamline the end-to-end onboarding process while combating fraud. The Mastercard Alloy joint onboarding solution provides a consistent identity risk strategy and onboarding experience across channels. Alloy intends to leverage Mastercard’s best-in-class global digital identity verification capabilities and suite of open finance-powered account opening solutions to help financial institutions and fintechs manage fraud and identity risk and secure account funding throughout the customer lifecycle. Mastercard products will be integrated and pre-configured in Alloy for seamless deployment. In addition to pre-built integrations to Mastercard products, customers will receive access to over 200 risk and identity solutions available through Alloy, helping to improve customer conversion rates, reducing manual reviews, and ensuring comprehensive end-to-end coverage. Dennis Gamiello, EVP, Global Head of Identity at Mastercard said, “This joint onboarding solution will be a game-changer in the fight to reduce fraud and deliver a seamless and secure customer experience.”
Banks and credit unions prioritize AI for fraud detection but pace deployments cautiously as leadership cites data handling accuracy gaps and legacy compatibility alongside privacy and security hurdles
Banks and credit unions are universally worried about fraud, but are also concerned that security tools don’t adequately protect underlying data. For Flushing Financial’s John Buran, the benefits of bank automation are clear, but so are the fears. The CEO of the $8.8 billion-asset Flushing Financial told American Banker that automation systems are poised to oversee “vast amounts of personal and financial data,” but create questions surrounding “consent, data handling and storage.” “Although automation brings efficiency and innovation benefits, the concerns about data security and privacy risks in banking automation are in my opinion well founded and should remain a critical area for continuous focus and improvement,” Buran said. Buran is not alone. Worries about data security and privacy are holding many back from using advanced automation such as artificial intelligence, according to new research from American Banker. “Fraud teams are already overwhelmed by the number of alerts they are investigating, so many have to focus on the high-dollar losses and manage lower-dollar losses through their dispute processes,” said John Meyer, managing director in Cornerstone Advisors’ Business Intelligence and Data Analytics practice.
Darktrace acquires Mira Security to boost encrypted traffic visibility- with policy control and compliance capabilities that allow administrators to decrypt traffic based on predefined rules
Machine learning cybersecurity firm Darktrace PLC has acquired network traffic visibility solutions company Mira Security Inc. for an undisclosed price. Mira Security specializes in encrypted traffic orchestration with solutions that allow organizations to detect, decrypt and analyze encrypted network traffic at scale. The company’s offerings are purpose-built to provide full traffic visibility without compromising privacy, performance, or compliance mandates. Mira Security’s main offering, its Encrypted Traffic Orchestration platform, includes support for both physical appliances and virtual deployments. ETO can intercept SSL/TLS and SSH traffic across any port, decrypting it for analysis and re-encrypting it before forwarding, without the need for complex re-architecting or performance degradation. Mira also offers granular policy control and compliance capabilities that allow administrators to decrypt traffic based on predefined rules while enforcing blocking of outdated or insecure encryption protocols and managing what data is visible to different tools to ensure sensitive information remains protected. The platform additionally supports full visibility into TLS 1.3 traffic, a major challenge for many existing cybersecurity tools due to the protocol’s stricter encryption practices. The combination of Darktrace and Mira Security is said by Darktrace to close the encrypted data blind spot without impacting network performance or requiring complex re-architecting. The closer integration of Mira Security’s in-line decryption capabilities with Darktrace’s existing analysis and understanding of encrypted traffic will also provide organizations with more in-depth visibility across on-premises, cloud and hybrid environments.
IBM taps Infosec’s platform to offer discovery, classification, and lifecycle management of cryptographic assets across hybrid and distributed environments, supporting creation of scalable quantum-safe architecture
IBM Consulting and InfoSec Global are partnering to deliver advanced cryptographic discovery and inventory solutions across all industries and geographies. The rapid advancement of quantum computing poses a growing threat to cryptographic security, as quantum computers can break traditional cryptography, exposing vulnerabilities across digital operations. Organizations worldwide are requiring cryptographic assets to be inventoried, assessed, and modernized. IBM Consulting’s global delivery network and quantum safe security expertise will be combined with InfoSec Global’s AgileSec platform to accelerate customers’ transition to post-quantum cryptography and enable a risk-driven transformation to enterprise-wide cryptographic agility and compliance. The AgileSec platform enables the discovery, classification, and lifecycle management of cryptographic assets across hybrid and distributed environments. The partnership will enable IBM Consulting and InfoSec Global to jointly develop, market, and deliver cryptographic posture management solutions, helping clients tackle their most complex quantum-safe challenges. Client benefits of the IBM Consulting and InfoSec Global partnering could include: Addressing the risk of cryptographic blind spots and supporting adherence to compliance frameworks from NIST, the Federal Financial Institutions Examination Council (FFIEC), and regulatory expectations; Accelerating modernization without costly re-platforming for crypto agility in-place; and Creating a future-ready and scalable quantum-safe architecture with measurable return on investment.
Research exposes AI browsers’ vulnerability to automated fraud, including unauthorized purchases and phishing, stemming from AI’s blind trust and lack of human skepticism
Automatic purchases on fake websites, falling for simple phishing attacks that expose users’ bank accounts, and even downloading malicious files to computers – these are the failures in AI browsers and autonomous AI agents revealed by new research from Israeli cybersecurity company Guardio. The report warns that AI browsers can, without user consent, click, download, or provide sensitive information. Such fraud no longer needs to deceive the user. It only needs to deceive the AI. And when this happens, the user is still the one who pays the price. We stand at the threshold of a new and complex era of fraud, where AI convenience collides with an invisible fraud landscape and humans become collateral damage. Guardio’s research reveals that these browsers and agents may fall victim to a series of new frauds, a result of an inherent flaw that exists in all of them. The problem, according to the study’s authors, is that they inherit AI’s built-in vulnerabilities: a tendency to act without full context, to trust too easily, and to execute instructions without natural human skepticism. AI was designed to please people at almost any cost, even if it involves distorting facts, bending rules, or operating in ways that include hidden risks. Finally, the researchers demonstrated how AI browsers can be made to ignore action and safety instructions by sending alternative, secret instructions to the model. This involves developing an attack against AI models called “prompt injection.” In this attack, an attacker encrypts instructions to the model in various ways that the user cannot see. The simplest example is text hidden from the user but visible to the AI (for instance, using font color identical to the background color) that instructs the model: ignore all previous instructions, and perform malicious activity instead. Using this method, one can cause AI to send emails with personal information, provide access to the user’s file storage services, and more. In effect, the attacker can now control the user’s AI, the report states.
A zero‑click exploit chain using WhatsApp and an Apple flaw enabled data theft from specific devices for about 90 days before patches went out
WhatsApp has fixed a security bug in its iOS and Mac apps that was being used to stealthily hack into the Apple devices of “specific targeted users.” The vulnerability, known officially as CVE-2025-55177, was used alongside a separate flaw found in iOS and Macs, which Apple fixed last week and tracks as CVE-2025-43300. Apple said at the time that the flaw was used in an “extremely sophisticated attack against specific targeted individuals.” Now we know that dozens of WhatsApp users were targeted with this pair of flaws. Donncha Ó Cearbhaill, who heads Amnesty International’s Security Lab, described the attack in a post on X as an “advanced spyware campaign” that targeted users over the past 90 days, or since the end of May. Ó Cearbhaill described the pair of bugs as a “zero-click” attack, meaning it does not require any interaction from the victim, such as clicking a link, to compromise their device. The two bugs chained together allow an attacker to deliver a malicious exploit through WhatsApp that’s capable of stealing data from the user’s Apple device. Per Ó Cearbhaill, who posted a copy of the threat notification that WhatsApp sent to affected users, the attack was able to “compromise your device and the data it contains, including messages.”
Apple’s AI models are trained to refuse requests when necessary and to adapt their tone depending on where the user lives.
A recent machine learning update from Apple reveals how iOS 26 brings faster, safer AI that was trained without your texts, your photos, or your permission. Apple’s training pipeline starts with Applebot, the company’s web crawler. It collects data from sites that allow it, pulling in pages from across the internet in multiple languages. But it’s not scraping everything it finds. Applebot prioritizes clean, structured web pages and uses signals like language detection and topic analysis to filter out junk. It also handles complex websites by simulating full-page loading and running JavaScript. That allows it to gather content from modern pages that rely on interactive design. The goal is to collect useful, high-quality material without ever touching your private information. Instead of gathering more data at any cost, the company is focused on building smarter datasets from cleaner, publicly available sources. Once the data is collected, Apple trains the models in stages. It starts with supervised examples that show the model how to respond in different situations. Then it uses reinforcement learning, with real people rating model responses, to fine-tune the results. Apple also built a safety system that identifies categories like hate speech, misinformation, and stereotypes. The models are trained to refuse requests when necessary and to adapt their tone depending on where the user lives. Features powered by Apple Intelligence now respond faster, support more languages, and stay on track when given complex prompts. The Writing Tools can follow specific instructions without drifting off-topic. The image parser can turn a photo of a flyer into a calendar event, even if the design is cluttered. And all of that happens without Apple seeing what you type or share. If the model needs help from the cloud, Private Cloud Compute handles the request in encrypted memory, on servers Apple cannot access. For users, the big shift is that Apple Intelligence feels more useful without giving up control. For developers, the new Foundation Models framework offers structured outputs, safer tool integration, and Swift-native design. Developers can now use its on-device foundation model through the new Foundation Models framework. That gives third-party apps direct access to the same model that powers Apple Intelligence across iOS 26. Apple isn’t just matching competitors in model size. Its 3 billion-parameter model is optimized for Apple Silicon using 2-bit quantization and KV-cache sharing. That gives it a performance and efficiency edge without relying on the cloud. Developers get faster results, lower costs, and tighter user privacy. Instead of relying on external APIs or background network calls, apps can now integrate powerful AI locally and privately.
FTC data reveals the number of older adults (60 and above) who said they lost $10,000 or more to impersonation scams quadrupled between 2020 and 2024, while the number who lost more than $100,000 increased eightfold
A growing number of older adults are losing large sums of money to impersonation scams, according to the Federal Trade Commission (FTC). The number of people 60 and over who said they lost $10,000 or more to this form of fraud quadrupled between 2020 and 2024, while the number who lost more than $100,000 increased eightfold, the FTC said. In this form of fraud, scammers impersonate government agencies or businesses, contact consumers to alert them to a fake problem involving their accounts or their identity, and try to persuade them to transfer money to “keep it safe.” The scammers try to create a sense of urgency by telling consumers that their accounts are being used by someone else, that their Social Security number or other information is being used to commit crimes, or that their online accounts have been hacked. After persuading consumers to transfer their money, deposit cash into bitcoin ATMs, or hand cash or gold to couriers, the scammers steal those assets. “While younger people report losing money to these imposters too, reports of losses in the tens and hundreds of thousands of dollars are much more likely to be filed by older adults, and those numbers have soared,” the FTC said in a Consumer Protection Data Spotlight.
Apple devices adds granular enterprise controls to gate employee access to external AI, letting IT disable or route ChatGPT requests and restrict other providers while preserving on‑device privacy
Apple is introducing tools for businesses to manage how and when employees can use artificial intelligence. These controls are granular enough for managing which features can be enabled or disabled. The system also apparently allows companies to potentially restrict whether an employee’s AI requests go to ChatGPT’s cloud service, even if the business doesn’t buy services from OpenAI directly. This can prevent employees from accidentally handing over internal-only IP or data to ChatGPT, which could be used elsewhere. However, while the focus is on ChatGPT, it’s a set of tools that won’t be limited just to OpenAI’s service. The same tools can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep integration with ChatGPT on the iPhone. However, the new tools may indicate that Apple is preparing for a future where corporate users want more freedom on which AI service they use, and for Apple to potentially have more, similar integrations. While Apple does have its own Private Cloud Compute architecture to protect user data under Apple Intelligence, it doesn’t have any way of ensuring security or privacy for third party services. The tool is an attempt to provide enterprise customers some more control over these services.
OpenAI will route sensitive chats to GPT‑5‑thinking via a real‑time router, add parental controls and enable default age‑appropriate rules with distress alerts
OpenAI plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month — part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress. Experts attribute these issues to fundamental design elements: the models’ tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions. OpenAI thinks that at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to “reasoning” models. “We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context. We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.” OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking for longer and reasoning through context before answering, which means they are “more resistant to adversarial prompts.” The AI firm also said it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. Soon, parents will be able to control how ChatGPT responds to their child with “age-appropriate model behavior rules, which are on by default.” Parents will also be able to disable features like memory and chat history. Perhaps the most important parental control that OpenAI intends to roll out is that parents can receive notifications when the system detects their teenager is in a moment of “acute distress.”