Experian announced the integration of Incode Technologies, Inc. (“Incode”) into the Experian Ascend Platform™. This collaboration will enable seamless, secure and efficient identity validation for over 1,800 global clients across industries including financial services, automotive, healthcare and digital marketing. Through this partnership, Incode’s advanced identity validation and real-time metadata analysis will be offered as an optional component within Experian’s CrossCore Document Verification suite in North America, with global expansion planned. Incode’s AI-driven technology strengthens Experian’s identity and fraud solutions by verifying and connecting identity elements such as government-issued IDs, facial recognition, liveness checks, and real time metadata. Identity verification is central to Experian’s identity and fraud portfolio, helping organizations combat cybercrime while maintaining a seamless customer experience. This integration provides stronger protection against synthetic identity and application fraud, as well as higher accuracy in detection and workforce identity.
Circle and Paxos pilot “know‑your‑issuer” with Bluprynt to trace tokens to verified issuers, curbing counterfeit stablecoins and aiding auditors and regulators amid new U.S. rules
Stablecoin heavyweights Circle Internet Group Inc. and Paxos Trust Co. have piloted a new way to prevent copycats and help companies verify their digital asset holdings. The firms partnered with Bluprynt, a fintech startup using cryptography and blockchain technology to provide issuer verification when stablecoins are released by a company. The pilot provided a way to trace back a token to the verified issuer, using Bluprynt’s technology. Bluprynt’s technology gives “provenance upfront, reducing complexity, and providing regulators and investors with the transparency they need.” He noted that could help curb losses due to counterfeit tokens and impersonation attacks. It’s another sign that parts of the digital asset industry are maturing as they seek to meet new regulatory requirements being established in jurisdictions across the globe. Stablecoins are digital assets pegged to non-volatile assets, such as US dollars, and can be used as a cash equivalent for payments. The technology could benefit auditors, financial crime-fighters, and investors. Circle’s USDC is the second-largest stablecoin by market value, and Paxos issues and operates the blockchain infrastructure behind PayPal Inc.’s stablecoin, PYUSD. The number of firms offering stablecoins is expected to grow with the recently-enacted GENIUS Act, which provides a framework for dollar-backed stablecoins.
Anthropic and OpenAI run first cross‑lab safety tests: o3 and o4‑mini align strongly, GPT‑4o/4.1 show misuse concerns, and all models exhibit varying sycophancy under stress
AI startups Anthropic and OpenAI said that they evaluated each other’s public models, using their own safety and misalignment tests. Sharing this news and the results in separate blog posts, the companies said they looked for problems like sycophancy, whistleblowing, self-preservation, supporting human misuse and capabilities that could undermine AI safety evaluations and oversight. OpenAI wrote in its post that this collaboration was a “first-of-its-kind joint evaluation” and that it demonstrates how labs can work together on issues like these. Anthropic wrote in its post that the joint evaluation exercise was meant to help mature the field of alignment evaluations and “establish production-ready best practices.” Reporting the findings of its evaluations, Anthropic said OpenAI’s o3 and o4-mini reasoning models were aligned as well or better than its own models overall, the GPT-4o and GPT-4.1 general-purpose models showed some examples of “concerning behavior,” especially around misuse, and both companies’ models struggled to some degree with sycophancy. OpenAI wrote in its post that it found that Anthropic’s Claude 4 models generally performed well on evaluations stress-testing their ability to respect the instruction hierarchy, performed less well on jailbreaking evaluations that focused on trained-in safeguards, generally proved to be aware of their uncertainty and avoided making statements that were inaccurate, and performed especially well or especially poorly on scheming evaluation, depending on the subset of testing. Both companies said in their posts that for the purpose of testing, they relaxed some model-external safeguards that otherwise would be in operation but would interfere with the tests. They each said that their latest models, OpenAI’s GPT-5 and Anthropic’s Opus 4.1, which were released after the evaluations, have shown improvements over the earlier models.
Pangea’s AI Guardrail Platform gives enterprises runtime control without slowing development velocity
Pangea it has been named a winner in SiliconANGLE’s 2025 TechForward Awards in the AI-Powered Threat Detection Category. Pangea recently released the industry’s most comprehensive AI Guardrail Platform to address critical security gaps as enterprises deploy hundreds of generative AI projects. From prompt injection prevention to compliance-driven data redaction, the platform prevents sensitive data leakage, blocks harmful content, and provides real-time protection powered by intelligence from partners like CrowdStrike, DomainTools, and ReversingLabs. With more than 99% efficacy against sophisticated prompt injection techniques, including token smuggling and multilingual attacks, Pangea’s AI Guardrail Platform gives enterprises runtime control without slowing development velocity. Customers like Grand Canyon Education are using Pangea to secure enterprise-wide AI deployments, accelerate time to market, and maintain strict compliance standards while safely scaling generative AI initiatives. The TechForward Awards recognize the technologies and solutions driving business forward. As the trusted voice of enterprise and emerging tech, SiliconANGLE applies a rigorous editorial lens to highlight innovations reshaping how businesses operate in our rapidly changing landscape.
Three‑pillar defense for deepfakes and financial fraud: awareness programs and simulations, codified escalation and legal playbooks, and layered AI detection beyond watermarks to secure high‑value payments
To manage the threat of deepfake financial fraud, organizations should consider focusing on three key areas: People. Many people are unaware of the potential for deepfake fraud, often because they don’t understand it or assume that it won’t affect their company. Organizations should educate personnel and other stakeholders about what deepfake financial fraud is and how to identify and escalate suspected incidents of it. Tabletop exercises, interactive scenarios that simulate an attack, can also help test the organization’s response to deepfake incidents. Whichever educational approach is taken, it’s important to consider a routine or ongoing training program that can keep pace with the quickly evolving deepfake fraud landscape. Processes. Organizations should develop playbooks for handling both suspected deepfake threats and successful attacks. An effective playbook outlines clearly the who, what, where, and when of a swift, coordinated response, including how to escalate threats, who should lead the response, and when to review processes to ensure they are up to date. Other important processes include deepfake detection measures, legal considerations, and even public-private partnerships for content authenticity validation. Technology. As deepfakes become more sophisticated, human detection of synthetic content is becoming more challenging and sometimes impossible. GenAI tools that use metadata watermarks or labels to identify and flag synthetic content can help see what the human eye may be unable to. However, since bad actors can also remove watermarks, these types of tools perform best when used in conjunction with deepfake detection software across platforms.
TransUnion’s third‑party CX app breach in July 2025 exposes 4.46M customers’ personal data (names, birthdates and SSNs taken)
TransUnion said a third-party data breach affected more than 4.4 million customers. The credit reporting agency revealed the breach in a filing with the Maine’s attorney general’s office. The company said the breach on July 28 involved unauthorized access of a third-party application that contained customers’ personal data for its U.S. consumer support operations. The incident was discovered July 30. “The information was limited to specific data elements and did not include credit reports or core credit information,” the company wrote, but did not specify what types of data were involved. In a separate data breach disclosure filed Thursday with Texas’ attorney general’s office, TransUnion said that the stolen personal information included customers’ names, birthdates and Social Security numbers.
Paxos backed USDG’s MiCA‑compliant stablecoin (partners include Mastercard, Robinhood and Worldpay) gains Aleo’s zk smart contracts for confidential transactions, on‑chain KYC/AML verification, and encrypted treasury operations
Aleo’s non-profit Foundation announced that it had joined the Paxos-backed Global Dollar Network (GDN), an ecosystem built around USDG, a fully regulated U.S. dollar stablecoin issued by Paxos and backed by major partners including Anchorage Digital, Kraken, Mastercard, Paxos, Robinhood, Worldpay, and others. The Aleo Foundation plans on using USDG for on-chain treasury management and vendor payments, all while leveraging its native blockchain’s privacy-preservation setup (enabling the processing of stablecoin transactions in a fully encrypted manner). Not only that, as the first L1 to join the GDN, Aleo will incorporate its zero-knowledge (zK) and private smart contract capabilities into the latter’s ecosystem, which already spans established networks like Solana, Ethereum, and even newcomers like Ink. Hailed as a privacy-first blockchain for programmable payments, Aleo ensures that transaction details (such as who paid whom and how much) stay confidential at all times. Zero-knowledge proofs (ZKP) on Aleo allows transactions to be validated by smart contracts without ever exposing sensitive details. In practice, Aleo’s confidential payment apps can verify things like KYC/AML without publishing payer identities or amounts, meaning that businesses can run payrolls or vendor payments on-chain privately, keeping exact salaries or supplier deals hidden from rivals and public view.
DocuSign‑branded Apple Pay emails use urgent refunds, Cyrillic sender tricks, and security‑code links to lure victims into calls that cause credential theft
Phishing scams are becoming more sophisticated, with a new tactic involving fake DocuSign emails that appear to confirm Apple Pay purchases. These emails often include realistic details like order IDs, charge amounts, and even a support number. However, the number connects victims to scammers, not Apple or any legitimate company. Some emails also contain a DocuSign link and a security code to make the message seem more authentic. The scam works by alarming recipients with a fake charge and urging them to call if they don’t recognize it. Once on the call, scammers pose as support agents and claim the user’s account is compromised. They may request sensitive information like Apple ID credentials, banking details, or ask the user to install remote access software. In some cases, they demand payment for fake reversal or protection fees. Red flags include unexpected DocuSign receipts, strange characters in the sender’s email address (like Cyrillic letters), and urgent language. It’s important to remember that companies like Apple do not send billing receipts via DocuSign. These scams aim to create panic and trick users into giving up personal data or access to their devices.
Banks and credit unions prioritize AI for fraud detection but pace deployments cautiously as leadership cites data handling accuracy gaps and legacy compatibility alongside privacy and security hurdles
Banks and credit unions are universally worried about fraud, but are also concerned that security tools don’t adequately protect underlying data. For Flushing Financial’s John Buran, the benefits of bank automation are clear, but so are the fears. The CEO of the $8.8 billion-asset Flushing Financial told American Banker that automation systems are poised to oversee “vast amounts of personal and financial data,” but create questions surrounding “consent, data handling and storage.” “Although automation brings efficiency and innovation benefits, the concerns about data security and privacy risks in banking automation are in my opinion well founded and should remain a critical area for continuous focus and improvement,” Buran said. Buran is not alone. Worries about data security and privacy are holding many back from using advanced automation such as artificial intelligence, according to new research from American Banker. “Fraud teams are already overwhelmed by the number of alerts they are investigating, so many have to focus on the high-dollar losses and manage lower-dollar losses through their dispute processes,” said John Meyer, managing director in Cornerstone Advisors’ Business Intelligence and Data Analytics practice.
A zero‑click exploit chain using WhatsApp and an Apple flaw enabled data theft from specific devices for about 90 days before patches went out
WhatsApp has fixed a security bug in its iOS and Mac apps that was being used to stealthily hack into the Apple devices of “specific targeted users.” The vulnerability, known officially as CVE-2025-55177, was used alongside a separate flaw found in iOS and Macs, which Apple fixed last week and tracks as CVE-2025-43300. Apple said at the time that the flaw was used in an “extremely sophisticated attack against specific targeted individuals.” Now we know that dozens of WhatsApp users were targeted with this pair of flaws. Donncha Ó Cearbhaill, who heads Amnesty International’s Security Lab, described the attack in a post on X as an “advanced spyware campaign” that targeted users over the past 90 days, or since the end of May. Ó Cearbhaill described the pair of bugs as a “zero-click” attack, meaning it does not require any interaction from the victim, such as clicking a link, to compromise their device. The two bugs chained together allow an attacker to deliver a malicious exploit through WhatsApp that’s capable of stealing data from the user’s Apple device. Per Ó Cearbhaill, who posted a copy of the threat notification that WhatsApp sent to affected users, the attack was able to “compromise your device and the data it contains, including messages.”