Persado, a provider of AI-powered content compliance and performance solutions for marketing, today launched Persado Marketing Compliance AI, the first agentic AI platform purpose-built for financial services marketing and legal teams to speed time to market of customer communications. The enterprise-grade solution integrates regulatory compliance analysis, performance prediction scoring, and brand fit insights, so companies can identify and rapidly resolve risks within content, shortening legal reviews by up to 90%.

Persado’s first Marketing Compliance AI solution is designed for large and mid-size retail banks and credit unions. The solution leverages AI agents and builds on a decade of content insights gleaned from working with 8 of the 10 largest U.S. banks. In turn, marketers can rapidly analyze, edit, and finalize copy, achieving (on average) a 90% reduction in review time, an 85% reduction in compliance rejections, and an 80%+ reduction in campaign cycle time.

Persado Marketing Compliance AI applies multi-agent AI that continuously learns from consent orders, public comments, and evolving regulations, refining its analysis with every interaction and providing institutions with smarter, more precise insights over time. AI agents include regulation agents, marketing agents, and library and oversight agents. Additional capabilities include analysis of copy in PDF, text, and image formats for adherence to federal, state, and local laws; a library of high-risk expressions; copy performance scoring; disclaimer analysis; customizable compliance guidelines; and more. Persado also offers customizable, integrated workflows that enable marketing and legal teams to collaborate in the platform in real time, leveraging the agentic output to streamline decision making.
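As an illustration of how a “library of high-risk expressions” check might work in practice, here is a minimal sketch. The terms, regulatory concerns, and function names below are assumptions invented for this example, not Persado’s actual library or API.

```python
import re

# Hypothetical library of high-risk marketing expressions and the
# regulatory concern each one raises (illustrative entries only).
HIGH_RISK_TERMS = {
    r"\bguaranteed\b": "UDAAP: implies a promise the product may not keep",
    r"\brisk[- ]free\b": "UDAAP: understates product risk",
    r"\bno (hidden )?fees\b": "TILA/Reg Z: fee claims need substantiation",
    r"\bpre[- ]?approved\b": "FCRA: prescreening language is regulated",
}

def flag_high_risk_copy(copy_text: str) -> list[dict]:
    """Return every high-risk expression found in a piece of marketing copy."""
    findings = []
    for pattern, concern in HIGH_RISK_TERMS.items():
        for match in re.finditer(pattern, copy_text, flags=re.IGNORECASE):
            findings.append({
                "expression": match.group(0),
                "position": match.start(),
                "concern": concern,
            })
    return findings

flags = flag_high_risk_copy("Open today for guaranteed returns and no hidden fees!")
for f in flags:
    print(f["expression"], "->", f["concern"])
```

A production system would layer performance scoring and disclaimer analysis on top of a lexical scan like this; the sketch only shows the flag-and-explain pattern.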
Banking groups led by ABA want the SEC to revoke its cybersecurity incident disclosure requirement because of need for confidentiality about critical infrastructure
American banking groups want the Securities and Exchange Commission (SEC) to revoke its cybersecurity incident disclosure requirements. These groups, led by the American Bankers Association (ABA), wrote to the SEC last week, contending that disclosing cybersecurity incidents “directly conflicts with confidential reporting requirements intended to protect critical infrastructure and warn potential victims.” Joining the ABA were the Securities Industry and Financial Markets Association, the Bank Policy Institute, Independent Community Bankers of America, and the Institute of International Bankers, who argue the rule hinders regulatory efforts to bolster national cybersecurity. The letter was flagged in a report Monday (May 26) by Cointelegraph, which noted that the rule in question — the SEC’s Cybersecurity Risk Management rule, published in July 2023 — requires companies to quickly disclose incidents such as data breaches or hacks. But the banking groups say this rule was flawed from the beginning and has been problematic in practice since going into effect. The letter said that the “complex and narrow disclosure delay mechanism” interferes with incident response and law enforcement, while also breeding “market confusion” between mandatory and voluntary disclosures.
Breaking encryption with a quantum computer just got 20 times easier after researchers made modular exponentiations twice as fast and packed more useful data into the same space to improve error correction
Google just released a new research paper, and it could be a big deal for Bitcoin and online security. Its quantum research found that it might take 20 times less power and effort for a quantum computer to break RSA encryption – the technology that protects things like bank accounts and Bitcoin wallets – than experts previously thought. The breakthrough comes from two places: better algorithms and smarter error correction. Researchers have made two big improvements in how quantum computers handle encryption. First, they have made modular exponentiations twice as fast. Second, they have packed more useful data into the same space to improve error correction. The security implications, however, are far more serious: RSA and similar systems underpin secure communications worldwide, from banking to digital signatures.
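Modular exponentiation is the operation at the heart of both RSA and Shor’s algorithm, and the speedup reported here concerns its quantum circuit implementation. For orientation, the classical square-and-multiply version of the operation can be sketched as:

```python
def modexp(base: int, exponent: int, modulus: int) -> int:
    """Square-and-multiply modular exponentiation: O(log exponent) multiplications."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                     # odd bit: fold in current power of base
            result = (result * base) % modulus
        base = (base * base) % modulus       # square for the next bit
        exponent >>= 1
    return result

# Toy RSA-style evaluation: public exponent 65537, tiny modulus 3233 (= 61 * 53).
# Matches Python's built-in three-argument pow(7, 65537, 3233).
print(modexp(7, 65537, 3233))
```

Real RSA moduli are 2048+ bits, which is why the operation dominates the cost of Shor’s algorithm and why halving its circuit cost matters.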
IBM’s two-pronged approach to modern application management involves automating applications with AI and managing them through observability, aided by AI-generated problem summaries in plain English to simplify triage
AI, observability and automation at scale are converging to redefine how modern applications are built, monitored and optimized. IBM Corp.’s approach is two-pronged — automating applications with AI and creating a conducive environment, through observability, to manage them. “We’re focused on both those things at the same time, simultaneously,” said Chris Farrell, group product manager of Instana observability at IBM. “One of the things that we’re doing is putting AI into the observability aspect of managing the applications. We have recently released an integration with watsonx to create summarizations of problems in plain English so that anyone can get a summarization and print it out.” Central to IBM’s approach is the integration of AI into observability tooling, particularly through Instana and its connection with watsonx. This combination enables AI-generated problem summaries in plain English, simplifying issue triage for both technical and non-technical teams. Additionally, IBM is taking steps toward AI-based remediation. With watsonx, problems can be detected and suggestions — or even automated actions — can be triggered to resolve them. This shift reduces the time between incident detection and resolution, enhancing uptime and operational efficiency, according to Farrell.
Sifflet’s AI-native data observability platform replaces manual triage, alert sprawl, and static rule sets with context-aware automation to help data teams scale data quality and reduce incident response times
Sifflet, the AI-native data observability platform, has shared an early look at its upcoming system of AI agents designed to help modern data teams scale data quality and reliability, reduce incident response times, and stay ahead of complexity. The new agents extend Sifflet’s core observability capabilities with a new layer of intelligence: Sentinel analyzes system metadata to recommend precise monitoring strategies; Sage recalls past incidents, understands lineage, and identifies root causes in seconds; Forge suggests contextual, ready-to-review fixes grounded in historical patterns. Sifflet’s AI-native approach is already helping customers handle these workloads with existing functionality. Sifflet’s AI agents address this growing challenge and go one step further by replacing manual triage, alert sprawl, and static rule sets with context-aware automation that augments human teams. “Rather than relying on static monitoring, these agents bring memory, reasoning, and automation into the fold, helping teams move from alert fatigue to intelligent, context-aware resolution,” said Sanjeev Mohan, founder of SanjMo and former VP analyst at Gartner. The agentic system is fully embedded in Sifflet’s AI-native platform and will soon be available to select customers in private beta.
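To make the Sentinel idea concrete, here is a speculative sketch of metadata-driven monitor recommendation. The metadata fields and recommendation rules are invented for illustration and do not reflect Sifflet’s actual implementation.

```python
# Hypothetical rule set: map observable table metadata to suggested monitors.
def recommend_monitors(table_meta: dict) -> list[str]:
    """Suggest monitoring strategies for a table based on its metadata."""
    monitors = []
    if table_meta.get("update_frequency") == "hourly":
        monitors.append("freshness: alert if no update within 2 hours")
    if table_meta.get("avg_daily_rows", 0) > 0:
        monitors.append("volume: alert on >50% deviation from daily average")
    for col in table_meta.get("key_columns", []):
        monitors.append(f"uniqueness: alert on duplicate values in '{col}'")
    return monitors

meta = {"update_frequency": "hourly", "avg_daily_rows": 120_000,
        "key_columns": ["order_id"]}
for m in recommend_monitors(meta):
    print(m)
```

An agent with memory would go further, tuning these thresholds from past incidents rather than using fixed rules as this sketch does.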
Fenergo’s agentic AI for compliance allows users to interact with all operational, policy and entity data through natural language and harness real-time insights on process efficiency, operations and risk
Fenergo, a Dublin-based provider of client lifecycle management and compliance solutions, has launched its FinCrime Operating System. The system uses “agentic AI” to help firms cope with rising operational costs and compliance demands. The FinCrime OS unifies client lifecycle events, including onboarding, KYC, screening, ID&V, and transaction monitoring, on a single platform. The system can automate tasks and save up to 93% of operational costs. Fenergo’s initial six AI agents can streamline periodic KYC reviews, cutting review timeframes by up to 45%. The six AI agents available today include: Data sourcing agent: Sources data from one or more third-party data providers, compares against entity data and auto-completes tasks; Screening agent: Runs screening checks against third-party integrations, auto-resolves hits and returns results to providers; Document agent: Extracts, classifies and links documents using AI to automate document-management processes; Significance agent: Performs a check against data changes to determine their significance and define the next action; Autocompletion agent: Automates the completion of tasks based on pre-defined rules, policy and configured guardrails; and Insights agent: Fenergo’s co-pilot allows users to interact with all operational, policy and entity data through natural language and harness real-time insights on process efficiency, operations and risk.
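The significance-agent pattern, checking data changes to decide the next action, can be sketched as a simple rule table. The field names, actions, and comparison logic below are illustrative assumptions, not Fenergo’s actual configuration.

```python
# Hypothetical mapping from entity fields to the action a change should trigger.
SIGNIFICANT_FIELDS = {
    "legal_name": "trigger_kyc_review",
    "country_of_incorporation": "trigger_kyc_review",
    "beneficial_owners": "trigger_screening",
    "registered_address": "auto_complete",
    "phone_number": "auto_complete",
}

def assess_change(old: dict, new: dict) -> dict:
    """Compare two entity records and map each changed field to a next action."""
    actions = {}
    for field, action in SIGNIFICANT_FIELDS.items():
        if old.get(field) != new.get(field):
            actions[field] = action
    return actions

old = {"legal_name": "Acme Ltd", "registered_address": "1 Main St"}
new = {"legal_name": "Acme Holdings Ltd", "registered_address": "1 Main St"}
print(assess_change(old, new))  # {'legal_name': 'trigger_kyc_review'}
```

In practice an agentic system would apply policy and guardrails on top of such a table; the sketch only shows the change-to-action routing step.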
Agentic AI’s role in taking down the DanaBot malware-as-a-service operation, cutting months of forensic analysis to weeks by orchestrating predictive threat modeling, validates its value for SOC teams
The U.S. Department of Justice unsealed a federal indictment in Los Angeles against 16 defendants tied to DanaBot, a Russia-based malware-as-a-service (MaaS) operation responsible for orchestrating massive fraud schemes, enabling ransomware attacks and inflicting tens of millions of dollars in financial losses on victims. Agentic AI played a central role in dismantling DanaBot, orchestrating predictive threat modeling, real-time telemetry correlation, infrastructure analysis and autonomous anomaly detection. These capabilities reflect years of sustained R&D and engineering investment by leading cybersecurity providers, who have steadily evolved from static rule-based approaches to fully autonomous defense systems. Taking down DanaBot validated agentic AI’s value for Security Operations Center (SOC) teams by reducing months of manual forensic analysis to a few weeks, giving law enforcement the time it needed to identify and dismantle DanaBot’s sprawling digital footprint quickly. DanaBot’s takedown signals a significant shift in the use of agentic AI in SOCs. SOC analysts are finally getting the tools they need to detect, analyze, and respond to threats autonomously and at scale, tipping the balance of power in the war against adversarial AI. Agentic AI directly addresses long-standing challenges, starting with alert fatigue. Microsoft research reinforces this advantage: integrating gen AI into SOC workflows reduced incident resolution time by nearly one-third. DanaBot’s dismantling signals a broader shift underway: SOCs are moving from reactive alert-chasing to intelligence-driven execution. At the center of that shift is agentic AI. SOC leaders getting this right aren’t buying into the hype. They’re taking deliberate, architecture-first approaches that are anchored in metrics and, in many cases, risk and business outcomes.
Banks are experimenting with customer “security scores,” which evaluate risk and proactively offer context-specific insights before a transaction takes place
Financial fraud is increasingly a psychological threat, not a technical one. At times of financial stress, banks need to focus more on identifying, supporting and defending vulnerable customers, not just protecting platforms and data. To effectively counter today’s scams, banks need to think beyond detection and toward true prevention. That means equipping fraud and security teams with AI tools that are constantly trained on the latest scam trends and human vulnerabilities, and have the ability not only to detect scams, but also to intervene and prevent them in real time. New approaches such as AI-powered “scam prevention agents” can be embedded within banking apps to deliver personalized warnings, verify transaction safety, and even simulate real-time conversations that help customers recognize and break free from a scammer’s influence. The same AI agents could also offer post-scam support and remediation for victims, while feeding data from their reports back into the detection and prevention models to protect other customers. Some banks are also experimenting with customer “security scores,” which evaluate risk based on behavioral patterns, transaction histories, and exposure to red-flag scenarios. These scores can trigger proactive communication before a transaction takes place, offering users context-specific insights or education. Rather than blanket emails about general scam awareness, these systems deliver highly tailored insights and can provide alerts like: “This recipient has been flagged in other scam cases,” or “This transaction appears unusual based on your history.” Institutional alignment is a key part of an organization’s scam prevention strategy. Effective financial institutions are establishing cross-functional “cyber-fraud” fusion teams that bring together fraud prevention, cybersecurity, compliance, and behavioral science. These task forces both anticipate and respond to scams, building response playbooks and accelerating time-to-intervention.
The most effective models also include support from executive leadership, marketing, and customer service, creating a truly enterprise-wide fraud prevention strategy. By integrating advanced analytics, AI-driven risk scoring, and behavioral insights, banking institutions can anticipate and intercept fraudulent schemes before they inflict significant harm. In doing so, they protect not only their bottom line but the essential relationship of trust with their customers.
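The “security score” idea described above can be sketched as a simple weighted-signal function. The signals, weights, and alert messages below are assumptions made for this example, not any bank’s production model, which would draw on far richer behavioral data.

```python
def security_score(transaction: dict, profile: dict) -> tuple[int, list[str]]:
    """Score a pending transaction 0-100 (higher = riskier) with plain-language reasons."""
    score, reasons = 0, []
    # Signal 1: recipient previously linked to scam reports.
    if transaction["recipient"] in profile.get("flagged_recipients", set()):
        score += 50
        reasons.append("This recipient has been flagged in other scam cases")
    # Signal 2: amount far outside the customer's normal range.
    if transaction["amount"] > 3 * profile.get("typical_amount", 0):
        score += 30
        reasons.append("This transaction appears unusual based on your history")
    # Signal 3: first-ever payment to this payee.
    if transaction.get("new_payee", False):
        score += 20
        reasons.append("First payment to this recipient")
    return score, reasons

profile = {"flagged_recipients": {"acct-999"}, "typical_amount": 100.0}
txn = {"recipient": "acct-999", "amount": 500.0, "new_payee": True}
score, reasons = security_score(txn, profile)
print(score, reasons)  # 100 with three context-specific warnings
```

The key design point is that each signal carries a customer-facing explanation, so a high score can trigger the tailored, pre-transaction alerts described above rather than an opaque block.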
Banks must implement prompt consumer revocation mechanisms, provide third parties with limited-access keys and issue time-limited data access tokens that require periodic reauthentication to secure open banking data in the absence of the CFPB 1033 rule
The Consumer Financial Protection Bureau’s 1033 rule, which would have put security guardrails around movement of data, is likely to be scrapped, given the agency itself, under a new administration, has said the rule is unlawful and needs to be set aside. So, banks and fintechs need to continue to police themselves and use industry standards and general principles of security, privacy and reliability. For banks, this means not only building APIs and authentication systems, but also implementing strict security oversight, monitoring third-party connections, and keeping detailed records of data access requests and responses. There is no specific technology or protocol mandated for APIs — banks can choose the technical implementation — but there have been calls for standardized, machine-readable formats and reliable performance. To guide this, the CFPB had intended to recognize standard-setting bodies, or SSBs, that develop qualified industry standards, or QISs, for data sharing. Adhering to an SSB’s standards (for formatting, authentication, security and so on) would have served as a safe harbor “indicia of compliance for data providers” under the CFPB rule. One standard-setting body has been recognized by the CFPB: the Financial Data Exchange, or FDX. The CFPB has received one other application, from the Canada-based Digital Governance Standards Institute, or DGSI.