Broadridge Financial Solutions, a global Fintech leader, has reported record activity on its Distributed Ledger Repo (DLR) platform, processing an average of $339 billion in daily repo transactions in September. This represents a 21% increase from August's average and a 650% increase year-over-year, reflecting the rapid adoption of tokenized settlement. DLR, the world's largest institutional platform for settling tokenized real-world assets, uses tokenization and smart contracts to accelerate collateral velocity, improve liquidity management, and reduce trade processing costs. Broadridge is committed to bridging traditional and digital financial ecosystems.
GENIUS stablecoin legislation drives strategic shifts as processors, wallets and exchanges realign to support only compliant coins, prepare new controls and plan market-entry timelines ahead of phased implementation through 2026–2027
A new federal regulatory framework for payment stablecoins marks a pivotal moment for digital assets in the United States by establishing a legal definition and oversight by banking regulators that may enable commercial scaling. The new GENIUS framework provides several pathways for institutions to become issuers, with regulatory oversight based on the issuer's legal entity structure. It will create opportunities for a wider range of issuers in the U.S., including non-banks. The requirements generally take effect in late 2026 or early 2027, depending in part on when regulators issue implementation guidance.

The GENIUS Act's requirements will affect not only banks, but also non-bank issuers, payment processors, digital asset service providers, corporates, and customers. Each group faces unique operational, compliance, and business model decisions as they navigate the new framework and position themselves in the evolving stablecoin market. Each group also needs to consider risks in areas such as operations, cybersecurity, fraud, tax, regulation, and reputation.

Subsidiaries of federally and state-chartered banks, non-bank entities, and uninsured national banks are among the entities eligible to become payment stablecoin issuers under the new regulatory framework. Issuers will be required to comply with prudential standards similar to those for traditional banks and to maintain one-to-one reserves of high-quality liquid assets, publish monthly attestation reports, and adhere to rigorous risk management and compliance measures. The act prohibits payment stablecoins from paying interest, although rewards may be offered by other parties. The requirements of the GENIUS Act will likely drive strategic decisions, significant operational changes, and new compliance processes regarding whether and how to participate in the stablecoin market.
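The act's one-to-one reserve requirement lends itself to a simple illustration. The sketch below checks whether hypothetical reserves of high-quality liquid assets cover the coins outstanding; the class, field names, and figures are illustrative assumptions, not the act's actual attestation mechanics.

```python
from dataclasses import dataclass

# Hypothetical reserve report; the GENIUS Act requires one-to-one reserves
# of high-quality liquid assets (e.g., cash and short-term Treasuries).
# Field names and figures are illustrative, not the act's reporting schema.
@dataclass
class ReserveReport:
    outstanding_stablecoins: float  # face value of coins in circulation
    cash: float
    short_term_treasuries: float

    def total_reserves(self) -> float:
        return self.cash + self.short_term_treasuries

    def is_fully_backed(self) -> bool:
        # One-to-one backing: reserves must cover every coin outstanding.
        return self.total_reserves() >= self.outstanding_stablecoins
```

A monthly attestation would, in effect, publish the result of a check like `is_fully_backed()` alongside a breakdown of the reserve assets.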
Companies that facilitate the transfer, settlement, or processing of payments also face strategic decisions about whether, when, and how to begin or scale payment stablecoin transactions. After evaluating the risks and opportunities, companies can take proactive measures to comply with the new regulatory framework now that payment stablecoins have been legitimized for use by financial institutions. Exchanges, custodians, and wallet providers will need to adjust their operations to meet the new regulatory standards, particularly related to risk management, reporting, and customer protection. They should prepare for the transition period by updating their compliance and risk management frameworks so that they only support or list stablecoins issued by entities that meet the new law’s standards. These providers should also monitor regulatory developments and adapt their offerings to remain compliant and competitive.
Verifone launches Commander Fleet to accept WEX and other fleet cards through a single POS; unifying heavy/light fleet and consumer transactions along with fleet data capture like odometer readings
Verifone announced the launch of Commander Fleet, a first-of-its-kind solution enabling existing Verifone customers to accept both WEX® and other leading commercial fleet (fuel) cards through a single POS integration. Commander Fleet consolidates fleet acceptance into one streamlined POS system, delivering significant operational efficiencies and a better experience for drivers. Powered by the trusted Verifone Commander platform, Commander Fleet supports heavy fleet, light fleet, and consumer transactions on one system, while meeting all required fleet-specific data capture needs, such as odometer readings. For unbranded sites, the solution integrates with Commander Payments, providing a complete payments package without added infrastructure.

By unifying fleet card processing, operators gain:
- Lower total cost of ownership – One device for all fleet and consumer card payments eliminates duplicate hardware, licenses, and service contracts, delivering measurable, long-term savings.
- Operational efficiency & faster throughput – A single, streamlined POS reduces toggling and checkout friction. Faster transactions move more trucks through the line and create more upsell opportunities.
- Simplified training & higher productivity – One system for all transactions means shorter employee training times, fewer errors, and smoother operations at every register.
- Unified reporting & actionable insights – A single view of all WEX activity enables faster reconciliation, smarter staffing, and data-driven promotions.
- Future-ready flexibility – Built on the Commander platform for easy expansion as fleet networks evolve, protecting your investment for years to come.
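As a rough illustration of what a unified fleet/consumer POS flow involves, the sketch below routes a card by BIN prefix and enforces fleet-specific data capture (an odometer prompt). The BIN prefixes, network names, and prompt rules are invented for illustration and are not Verifone's actual implementation.

```python
# Hypothetical sketch of single-POS routing for fleet and consumer cards.
# BIN prefixes and network names below are illustrative assumptions.
FLEET_BIN_PREFIXES = {"690046": "WEX", "707138": "Voyager"}

def classify_card(pan: str) -> str:
    # Route by BIN prefix; anything unmatched is a consumer card.
    for prefix, network in FLEET_BIN_PREFIXES.items():
        if pan.startswith(prefix):
            return network
    return "consumer"

def build_transaction(pan: str, amount: float, odometer=None) -> dict:
    network = classify_card(pan)
    txn = {"network": network, "amount": amount}
    if network != "consumer":
        # Fleet-specific data capture: the POS must prompt for an odometer.
        if odometer is None:
            raise ValueError("fleet card requires odometer reading")
        txn["odometer"] = odometer
    return txn
```

One device handling both paths is what eliminates the duplicate hardware and separate fleet terminal the article describes.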
Nacha updates International ACH Transactions (IAT) rules to boost cross‑border efficiency; adding date‑of‑birth for sanctions screening, clarifying the IAT definition and mandating IAT contacts
Nacha voting members approved five Nacha Operating Rules changes aimed at increasing the awareness and efficiency of International ACH Transactions, or IATs. "The U.S. ACH Network has supported the capability to send and receive cross-border ACH payments for decades," said Jane Larimer, Nacha President and CEO. "These Rules changes should make IATs easier and more efficient to use." The existing Nacha Rules for IATs became effective in 2009, replacing previous cross-border ACH payments with a new transaction rule set and format that enabled compliance with the Travel Rule and Office of Foreign Assets Control (OFAC) sanctions programs. Approximately 121 million IATs were made in 2024. One of the approved Rules refines the definition of an IAT with the goal of making it easier for ACH Originators and Originating Depository Financial Institutions (ODFIs) to determine whether a payment should be classified as an IAT. The other approved Rules are aimed at transaction and data efficiency. They include adding the capability to carry a person's date of birth for sanctions screening; adding a new return reason to indicate an issue with sanctions screening as distinct from other return reasons; recognizing the possibility that the financial agency outside the U.S. is a non-traditional account-holding institution or organization; and requiring U.S. financial institutions to register IAT-specific contacts in Nacha's ACH Contact Registry.
Experts suggest a multi-agent testing orchestration model, with specialized agents handling natural language understanding, test plan execution, application change detection with healing, and failure triage that automatically routes issues to developers
C-level executives want their companies to use AI agents to move faster, driving vendors to deliver AI agent-driven software, and every software delivery team is looking for ways to add agentic capabilities and automation to their development platforms. Some pundits speculate that by coding in parallel with copilots, developers could increase their code output by 10 times.

"The only purpose of adopting agents is productivity, and the unlock for that is verifiability," said David Colwell, vice president of artificial intelligence at Tricentis, an agentic AI-driven testing platform. "The best AI agent is not the one that can do the work the fastest. The best AI agent is the one that can prove that the work was done correctly the fastest."

"When you prompt AI to write a test, one agent will understand the user's natural language commands, and another will start to execute against that plan and write actions into the test, while another agent understands what changed in the application and how the test should be healed," said Andrew Doughty, founder and chief executive of SpotQA, creator of Virtuoso QA. "And then if there is a failure, an agent can look into the history of that test object, and then triage it automatically and send it over to developers to investigate."

"We've found that customers don't need large model-based AIs to do very specific testing tasks. You really want smaller models that have been tuned and trained to do specific tasks, with fine-grained context about the system under test to deliver consistent, meaningful results," said Matt Young, president, Functionize Inc.
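The multi-agent flow Doughty describes (natural-language understanding, execution, healing, and triage) can be sketched as a simple pipeline. Each agent below is a trivial stub standing in for a trained model; only the orchestration and routing logic is the point.

```python
# Stubbed sketch of the multi-agent testing pipeline: an NLU agent parses
# the request into steps, a healing agent rewrites stale selectors when the
# application changed, an execution agent runs the plan, and a triage agent
# routes failures to developers. All agent behavior here is trivially faked.

def nlu_agent(prompt: str) -> dict:
    # Turn a natural-language request into an ordered test plan.
    return {"steps": prompt.lower().split(" then ")}

def healing_agent(plan: dict) -> dict:
    # Repair steps that reference selectors the app no longer uses.
    plan["steps"] = [s.replace("old-id", "new-id") for s in plan["steps"]]
    return plan

def execution_agent(plan: dict) -> dict:
    # Fake runner: a step containing "fail" simulates a test failure.
    return {"passed": all("fail" not in s for s in plan["steps"])}

def triage_agent(result: dict) -> str:
    # Route failures to a developer; pass through on success.
    return "done" if result["passed"] else "route-to-developer"

def run_pipeline(prompt: str, app_changed: bool = False) -> str:
    plan = nlu_agent(prompt)
    if app_changed:
        plan = healing_agent(plan)
    return triage_agent(execution_agent(plan))
```

The design choice worth noting is that each stage has a single narrow job, which is exactly why, per Young's point, small task-tuned models can fill these roles.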
TD Securities’ ChatGPT-powered assistant cuts idea generation from hours to minutes by querying the bank’s proprietary research and also provides citations, summaries, text-to-SQL tables and charts for traders and sales desks
TD Securities launched an AI virtual assistant in June that lets traders query the bank's own equity research, and the early results look positive.

"It's a massive time save," Dan Bosman, chief information officer at TD Securities, told American Banker. "It's not unusual for people in the capital markets group to receive ten 20-page PDFs simultaneously in their email inbox."

"Even if you're the fastest reader and you're really skim reading, you have to take that first 30 minutes in the day to pore over that before you can make your first call," Bosman said. "Now with the tool, you're able to get those insights and make those calls within minutes, as opposed to waiting half an hour." Some people will still read the actual reports when they have time, he said.

"The real need is, we put out so much content and folks on a trade floor are constantly bombarded with news and signals," Bosman said. "Part of what we do is try to reduce that signal-to-noise ratio, getting the right signals to our sales traders and ultimately out to our clients as quickly as possible."

In the broader picture, this initiative is one of many at TD Bank Group, whose CEO Raymond Chun has set ambitious goals for AI at the firm. "Across TD, we're deploying the capabilities needed to drive speed, such as AI-powered virtual assistants, AI-enabled adjudication, predictive tools and new applications," Chun said at a recent investor day. "These new capabilities are already driving strong outcomes. We're approving mortgages in hours instead of days. We're pre-approving credit cards with data-driven insights for millions of clients. We're producing reports in minutes versus hours or days, and we're responding to clients in just a few seconds, significantly shortening call and wait times." The bank is aiming to get $1 billion in annual value from AI, half through revenue increases and half through cost savings.
TD now has 2,500 data scientists, engineers, data analysts and experts building proprietary platforms and applications. "This is huge," Chun said. "AI is fast becoming fundamental to business and to client experience." Elsewhere in TD Securities, Bosman's team has given software developers coding assistants. "A year ago, folks were saying 'we need this,' and now they're seeing it woven into their careers and what they're doing," he said.
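A minimal sketch of the retrieve-and-cite pattern behind an assistant like the one described: rank research snippets against a trader's query and return a summary with source citations. The report IDs, text, and naive keyword scoring below are illustrative stand-ins for the embedding search and generation a production system would use.

```python
# Toy research corpus; IDs and text are invented for illustration.
REPORTS = {
    "EQ-2024-17": "Semiconductor demand outlook remains strong on AI capex",
    "EQ-2024-18": "Retail margins compress as promotional activity rises",
}

def score(query: str, text: str) -> int:
    # Naive keyword overlap, standing in for embedding similarity.
    return len(set(query.lower().split()) & set(text.lower().split()))

def answer_with_citations(query: str, top_k: int = 1) -> dict:
    # Rank reports by relevance, then return the best snippets with
    # their source IDs so every claim is traceable to a report.
    ranked = sorted(REPORTS.items(), key=lambda kv: score(query, kv[1]),
                    reverse=True)
    best = ranked[:top_k]
    return {
        "summary": " ".join(text for _, text in best),
        "citations": [doc_id for doc_id, _ in best],
    }
```

Returning citations alongside the summary is what lets a trader spot-check the source report before making a call, which is the trust mechanism such tools depend on.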
Infinitus partners with Outshift by Cisco to leverage orchestration layer for discoverable AI agents built on MCP and A2A open protocols; streamlining prior authorization and insurance verifications that typically take 24-48 hours into seconds
Infinitus Systems announced that it has partnered with Outshift by Cisco to streamline healthcare operations, leveraging an orchestration layer of discoverable, secure, and distributed AI agents in healthcare. Healthcare clinicians and staff face a flood of time-consuming tasks, from patient communication and follow-ups to prior authorization and insurance verifications. Infinitus AI agents streamline these vital but labor-intensive processes – which can include many thousands of faxes, phone calls, and voicemail exchanges managed by overburdened staff – to improve patient access and outcomes. Once streamlined using Infinitus AI agents, tasks that typically take 24-48 hours can instead be completed in seconds to a couple of hours. As part of this collaboration, Infinitus has contributed to AGNTCY – an open-source framework under the Linux Foundation dedicated to advancing interoperability for the Internet of Agents. AGNTCY's mission is to establish shared standards, promote agent discoverability, and build trust among AI agents, enabling secure and coordinated multi-agent workflows in mission-critical environments like healthcare. By publishing its MCP implementation to the Agent Directory, Infinitus ensures its healthcare-focused agents can seamlessly integrate with agents from other organizations.
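To make agent discoverability concrete, the sketch below publishes a capability manifest to an in-memory directory and looks agents up by skill. The manifest fields, agent name, and registry functions are assumptions for illustration; the real AGNTCY Agent Directory and MCP schemas differ.

```python
# Illustrative agent manifest; field names and the agent name are
# hypothetical, not the actual AGNTCY or MCP schema.
manifest = {
    "name": "infinitus-benefit-verification",
    "protocol": "mcp",
    "skills": ["prior_authorization_status", "insurance_benefit_check"],
    "auth": {"type": "oauth2"},
}

def publish(directory: dict, m: dict) -> None:
    # Stand-in for a real registry call; entries are keyed by agent name.
    directory[m["name"]] = m

def discover(directory: dict, skill: str) -> list:
    # Return the names of all registered agents advertising the skill.
    return [name for name, m in directory.items() if skill in m["skills"]]

registry = {}
publish(registry, manifest)
```

The point of a shared directory is exactly this lookup step: an agent from another organization can find a benefit-verification agent by skill without a bespoke integration.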
Together AI announces ATLAS adaptive speculator system delivering up to 400% inference speedup, using a dual-speculator architecture that combines a heavyweight static model trained on broad data with a lightweight adaptive model learning continuously from live traffic patterns
Together AI announced research and a new system called ATLAS (AdapTive-LeArning Speculator System) that aims to help enterprises overcome the challenge of static speculators. The technique provides a self-learning inference optimization capability that can help to deliver up to 400% faster inference performance than a baseline level of performance available in existing inference technologies such as vLLM. The system addresses a critical problem: as AI workloads evolve, inference speeds degrade, even with specialized speculators in place.

ATLAS uses a dual-speculator architecture that combines stability with adaptation:
- The static speculator – A heavyweight model trained on broad data provides consistent baseline performance. It serves as a "speed floor."
- The adaptive speculator – A lightweight model learns continuously from live traffic. It specializes on-the-fly to emerging domains and usage patterns.
- The confidence-aware controller – An orchestration layer dynamically chooses which speculator to use. It adjusts the speculation "lookahead" based on confidence scores.

The technical innovation lies in balancing acceptance rate (how often the target model agrees with drafted tokens) and draft latency. As the adaptive model learns from traffic patterns, the controller relies more on the lightweight speculator and extends lookahead. This compounds performance gains. Together AI's testing shows ATLAS reaching 500 tokens per second on DeepSeek-V3.1 when fully adapted. More impressively, those numbers on Nvidia B200 GPUs match or exceed specialized inference chips like Groq's custom hardware.
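A confidence-aware controller of the kind described can be sketched as follows; the acceptance-rate threshold, traffic window, and lookahead schedule are illustrative assumptions, not Together AI's published parameters.

```python
# Sketch of a confidence-aware speculation controller: fall back to the
# static speculator as a speed floor, switch to the adaptive speculator
# once its recent acceptance rate beats the static baseline, and extend
# the lookahead (tokens drafted per verification step) as confidence grows.

class Controller:
    def __init__(self, static_acceptance=0.6, min_lookahead=2, max_lookahead=8):
        self.static_acceptance = static_acceptance  # baseline acceptance rate
        self.min_lookahead = min_lookahead
        self.max_lookahead = max_lookahead
        self.adaptive_history = []  # 1 = drafted token accepted by target model

    def record(self, accepted: bool) -> None:
        self.adaptive_history.append(1 if accepted else 0)

    def adaptive_acceptance(self) -> float:
        window = self.adaptive_history[-100:]  # recent live traffic only
        return sum(window) / len(window) if window else 0.0

    def choose(self):
        rate = self.adaptive_acceptance()
        if rate > self.static_acceptance:
            # Higher confidence -> draft more tokens per verification step.
            lookahead = min(self.max_lookahead,
                            self.min_lookahead + int((rate - self.static_acceptance) * 20))
            return "adaptive", lookahead
        return "static", self.min_lookahead  # the "speed floor"
```

This captures the compounding effect the article describes: as acceptance improves, the controller both prefers the cheaper speculator and drafts deeper per step.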
Reflection AI’s autonomous coding agent Asimov reads everything from emails to Slack messages, project notes to documentation, in addition to the code, to learn everything about how and why the app was created
AI startup Reflection AI has developed an autonomous agent known as Asimov. It has been trained to understand how software is created by ingesting not only code, but the entirety of a business’ data to try to piece together why an application or system does what it does. Co-founder and Chief Executive Misha Laskin said that Asimov reads everything from emails to Slack messages, project notes to documentation, in addition to the code, to learn everything about how and why the app was created. He explained that he believes this is the simplest and most natural way for AI agents to become masters at coding. Asimov is actually a collection of multiple smaller AI agents that are deployed inside customers’ cloud environments so that the data remains within their control. Asimov’s agents then cooperate with one another to try to understand the underlying code of whatever piece of software they’ve been assigned to, so they can answer any questions that human users might have about it. There are several smaller agents designed to retrieve the necessary data, and they work with a larger “reasoning” agent that collects all of their findings and tries to generate coherent answers to users’ questions.
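The retriever/reasoner split described above can be sketched minimally: small agents each query one data source (code, Slack), and a reasoning agent merges their findings. The sources, documents, and keyword matching are invented for illustration.

```python
# Toy data sources standing in for a customer's code and chat history.
SOURCES = {
    "code": {"auth.py": "Retries login 3 times before locking the account."},
    "slack": {"#eng": "We added the lockout after the 2022 credential-stuffing incident."},
}

def retriever_agent(source: str, query: str) -> list:
    # One small retriever per data source; naive keyword matching stands
    # in for whatever search each real retriever agent would use.
    return [text for text in SOURCES[source].values()
            if any(word in text.lower() for word in query.lower().split())]

def reasoning_agent(query: str) -> str:
    # Fan out to every retriever, then merge findings into one answer.
    findings = []
    for source in SOURCES:
        findings.extend(retriever_agent(source, query))
    return " ".join(findings)
```

The payoff of pulling from non-code sources is visible even in this toy: the "why" of the lockout behavior lives in the chat history, not the code.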