Red Hat Inc. and Intel Corp.’s collaboration is all about translating open-source code into efficient AI solutions, including the use of vLLM, an open-source library that functions as an inference server, forming a layer between Red Hat’s models and Intel’s accelerators. “What we’re working with Red Hat to do is minimize that complexity, and what does the hardware architecture and what does all the infrastructure software look like, and make that kind of seamless,” said Chris Tobias, general manager of Americas technology leadership and platform ISV account team at Intel. “You can just worry about, ‘Hey, what kind of application do I want to go with, and what kind of business problem do I wanna solve?’ And then, ideally, that gets you into a cost-effective solution.” Intel and Red Hat have worked on a number of proofs of concept together, and Intel hardware is fully compatible with OpenShift AI and Red Hat Enterprise Linux AI. Their collaborations have so far seen success with customers hoping to adopt AI without breaking the bank, according to King. “Our POC framework has different technical use cases, and now that vLLM becomes more central and on the stage for Red Hat, we’re seeing a lot of interest for vLLM-based POCs from our customers,” he said. “[It’s] really simple for a model to be able to make itself ready day zero for how it can best run on an accelerator.”
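Part of what makes vLLM work as a layer between models and accelerators is that it exposes an OpenAI-compatible HTTP API, so any client that can build a standard chat-completion request can talk to whatever model the server hosts. A minimal sketch of constructing such a request, assuming an illustrative local endpoint and model name (neither comes from the article):

```python
import json

# vLLM serves an OpenAI-compatible REST API, so a request to a model hosted
# behind it looks like a standard chat-completion call. The endpoint URL and
# model name below are illustrative placeholders.
VLLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> str:
    """Serialize an OpenAI-style chat-completion request body for a vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Summarize our Q2 results.")
```

Because the request shape is the de facto standard, swapping the model behind the server requires no client-side changes, which is the "seamless" quality Tobias describes.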
Some of the biggest U.S. banks are exploring whether to team up to issue a joint stablecoin: WSJ reports
Some of the biggest U.S. banks are exploring whether to team up to issue a joint stablecoin, The Wall Street Journal reported on Thursday. The conversations have so far involved companies co-owned by JPMorgan Chase, Bank of America, Citigroup, Wells Fargo and other large commercial banks, the report said, citing people familiar with the matter. However, the newspaper said that the bank consortium discussions are in early, conceptual stages and could change. Reuters could not immediately confirm the report. Citigroup, Bank of America and Wells Fargo declined to comment on the WSJ report, while JPMorgan did not respond to a Reuters request for comment outside of regular business hours. Stablecoins, a type of cryptocurrency designed to maintain a constant value, usually pegged to a fiat currency such as the U.S. dollar, are commonly used by crypto traders to move funds between tokens. One consortium possibility that has been discussed would be a model that lets other banks use the stablecoin, in addition to the co-owners of the Clearing House and Early Warning Services, the Journal said, citing unnamed sources. Some regional and community banks have also considered whether to pursue a separate stablecoin consortium, it added. U.S. President Donald Trump has promised to be the “crypto president,” popularizing mainstream crypto use in the U.S. He has said he backs crypto because it can improve the banking system and increase the dominance of the dollar. Reporting by Pretish M J in Bengaluru; Editing by Sonia Cheema.
Citi’s new PayTo lets institutional clients initiate account-to-account pull payments, enabling a transparent and instant process for clients
Citi announced that it is live with PayTo initiator, enabling its clients to access a faster, cost-effective, and more secure alternative to credit cards, debit cards or direct debits. Through PayTo, Citi’s institutional clients can initiate account-to-account pull payments. This means clients’ customers can pay directly from their bank account, in real-time, enabling a transparent and instant process for clients. PayTo offers Citi clients seamless reconciliation and fee reduction benefits as it reduces reliance on card fees and decreases the likelihood of chargebacks. PayTo can be used for everyday transactions such as in-app payments or e-commerce payments, outsourced payroll, utility bills, flight bookings, subscriptions and digital wallet top-ups. “Through PayTo, we’re truly giving our clients access to the future of payments. We anticipate strong take-up of this offer as clients welcome the benefits for themselves and their end customer,” said Kirstin Renner, Citi Australia and New Zealand Head of Treasury and Trade Solutions, Services. This offering is the latest in a suite of innovative solutions offered by Citi’s Services business, including Spring by Citi, an end-to-end digital payments service enabling e-commerce and B2B funds flow globally, and Real-Time Funding for cross-border transactions for corporate clients.
TD to spend $1B in a two-year span on compliance fixes, deploying machine learning to “increase investigative productivity” and adding reporting and controls for cash management activities
TD Bank Group plans to invest $1 billion over a two-year period to beef up its anti-money-laundering controls, after compliance failures led to historic regulatory penalties and handcuffed its U.S. growth. The bank is also juggling a new restructuring plan, the scaling back of its American business and growing economic uncertainty due to U.S. tariff policies. The company had previously projected spending $500 million on anti-money-laundering remediation efforts during the fiscal year that ends in October, as it upgrades its training, analysis capabilities and protocols. TD Chief Financial Officer Kelvin Tran told analysts that the bank expects similar investments in the fiscal year that ends in October 2026. “We wanted to give the Street a sense of what 2026 was going to look like,” Salom said. “The composition of spend might change a little bit. It might be a little less remediation, more validation work, more lookbacks, monitor costs, et cetera. … But we think the overall spend level is going to be similar.” Across the first two quarters of 2025, the bank invested $196 million in anti-money-laundering compliance efforts. Salom said there will be an uptick in those expenses in the back half of the year as the company delves “into the meat of our remediation delivery programs.” TD plans to deploy machine learning technology in the third quarter to “increase investigative productivity,” along with additional reporting and controls for cash management activities. The bank feels confident about its expense guidance for 2025 and 2026, and those costs will eventually decline “at some point in the future,” Salom said. TD also said that it’s on track to meet its previous projection of a 10% reduction in U.S. assets by the end of October. At the end of April, the U.S. bank had about $399 billion of assets, putting it below the $434 billion cap imposed by the Office of the Comptroller of the Currency. The bank sold or ran off about $11 billion in U.S. loans during the second quarter, and announced plans to wind down a $3 billion point-of-sale financing business that services third-party retailers in the U.S. TD also plowed ahead with plans to remix its bond portfolio by selling relatively low-yielding bonds to reinvest in higher-returning securities. Salom said the bank should meet its forecast of restructuring $50 billion of securities in the next few weeks. The bank expects to generate a benefit to net interest income of close to $500 million between November 2024 and October 2025, he said. “We think new CEO Raymond Chun is putting the bank on the right track,” wrote Maoyuan Chen, an equity analyst at Morningstar, in a note. “2025 will be a transitional year as TD is actively remediating its US anti-money-laundering system with elevated expenses and repositioning its US balance sheet for its asset cap growth limitations.”
JPMorganChase democratized employee access to gen AI, but the per-seat licensing cost model is a roadblock
JPMorganChase was the first big bank to roll out generative AI to almost all of its employees through a portal called LLM Suite. As of mid-May, it’s being used by 200,000 people. “We think that AI has the potential to really deliver amazing scale and efficiency as well as client benefit,” said Teresa Heitsenrether, chief data and analytics officer. The bank, like many others, has used traditional AI and machine learning for years in areas like fraud detection, risk management and marketing. “But the big surprise really came with generative AI, which really opens up new possibilities for us,” Heitsenrether said. LLM Suite is an abstraction layer through which large language models like OpenAI’s GPT-4 are swapped in and out. The models are trained on proprietary JPMorganChase data. The bank’s lawyers use LLM Suite to analyze contracts. Bankers use it to prepare presentations for clients and to generate draft emails and reports. The project is “advanced in scope and ambition,” said Alex Jimenez, lead principal strategy consultant at Backbase. “Deploying a proprietary large language model at this scale is an industry-leading move. Unlike others, they aren’t just testing but embedding it deep into the daily workflows of bankers, compliance teams, technologists. The real advancement isn’t just the tech but the institutional integration.” This project is setting the tone for other banks, he said. “The rollout likely puts pressure on peer banks to accelerate or scale up their own gen AI initiatives. It is influencing vendor roadmaps and internal AI governance discussions across the industry.” The bank tests and vets new models for safety and security, as well as their applicability to different use cases, before bringing them into its LLM Suite. Some large language models are good at synthesis and reasoning, while others are good at coding or complex document analysis, Waldron said. Small models can be fine-tuned for specific tasks.
Generative AI models generally have per-seat licensing costs, which can add up for a bank the size of JPMorganChase. “That’s been one of the roadblocks to widespread adoption, because business leaders naturally are asking the question up front, what’s the ROI for that particular person?” Waldron said. But because JPMorganChase built an internal platform, the only variable cost is compute, he said. If an employee doesn’t use it, the bank does not pay for it. “That value proposition turned out to be very desirable to business leaders,” Waldron said. For its overall adoption and use of AI, JPMorganChase has been at the top of Evident’s AI Index since the scorecard’s launch in 2023. Real-time, accurate data is important for these models to generate useful answers. The bank is gradually connecting its datasets to LLM Suite, including all of its news subscriptions and earnings transcript libraries. “When these get connected and distributed to the whole population, all of a sudden, employees can do things in an automated way that they could never do before,” Waldron said. (The bank will still pay for its news subscriptions, but for firmwide access rather than individual accounts.)
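The economics described above, where the only variable cost is compute actually consumed, follow from putting one internal gateway in front of interchangeable models and metering usage per call. A minimal sketch of that pattern, with hypothetical model names and a hypothetical price table (none of these values come from the article):

```python
from collections import defaultdict

# Illustrative task-to-model routing table: models can be swapped in and out
# (the abstraction-layer idea) by editing this mapping, not the callers.
MODEL_FOR_TASK = {
    "synthesis": "model-a",
    "coding": "model-b",
    "doc-analysis": "model-c",
}
# Hypothetical compute prices per 1,000 tokens.
COST_PER_1K_TOKENS = {"model-a": 0.50, "model-b": 0.30, "model-c": 0.80}

class LLMGateway:
    """One internal gateway in front of interchangeable models."""

    def __init__(self) -> None:
        self.usage = defaultdict(float)  # tokens billed per model

    def route(self, task: str) -> str:
        """Pick a model by task type (synthesis vs. coding vs. documents)."""
        return MODEL_FOR_TASK[task]

    def record_call(self, task: str, tokens: int) -> float:
        """Meter compute only when a call happens; idle seats cost nothing."""
        model = self.route(task)
        self.usage[model] += tokens
        return tokens / 1000 * COST_PER_1K_TOKENS[model]
```

An employee who never calls the gateway generates no entries in `usage`, which is the per-use cost profile contrasted with per-seat licensing.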
BOK Financial creates a content site offering timely articles and videos on economic and personal finance, contributing to higher levels of “earned media” — exposure gained through social media sharing and other channels
Bankers are often reporters’ go-to sources for economic and personal finance coverage, and Sue Hermann, CMO at BOK Financial, parent of Bank of Oklahoma, thought the bank could get some direct benefit from that. The possibilities that open up when a bank deploys its experts to produce “brand journalism” excited her. Not only can brand journalism deliver meaningful content to customers and potential customers, rather than the usual pabulum, she says, but it can begin to improve the flagging degree of trust that studies still show the industry suffers from. Today, BOK Financial produces “The Statement,” a content site offering timely articles and videos. The site features four sub-channels — “Your Money,” “Your Business,” “Perspectives” and “Community.” Since 2019, the bank’s team of internal experts and writers, along with freelance writers, has produced approximately 150 articles or videos annually. Hermann says the critical difference is “creating a need, rather than selling a thing. Not talking about checking accounts, but helping people understand the importance of long-term planning for their financial needs.” Brand journalism “is a long-term play and it takes a long time for some people to get on board,” says Megan Ryan, the bank’s director of content strategy. That includes not only superiors who want proof that the technique produces results, but even experts within the bank. She and Hermann say that often the best people on a given subject start out feeling that they’re just bankers, not media material. But the bank has tracked reader and viewer behavior in multiple ways, and Hermann says the content team is garnering results. The bank tracks return users, and Hermann says people come back for more articles and videos. (The bank filters bots and employees out of its figures.) In addition, The Statement contributes to higher levels of “earned media” — exposure gained through social media sharing and other channels.
Building exposure for the bank this way beats pouring on email after email and then sending those who click through to a page about checking (yes, this is a bugaboo for Hermann), she says. “There is huge value in delivering information in a way that isn’t salesy, because that aligns with our brand and developing long-term relationships — doing what’s best for the client,” says Hermann. The bank learned early on that making a success of this technique takes dedication and regularity. Another helpful element is cross-pollination. Something setting BOK Financial apart from some other large banks is that both marketing and corporate communications report to Hermann as CMO. In the early days, the two functions tended not to leave their swim lanes, as Hermann calls the divide, but now more sharing of ideas and information regarding The Statement occurs. Meetings with line-of-business staff sometimes prompt marketing staff to ask what questions the bankers are hearing from their customers. This can pinpoint an issue; the right approach to address it then has to be settled. The idea is not just to chime along with other media, but to add the viewpoint of bank experts or a round-up informed by that expertise. Hermann and Ryan say it’s helpful to have professional journalists on staff or as regular freelancers, because they are comfortable with the need to crank out articles on a timely basis.
Capital One Auto Refinance division uses ‘Swiss cheese’ approach to fraud prevention – a combination of risk prevention software and alternative data used to verify transactions
Capital One is using a “Swiss cheese” approach, in which a combination of risk prevention software and alternative data is used to verify transactions, said Allison Qin, head of auto refinance. “You have to have a multilayered approach. One slice might have a hole in it, but if you have 20 slices stacked up, you’re less likely to make it through the stack of cheese,” she said. The lender uses Capital One credit card transaction history and biometric data in its proprietary fraud prevention models, Qin said. Auto lenders’ total estimated loss exposure from fraud reached $9.2 billion in 2024, a 16.5% year-over-year rise, according to risk management platform Point Predictive’s March 25 report. “Fraud is continuously evolving and getting harder to spot, so it’s imperative that dealers and lenders work together to solve [industry fraud],” Qin said. Like lenders, dealerships are implementing multiple fraud prevention systems. Morgan Automotive Group, with more than 75 retail locations in the state, uses an “eyes wide open” approach in which dealers are vigilant about identifying scams, said Justin Buzzell, the group’s vice president of finance. For Morgan Automotive Group, those protections include: a red flags check, which looks at customer identification; a Department of Highway Safety and Motor Vehicles check; a synthetic fraud check, which looks for mixes of real and fake information; a biometric scan; and video records of all interactions with customers to show “we’ve done everything we could.” “If you pass all of that, we’ll sell you a car,” Buzzell said. Dealerships and lenders agree that notifying each other about fraudulent encounters helps the industry; however, there’s no easy place to do that yet, West American Loan Chief Executive and President Sean Murphy said. If a centralized portal, similar to e-contracting platform RouteOne, allowed dealers and lenders to share potential fraud signs, industry players could work together to stop scams, Murphy said.
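The layered "Swiss cheese" screening described above can be sketched as a pipeline of independent checks, where an application proceeds only if every layer passes. The logic inside each check is an illustrative stand-in, not any lender's or dealer's actual rules:

```python
# Each layer is an independent check; a fraudster must slip through every one.
# Field names and pass/fail logic are hypothetical illustrations.
def red_flags_check(app: dict) -> bool:
    """Layer 1: customer identification red flags."""
    return bool(app.get("id_verified"))

def dmv_check(app: dict) -> bool:
    """Layer 2: Department of Highway Safety and Motor Vehicles record check."""
    return bool(app.get("dmv_record_matches"))

def synthetic_fraud_check(app: dict) -> bool:
    """Layer 3: look for mixes of real and fake information."""
    return not app.get("identity_elements_conflict", False)

def biometric_check(app: dict) -> bool:
    """Layer 4: biometric scan result."""
    return bool(app.get("biometric_match"))

LAYERS = [red_flags_check, dmv_check, synthetic_fraud_check, biometric_check]

def screen(app: dict) -> bool:
    """Approve only if the application passes every independent layer."""
    return all(layer(app) for layer in LAYERS)
```

One layer with a hole (a check that a given scam evades) is not fatal, because the remaining layers must still all be passed, which is the point of Qin's stacked-slices metaphor.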
Google’s new AI Mode threatens the traditional internet: it leads to fewer clicks on search results, which harms websites that need ad revenue
Google is unveiling its “AI Mode,” where instead of a list of search results, the user gets a model-generated summary of whatever was typed into the search box. A demo video from Google shows people asking various in-depth questions about a popsicle bridge, homebuying and other issues, and the new search engine feature replying with cogent, multi-paragraph answers. However, Googling the kinds of vague phrases people traditionally used brings muddled results in AI Mode, as when I supplied the phrase “big dress,” which generated the following: “You seem to be interested in the term big dress, which can have several meanings,” along with information on everything from voluminous ballgowns to Guinness World Records. Basically, AI results lead to fewer clicks on the search engine results page (SERP), which harms websites that need ad revenue. Zero-click searches, where users simply take the AI answer, don’t bring web traffic to other parts of the internet. Google CEO Sundar Pichai called AI Mode a “total reimagining of search.” “What all this progress means is that we are in a new phase of the AI platform shift, where decades of research are now becoming reality for people all over the world,” Pichai said in a press statement.
Trustworthy agentic AI is the next enterprise mandate: Accenture and ServiceNow’s platform enables development of enterprise-grade, trustworthy agentic AI using a zero-trust framework that enforces continuous verification and robust security measures throughout the AI lifecycle
Accenture PLC and ServiceNow are co-developing intelligent platforms that embed security, autonomy and interoperability at the core to deliver enterprise-grade AI agents that organizations not only use — but rely on. This partnership is redefining how trust is built into the very fabric of AI systems, according to Dave Kanter, senior managing director and Global ServiceNow Business Group lead at Accenture. Accenture’s AI Agent Zero Trust Model for Cyber Resilience is a cybersecurity framework developed to ensure the secure deployment and operation of AI agents within enterprise environments. It builds on the core principles of zero trust — where nothing is inherently trusted and everything must be verified. This model supports the development of trustworthy agentic AI by enforcing continuous verification and robust security measures throughout the AI lifecycle, according to Kanter. “You’ll hear our team talking about one of their favorite use cases — they call it Agent Zero,” he said. “This use case, which is a complex workflow with decision-making, with intent-based guardrails, we can power that with the agentic AI. The first full-on agents built on the AI studio will be Agent Zero for three or four of our clients, and we expect those to go live in just a matter of days.” Unlike traditional AI, which typically follows predefined instructions or responds to prompts, agentic AI exhibits higher levels of independence, adaptability and initiative. As a result, it has the potential to redefine the workforce, Jacqui Canney, chief people and AI enablement officer at ServiceNow, pointed out.
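The zero-trust principle described above, where nothing is inherently trusted and every action is verified, can be sketched as a gate that re-checks an agent's credentials and policy on every call. The agent names and policy table here are illustrative assumptions, not details of Accenture's actual model:

```python
# Hypothetical intent-based policy: each agent identity is allowed only a
# fixed set of actions, and nothing is trusted by default.
ALLOWED_ACTIONS = {
    "ticket-triage-agent": {"read_ticket", "classify_ticket", "route_ticket"},
}

def verify(agent_id: str, action: str, token_valid: bool) -> bool:
    """Continuous verification: re-check identity and policy on every call."""
    if not token_valid:  # credentials are validated each time, never cached as trust
        return False
    return action in ALLOWED_ACTIONS.get(agent_id, set())

def execute(agent_id: str, action: str, token_valid: bool) -> str:
    """Run an agent action only after it passes the zero-trust gate."""
    if not verify(agent_id, action, token_valid):
        raise PermissionError(f"{agent_id} not verified for {action}")
    return f"{action} executed"
```

The key design choice is that `verify` runs on every action rather than once at session start, so an agent drifting outside its intent-based guardrails is blocked immediately.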
Combining agentic workflows with APIs enables scaling enterprise AI by abstracting hardware and infrastructure complexities and offering a modular, collaborative way to integrate seamlessly across diverse environments
Agentic workflows are fast becoming the backbone of enterprise AI, enabling scalable automation that bridges on-prem systems and the cloud without adding complexity. Organizations adopting agentic workflows are increasingly turning to standard APIs and open-source platforms to simplify the deployment of AI at scale. By abstracting the hardware and infrastructure complexities, these workflows allow for seamless integration across diverse environments, giving companies the flexibility to shift workloads without rewriting code, according to Chris Branch, AI strategy sales manager at Intel. “With the agentic workflow combined with APIs, what you can do then is have a dashboard that runs multiple models simultaneously. What that agentic workflow with these APIs allows is for companies to run those on different systems at different times in different locations without changing any of their code.” This modularity also extends to inference use cases, such as chat interfaces, defect detection and IT automation. Each task might leverage a different AI model or compute resource, but with agentic workflows, they can all operate within a unified dashboard. Standards such as the Llama and OpenAI APIs are central to enabling this level of fluidity and collaboration between agents, according to Mungara. At the foundation of this vision is the Open Platform for Enterprise AI, which provides infrastructure-agnostic building blocks for generative AI. Supported by contributions from Advanced Micro Devices Inc., Neo4j Inc., Infosys Ltd. and others, OPEA allows enterprises to rapidly test, validate and deploy scalable solutions across cloud and on-prem infrastructure, Branch explained.
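The portability Branch describes follows from targeting a standard (OpenAI-style) API: moving a workload between systems becomes a configuration change rather than a code change. A minimal sketch, with placeholder endpoint URLs and model names that are illustrative assumptions:

```python
# Hypothetical backend registry: the same workflow code can run against an
# on-prem server or a cloud service because both speak the same API shape.
BACKENDS = {
    "on-prem": {"base_url": "http://llm.internal:8000/v1", "model": "local-model"},
    "cloud": {"base_url": "https://api.example.com/v1", "model": "hosted-model"},
}

def make_request(backend: str, prompt: str) -> dict:
    """Build an identical OpenAI-style request regardless of where it runs."""
    cfg = BACKENDS[backend]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Shifting a chat, defect-detection, or IT-automation task between locations then means changing the `backend` key, which is what lets a single dashboard run multiple models on different systems without touching workflow code.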