More Klarna customers are having trouble repaying their “buy now, pay later” loans, the short-term lender said. The disclosure corresponded with reports by lending platforms Bankrate and LendingTree, which cited an increasing share of all “BNPL” users saying they had fallen behind on payments. The late or missed installments are a sign of faltering financial health among a segment of the US population, some analysts say. This concern is consistent with previous research showing that consumers spend more when BNPL is offered at checkout and that BNPL use leads to an increase in overdraft fees and credit card interest and fees. Industry watchers point to consumers taking out loans they can’t afford to pay back as a top risk of BNPL use. Without credit bureaus keeping track of this new form of credit, there are fewer safeguards and less oversight. Justine Farrell, chair of the marketing department at the University of San Diego’s Knauss School of Business, said that when consumers aren’t able to make loan payments on time, it worsens the economic stress they’re already experiencing. “Consumers’ financial positions feel more spread thin than they have in a long time,” said Farrell, who studies consumer behavior and BNPL services. The Consumer Federation of America and other watchdog organizations have expressed concern about the rollback of BNPL regulation as the use of the loans continues to rise. “By taking a head-in-the-sand approach to the new universe of fintech loans, the new CFPB is once again favoring Big Tech at the expense of everyday people,” said Adam Rust, director of financial services at the Consumer Federation of America.
Dell leverages ‘customer zero’ model—using internal teams as first adopters—to refine AI services in real time and accelerate scalable enterprise deployment
Customer zero is becoming a strategic advantage in the age of AI-powered services. Enterprises deploying artificial intelligence at scale are learning that the real edge isn’t just in new tools; it’s in being their own first customer. This “customer zero” approach lets them test AI in-house, fine-tune it in real time and apply those insights externally. By embedding intelligence into workflows from day zero, they gain speed, precision and a repeatable model for real-world impact, according to Doug Schmitt, chief information officer of Dell Technologies Inc. and president of Dell Technologies Services Inc. Acting as customer zero gives Dell greater control over AI’s impact while grounding innovation in real business needs, not theory. By testing and refining AI internally, Dell builds credibility and sharpens its services-led approach, guiding customers from strategy to deployment. That firsthand experience helps reduce friction and deliver smoother, faster AI transformations across the enterprise, according to Scott Bils, vice president and general manager of product management, professional services, at Dell Technologies. “When you take a look at what we’re doing from a professional services standpoint, it’s really to help customers on their end-to-end journey and helping them drive AI transformation. It’s around helping them deploy,” he said. “We provide the consulting services and managed services around it all the way from day zero to day two plus. As we go through the journey internally at Dell, [it] provides us a tremendous amount of insight that we can then take to our customers and help accelerate their journey.” This internal-first mindset has also enabled Dell to digitize and refine its own processes, creating a blueprint that customers can follow. AI is layered into workflows that are already disciplined and well understood, allowing for both rapid experimentation and reliable results. That strong foundation of data, automation and process discipline provides fertile ground for scalable AI and LLM deployment, Schmitt noted. Customer zero is more than a model; it’s a mindset that blends internal accountability with innovation, giving organizations the confidence to build, test and deliver real-world AI outcomes. As AI factories mature and agents begin to automate decisions across the enterprise, that feedback loop between internal use and external delivery will be essential.
UnGPT.ai offers specialized tools for refining AI-generated text to appear more human-like, focusing on tone and flow enhancements
In the book “More Than Words,” writer-educator John Warner makes the case for renewing the concept of writing as a fundamentally human activity. Warner has argued before that much of what LLMs will destroy deserves to be destroyed. We should, he argues, take the advent of chatbots as “an opportunity to reconsider exactly what we value and why we value those things.” In education, writing has become performance rather than communication, and if we want students to simply follow a robotic algorithm to create a language product–well, that is exactly the task that an LLM is well-suited to perform. Warner encourages us to resist “technological determinism,” the argument that AI is inevitable and therefore we should neither resist nor regulate it, as well as the huge hype, the manufactured sense that this is the future and you must get on board. Warner also points out the constant tendency to anthropomorphize AI: even though it is a machine that does not think, understand or empathize, people constantly project those qualities onto it. Warner encourages renewing the sense of and appreciation for the human. And he calls on readers to explore their understanding of the field, in particular finding guides, people who have invested the time and study and thought to provide deeper insights into this growing field. At one point Warner responds to the notion that AI somehow improves on human work, noting that LLMs are machines. “To declare the machines superior means believing that what makes humans human is inherently inferior.” To those who argue that chatbots teamed up with humans will be able to create more, better, faster writing, Warner says no. “I’ll tell you why not. Because ChatGPT cannot write. Generating syntax is not the same thing as writing. Writing is an embodied act of thinking and feeling. Writing is communicating with intention. Yes, the existence of a product at the end of the process is an indicator that writing has happened, but by itself, it does not define what writing is or what it means to the writer or the audience for that writing.”
vLLM is an open-source library that functions as an inference server, forming a layer between Red Hat’s models and Intel’s accelerators and helping translate open-source code into efficient AI solutions
Red Hat Inc. and Intel Corp.’s collaboration is all about translating open source code into efficient AI solutions, including the use of a virtual large language model. vLLM is a library of open-source code that functions as an inference server, forming a layer between Red Hat’s models and Intel’s accelerators. “What we’re working with Red Hat to do is minimize that complexity, and what does the hardware architecture and what does all the infrastructure software look like, and make that kind of seamless,” said Chris Tobias, general manager of Americas technology leadership and platform ISV account team at Intel. “You can just worry about, ‘Hey, what kind of application do I want to go with, and what kind of business problem do I wanna solve?’ And then, ideally, that gets you into a cost-effective solution.” Intel and Red Hat have worked on a number of proofs of concept together, and Intel’s platforms are fully compatible with OpenShift AI and Red Hat Enterprise Linux AI. Their collaborations have so far seen success with customers hoping to adopt AI without breaking the bank, according to King. “Our POC framework has different technical use cases, and now that vLLM becomes more central and on the stage for Red Hat, we’re seeing a lot of interest for vLLM-based POCs from our customers,” he said. “[It’s] really simple for a model to be able to make itself ready day zero for how it can best run on an accelerator.”
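To make the “layer” idea concrete, here is a minimal sketch using vLLM’s offline Python API. The model name is illustrative, and any accelerator-specific configuration (for Intel hardware versus GPUs, for example) is omitted, so treat it as a conceptual example rather than a supported deployment recipe.

```python
# Minimal sketch: vLLM as the inference layer between an open model and the
# hardware. The application only supplies prompts and sampling parameters;
# batching and KV-cache management happen inside vLLM.
from vllm import LLM, SamplingParams

# Model name is illustrative; any compatible open model could be used.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain what an inference server does."], params)

for request_output in outputs:
    print(request_output.outputs[0].text)
```

The same library can also run as an OpenAI-compatible HTTP server, which is the mode typically used when it sits behind applications on a platform such as OpenShift AI.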
Some of the biggest U.S. banks are exploring whether to team up to issue a joint stablecoin: WSJ reports
Some of the biggest U.S. banks are exploring whether to team up to issue a joint stablecoin, The Wall Street Journal reported on Thursday. The conversations have so far involved companies co-owned by JPMorgan Chase, Bank of America, Citigroup, Wells Fargo and other large commercial banks, the report said, citing people familiar with the matter. However, the newspaper said that the bank consortium discussions are in early, conceptual stages and could change. Reuters could not immediately confirm the report. Citigroup, Bank of America and Wells Fargo declined to comment on the WSJ report, while JPMorgan did not respond to a Reuters request for comment outside of regular business hours. Stablecoins, a type of cryptocurrency designed to maintain a constant value, usually pegged to a fiat currency such as the U.S. dollar, are commonly used by crypto traders to move funds between tokens. One bank consortium possibility that has been discussed would be a model that lets other banks use the stablecoin, in addition to the co-owners of the Clearing House and Early Warning Services, the Journal said, citing unnamed sources. Some regional and community banks have also considered whether to pursue a separate stablecoin consortium, it added. U.S. President Donald Trump has promised to be the “crypto president,” popularizing crypto’s mainstream use in the U.S. He has said he backs crypto because it can improve the banking system and increase the dominance of the dollar.
Citi’s new PayTo offering lets institutional clients initiate account-to-account pull payments, enabling a transparent and instant process for clients
Citi announced that it is live with PayTo initiation, enabling its clients to access a faster, cost-effective and more secure alternative to credit cards, debit cards or direct debits. Through PayTo, Citi’s institutional clients can initiate account-to-account pull payments. This means clients’ customers can pay directly from their bank account, in real time, enabling a transparent and instant process for clients. PayTo offers Citi clients seamless reconciliation and fee reduction benefits as it reduces reliance on card fees and decreases the likelihood of chargebacks. PayTo can be used for everyday transactions such as in-app payments or e-commerce payments, outsourced payroll, utility bills, flight bookings, subscriptions and digital wallet top-ups. “Through PayTo, we’re truly giving our clients access to the future of payments. We anticipate strong take-up of this offer as clients welcome the benefits for themselves and their end customer,” said Kirstin Renner, Citi Australia and New Zealand Head of Treasury and Trade Solutions, Services. This offering is the latest in a suite of innovative solutions offered by Citi’s Services business, including Spring by Citi, an end-to-end digital payments service enabling e-commerce and B2B funds flow globally, and Real-Time Funding for cross-border transactions for corporate clients.
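For readers unfamiliar with pull payments, the sketch below illustrates the basic flow the article describes: the payer approves an agreement once, and the initiator can then debit the account in real time within the agreed limits. All type and function names are hypothetical; this is not Citi’s or the PayTo scheme’s actual API.

```python
# Conceptual sketch of a pull-payment flow: an authorised agreement first,
# then real-time account-to-account debits under that agreement.
from dataclasses import dataclass

@dataclass
class PaymentAgreement:
    """A customer-approved mandate letting an initiator pull from the customer's account."""
    agreement_id: str
    payer_account: str
    max_amount: float
    authorised: bool = False

def authorise(agreement: PaymentAgreement) -> PaymentAgreement:
    # In a real scheme the payer approves the agreement in their own banking app.
    agreement.authorised = True
    return agreement

def pull_payment(agreement: PaymentAgreement, amount: float) -> str:
    """Initiate a real-time account-to-account debit under an authorised agreement."""
    if not agreement.authorised:
        return "rejected: no authorised agreement"
    if amount > agreement.max_amount:
        return "rejected: exceeds agreed limit"
    return f"settled instantly: {amount:.2f} debited from {agreement.payer_account}"

mandate = authorise(PaymentAgreement("agr-001", "payer-acct-123", max_amount=500.0))
print(pull_payment(mandate, 49.95))
```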
TD to spend $1B over two years on compliance fixes, deploying machine learning to “increase investigative productivity” and adding reporting and controls for cash management activities
TD Bank Group plans to invest $1 billion over a two-year period to beef up its anti-money-laundering controls, after compliance failures led to historic regulatory penalties and handcuffed its U.S. growth. The bank is also juggling a new restructuring plan, the scaling back of its American business and growing economic uncertainty due to U.S. tariff policies. The company had previously projected spending $500 million on anti-money-laundering remediation efforts during the fiscal year that ends in October, as it upgrades its training, analysis capabilities and protocols. TD Chief Financial Officer Kelvin Tran told analysts that the bank expects similar investments in the fiscal year that ends in October 2026. “We wanted to give the Street a sense of what 2026 was going to look like,” Salom said. “The composition of spend might change a little bit. It might be a little less remediation, more validation work, more lookbacks, monitor costs, et cetera. … But we think the overall spend level is going to be similar.” Across the first two quarters of 2025, the bank has invested $196 million in its anti-money-laundering compliance efforts. Salom said there will be an uptick in those expenses in the back half of the year as the company delves “into the meat of our remediation delivery programs.” TD plans to deploy machine learning technology in the third quarter to “increase investigative productivity,” along with additional reporting and controls for cash management activities. The bank feels confident about its expense guidance for 2025 and 2026, and those costs will eventually decline “at some point in the future,” Salom said. TD also said that it’s on track to meet its previous projection of a 10% reduction in U.S. assets by the end of October. At the end of April, the U.S. bank had about $399 billion of assets, putting it below the $434 billion cap imposed by the Office of the Comptroller of the Currency. The bank sold or ran off about $11 billion in U.S. loans during the second quarter, and announced plans to wind down a $3 billion point-of-sale financing business that services third-party retailers in the U.S. TD also plowed ahead with plans to remix its bond portfolio by selling relatively low-yielding bonds to reinvest in higher-returning securities. Salom said the bank should meet its forecast of restructuring $50 billion of securities in the next few weeks. The bank expects to generate a benefit to net interest income of close to $500 million between November 2024 and October 2025, he said. “We think new CEO Raymond Chun is putting the bank on the right track,” wrote Maoyuan Chen, an equity analyst at Morningstar, in a note. “2025 will be a transitional year as TD is actively remediating its US anti-money-laundering system with elevated expenses and repositioning its US balance sheet for its asset cap growth limitations.”
JPMorganChase democratized employee access to gen AI, sidestepping the per-seat licensing costs that have been a roadblock to adoption
JPMorganChase was the first big bank to roll out generative AI to almost all of its employees through a portal called LLM Suite. As of mid-May, it’s being used by 200,000 people. “We think that AI has the potential to really deliver amazing scale and efficiency as well as client benefit,” said Teresa Heitsenrether, chief data and analytics officer. The bank, like many others, has used traditional AI and machine learning for years in areas like fraud detection, risk management and marketing. “But the big surprise really came with generative AI, which really opens up new possibilities for us,” Heitsenrether said. LLM Suite is an abstraction layer through which large language models like OpenAI’s GPT-4 are swapped in and out. The models are trained on proprietary JPMorganChase data. The bank’s lawyers use LLM Suite to analyze contracts. Bankers use it to prepare presentations for clients and to generate draft emails and reports. The project is “advanced in scope and ambition,” said Alex Jimenez, lead principal strategy consultant at Backbase. “Deploying a proprietary large language model at this scale is an industry-leading move. Unlike others, they aren’t just testing but embedding it deep into the daily workflows of bankers, compliance teams, technologists. The real advancement isn’t just the tech but the institutional integration.” This project is setting the tone for other banks, he said. “The rollout likely puts pressure on peer banks to accelerate or scale up their own gen AI initiatives. It is influencing vendor roadmaps and internal AI governance discussions across the industry.” The bank tests and vets new models for safety and security, as well as their applicability to different use cases, before bringing them into its LLM Suite. Some large language models are good at synthesis and reasoning, while others are good at coding or complex document analysis, Waldron said. Small models can be fine-tuned for specific tasks. Generative AI models generally have per-seat licensing costs, which can add up for a bank the size of JPMorganChase. “That’s been one of the roadblocks to widespread adoption, because business leaders naturally are asking the question up front, what’s the ROI for that particular person?” Waldron said. But because JPMorganChase built an internal platform, the only variable cost is compute, he said. If an employee doesn’t use it, the bank does not pay for it. “That value proposition turned out to be very desirable to business leaders,” Waldron said. For its overall adoption and use of AI, JPMorganChase has been at the top of Evident’s AI Index since the scorecard’s launch in 2023. Real-time, accurate data is important for these models to generate useful answers. The bank is gradually connecting its datasets to LLM Suite, including all of its news subscriptions and earnings transcript libraries. “When these get connected and distributed to the whole population, all of a sudden, employees can do things in an automated way that they could never do before,” Waldron said. (The bank will still pay for its news subscriptions, but for firmwide access rather than individual accounts.)
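The “abstraction layer” design described above can be pictured with a short, hypothetical sketch: callers ask for a capability, and the layer decides which vetted model serves it, so models can be swapped without changing the applications on top. The class names and backends are illustrative, not JPMorganChase’s actual implementation.

```python
# Illustrative sketch of an abstraction layer that lets backing LLMs be
# swapped per use case without touching the calling applications.
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Common interface every vetted model must implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedFrontierModel(ModelBackend):
    """Stand-in for a large hosted model used for synthesis and reasoning."""
    def complete(self, prompt: str) -> str:
        return f"[frontier-model answer to: {prompt}]"  # stubbed; a real API call would go here

class FineTunedSmallModel(ModelBackend):
    """Stand-in for a small model fine-tuned for one narrow task."""
    def complete(self, prompt: str) -> str:
        return f"[small-model answer to: {prompt}]"

class ModelRouter:
    """Routes each use case to whichever approved backend handles it best."""
    def __init__(self) -> None:
        self._backends: dict[str, ModelBackend] = {}

    def register(self, use_case: str, backend: ModelBackend) -> None:
        self._backends[use_case] = backend

    def ask(self, use_case: str, prompt: str) -> str:
        return self._backends[use_case].complete(prompt)

router = ModelRouter()
router.register("contract_analysis", HostedFrontierModel())
router.register("draft_email", FineTunedSmallModel())
print(router.ask("draft_email", "Draft a short follow-up note to a client."))
```

One consequence of this design, as the article notes, is that the marginal cost becomes compute rather than per-seat licenses: unused seats cost nothing because there is no seat.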
BOK Financial creates a content site offering timely articles and videos on economic and personal finance topics, contributing to higher levels of “earned media,” or exposure gained through social media sharing and other channels
Bankers are often reporters’ go-to sources for economic and personal finance coverage, and Sue Hermann, CMO of BOK Financial, parent of Bank of Oklahoma, thought the bank could get some direct benefit from that. The possibilities that open up when a bank deploys its experts to produce “brand journalism” excited Hermann. Not only can brand journalism deliver meaningful content to customers and potential customers, rather than the usual pabulum, she says, but it can begin to improve the flagging degree of trust that studies still show the industry suffers from. Today, BOK Financial produces “The Statement,” a content site offering timely articles and videos. The site features four sub-channels — “Your Money,” “Your Business,” “Perspectives” and “Community.” Since 2019, the bank’s team of internal experts and writers, along with freelance writers, has produced approximately 150 articles or videos annually. Hermann says the critical difference is “creating a need, rather than selling a thing. Not talking about checking accounts, but helping people understand the importance of long-term planning for their financial needs.” Brand journalism “is a long-term play and it takes a long time for some people to get on board,” says Megan Ryan, the bank’s director of content strategy. That includes not only superiors who want proof that the technique produces results, but also experts within the bank. She and Hermann say that often the best people on a given subject start out feeling that they’re just bankers, not media material. But the bank has tracked reader and viewer behavior in multiple ways, and Hermann says the content team is garnering results. The bank tracks return users, and Hermann says people come back for more articles and videos. (The bank filters bots and employees out of its figures.) In addition, The Statement contributes to higher levels of “earned media” — exposure gained through social media sharing and other channels. Building exposure for the bank this way beats pouring on email after email and then sending those who click through to a page about checking accounts (yes, this is a bugaboo for Hermann), she says. “There is huge value in delivering information in a way that isn’t salesy, because that aligns with our brand and developing long-term relationships — doing what’s best for the client,” says Hermann. Hermann says the bank learned early on that making a success of this technique takes dedication and regularity. Another helpful element is cross-pollination. Something setting BOK Financial apart from some other large banks is that both marketing and corporate communications report to Hermann as CMO. In the early days, the two functions tended not to leave their swim lanes, as Hermann calls the divide, but now more sharing of ideas and information about The Statement occurs. Meetings with line-of-business staff sometimes prompt marketing staff to ask what questions the bankers are hearing from their customers. This may pinpoint an issue, and then the right approach to address it has to be settled. The idea is not just to chime in along with other media, but to add the viewpoint of bank experts or a round-up informed by that expertise. Hermann and Ryan say it’s helpful to have professional journalists on the staff or as regular freelancers, because they are not only comfortable with the need to crank out articles on a timely basis, but also the ability to drop.
Capital One Auto Refinance division uses ‘Swiss cheese’ approach to fraud prevention – a combination of risk prevention software and alternative data used to verify transactions
Capital One is using a “Swiss cheese” approach, in which a combination of risk prevention software and alternative data is used to verify transactions, Head of Auto Refinance Allison Qin said. “You have to have a multilayered approach,” Qin said. “One slice might have a hole in it, but if you have 20 slices stacked up, you’re less likely to make it through the stack of cheese.” The lender uses Capital One credit card transaction history and biometric data in its proprietary fraud prevention models, Qin said. Auto lenders’ total estimated loss exposure from fraud reached $9.2 billion in 2024, a 16.5% year-over-year rise, according to risk management platform Point Predictive’s March 25 report. “Fraud is continuously evolving and getting harder to spot, so it’s imperative that dealers and lenders work together to solve [industry fraud],” Qin said. Like lenders, dealerships are implementing multiple fraud prevention systems. Morgan Automotive Group, with more than 75 retail locations in the state, uses an “eyes wide open” approach in which dealers are vigilant about identifying scams, Justin Buzzell, finance vice president of the group, said. For Morgan Automotive Group, those protections include a red flags check, which looks at customer identification; a Department of Highway Safety and Motor Vehicles check; a synthetic fraud check, which looks for mixes of real and fake information; a biometric scan; and video records of all interactions with customers to show “we’ve done everything we could.” “If you pass all of that, we’ll sell you a car,” Buzzell said. Dealerships and lenders agree that notifying each other about fraudulent encounters helps the industry; however, there’s no easy place to do that yet, West American Loan Chief Executive and President Sean Murphy said. If a centralized portal, similar to e-contracting platform RouteOne, allowed dealers and lenders to share potential fraud signs, industry players could work together to stop scams, Murphy said.
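The layered defense Qin and Buzzell describe can be illustrated with a small, hypothetical sketch: each check is fallible on its own, but an application must clear every layer before a deal proceeds. The check functions and fields below are invented for illustration and are not any lender’s or dealer’s actual system.

```python
# Illustrative sketch of a layered ("Swiss cheese") fraud screen: each
# independent check can have a "hole," but an application must pass every
# layer before it proceeds. Check names and fields are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Application:
    name: str
    identity_data_consistent: bool
    dmv_record_found: bool
    biometric_match: bool

def red_flags_check(app: Application) -> bool:
    return app.identity_data_consistent   # identity details are internally consistent

def dmv_check(app: Application) -> bool:
    return app.dmv_record_found           # a state motor-vehicle record exists

def biometric_check(app: Application) -> bool:
    return app.biometric_match            # live scan matches the ID on file

LAYERS: list[Callable[[Application], bool]] = [red_flags_check, dmv_check, biometric_check]

def passes_all_layers(app: Application) -> bool:
    """Approve only if every layer passes; a single failure stops the deal."""
    return all(layer(app) for layer in LAYERS)

applicant = Application("J. Doe", identity_data_consistent=True,
                        dmv_record_found=True, biometric_match=False)
print(passes_all_layers(applicant))  # False: the biometric layer caught it
```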