BMO Financial Group is shuffling leadership roles in its North American personal and business banking group and hiring a longtime Bank of America executive to oversee three stateside businesses.

Aron Levine will serve as group head and president of BMO U.S. He will lead the company’s U.S. personal and business banking, commercial banking and wealth management units, the bank said. Levine will join BMO’s executive committee and its U.S. management group. He will be based in Chicago, where BMO’s U.S. operations are headquartered, and he will report to BMO CEO Darryl White and BMO U.S. CEO Darrel Hackett.

Levine’s hiring is one of several executive changes that BMO announced Thursday, most of which will become effective on July 7. They appear to be driven at least in part by the pending retirement of Erminia “Ernie” Johannson, the head of BMO’s North American personal and business banking group since 2020. Johannson plans to step down from that role in early 2026, the bank said.

Johannson, a frequent American Banker Most Powerful Women in Banking honoree who was No. 9 on last year’s list, has played a key role in BMO’s U.S. expansion, most recently leading the 2023 integration of the company’s acquisition of Bank of the West. The deal catapulted BMO into the ranks of the top 20 banks in the nation based on assets.

With Johannson’s expected exit, BMO is divvying up her responsibilities. Sharon Haward-Laird will be promoted to group head of Canadian commercial banking and North American shared services, as well as co-head of Canadian personal and commercial banking. Mat Mehrotra will become group head of Canadian personal and business banking, and will join Haward-Laird as co-head of Canadian personal and commercial banking, according to the company. Haward-Laird is currently BMO’s general counsel and will remain on the company’s executive committee after starting in her new role.
Mehrotra, who is currently BMO’s chief digital officer and head of Canadian products, will join the executive committee. They both will report to White.

Nadim Hirji will be promoted to vice chair of BMO commercial banking. Hirji has overseen BMO’s North American commercial banking business since 2023. His new role will focus on growth initiatives in commercial banking in Canada and the U.S.

Mona Malone, BMO’s chief human resources officer and a member of the executive committee, will gain the additional title of chief administrative officer. Malone will lead marketing, communications, human resources, corporate real estate and procurement. She will report to White and also serve on the U.S. management committee.

Paul Noble, who is currently BMO’s chief legal officer, will succeed Haward-Laird as general counsel and become group head of legal and regulatory compliance. He will join BMO’s executive committee and report to White.
KeyBank’s clients have expressed interest in a variety of digital currencies and stablecoins; mostly, they want KeyBank to hold those assets
The largest banks are moving ahead with stablecoins. JPMorganChase was first with JPM Coin, which is pegged to the U.S. dollar and used to process $1 billion of payments daily. Bank of America has said it’s planning to offer a fiat-backed stablecoin. Citi already has digital tokens it uses to transfer money internally across geographies; it is said to be considering issuing its own stablecoin as well.

Many smaller banks laid plans for bitcoin custody a few years ago; those plans were kiboshed by regulators, and the banks remain gun-shy. Executives at Pathward and KeyBank both said their customers are asking about digital assets.

“We’ve seen a lot of demand for stablecoin, especially as it relates to business to business and business to consumer payments, especially cross border use cases,” said Will Sowell, divisional president of banking as a service at Pathward Bank. “Our approach has been to build on ramps and off ramps for that. As a sponsor bank, there’s a large opportunity to participate.”

KeyBank’s clients have expressed interest in a variety of digital currencies. Mostly they want KeyBank to hold those assets, according to Bennie Pennington, senior vice president, embedded finance at KeyBank.

Leaders at Royal Business Bank and Piermont Bank struck a more cautious note. “We are monitoring the digital asset space,” said Rodrigo Suarez, chief banking officer at Piermont Bank. “We’re not necessarily planning anything specific right now. We need to make sure that what we’re doing is relevant, but not necessarily just following a trend.”

Gary Fan, chief operating officer at Royal Business Bank, also said his bank is monitoring the digital asset space. “We have to see how the regulatory agencies play out,” Fan said. “We do have the regulatory agencies above us, and more or less those people are uncomfortable. It’s very, very difficult for us to enter new areas like that.
For us specifically, we’re looking at it, but it’s probably not one of the top one or two priorities that we’re going to work on in the next 12 months.” Dara Tarkowski, managing partner at Actuate Law, said a wait-and-see approach makes sense from a legal perspective. “When you have a lack of a true regulatory framework, and you still have states breathing down your neck, banks need to always go to find their own true North Star,” she said. Banks need to go back to safety and soundness principles, and create policies and procedures that will safeguard customers, she said.
Morgan Stanley launches “name, image and likeness” (NIL) financial education program to explore the complex planning questions that come with NIL pay to student athletes
Morgan Stanley Global Sports & Entertainment and “name, image and likeness” technology and management firm TheLinkU are teaming up with National Collegiate Athletic Association conferences and university athletic departments to deliver financial education to student athletes.

The program is launching as athlete compensation, which ranges from thousands of dollars to several million, drives more efforts to boost financial literacy and explore some of the complex planning questions that come with NIL pay. And the rules governing that compensation — which has already shaken up college sports — may soon look much different, based on a current lawsuit.

The involvement of Morgan Stanley was “really important for me and for the athletes, because it gives instant credibility,” since both the wealth management company’s unit and TheLinkU aim to give the players “the tools to be successful,” said TheLinkU founder Austin Elrod. His firm works with athletes, athletic departments and colleges to maximize their NIL through identification of opportunities, contract management technology and other services. The education could turn into client relationships for the Morgan Stanley advisors and executives coaching the students if “the athletes determine that they would like to take the next steps,” Elrod said.

NIL payments have created something of a “wild wild west” for athletes who have, in some cases, been receiving large checks since the NCAA legalized the compensation in 2021, according to Pat Brown, a wealth manager at Creative Planning and the founder of Financial Literacy for Student Athletes. At the same time, the athletes are fielding interest from so-called agents or NIL firms demanding much higher commissions from them than those received by agents representing professional sports players, he noted. Brown, a former NCAA football player who became an advisor, speaks with college athletes and coaches them on financial topics on a pro bono basis.
An environment of growing overtures to the players from licensed experts and potential bad actors alike “forces the younger student-athlete to do their due diligence, sooner rather than later,” Brown said. And they have valid concerns about what will happen if they, like most NCAA athletes, can’t play their sport professionally, he added.

To that point, the rules for NIL compensation could shift dramatically based on the pending settlement in the House v. NCAA case, a lawsuit filed by former student athletes over the revenue colleges receive from the broadcast rights to the sports. As many as 390,000 current and former athletes could get back payments amounting to $2.77 billion, and every Division I school could then share up to $20.5 million in media revenue with athletes from their colleges, starting on July 1. Any future NIL deals over $600 would need approval from a third-party clearinghouse deciding whether the contracts represent “fair market value.” But the agreement between the parties hasn’t received final approval from the judge in the case. And legislative action by Congress or an executive order from the White House could alter the guidelines further.

In that murky landscape, the more than 300 Morgan Stanley advisors in the sports and entertainment unit could offer the athletes some valuable financial education and advice. And that will be especially true once the terms of the House settlement “allow billions of dollars to flow to student athletes from institutions,” said Elrod. Currently, the firms are planning advisors’ trips to speak with teams from three conferences — the Big 12, the Mid-American Conference (MAC) and Conference USA — with more possible agreements on the way.
Regardless of the complex negotiations ongoing over possible limits on NCAA rosters or future laws, advisors should educate themselves about the ramifications of the settlement and how working with college athletes is different from planning for pro sports players, he said.
Postman looks to streamline API and agentic AI development with Agent Mode, which automates API design, testing, documentation, and monitoring through natural language inputs
Postman Inc. is rolling out a suite of AI-driven features aimed at transforming API development. The company’s latest introduction, Agent Mode, automates API design, testing, documentation, and monitoring through natural language inputs. This feature acts as a fully capable execution agent, streamlining development workflows and reducing manual effort. Beyond Agent Mode, Postman is enhancing real-time API observability, enterprise-ready integrations, and support for the Model Context Protocol (MCP), which standardizes the way AI agents interact with third-party tools. Developers will soon have the ability to create their own AI agents and deploy them in their workspaces, improving efficiency in both daily engineering tasks and broader operations. One standout addition is Postman Insights, which provides real-time tracking for API usage, failure patterns, and proactive debugging. The Repo Mode feature further simplifies testing, allowing developers to reproduce API failures for easier troubleshooting. Meanwhile, integration with the Model Context Protocol enables APIs to function as callable agent tools, generate MCP servers, and connect with Postman’s newly launched MCP server network. The company is also introducing workflow integrations designed to accelerate API delivery and shorten development cycles. The integration with GitHub enables real-time collection synchronization and branch-based governance, while Jira supports context-aware issue tracking. Postman is strengthening collaboration among developer teams by linking its platform with Slack and Microsoft Teams.
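The Model Context Protocol support mentioned above lets APIs act as callable agent tools. Postman's actual implementation isn't described here; as a rough, illustrative sketch of the underlying idea (a server registers tools with schemas, an agent lists them and invokes one), with all names invented:

```python
# Toy sketch of the MCP idea: an API operation exposed as a tool that an
# agent can discover and call. Names and schemas are illustrative only,
# not Postman's or MCP's actual wire format.

TOOLS = {}

def tool(name, description, params):
    """Register a function as an agent-callable tool with a simple schema."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "params": params, "fn": fn}
        return fn
    return wrap

@tool("get_order_status",
      "Look up an order's shipping status by order ID",
      {"order_id": "string"})
def get_order_status(order_id):
    # A real MCP server would call the underlying REST API here.
    return {"order_id": order_id, "status": "shipped"}

def list_tools():
    """What an agent sees when it asks the server for its capabilities."""
    return {name: {"description": t["description"], "params": t["params"]}
            for name, t in TOOLS.items()}

def call_tool(name, **kwargs):
    """Dispatch an agent's tool call to the registered handler."""
    return TOOLS[name]["fn"](**kwargs)
```

The key property is that the agent never sees the HTTP details: it only sees the tool's name, description, and parameter schema, and delegates execution to the server.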
Palantir and fintech Bolt partner for Checkout 2.0 that delivers personalized flows that evolve with the user—prioritizing preferred payment methods, remembering prior selections and surfacing relevant information at just the right time
Bolt and Palantir have partnered to usher in a new era of intelligent ecommerce checkout—one that’s personalized, dynamic and deeply informed by data. Checkout 2.0, a self-learning, self-improving checkout, replaces static, form-based flows with an adaptive, real-time system that responds to each shopper’s unique preferences, behaviors and context. Rather than displaying the same interface to every shopper, Checkout 2.0 delivers personalized flows that evolve with the user—prioritizing preferred payment methods, remembering prior selections and surfacing relevant information at just the right time. Bolt will leverage Palantir’s platform to help scale Checkout 2.0 across enterprise retailers and expand it within Bolt’s recently launched SuperApp—an all-in-one finance and crypto hub that delivers real-time shopper signals. As both platforms evolve, Checkout 2.0 will bring deeper personalization and intelligence to every phase of the buying journey. Through this partnership, Bolt will integrate Palantir’s advanced decisioning engine to dynamically adapt checkout flows and enable smarter, contextually aware logic. Merchants will also benefit from intelligent post-checkout payment routing. Checkout 2.0 will evaluate transaction attributes—such as volume, category or geography—and select the optimal payment gateway to maximize authorization rates and reduce processing costs. This behind-the-scenes intelligence delivers better margins and a smoother experience. Checkout 2.0’s architecture includes: Self-learning shopper profiles that adapt over time based on usage, behavior and purchase history; Dynamic payment method reordering based on shopper preferences and device; Post-checkout routing optimization to improve processing economics in real time; Native crypto payment support.
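The post-checkout routing idea described above (scoring candidate gateways on transaction attributes to maximize authorization rates while reducing processing cost) can be sketched roughly. The gateway names, rates, and fees below are invented; this is an illustration of the concept, not Bolt's or Palantir's actual logic:

```python
# Illustrative payment-gateway routing: pick the gateway with the best
# expected captured value for a given transaction. All figures are made up.

GATEWAYS = [
    {"name": "gateway_a", "auth_rate": 0.94, "fee_pct": 2.9, "regions": {"US", "CA"}},
    {"name": "gateway_b", "auth_rate": 0.91, "fee_pct": 2.2, "regions": {"US", "EU"}},
    {"name": "gateway_c", "auth_rate": 0.96, "fee_pct": 3.4, "regions": {"EU"}},
]

def route(txn):
    """Choose the eligible gateway maximizing expected net capture."""
    eligible = [g for g in GATEWAYS if txn["region"] in g["regions"]]

    def expected_net(g):
        # expected captured amount = P(authorization) * amount * (1 - fee)
        return g["auth_rate"] * txn["amount"] * (1 - g["fee_pct"] / 100)

    return max(eligible, key=expected_net)["name"]
```

Real routing would also weigh retry behavior, network tokenization, and negotiated rates; the point is only that selection keys off transaction attributes such as geography and amount.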
New Automated Underwriting System (AUS) purpose-built for the Non-QM lending market shifts underwriting decisions to the start of the loan lifecycle by using verified borrower data and aligning directly with investor-specific guidelines
Prudent AI has launched the industry’s first Upfront Automated Underwriting System (AUS) purpose-built for the Non-QM lending market. The new system shifts underwriting decisions to the start of the loan lifecycle by using verified borrower data and aligning directly with investor-specific guidelines, enabling lenders and brokers to scale confidently with fewer exceptions and greater certainty.

Prudent AI’s Upfront AUS introduces a fundamental shift: moving critical underwriting logic upstream, where it can deliver the highest impact — enabling faster, cleaner, and more compliant decisions before loans enter underwriting queues.

Purpose-built for modern lending, Prudent AI’s Upfront AUS: uses verified income, credit, and asset data at submission; applies investor-specific guidelines; assesses eligibility and conditions to clear; offers a dual-phase review of upfront qualification and downstream consistency; and is built from the ground up to support the complexity and flexibility of Non-QM lending.

Real impact for TPOs and lenders: TPOs benefit from faster submissions, fewer conditions, and clearer investor alignment; lenders gain efficiency, reduce rework, and expand underwriting capacity without adding headcount; and operational teams get cleaner data pipelines and scalable exception management.
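The upfront-decision idea (checking verified borrower data against investor-specific guidelines at submission, before the loan hits an underwriting queue) can be illustrated with a toy rule check. This is not Prudent AI's system; the guideline fields and thresholds below are invented:

```python
# Hypothetical upfront eligibility check against investor-specific rules.
# Field names and thresholds are illustrative only.

INVESTOR_GUIDELINES = {
    "investor_x": {"min_fico": 680, "max_dti": 0.43, "min_reserves_months": 6},
}

def upfront_decision(loan, investor):
    """Return eligibility plus the list of conditions left to clear."""
    rules = INVESTOR_GUIDELINES[investor]
    conditions = []
    if loan["fico"] < rules["min_fico"]:
        conditions.append(f"FICO {loan['fico']} below minimum {rules['min_fico']}")
    if loan["dti"] > rules["max_dti"]:
        conditions.append(f"DTI {loan['dti']:.0%} exceeds cap {rules['max_dti']:.0%}")
    if loan["reserves_months"] < rules["min_reserves_months"]:
        conditions.append(f"reserves below {rules['min_reserves_months']} months")
    return {"eligible": not conditions, "conditions": conditions}
```

Running the same checks again downstream gives the "dual-phase" consistency review the announcement describes: the submission-time result and the pre-closing result should match.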
Uber adds a new type of account with a simpler UI for older adults, with features like ride updates for family members, saved destinations, and the ability to use a family member’s card for payments
Uber revealed a new type of account, called Senior Accounts, for older users that prioritizes a simpler app experience with features like ride updates for family members, saved destinations, and the ability to use a family member’s card for payments. Uber said Senior Accounts present a simpler app experience with larger text and icons, as well as less complex screens. Users can switch this mode on using the accessibility settings in the app. Users in the U.S. can now add older adults to their family account via the “Family” menu under the Accounts tab. Users who manage the family account can add their own payment methods, edit the list of saved destinations, book a ride for older adults, and contact drivers during a ride. People in the family group can also follow senior users’ rides. Uber said senior users can add their Medicare Flex card to pay for eligible medical visits. The company said it plans to make Senior Accounts available worldwide, though it didn’t specify when the feature would roll out to other countries. Uber added teen accounts in a few cities in the U.S. in 2023 and later rolled them out to more regions and countries.
Clearstream partners Azimut to develop DLT-based private funds solution – providing broader access to private market strategies, along with a liquidity option that will allow investors to unlock the illiquidity premium embedded in private asset portfolios
Deutsche Börse’s Clearstream announced a new DLT-based solution for private market funds that it developed in conjunction with asset manager Azimut. It leverages FundsDLT, the funds distribution platform that Clearstream acquired in late 2023. Azimut was the very first asset manager to use FundsDLT.

A key goal of the DLT platform from the beginning was to reduce costs through automation and enable greater transparency. It supports the permissioned sharing of information between the asset manager, distributor and client investor, including regular reporting and asset servicing.

Clearstream said it now offers a new account model that supports multiple investor portfolios under a single Clearstream Custody account. For example, this might cater to wealth managers and their clients. This structure “enables Azimut to access anonymized, detailed insights into the portfolios of individual investors.”

Giorgio Medda, CEO of Azimut Holding, said, “This innovative platform will provide broader access to private market strategies, along with a liquidity option that will allow investors to unlock the illiquidity premium embedded in private asset portfolios.”

“By leveraging Azimut’s expertise and our robust technology, we aim to set new standards, while maintaining the highest levels of security, compliance and investor trust,” said Philippe Seyll, CEO of Clearstream Fund Services.
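The account model described (many investor portfolios aggregated under one custody account, with the asset manager seeing anonymized per-portfolio detail) can be sketched as a simple data structure. This is an illustration of the concept only, not Clearstream's implementation:

```python
# Toy model: one custody account holding many investor portfolios, with an
# anonymized per-portfolio view for the asset manager. Illustrative only.

from collections import defaultdict
import hashlib

class CustodyAccount:
    def __init__(self, account_id):
        self.account_id = account_id
        self._portfolios = {}  # investor_id -> {fund_isin: units}

    def record_subscription(self, investor_id, fund_isin, units):
        """Book fund units into one investor's portfolio within this account."""
        book = self._portfolios.setdefault(investor_id, defaultdict(float))
        book[fund_isin] += units

    def anonymized_view(self):
        """Per-portfolio detail with investor identities replaced by hashes."""
        return {hashlib.sha256(i.encode()).hexdigest()[:8]: dict(p)
                for i, p in self._portfolios.items()}
```

The asset manager gets portfolio-level granularity (holdings per anonymous investor) without learning who the wealth manager's underlying clients are.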
Imandra Universe, a new platform that enhances AI assistants like ChatGPT, Claude, and Cursor with advanced logical reasoning capabilities so they can perform complex reasoning tasks more accurately
Imandra Inc. has launched the Imandra Universe, a new platform that enhances AI assistants like ChatGPT, Claude, and Cursor with advanced logical reasoning capabilities. This platform allows these AI systems to perform complex reasoning tasks more accurately by using Imandra’s symbolic logical reasoning engines. With a quick 10-second setup and an Imandra API key, users can enable their AI to employ this new functionality. The Imandra Universe offers a feature known as Reasoning as a Service®, which integrates into AI systems, allowing them to think more precisely and validate their outputs mathematically. This improvement aims to enhance workflows by enabling AI to delegate challenging tasks effectively to specialized reasoning engines. For example, if Claude is given the job of planning a multi-step event, it often misses important details. However, by utilizing the Imandra Universe, it can delegate the complex aspects of this task, improving its overall performance. In addition to enhancing existing AI assistants, the Imandra Universe positions itself as a tool that could revolutionize how users interact with artificial intelligence, bridging the gap between simple user commands and complex logical tasks. The technology aims to provide real-time support for neuro-symbolic AI, enabling more sophisticated and accurate outputs.
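The delegation pattern described above (an assistant handing subtasks that need exact logic to a specialized reasoning engine, while keeping free-form work for itself) can be illustrated with a toy dispatcher. This is not Imandra's API; the scheduling check below merely stands in for a real symbolic engine:

```python
# Toy delegation pattern: route exact-logic subtasks to a deterministic
# checker instead of letting the LLM "eyeball" them. Names are invented.

def exact_schedule_check(slots):
    """Stand-in for a reasoning engine: verify no two time slots overlap."""
    ordered = sorted(slots)
    return all(end <= next_start
               for (_, end), (next_start, _) in zip(ordered, ordered[1:]))

def assistant(task):
    """Dispatcher: delegate verification tasks, answer the rest free-form."""
    if task["kind"] == "verify_schedule":
        return {"valid": exact_schedule_check(task["slots"])}
    return {"answer": "free-form LLM response"}
```

The event-planning example in the text fits this shape: the assistant keeps the narrative planning but hands the overlap-free constraint check to a component that can validate it exactly.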
Study shows GPT-style models have a fixed memorization capacity of approximately 3.6 bits per parameter; training on more data forces models to memorize less per sample, helping to reduce privacy risk
A new study from researchers at Meta, Google DeepMind, Cornell University, and NVIDIA finds that GPT-style models have a fixed memorization capacity of approximately 3.6 bits per parameter.

One key takeaway from the research is that models do not memorize more when trained on more data. Instead, a model’s fixed capacity is distributed across the dataset, meaning each individual datapoint receives less attention. Jack Morris, the lead author, explained via the social network X that “training on more data will force models to memorize less per-sample.”

These findings may help ease concerns around large models memorizing copyrighted or sensitive content. If memorization is limited and diluted across many examples, the likelihood of reproducing any one specific training example decreases. In essence, more training data leads to safer generalization behavior, not increased risk.

To precisely quantify how much language models memorize, the researchers used an unconventional but powerful approach: they trained transformer models on datasets composed of uniformly random bitstrings. Each of these bitstrings was sampled independently, ensuring that no patterns, structure, or redundancy existed across examples. Because each sample is unique and devoid of shared features, any ability the model shows in reconstructing or identifying these strings during evaluation directly reflects how much information it retained—or memorized—during training.

This method allows the researchers to map a direct relationship between the number of model parameters and the total information stored. By gradually increasing model size and training each variant to saturation, across hundreds of experiments on models ranging from 500K to 1.5 billion parameters, they observed consistent results: 3.6 bits memorized per parameter, which they report as a fundamental measure of LLM memory capacity.
The study also examined how model precision—comparing training in bfloat16 versus float32—affects memorization capacity. They observed a modest increase from 3.51 to 3.83 bits-per-parameter when switching to full 32-bit precision. However, this gain is far less than the doubling of available bits would suggest, implying diminishing returns from higher precision. The paper proposes a scaling law that relates a model’s capacity and dataset size to the effectiveness of membership inference attacks. These attacks attempt to determine whether a particular data point was part of a model’s training set. The research shows that such attacks become unreliable as dataset size grows, supporting the argument that large-scale training helps reduce privacy risk. By introducing a principled and quantifiable definition of memorization, the study gives developers and researchers new tools for evaluating the behavior of language models. This helps not only with model transparency but also with compliance, privacy, and ethical standards in AI development. The findings suggest that more data—and not less—may be the safer path when training large-scale language models.
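The reported figures lend themselves to simple back-of-envelope arithmetic: total capacity scales with parameter count, so the average number of memorized bits available per training example shrinks as the dataset grows. A small sketch using the paper's numbers:

```python
# Back-of-envelope arithmetic from the reported capacity figures:
# a model's total memorization budget is fixed by its size, so per-sample
# memorization dilutes as the training set grows.

BITS_PER_PARAM_BF16 = 3.6   # reported capacity when training in bfloat16
BITS_PER_PARAM_FP32 = 3.83  # modest gain at full 32-bit precision

def capacity_bits(n_params, bits_per_param=BITS_PER_PARAM_BF16):
    """Total memorization capacity in bits for a model of n_params parameters."""
    return bits_per_param * n_params

def avg_bits_per_sample(n_params, n_samples):
    """Fixed capacity spread evenly over the dataset."""
    return capacity_bits(n_params) / n_samples

# A 1.5B-parameter model: ~5.4e9 bits total, i.e. well under a gigabyte of
# raw capacity. At 1B training examples that is ~5.4 bits per sample; at
# 1T examples it drops to ~0.005 bits per sample.
```

This dilution is also the intuition behind the paper's membership-inference result: with vanishingly few memorized bits per example, deciding whether any particular datapoint was in the training set becomes unreliable.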
