Wells Fargo announced a collaboration with the National Center for the Middle Market (NCMM) at The Ohio State University Max M. Fisher College of Business. Wells Fargo’s Commercial Banking group will provide the NCMM with insights into the banking needs of middle market companies, helping guide research reports, including the center’s flagship Middle Market Indicator. The collaboration will also support special research projects and Wells Fargo’s middle market-focused thought leadership.

The NCMM is the leading source of knowledge, leadership, and innovative research on the middle market economy, providing critical data analysis, insights, and perspectives for companies, policymakers, and other key stakeholders to help accelerate growth, increase competitiveness, and create jobs in this sector.

“We are excited to work with the NCMM and share their data and insights with our clients as they seek to build and grow their businesses,” said John Manning, head of Market Coverage for Wells Fargo Commercial Banking. “They have been focused on understanding middle market companies for 14 years, and combined with our expertise with these companies, we believe collaborating with the NCMM will help us provide additional insights to support growth in this important segment of the U.S. economy,” added Manning.

Middle market companies – generally defined as companies with annual revenues between $10 million and $1 billion – account for roughly one-third of total employment and GDP in the U.S. and generate more than $10 trillion in annual revenue.

“Middle market businesses play a key role in driving innovation and job creation and are the backbone of local communities across the country,” said Doug Farren, managing director of the NCMM. “Wells Fargo brings decades of middle market banking experience to this collaboration. We look forward to working together to gain an even better understanding of the opportunities and challenges in this segment to support future growth,” added Farren.
Citi expands tokenization by launching real-time tools: Citi Token Services (CTS), Real-Time Funding (RTF), and 7-Day Sweeps, all meant to optimize cash positioning and cut operational friction
Citigroup, Inc. is accelerating its push into tokenisation and automation as corporate treasuries increasingly demand real-time access to liquidity and global cash visibility. The bank’s latest suite of digital services aims to eliminate operational constraints caused by cut-off times, public holidays, and regional time zones.

“The lack of clear, real-time insight into cash positions across various accounts and entities can create several challenges including difficulty in cash forecasting, inefficient cash allocation and deployment, increased operational cost and risks, and impaired strategic decision-making,” said Stephen Randall, global head of liquidity management services at Citi. “There is a demand for higher, quicker and more flows,” Randall said in an emailed reply to questions. Clients also want improved visibility and ease of reconciliation.

To address this, Citi has launched several initiatives, including Citi Token Services (CTS), Real-Time Funding (RTF), and 7-Day Sweeps, all meant to optimise cash positioning and cut operational friction. CTS lets clients move cash instantly across borders without being constrained by holidays or banking cut-offs. “Asia has been a key focus for CTS, with two out of the four markets that are live today being in Asia—Singapore and Hong Kong,” Randall said.

Meanwhile, Real-Time Funding allows clients to transfer money across their Citi accounts globally in real time. “For example, a client might need to make instant or urgent payments from their Citi Hong Kong account today, yet their funds are sitting in an account at Citi London,” he said. “RTF automates a real-time transfer across their Citi accounts so that they can make these payments when their business needs it without manually processing the account funding.” RTF is live in Australia, Hong Kong, and the UK, with plans to expand into Singapore, Thailand, China, and Taiwan.

Citi’s 7-Day Sweeps service, which automates 24/7 liquidity management, is now available in the US, South Korea, and Thailand. “Citi’s 7-day sweeps are actually processed and posted on holidays, reducing reconciliation overages and the need for a higher buffer of liquidity over weekends and public holidays,” Randall said.

He noted that treasurers are increasingly shifting from static liquidity structures—typically reviewed annually—to more agile systems that can respond to cash flow volatility. “They want their liquidity structures to be agile enough to support cash flow volatilities especially through market uncertainties stemming from the recent geopolitical and tariff-linked supply chain shifts.” Clients are also turning to Citi for digital solutions and advisory support as they face risks related to interest rates and foreign exchange.

Randall said one key benefit of automation and digitalisation is gaining timely access to important data on global cash positions, which helps clients make better decisions. Citi Treasury Diagnostics, the bank’s global benchmarking tool, can show clients how their treasury operations compare to best practices and unlock further improvements through automation. Citi also offers application programming interfaces (APIs) such as Balance Inquiry and Payment Status, along with integration support, to help clients optimise their treasury systems.
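The automation described above can be pictured as a simple rule-driven loop: poll balances across accounts and trigger a real-time transfer whenever a target account dips below its floor. Below is a minimal Python sketch of that pattern. The host, endpoint paths, field names, and threshold are hypothetical stand-ins modeled loosely on the balance-inquiry and payment-style APIs mentioned above, not Citi’s actual interfaces.

    import requests  # hypothetical REST usage; not Citi's actual API

    BASE = "https://api.example-bank.com/treasury"  # placeholder host
    FUNDING_RULES = [
        # account to watch, source account, minimum balance to maintain
        {"target": "HK-001", "source": "LON-001", "min_balance": 1_000_000},
    ]

    def balance(account_id: str) -> float:
        # Balance-inquiry style call: fetch current available balance
        resp = requests.get(f"{BASE}/accounts/{account_id}/balance")
        resp.raise_for_status()
        return resp.json()["available"]

    def fund_in_real_time(source: str, target: str, amount: float) -> str:
        # RTF-style call: move cash between the client's own accounts instantly
        resp = requests.post(f"{BASE}/transfers", json={
            "from": source, "to": target, "amount": amount, "mode": "real_time",
        })
        resp.raise_for_status()
        return resp.json()["transfer_id"]

    def run_funding_cycle():
        # Instead of waiting for cut-off times, top up targets whenever they dip
        for rule in FUNDING_RULES:
            shortfall = rule["min_balance"] - balance(rule["target"])
            if shortfall > 0:
                fund_in_real_time(rule["source"], rule["target"], shortfall)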
Synchrony reduces reliance on credit scores, leaning on an internal system that continuously weighs both outside data and Synchrony’s own past experience with consumers it already serves through other cards
Synchrony Financial has reduced its reliance on traditional credit scores with a system called PRISM, developed by its internal innovation team. The system weighs over 9,000 key data points when someone applies for a credit card. The technology was engineered to lean on both outside data and Synchrony’s own past experience with consumers it was already serving through other cards. Max Axler, EVP and chief credit officer at Synchrony, explains that for many Americans a store-connected credit card account is one of their first card experiences, so the company has touched many, many people.

David Chau, SVP of credit technology strategy, explains that the new approach also includes the use of “dynamic decisioning.” In brief, this means that instead of taking a linear approach to the credit-granting decision, the system picks each next bit of data to grab based on what it has been seeing thus far. The goal was not only to improve credit evaluation up front, with new consumers, but to monitor creditworthiness on an ongoing basis, in every monthly cycle as well as within cycles when warranted. This can result in tweaking credit lines up or down, depending on what PRISM sees.

As PRISM has been adopted and evolved, “it has made us much less reliant on credit scores,” says Axler. That was particularly helpful during the pandemic period. Axler explains that government stimulus checks, payment and collection moratoria, and other factors resulted in inflated credit scores for many consumers — levels that wouldn’t reflect future realities. Beyond those factors, Axler says credit scores have been impacted by credit score improvement programs that use methods such as granting credit and reporting it to bureaus without actually advancing the funds to the “borrower.” “It immediately increases your score,” says Axler, but not in a way that Synchrony considers desirable. In fact, he says, the company now discounts certain third parties’ efforts when it encounters them in consumers’ credit files.
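Chau’s “dynamic decisioning” can be illustrated with a toy control-flow sketch: rather than pulling a fixed, linear checklist of attributes, the system selects each next data source based on what it has seen so far. The Python below is a hypothetical illustration of that idea, not Synchrony’s PRISM logic; the data sources, scores, and thresholds are invented.

    # Toy illustration of dynamic decisioning: each step chooses the next
    # data source to query based on the evidence gathered so far.

    def fetch(source: str, applicant_id: str) -> float:
        # Stand-in for a real data pull (bureau, internal history, etc.)
        FAKE_DATA = {"internal_history": 0.72, "bureau_score": 0.55,
                     "income_verification": 0.80, "recent_inquiries": 0.40}
        return FAKE_DATA[source]

    def evaluate(applicant_id: str, is_existing_customer: bool) -> str:
        evidence = {}
        # Branch 1: lean on the lender's own experience when it exists
        if is_existing_customer:
            evidence["internal_history"] = fetch("internal_history", applicant_id)
            # Strong internal signal? No need to buy more outside data.
            if evidence["internal_history"] > 0.7:
                return "approve"
        # Branch 2: otherwise, pull outside data one source at a time
        evidence["bureau_score"] = fetch("bureau_score", applicant_id)
        if evidence["bureau_score"] < 0.3:
            return "decline"
        # Borderline case: the next pull depends on what we just saw
        next_source = ("income_verification" if evidence["bureau_score"] < 0.6
                       else "recent_inquiries")
        evidence[next_source] = fetch(next_source, applicant_id)
        score = sum(evidence.values()) / len(evidence)
        return "approve" if score > 0.55 else "manual_review"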
On the other hand, despite some of the headlines in recent months about federal student loans, Axler says that those borrowers can actually be good risks. A key factor is to recognize that, given the nature of the credit, they are disproportionately younger and at the beginning of their credit journey. Sifting each credit population for sub-populations that represent better risks is part of what PRISM was built for. Axler explains that just as a glass prism refracts a beam of white light into its component parts, the system is designed to break larger groups into distinct smaller groups. All this said, holding raw credit performance numbers at Synchrony alongside other issuers’ would not immediately reveal the benefits of PRISM. That reflects how Synchrony’s operation intertwines with its retailer partners, and its goal of not constantly widening the credit aperture for each retailer’s card program.
Including both stablecoins and tokenized deposits in a digital asset strategy can help banks compete in new markets, serve crypto-native customers, and participate in the growing DeFi ecosystem, while tokenized deposits add stability through regulatory protections, deposit insurance, and central bank backing
For forward-thinking banks, the path forward involves a strategic approach that may include both stablecoins and tokenized deposits.

Tokenized deposits offer a conservative entry point into blockchain technology by converting traditional bank deposits into blockchain-based assets. The main benefit of this approach is that tokenized deposits carry the same regulatory protections, deposit insurance, and central bank backing as traditional deposits, ensuring stability. Meanwhile, properly designed stablecoins serve different purposes: they enable banks to compete in new markets, serve crypto-native customers, and participate in the growing decentralized finance ecosystem. And with a rapidly expanding ecosystem and a growing cadre of developers looking to integrate blockchain technology into financial services, banks that offer both options will be best positioned to meet diverse customer needs.

Both approaches enable instant, low-cost cross-border payments, programmable smart contracts for automated financial services, and enhanced transparency for regulators. For stablecoins specifically, advancing regulatory frameworks are establishing clear guidelines for reserves, capital requirements, and consumer protections.

Rather than viewing stablecoins and tokenized deposits as competing options, banks should consider how each can complement their overall digital asset strategy. While stablecoins are all the rage, banks should take an approach that balances their appetite for innovation with proper risk management. Once they understand the regulatory landscape and have proper safeguards in place, they can explore issuing a stablecoin and consider tokenized deposits. By embracing multiple paths into the digital asset economy, banks can responsibly start their journey into blockchain while remaining on firm financial ground.
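To make the “programmable” property concrete, here is a toy Python model of a token ledger whose transfers must pass attached rules before settling. It illustrates the programmability shared by stablecoins and tokenized deposits; it is not a production smart contract, and the rule and amounts are invented.

    # Toy token ledger with programmable transfer hooks.
    from dataclasses import dataclass, field

    @dataclass
    class TokenLedger:
        balances: dict = field(default_factory=dict)
        # Programmable hooks: every rule must approve a transfer
        rules: list = field(default_factory=list)

        def mint(self, account: str, amount: int):
            # For a tokenized deposit, minting would mirror a fiat deposit
            # held at the bank; for a stablecoin, a reserve-backed issuance.
            self.balances[account] = self.balances.get(account, 0) + amount

        def transfer(self, sender: str, receiver: str, amount: int):
            if self.balances.get(sender, 0) < amount:
                raise ValueError("insufficient balance")
            for rule in self.rules:
                if not rule(sender, receiver, amount):
                    raise PermissionError("transfer blocked by rule")
            self.balances[sender] -= amount
            self.balances[receiver] = self.balances.get(receiver, 0) + amount

    # Example programmable rule: an automated, compliance-style limit
    ledger = TokenLedger(rules=[lambda s, r, amt: amt <= 10_000])
    ledger.mint("alice", 15_000)
    ledger.transfer("alice", "bob", 5_000)    # allowed
    # ledger.transfer("alice", "bob", 12_000) # would raise PermissionError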
MIT’s algorithm combines ideas from algebra and geometry into an optimization problem to build ML models with symmetric data, using fewer training samples than classical approaches while improving accuracy and adaptability
A new study by MIT researchers demonstrates the first method for machine learning with symmetric data that is provably efficient in terms of both the amount of computation and the data needed. These results clarify a foundational question, and they could aid researchers in the development of more powerful machine-learning models designed to handle symmetry. Such models would be useful in a variety of applications, from discovering new materials to identifying astronomical anomalies to unraveling complex climate patterns.

“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” says Behrooz Tahmasebi, an MIT graduate student and co-lead author of the study.

The researchers explored the statistical-computational tradeoff in machine learning with symmetric data: methods that require fewer data samples can be more computationally expensive, so researchers need to find the right balance. Building on this theoretical evaluation, they designed an efficient algorithm for machine learning with symmetric data. To do this, they borrowed ideas from algebra to shrink and simplify the problem. Then, they reformulated the problem using ideas from geometry that effectively capture symmetry. Finally, they combined the algebra and the geometry into an optimization problem that can be solved efficiently, resulting in their new algorithm.

“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” Tahmasebi says.
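As a concrete, toy illustration of the tradeoff the researchers studied, consider permutation symmetry: exact group averaging bakes the invariance into any model but costs a factorial number of evaluations, while canonicalizing inputs (here, by sorting) captures the same invariance cheaply. The Python sketch below is illustrative only and is not the MIT algorithm.

    # Toy example: if a label is invariant to permutations of the input
    # features, building the symmetry into the model means the learner
    # no longer needs extra samples to discover it from data.
    import itertools
    import numpy as np

    def model(x: np.ndarray, w: np.ndarray) -> float:
        # An arbitrary non-symmetric base model
        return float(np.tanh(x) @ w)

    def symmetrized_model(x: np.ndarray, w: np.ndarray) -> float:
        # Group averaging over all permutations: exact but exponential
        # cost -- the computational side of the tradeoff.
        perms = list(itertools.permutations(range(len(x))))
        return sum(model(x[list(p)], w) for p in perms) / len(perms)

    def canonicalized_model(x: np.ndarray, w: np.ndarray) -> float:
        # Cheaper route: map each input to a canonical representative
        # (sorting), capturing the same invariance in O(n log n).
        return model(np.sort(x), w)

    x = np.array([0.3, -1.2, 0.8])
    w = np.array([0.5, -0.1, 0.9])
    # Both symmetrized predictions agree on any permutation of x:
    assert np.isclose(symmetrized_model(x, w),
                      symmetrized_model(x[[2, 0, 1]], w))
    assert np.isclose(canonicalized_model(x, w),
                      canonicalized_model(x[[2, 0, 1]], w))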
AI to reshape coding jobs, requiring developers to double up as product managers with solid technical expertise and take on hybrid roles: understanding systems, structuring problems, and shaping ideas into working software with AI as a co-creator
AI is vastly changing how the bottom end of the career ladder operates, since it can do most junior-level tasks on its own. As a result, beginners entering the industry are increasingly being asked to contribute at a level that used to require years of experience. It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member.

In the near future, the most valuable people in tech won’t be the ones who write perfect code. They will be those who know what should be built, why it matters and how to get an AI system to do most of the work cleanly and efficiently. In other words, the coder of tomorrow looks more like a product manager with solid technical expertise. AI-augmented developers will replace the large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and a higher bar for what one person can realistically accomplish.

We will likely see more hybrid roles — part developer, part designer, part product thinker. As already mentioned, the core part of the job won’t be to write code, but to shape ideas into working software using AI as your main creation tool. Or perhaps even as a co-creator.
Scaling next-gen AI will require a shift towards tightly integrated, domain-specific, compute-centric networking, with specialized interconnects that support direct memory-to-memory transfers and dedicated hardware that speeds information sharing among processors
Fulfilling the promise of AI requires a step-change in capabilities far exceeding the advancements of the internet era. To achieve this, we as an industry must revisit some of the foundations that drove the previous transformation and innovate collectively to rethink the entire technology stack.

We are now witnessing a decisive shift towards specialized hardware — including ASICs, GPUs, and tensor processing units (TPUs) — that delivers orders-of-magnitude improvements in performance per dollar and per watt compared to general-purpose CPUs. This proliferation of domain-specific compute units, optimized for narrower tasks, will be critical to driving continued rapid advances in AI. These specialized systems will often require “all-to-all” communication, with terabit-per-second bandwidth and nanosecond latencies that approach local memory speeds.

To scale gen AI workloads across vast clusters of specialized accelerators, we are seeing the rise of specialized interconnects, such as ICI for TPUs and NVLink for GPUs. These purpose-built networks prioritize direct memory-to-memory transfers and use dedicated hardware to speed information sharing among processors, effectively bypassing the overhead of traditional, layered networking stacks. This move towards tightly integrated, compute-centric networking will be essential to overcoming communication bottlenecks and scaling the next generation of AI efficiently.

Traditional fault tolerance relies on redundancy among loosely connected systems to achieve high uptime. ML computing demands a different approach. First, the sheer scale of computation makes over-provisioning too costly. Second, model training is a tightly synchronized process, where a single failure can cascade to thousands of processors. Finally, advanced ML hardware often pushes the boundary of current technology, potentially leading to higher failure rates. Instead, the emerging strategy involves frequent checkpointing — saving computation state — coupled with real-time monitoring, rapid allocation of spare resources and quick restarts. The underlying hardware and network design must enable swift failure detection and seamless component replacement to maintain performance.

While traditional system design focuses on maximum performance per chip, we must shift to an end-to-end design focused on delivered, at-scale performance per watt. This approach is vital because it considers all system components — compute, network, memory, power delivery, cooling and fault tolerance — working together seamlessly to sustain performance.
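The checkpoint-and-restart strategy described above can be sketched in a few lines of generic Python: save state periodically and atomically, and on any restart resume from the most recent checkpoint rather than from scratch. The file path, interval, and stand-in training step below are illustrative placeholders, not any particular ML framework’s API.

    import os
    import pickle

    CKPT_PATH = "model.ckpt"   # illustrative location
    CKPT_EVERY = 100           # steps between checkpoints

    def save_checkpoint(step: int, state: dict):
        # Write atomically so a failure mid-write can't corrupt the file
        tmp = CKPT_PATH + ".tmp"
        with open(tmp, "wb") as f:
            pickle.dump({"step": step, "state": state}, f)
        os.replace(tmp, CKPT_PATH)

    def load_checkpoint():
        if os.path.exists(CKPT_PATH):
            with open(CKPT_PATH, "rb") as f:
                ckpt = pickle.load(f)
            return ckpt["step"], ckpt["state"]
        return 0, {"weights": [0.0]}    # fresh start

    def train(total_steps: int):
        step, state = load_checkpoint()  # quick restart after a failure
        while step < total_steps:
            state["weights"][0] += 0.01  # stand-in for one training step
            step += 1
            if step % CKPT_EVERY == 0:
                save_checkpoint(step, state)
        return state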
Atlanta Fed opines that earmarking with programmable payments can help businesses manage payroll, vendor payments, and escrow accounts more efficiently through automated budgeting, real-time visibility into cash flows, and customized workflows with flexible rules
At its core, earmarking just means setting money aside for a specific purpose—like rent, payroll, or taxes—so it’s only used for that. It’s a simple concept, but when combined with automation, it could be the budgeting upgrade many people and businesses have been waiting for.

That’s where programmable payments come in. These are payments that happen automatically based on rules you set. Through banking apps, digital wallets, or budgeting platforms, consumers choose or create spending categories and assign rules—like percentages, spending limits, or triggers. It’s like having a personal money assistant organizing your finances, paying your bills, and keeping you on budget without you having to think about it. For businesses, this brings new efficiency to managing payroll, vendor payments, and escrow accounts.

The upside of earmarking with programmable payments is clear: automation takes the work out of budgeting, real-time visibility helps track your money, and flexible rules let you customize how it all works. It’s also useful in more regulated settings, like distributing aid or managing shared accounts, because it adds accountability.

Earmarking with programmable payments is a smart, modern take on a tried-and-true budgeting technique. Used intentionally, it can bring clarity, control, and purpose to the way money flows. In a complex financial world, that’s something both individuals and experts can benefit from.
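As a concrete sketch, earmarking rules can be modeled as a small rule table applied to each incoming deposit, splitting funds into purpose-labeled buckets. The categories, amounts, and percentages in this Python illustration are invented.

    # Toy earmarking engine: split each deposit by user-defined rules.
    RULES = [
        {"bucket": "rent",    "kind": "fixed",   "amount": 1_500},
        {"bucket": "taxes",   "kind": "percent", "share": 0.20},
        {"bucket": "savings", "kind": "percent", "share": 0.10},
    ]

    def earmark(deposit: float, buckets: dict) -> dict:
        remaining = deposit
        for rule in RULES:
            if rule["kind"] == "fixed":
                amount = rule["amount"]
            else:  # percentage of the original deposit
                amount = deposit * rule["share"]
            amount = min(amount, remaining)  # never over-allocate
            buckets[rule["bucket"]] = buckets.get(rule["bucket"], 0) + amount
            remaining -= amount
        # Whatever is left stays freely spendable
        buckets["unallocated"] = buckets.get("unallocated", 0) + remaining
        return buckets

    balances = earmark(5_000, {})
    # -> rent gets 1,500; taxes 1,000; savings 500; 2,000 unallocated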
OpenMind wants to be the Android operating system of humanoid robots, launching a new protocol called FABRIC that allows robots to verify identity and share context and information with other robots
OpenMind is building a software layer, OM1, for humanoid robots that acts as an operating system. The company compares itself to the Android of robotics because its software is open and hardware agnostic. Stanford professor Jan Liphardt, the founder of OpenMind, said that humanoids and other robots have been around and able to do repetitive tasks for decades. But now that humanoids are being developed for use cases that require more human-to-machine interaction, like having a humanoid in your home, they need a new operating system that thinks more like a human.

OpenMind unveiled a new protocol called FABRIC that allows robots to verify identity and share context and information with other robots. Unlike humans, machines can learn almost instantly, Liphardt said, which means giving them a better way to connect to other robots will allow them to more easily train and absorb new information. Liphardt gave the example of languages: robots could connect to each other and share data on how to speak different languages, helping them interact with more people without having to be taught each language by a human directly. Now, the company is focused on getting its tech into people’s homes and starting to iterate on the product.
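Since the article does not describe FABRIC’s wire format, the following is a purely hypothetical Python sketch of what an identity-verified context-sharing exchange between two robots could look like. The message fields, shared-key signing scheme, and example payload are all invented for illustration.

    # Hypothetical robot-to-robot exchange: verify the peer's identity,
    # then absorb a piece of shared, learned context.
    import hashlib
    import hmac
    import json

    SHARED_KEY = b"demo-fleet-key"  # placeholder credential

    def sign(payload: dict) -> str:
        body = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

    def make_message(robot_id: str, context: dict) -> dict:
        payload = {"robot_id": robot_id, "context": context}
        return {"payload": payload, "signature": sign(payload)}

    def receive(message: dict) -> dict:
        # Identity check: reject messages whose signature doesn't verify
        if not hmac.compare_digest(sign(message["payload"]),
                                   message["signature"]):
            raise ValueError("identity verification failed")
        return message["payload"]["context"]

    # Robot A shares what it has learned about greeting people in Spanish;
    # Robot B verifies the sender and absorbs the context immediately.
    msg = make_message("robot-A", {"skill": "greeting", "lang": "es",
                                   "phrase": "¡Hola!"})
    learned = receive(msg)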
