There’s now a need for a new path forward that allows developers to speed up their applications with fewer barriers, ensuring faster time to innovation without locking them into any particular vendor. The answer is a new kind of accelerator architecture that embraces a “bring-your-own-code” (BYOC) approach. Rather than forcing developers to rewrite code for specialized hardware, accelerators that embrace BYOC would enable existing code to run unmodified. The focus should be on accelerators where the underlying technology adapts to each application without new languages or significant code changes. This approach offers several key advantages:
- Elimination of Porting Overhead: Developers can focus on maximizing results rather than wrestling with hardware-specific adjustments.
- Software Portability: As performance accelerates, applications retain their portability and avoid vendor lock-in and proprietary domain-specific languages.
- Self-Optimizing Intelligence: Advanced accelerator designs can continually analyze runtime behavior and automatically tune performance as the application executes, eliminating guesswork and manual optimization.
These advantages translate directly into faster results, reduced overhead, and significant cost savings. Freed from extensive code adaptation and reliance on specialized HPC experts, organizations can accelerate R&D pipelines and gain insights sooner. The BYOC approach eliminates the false trade-off between performance gains and code stability that has hampered HPC adoption. By removing these artificial boundaries, BYOC opens the door to a future where computational power accelerates scientific progress. A BYOC-centered ecosystem democratizes access to computational performance without compromise. It will enable domain experts across disciplines to harness the full potential of modern computing infrastructure at the speed of science, not at the speed of code adaptation.
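The “self-optimizing” idea above can be sketched in miniature: a runtime that times interchangeable implementations of the same operation on real input and keeps the fastest. This is a toy Python illustration of the general technique, not any vendor’s accelerator logic; the candidate functions are invented for the example.

```python
import time

def autotune(variants, data, trials=3):
    """Pick the fastest implementation by timing each candidate on
    real input -- a toy stand-in for the runtime tuning a
    self-optimizing accelerator would perform transparently."""
    best, best_time = None, float("inf")
    for fn in variants:
        start = time.perf_counter()
        for _ in range(trials):
            fn(data)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = fn, elapsed
    return best

# Two interchangeable "kernels" for the same task (summing a list).
def loop_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

def builtin_sum(xs):
    return sum(xs)

fastest = autotune([loop_sum, builtin_sum], list(range(100_000)))
```

The point of the sketch is that the selection happens at runtime against the actual workload, so the application code above the tuner never changes.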
Fed’s latest Diary of Consumer Payment Choice reveals consumers made an average of 11 payments per month with a mobile phone in 2024, up from four payments in 2018; cash remains a key backup payment method
Federal Reserve Financial Services today issued the 2025 Diary of Consumer Payment Choice (Diary), an annual survey measuring the evolving role of cash in the U.S. economy. Findings from this nationally representative survey showed that amid the increasing digitalization of payments, consumers continue to use cash and keep it handy. Cash ranked third as a top payment instrument among consumers, a position it has held for the past five years. In 2024, it accounted for 14% of consumer payments by number, while credit and debit cards accounted for 35% and 30% of payments, respectively. Overall, U.S. consumers made an average of 48 payments per month, continuing an upward trend that began in 2021. In 2024, this growth in the overall number of payments was driven by increased credit card usage, remote payments and payments made with mobile phones. The survey also revealed generational and demographic trends in payments. Households earning less than $25,000 per year and adults 55 and older relied more on cash than other cohorts. In contrast, adults aged 18 to 24 were more likely to pay with a mobile phone, using their phones for 45% of all payments. Other key findings included:
- U.S. consumers made an average of 11 payments per month with a mobile phone in 2024, up from four payments in 2018.
- Cash remains a key backup payment method for U.S. consumers. Of all cash payments in 2024, nearly two-thirds were made by consumers who prefer other payment methods, such as debit or credit cards.
- Nearly 80% of U.S. consumers have held cash in their pockets, purses or wallets for at least one day of the month for each Diary survey conducted since 2018. Though the value of these holdings has decreased since 2022, it remained elevated in 2024 compared to pre-pandemic levels.
- More than 90% of U.S. consumers intend to use cash as either a means of payment or store of value in the future.
U.S. Bank launches extension of The Power of Us campaign to “demonstrate the power of the interconnected bank” and highlight goal savings, digital budgeting, business growth and cash back features
Last May, U.S. Bank launched its brand campaign, The Power of Us, which highlighted how it supports clients at every stage of their journey to help them reach their goals. Now, U.S. Bank is building on that momentum by launching an extension of last year’s campaign that is focused on how great things can be achieved by working alongside its clients. The new campaign focuses on sharing stories that reflect the diversity of its businesses, reinforce its distinct style of partnership and demonstrate the interconnected nature of its products and services. “This new campaign builds on the strong foundation we laid last year,” said Michael Lacorazza, chief marketing officer for U.S. Bank. “It brings to life the collaborative spirit that defines our brand. By showcasing the range of our businesses and the strength of our partnerships, we’re telling a powerful story about what we can achieve together.” Like last year’s brand campaign, U.S. Bank is putting a spotlight on the brand’s most iconic assets—name, shield, color palette — and continuing to infuse multicultural insights throughout the brand campaign assets. Actor Jake Gyllenhaal also is back as the voice of the campaign spots, adding a distinct tone with subtle gravitas.
- The Power of Jess: Featuring Jess Sims, an entrepreneur, Peloton Instructor, educator, sideline reporter and game changer, this spot shows how U.S. Bank Smartly® Checking and Savings helps her achieve her goals faster.
- The Power of Mia: This highlights an entrepreneur who uses U.S. Bank Business Essentials® to manage her growing business with ease.
- Under Control: Mari surprises her dad with a car and explains how Bank Smartly® digital budgeting tools and cash back from her U.S. Bank Smartly™ Checking and Savings helped make it possible. This spot grew from an insight about the daughter’s role as the CFO for her family and was produced as a bilingual commercial that will run in both English- and Spanish-language media.
Catena Labs aims to be the first fully regulated AI-native financial institution enabling AI agents to transact with regulated stablecoins offering near-instant settlement, minimal transaction costs, and easy integration with AI workflows
Catena Labs announced its plan to establish the first fully regulated AI-native financial institution (FI) designed to serve the unique needs of the emerging AI economy. The company released a new open-source project defining protocols and patterns for agentic commerce. The company also confirmed an $18 million financing round led by a16z crypto, with participation from Breyer Capital, Circle Ventures, Coinbase Ventures and others. The company aims to address the shortcomings in legacy financial systems that make them poorly suited to the needs of AI agents and agentic commerce. “AI agents will soon conduct most economic transactions, but today’s financial systems are unprepared and resistant to interactions with automated intelligence,” said Sean Neville, CEO and co-founder of Catena Labs. “That’s why we’re building an AI-native financial institution that will give AI agents, and the businesses and consumers they serve, the ability to transact safely and efficiently.” The company is building upon protocols, patterns, emerging standards, and open source components to address new requirements AI agents create for identity and payments. Today, the company released the open source Agent Commerce Kit (ACK), which defines several of these open source building blocks. The company is building on ACK and other emerging standards to offer a broad suite of licensed financial services addressing new risk, security, and compliance challenges that arise from AI systems working as independent economic actors.
Pay-i’s platform measures the revenue, costs, and profit margins of generative AI apps running on usage-based pricing models and allows predicting inference costs pre-launch to help meet profitability targets
There was little evidence, some of Goldman’s analysts pointed out, of organisations worldwide making much of a return on the $1 trillion they had invested in artificial intelligence (AI) tools. Recent research from KPMG found that enthusiasm among enterprise leaders for AI remained high, but that none were yet able to point to significant returns on investment. A Forrester paper warned that some executives might start cutting back on AI investment given their impatience for tangible returns. A study from Appen suggests AI project deployments may already be slowing. Enterprises are right to be sceptical about what GenAI is actually achieving for their businesses, David Tepper, co-founder and CEO of Seattle-based start-up Pay-i argues – and they need more scientific methodologies for analysing returns, both ahead of deployments and once new AI projects are up and running. “C-suite leaders need forecasts of likely returns and reliable proof that they are being achieved,” Tepper says. “That’s how they’ll pinpoint which GenAI business cases and deployments are genuinely creating new value.” Pay-i offers tools to help businesses measure the cost of new GenAI initiatives, broken down into granular detail; such costs are currently opaque, Tepper argues, because they depend on a broad range of factors, from when and how business users make use of GenAI tools to which cloud architecture the business has opted for. In addition, Pay-i’s platform allows businesses to assign specific objectives to AI deployments and then to track the extent to which these objectives are achieved – and what value is realised accordingly. The idea is to give enterprises a means to evaluate both sides of the balance sheet for any given AI use case – what it costs and what it generates.
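Pay-i’s own methodology is proprietary, but the underlying arithmetic of “both sides of the balance sheet” can be sketched: per-request inference cost derived from token counts and per-token prices, set against the revenue a use case generates. All prices, token counts and volumes below are invented for illustration.

```python
def inference_cost(prompt_tokens, completion_tokens,
                   price_in_per_1k, price_out_per_1k):
    """Cost of a single GenAI call under simple per-token pricing."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

def use_case_economics(revenue_per_request, cost_per_request, requests):
    """Both sides of the balance sheet for one AI use case:
    total profit and profit margin."""
    revenue = revenue_per_request * requests
    cost = cost_per_request * requests
    profit = revenue - cost
    return profit, profit / revenue

# Illustrative numbers only: 1,200 input tokens at $0.01/1k plus
# 300 output tokens at $0.03/1k is $0.012 + $0.009 = $0.021 per call.
cost = inference_cost(1200, 300, 0.01, 0.03)
profit, margin = use_case_economics(0.10, cost, 50_000)
```

Forecasting (Tepper’s “ahead of deployments”) is the same calculation run over projected rather than observed volumes, which is why granular cost attribution matters before launch as well as after.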
GridCARE reduces data centers’ time-to-power from 5-7 years to just 6-12 months by leveraging advanced generative AI-based analysis to find pockets with geographic and temporal capacity on the existing grid to enable faster deployment of GPUs and CPUs
GridCARE, a new company powering the AI revolution, emerged from stealth today. The company has closed a highly oversubscribed $13.5 million Seed financing round led by Xora, a deep technology venture capital firm backed by Temasek. GridCARE works directly with hyperscalers and some of the biggest AI data center developers to accelerate time-to-power for infrastructure deployment, both for upgrading existing facilities and identifying new sites with immediate power availability for gigascale AI clusters. By leveraging advanced generative AI-based analysis to find pockets with geographic and temporal capacity on the existing grid, GridCARE reduces data centers’ time-to-power from 5-7 years to just 6-12 months, allowing AI companies to deploy GPUs and CPUs faster. As the one-stop power partner for data center developers, GridCARE eliminates the complexity of navigating thousands of different utility companies so developers can focus on innovation rather than power acquisition. GridCARE is also actively partnering with utilities, such as Portland General Electric and Pacific Gas & Electric, who view better utilization of their existing grid assets as a way to increase revenues and bring the electricity cost down for all their customers. Additionally, this collaboration stimulates local economies with billions of dollars of new investment and high-paying job opportunities. “GridCARE is solving one of the biggest bottlenecks to AI data centers today — access to scalable, reliable power. Their differentiated, execution-focused approach enables power at speed and at scale,” said Peter Lim, Partner at Xora. “Power is the critical limiter to billions of dollars in AI infrastructure,” said Peter Freed, Partner at Near Horizon Group and former Director of Energy Strategy at Meta. “GridCARE uncovers previously invisible grid capacity, opening a new fast track to power and enabling sophisticated power-first AI data center development.”
BaaS provider Griffin is creating an agentic bank, offering an MCP server through which agents can open accounts, make payments, and analyze historic events
Banking-as-a-Service bank Griffin is opening up access to a Model Context Protocol (MCP) server, providing a way for AI agents to autonomously perform tasks on behalf of customers. Griffin, which secured a full banking licence in March last year, says the initiative is the beginning of a massive technological platform shift, which will see people delegating more and more of their work to AI. “We think there is much further to go…but to get there, the financial system has to be fundamentally rewired to accommodate a world in which agents can freely transact — while still retaining appropriate safeguards.” Potential use cases cited include end-to-end wealth management, payment admin and transactional capabilities. “This is early for us – we’re in beta – but it shows the power of what’s possible,” says the bank. “You can use the Griffin MCP server to have an agent open accounts, make payments, and analyse historic events. You can also use it to build complete prototypes of your own fintech applications on top of the Griffin API – which we’re already seeing customers doing in real time.”
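Griffin has not published its tool catalogue in this article, but MCP itself is an open protocol built on JSON-RPC 2.0, so an agent’s request to a banking tool would look roughly like the message below. Only the `tools/call` envelope comes from the MCP specification; the tool name `open_account` and its arguments are assumptions for illustration.

```python
import json

# Hypothetical MCP tool invocation. The "tools/call" method and the
# params shape {name, arguments} are defined by the MCP spec; the
# specific tool name and argument fields here are invented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "open_account",          # assumed tool name
        "arguments": {
            "customer_id": "cust_123",   # assumed fields
            "account_type": "savings",
        },
    },
}

wire_message = json.dumps(request)
```

The “appropriate safeguards” Griffin mentions would sit server-side: the bank decides which tools an agent may call and under what limits, rather than trusting the agent’s request.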
Wells Fargo exits 2018 consent order that imposed limits on growth in total assets, having addressed requirements for improvements in board effectiveness, firmwide compliance, operational risk programs and third-party independent reviews
Federal regulators moved to lift an unprecedented punishment that had handcuffed growth at Wells Fargo, a milestone in the bank’s efforts to repair its tarnished reputation after its fake-accounts scandal erupted nearly a decade ago. For the first time in seven years, the fourth-largest U.S. bank will be able to grow its balance sheet and redirect resources it had been pouring into efforts to fix itself. It will once again have the freedom to gather deposits, increase loans to companies and households and grow its Wall Street businesses or even do deals. The removal of the asset cap “marks a pivotal milestone” for Wells Fargo, said Chief Executive Charlie Scharf. The bank said it would give full-time employees a special $2,000 award. The asset-cap removal “reflects the substantial progress the bank has made in addressing its deficiencies,” the Fed said. Still, other provisions from the 2018 order “will remain in place until the bank satisfies the requirements.” The regulatory work has been all-consuming for the bank. The operating committee has for years started its meetings with sometimes hours-long reviews of where the bank stood on its regulatory work. Now, executives will have the freedom to focus on broader operations and strategy. The bank can make more loans and keep them on its balance sheet. It will seek to expand its branded credit-card business and to draw wealth-management clients to other services at the bank. Wells Fargo also has more room to focus on the Wall Street businesses, such as dealmaking and trading, that Scharf has sought to grow. The bank plans to expand head count in the corporate and investment bank, where it has tapped Fernando Rivas as head and hired dozens of senior bankers. It will also likely cut costs. The bank has hired an army of 10,000 employees across its risk and control groups to address the regulatory orders. Last year, it spent roughly $2.5 billion more in those groups compared with 2018.
Collibra survey reveals 86% of respondents cite protecting data privacy as a top concern with 76% citing ROI on data privacy and AI initiatives
Collibra survey found that 86% of respondents cite protecting data privacy as a top concern with 76% of respondents citing ROI on data privacy and AI initiatives across their organization. Notably, eight in 10 decision makers also said that data ownership has changed over the last year with the emergence of AI (85%). Despite concerns around data privacy and ROI, the survey indicates a strong overall momentum towards AI adoption, with 86% of organizations planning to proceed with their AI initiatives. However, this enthusiasm varies by company size. While nearly all large companies (96%) intend to forge ahead with their AI plans despite the evolving landscape, smaller (78%) and medium-sized (79%) organizations are exhibiting a more measured approach. On a positive note, the new survey also found that nearly nine in 10 decision-makers say that they have a lot or a great deal of trust in their own companies’ approach (88%) to shaping the future of AI, with three quarters (75%) agreeing that their company prioritizes AI training and upskilling, with decision-makers at large companies (1000+ employees) more likely than those at small companies (1-99 employees) to agree (87% vs. 55%).
When chatbots replace search bars, retailers will need to ensure their inventory, pricing and product information are optimized for AI crawling and decision-making algorithms
Shopping agents are positioned to become invisible sales conversion engines. As OpenAI, Perplexity, and others race to capture this trillion-dollar opportunity, the future of how consumers search and buy hinges on how these platforms will make money, and how their algorithms will decide which products to show consumers (or buy on their behalf). The answers will determine whether these chatbots deliver on their promise of personalized commerce in their truest and most authentic sense — or become a more sophisticated version of today’s pay-to-play search and commerce platforms. Their strength as a conversational interface, capable of understanding complex requests, makes them well-suited to complete purchases without users ever visiting a physical or digital store or leaving the conversation. An emerging Agentic AI commerce ecosystem now stands at the ready to help advance their ambitions. The speed at which the GenAI chatbots have amassed an audience shows the potential for how these models could upend the retail and commerce status quo by changing where consumers start their searches and end them with a purchase, without a lot of steps or friction in between. As AI agents increasingly handle the search and presentation of results (or completed sales), traditional retailers risk becoming invisible in the commerce ecosystem altogether. In this world, payment credentials might emerge as the real winner as embedded offers, financing, rewards and other data-driven incentives become an invisible part of the transaction. The venture capital pouring into these platforms signals expectations for massive adoption and ROI. GenAI represents a new form of commerce orchestration across marketplaces, social signals and retail inventory through a simple and single conversational interface. The unique nature of conversational AI suggests that different approaches might not just be possible but necessary to compete.
LLM platforms can build on GenAI’s growing sense of user trust, its potential for creating a distinctive new shopping and buying utility and its ability to monetize these new forms of value. How these models present recommendations and act on behalf of users presents new challenges because of their current lack of clarity — and new opportunities because of what they could become for the entire commerce ecosystem. And that could reshape how retailers and the ecosystem adapt their products and platforms to drive sales. For retailers and brands, that now means competing for both customer and AI attention. Retailers will need to ensure their inventory, pricing and product information are optimized for AI crawling and decision-making algorithms. The winners in this new era may be those who recognize that when conversations drive commerce, trust itself becomes the product. And that monetizing trust comes wrapped around a different business model.
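One concrete way retailers already expose machine-readable inventory and pricing is schema.org structured data (JSON-LD), which crawlers can parse without scraping rendered pages; AI shopping agents are likely to lean on the same signals. A minimal sketch follows, with invented product details; the `Product`/`Offer` vocabulary itself is the real schema.org standard.

```python
import json

# Structured product data a retailer might embed in a page so that
# crawling and decision-making algorithms can read price, stock and
# identity reliably. Product details here are invented.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe",
    "sku": "TRS-001",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized as JSON-LD, ready to embed in a <script> tag on the page.
json_ld = json.dumps(product, indent=2)
```

The design choice matters for the trust argument above: data an agent can verify directly (price, availability) is harder to game than ranking signals, which is one way “trust becomes the product.”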