While other European banks have largely retreated from the ultra-competitive U.S. retail market, Santander appears to be firmly staying put. Twenty years after entering the U.S., the company is doubling down on its commitment to invest in and expand its stateside operations. Much of its growth plan is pinned on Openbank, the digital deposit-gathering platform it rolled out in the U.S. in conjunction with a company-wide focus on being a "digital bank with branches." Openbank, which has been available in parts of Europe for years, offers Santander a way to attract low-cost deposits nationwide to help fund the bank's sizable auto loan book. Overseeing the evolution of Openbank into a fully transactional online bank is the top priority for Christiana Riley, who took over as CEO of Santander's U.S. operations. Riley is also charged with unifying Santander's previously disconnected U.S. business units to increase the company's revenue and profitability. The years that her predecessors spent laying the groundwork mean that Santander can now grow quickly, Riley said. Early results are promising. Openbank, whose high-yield savings account attracts customers with a premium interest rate, has reeled in nearly $4 billion of deposits since its launch in late October, Riley said. That's "well in excess" of what management had expected, she noted. The pace at which Santander grows may depend on how the U.S. economy fares in the coming months. If the Trump administration imposes global tariffs as planned, and a trade war ensues, the economy could fall into a recession, some economists have warned. Any headwinds the Spanish company faces "will be related to the U.S. economy," said Arnaud Journois, an analyst at Morningstar DBRS who covers European banks. In addition, it will take time to see whether Santander's deposit-gathering strategy leads to cheaper funding for its auto loans, he said. "I think they need a track record to see how it will unfold," Journois said. The company has taken a different path from its overseas peers, Journois said. At the company's 2023 investor day, executives laid out plans for higher growth and profitability, largely by leveraging the firm's global scale and business diversification. The U.S. market factored heavily into the overall strategy, with a focus on reducing funding costs and driving up fee income in corporate and investment banking as well as wealth management. "For Santander, the U.S. has been better performing than [it has been for] competitors," Journois said. As part of its initiative to expand its scale and attract more deposits, Santander recently announced a multiyear partnership with Verizon that allows eligible Verizon customers to open a high-yield savings account through Openbank. The deal provides the bank with a new pipeline of deposits and rewards Verizon customers with credits toward Verizon mobile and 5G home internet bills. Securing similar partnerships is something that Santander is "absolutely looking to do," Riley said. The development of Openbank is a chief focus for Riley. As part of Openbank's evolution, the digital platform is expected to offer checking accounts and certificates of deposit to customers by the end of the year, Riley said. The first Openbank-branded branch is scheduled to open later this month at Miami Worldcenter, a new retail and entertainment hub. Other sites may eventually follow in certain high-growth, high-density areas such as Southeast Florida, across the Sun Belt and into California, she said. Openbank, which requires a minimum deposit of $500 to open a savings account, is currently paying an interest rate of 4.4%. That's down from the 5.25% rate paid in October. Riley said the company is set on maintaining a "top quartile rate position" in order to draw deposits. "That's been a key feature … since launch and it's one we stand fully behind," she said, noting that Openbank deposits are "replacing vastly more expensive sources of funding" in the bank's consumer lending business. "We've got, probably relative to many others who are operating in this high-yield savings space, a significantly better opportunity to manage that spread margin."
Navy Federal offers Bloom+ credit builder as a checking account feature, reporting rent and utility payments as tradelines to TransUnion
Members of Navy Federal Credit Union can now build their credit scores through rent and utility bills, thanks to the credit union's work on consumer-permissioned data sharing. Navy Federal is offering its members the ability to report recurring payments to credit bureaus, which can enable consumers to qualify for credit from lenders that accept the information. The credit union partnered with Bloom Credit, a credit data infrastructure platform, to offer its consumer-permissioned data product Bloom+ to its 14 million members as a checking account feature in late March. Bloom+ gives financial institutions the ability to offer their customers an option for reporting existing payments from their checking accounts, such as rent and utility payments, as tradelines to TransUnion. Bloom Credit also works with Equifax and Experian for other products, but TransUnion is currently the only receiver of consumer-permissioned data shared through Bloom+. As consumers build their credit scores using this cash flow data, they can qualify for better loans even if their traditional credit history isn't strong. "Credit bureau data can often work to the detriment of consumers that have never held a loan of any kind before," said Justin Zeidman, vice president of strategy at Navy Federal. "One of the great things [about] consumer-permissioned data sharing is it allows the credit bureaus to treat a rent payment very similarly to the way it would treat a mortgage payment from the perspective of credit history. It allows the credit bureaus to see and track the responsible behavior of consumers who would have otherwise been completely invisible to the bureaus." Zeidman believes that consumer-permissioned data sharing and cash flow-based underwriting carry several benefits specifically for individuals serving in the military, Navy Federal's primary member base. "Enlisted members often join the military quite young," he said. "These are members that are sometimes 18 years old and receiving consistent paychecks with no credit history. These members are often looking for housing as they move from location to location, and access to credit can become incredibly important."
Dianomic's FogLAMP Suite 3.0 supports live digital twins and OT/IT convergence by abstracting enterprise-wide machines, sensors, and processes into a unified streaming analytics system that manages real-time operational data at scale
Dianomic, a leader in intelligent industrial data pipelines and edge AI/ML solutions, has launched FogLAMP Suite 3.0. The suite's "Intelligent Industrial Data Pipelines" abstract machines, sensors, and processes into a unified real-time data and streaming analytics system for brownfield and greenfield environments alike. By seamlessly connecting and integrating the plant floor to the cloud and back with high-quality, normalized streaming data, FogLAMP 3.0 enables innovations like AI-driven applications, digital twins, lakehouse data management, unified namespace and OT/IT convergence. FogLAMP Suite 3.0 creates an intelligent data fabric, unifying and securing real-time operational data at scale with enterprise-grade management. This comprehensive data flow empowers both plant-level optimization and cloud-based insights. Its role-based access control, intuitive graphical interface, and flexible development tools — ranging from no-code to source code — empower both IT and OT teams to collaborate effectively or work independently with confidence. FogLAMP Suite 3.0 key features: real-time, full-fidelity streaming analytics and data management, where the physical world meets the digital; enterprise-wide management, integration and monitoring of streaming data from diverse sources to clouds and back; live digital twins, with tag and namespace management, semantic models, and AI/ML-based detection, prediction and prescription; and compatibility with brownfield, greenfield and IIoT processes, equipment and sensors.
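To make the edge-to-cloud flow concrete, here is a minimal, hypothetical sketch of the ingest-normalize-forward pattern the suite describes. The schema, the celsius_filter transform, and the sample payloads are invented for illustration; this is not FogLAMP's actual plugin API.

```python
from datetime import datetime, timezone
from typing import Any

def normalize_reading(source: str, raw: dict[str, Any]) -> dict[str, Any]:
    """Map a device-specific payload onto a unified asset/readings record."""
    return {
        "asset": source,
        "timestamp": raw.get("ts") or datetime.now(timezone.utc).isoformat(),
        "readings": {k: v for k, v in raw.items() if k != "ts"},
    }

def celsius_filter(record: dict[str, Any]) -> dict[str, Any]:
    """Example in-pipeline transform: convert Fahrenheit readings to Celsius."""
    readings = record["readings"]
    if "temp_f" in readings:
        readings["temp_c"] = round((readings.pop("temp_f") - 32) * 5 / 9, 2)
    return record

# Stand-in for the "north" side of the pipeline (cloud, historian or lakehouse).
north_buffer: list[dict[str, Any]] = []

for source, raw in [
    ("boiler-3", {"ts": "2025-05-01T12:00:00Z", "temp_f": 212.0, "psi": 14.7}),
    ("conveyor-7", {"rpm": 1180, "vibration_mm_s": 2.3}),
]:
    north_buffer.append(celsius_filter(normalize_reading(source, raw)))

print(north_buffer)
```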
OpenAI's enterprise adoption appears to be accelerating at the expense of rivals – 32% of U.S. businesses are paying for subscriptions to OpenAI, versus 8% subscribing to Anthropic's products and 0.1% to Google AI
OpenAI appears to be pulling well ahead of rivals in the race to capture enterprises’ AI spend, according to transaction data from fintech firm Ramp. According to Ramp’s AI Index, which estimates the business adoption rate of AI products by drawing on Ramp’s card and bill pay data, 32.4% of U.S. businesses were paying for subscriptions to OpenAI AI models, platforms, and tools as of April. That’s up from 18.9% in January and 28% in March. Competitors have struggled to make similar progress, Ramp’s data shows. Just 8% of businesses had subscriptions to Anthropic’s products as of last month compared to 4.6% in January. Google AI subscriptions saw a decline from 2.3% in February to 0.1% in April, meanwhile. “OpenAI continues to add customers faster than any other business on Ramp’s platform,” wrote Ramp Economist Ara Kharzian. “Our Ramp AI Index shows business adoption of OpenAI growing faster than competitor model companies.” To be clear, Ramp’s AI Index isn’t a perfect measure. It only looks at a sample of corporate spend data from around 30,000 companies. Moreover, because the index identifies AI products and services using merchant name and line-item details, it likely misses spend lumped into other cost centers. Still, the figures suggest that OpenAI is strengthening its grip on the large and growing enterprise market for AI. OpenAI is projecting $12.7 billion in revenue this year and $29.4 billion in 2026.
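For a sense of how an index like this can be computed, the sketch below estimates vendor adoption as the share of businesses with at least one card charge whose merchant descriptor matches a vendor pattern. The vendor patterns and sample transactions are made up; Ramp's actual methodology also draws on bill-pay and line-item data.

```python
from collections import defaultdict

# Merchant-descriptor patterns per vendor (illustrative, not Ramp's real list).
VENDOR_PATTERNS = {
    "openai": ["openai", "chatgpt"],
    "anthropic": ["anthropic", "claude.ai"],
}

# (business_id, merchant_descriptor) pairs, as they might appear in card data.
transactions = [
    ("biz-001", "OPENAI *CHATGPT SUBSCR"),
    ("biz-002", "ANTHROPIC PBC"),
    ("biz-002", "OPENAI API"),
    ("biz-003", "ACME OFFICE SUPPLY"),
]

all_businesses = {biz for biz, _ in transactions}
subscribers: dict[str, set] = defaultdict(set)

for biz, descriptor in transactions:
    desc = descriptor.lower()
    for vendor, patterns in VENDOR_PATTERNS.items():
        if any(p in desc for p in patterns):
            subscribers[vendor].add(biz)

for vendor in VENDOR_PATTERNS:
    rate = 100 * len(subscribers[vendor]) / len(all_businesses)
    print(f"{vendor}: {rate:.1f}% of sampled businesses")
```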
Talent development, the right data infrastructure, industry-specific strategic bets, responsible AI governance and agentic architecture are key to scaling enterprise AI initiatives
A new study from Accenture provides a data-driven analysis of how leading companies are successfully implementing AI across their enterprises and reveals a significant gap between AI aspirations and execution. Here are five key takeaways for enterprise IT leaders from Accenture’s research.
Talent maturity outweighs investment as the key scaling factor. Accenture's research reveals that talent development is the most critical differentiator for successful AI implementation. "We found the top achievement factor wasn't investment but rather talent maturity," said Senthil Ramani, data and AI lead at Accenture. The report shows front-runners differentiate themselves through people-centered strategies: they focus four times more on cultural adaptation than other companies, emphasize talent alignment three times more and implement structured training programs at twice the rate of competitors. IT leader action item: Develop a comprehensive talent strategy that addresses both technical skills and cultural adaptation. Establish a centralized AI center of excellence – the report shows 57% of front-runners use this model compared to just 16% of fast-followers.
Data infrastructure makes or breaks AI scaling efforts. “The biggest challenge for most companies trying to scale AI is the development of the right data infrastructure,” Ramani said. “97% of front-runners have developed three or more new data and AI capabilities for gen AI, compared to just 5% of companies that are experimenting with AI.” These essential capabilities include advanced data management techniques like retrieval-augmented generation (RAG) (used by 17% of front-runners vs. 1% of fast-followers) and knowledge graphs (26% vs. 3%), as well as diverse data utilization across zero-party, second-party, third-party and synthetic sources. IT leader action item: Conduct a comprehensive data readiness assessment explicitly focused on AI implementation requirements. Prioritize building capabilities to handle unstructured data alongside structured data and develop a strategy for integrating tacit organizational knowledge.
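As a reference point for the RAG capability mentioned above, here is a minimal sketch of the retrieval-augmented generation pattern: retrieve the most relevant internal documents for a query, then hand them to a model as grounding context. The toy word-overlap scoring and the build_prompt helper are illustrative stand-ins; production systems would use embedding models, a vector store and an LLM call.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounding prompt an LLM call would receive."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

corpus = [
    "Refunds for enterprise plans are processed within 30 days.",
    "The Reno warehouse ships orders placed before 2pm the same day.",
    "Security reviews are required for all third-party AI vendors.",
]
query = "How long do enterprise refunds take?"
print(build_prompt(query, retrieve(query, corpus)))
```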
Strategic bets deliver superior returns compared to broad implementation. While many organizations attempt to implement AI across multiple functions simultaneously, Accenture's research shows that focused strategic bets yield significantly better results. "In the report, we referred to 'strategic bets,' or significant, long-term investments in gen AI focusing on the core of a company's value chain and offering a very large payoff," Ramani said. "This strategic focus is essential for maximizing the potential of AI and ensuring that investments deliver sustained business value." This focused approach pays dividends: companies that have scaled at least one strategic bet are nearly three times more likely to have their ROI from gen AI surpass forecasts compared to those that haven't. IT leader action item: Identify 3-4 industry-specific strategic AI investments that directly impact your core value chain rather than pursuing broad implementation.
Responsible AI creates value beyond risk mitigation. Most organizations view responsible AI primarily as a compliance exercise, but Accenture's research reveals that mature responsible AI practices directly contribute to business performance. "ROI can be measured in terms of short-term efficiencies, such as improvements in workflows, but it really should be measured against longer-term business transformation," Ramani said. The report emphasizes that responsible AI goes beyond risk mitigation: it strengthens customer trust, improves product quality and bolsters talent acquisition, directly contributing to financial performance. IT leader action item: Develop comprehensive responsible AI governance that goes beyond compliance checkboxes. Implement proactive monitoring systems that continually assess AI risks and impacts. Consider building responsible AI principles directly into your development processes rather than applying them retroactively.
Model Context Protocol, an open standard built around servers and clients, will be key to building secure, two-way connections between AI agents' data sources and tools as AI systems mature and start to maintain context
AI agents have been all the rage over the last several months, creating a need for a standard governing how they communicate with tools and data. That need led Anthropic to create the Model Context Protocol (MCP). MCP is "an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools," Anthropic wrote in a blog post announcing it was open sourcing the protocol. MCP can do for AI agents what USB does for computers, explained Lin Sun, senior director of open source at cloud native connectivity company Solo.io. According to Keith Pijanowski, AI solutions engineer at object storage company MinIO, an example use case for MCP is an AI agent for travel that can book a vacation that adheres to someone's budget and schedule. Using MCP, the agent could look at the user's bank account to see how much money they have to spend on a vacation, look at their calendar to ensure it's booking travel when they have time off, or even potentially look at their company's HR system to make sure they have PTO left. MCP consists of servers and clients: the MCP server is how an application or data source exposes its data, while the MCP client is how AI applications connect to those data sources. MinIO developed its own MCP server, which allows users to ask the AI agent questions about their MinIO installation, such as how many buckets they have, the contents of a bucket, or other administrative questions. The agent can also pass questions off to another LLM and then come back with an answer. "Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture," Anthropic wrote in its blog post.
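To illustrate the server side of the protocol, here is a sketch of a small MCP server in the spirit of the MinIO example, written against the official MCP Python SDK's FastMCP helper (the import path and decorators are assumed from the SDK's published quickstart; check the current docs). The list_buckets and bucket_object_count tools and their fake inventory are hypothetical, not MinIO's actual implementation.

```python
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK (API assumed)

mcp = FastMCP("object-store-admin")

# Fake inventory standing in for a real object-store backend.
FAKE_BUCKETS = {"raw-logs": 1204, "model-artifacts": 87, "invoices": 5521}

@mcp.tool()
def list_buckets() -> list[str]:
    """Return the names of all buckets in the installation."""
    return sorted(FAKE_BUCKETS)

@mcp.tool()
def bucket_object_count(bucket: str) -> int:
    """Return how many objects a bucket holds, so an agent can answer
    administrative questions like 'how many objects are in raw-logs?'."""
    return FAKE_BUCKETS.get(bucket, 0)

if __name__ == "__main__":
    # Serve the tools (stdio by default) so an MCP client -- the AI
    # application side -- can discover and call them.
    mcp.run()
```

An MCP client embedded in an AI application would connect to this server, discover the two tools, and let the model decide when to call them.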
A new HPC architecture with a "bring-your-own-code" (BYOC) approach would enable existing code to run unmodified; the underlying technology adapts to each application without new languages or significant code changes
There’s now a need for a new path forward that allows developers to speed up their applications with fewer barriers, which will ensure faster time to innovation without being locked into any particular vendor. The answer is a new kind of accelerator architecture that embraces a “bring-your-own-code” (BYOC) approach. Rather than forcing developers to rewrite code for specialized hardware, accelerators that embrace BYOC would enable existing code to run unmodified. The focus should be on accelerators where the underlying technology adapts to each application without new languages or significant code changes. This approach offers several key advantages: Elimination of Porting Overhead: Developers can focus on maximizing results rather than wrestling with hardware-specific adjustments. Software Portability: As performance accelerates, applications retain their portability and avoid vendor lock-in and proprietary domain-specific languages. Self-Optimizing Intelligence: Advanced accelerator designs can continually analyze runtime behavior and automatically tune performance as the application executes to eliminate guesswork and manual optimizations. These advantages translate directly into faster results, reduced overhead, and significant cost savings. Finally liberated from extensive code adaptation and reliance on specialized HPC experts, organizations can accelerate R&D pipelines and gain insights sooner. The BYOC approach eliminates the false trade-off between performance gains and code stability, which has hampered HPC adoption. By removing these artificial boundaries, BYOC opens the door to a future where computational power accelerates scientific progress. A BYOC-centered ecosystem democratizes access to computational performance without compromise. It will enable domain experts across disciplines to harness the full potential of modern computing infrastructure at the speed of science, not at the speed of code adaptation.
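As a toy illustration of the self-optimizing idea, the sketch below times a few candidate execution configurations at runtime and keeps the fastest, so the programmer never hand-tunes for the hardware. The workload and the tunable chunk size are invented for the example; a real BYOC-style accelerator would do this transparently beneath the application.

```python
import time

def workload(data: list, chunk: int) -> float:
    """Stand-in compute kernel whose speed depends on a tunable chunk size."""
    total = 0.0
    for start in range(0, len(data), chunk):
        total += sum(x * x for x in data[start:start + chunk])
    return total

def autotune(data: list, candidates=(256, 1024, 4096, 16384)) -> int:
    """Time each candidate configuration at runtime and keep the fastest."""
    timings = {}
    for chunk in candidates:
        start = time.perf_counter()
        workload(data, chunk)
        timings[chunk] = time.perf_counter() - start
    return min(timings, key=timings.get)

data = [float(i % 97) for i in range(200_000)]
best = autotune(data)
print(f"auto-selected chunk size: {best}")
print(f"result: {workload(data, best):.1f}")
```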
The line between eCommerce and fintech is disappearing, and the future belongs to integrated ecosystems that combine seamless shopping experiences with embedded financial solutions: Analyst
Jose Daniel Duarte Camacho, a renowned eCommerce and FinTech innovator, has outlined a vision for the future of digital commerce and financial services. He believes that companies that embrace digital agility and customer-centric strategies will emerge as frontrunners in this wave of technological disruption. Duarte Camacho believes that the line between eCommerce and financial technology is disappearing, and the future belongs to integrated ecosystems that combine seamless shopping experiences with embedded financial solutions. Consumers expect speed, trust, and personalization at every touchpoint. Duarte Camacho has identified four major trends that are shaping the future of eCommerce: AI-Driven Hyperpersonalization: Retailers are using machine learning to adapt in real time to individual user behavior. Product recommendations, pricing, and content are becoming uniquely tailored to each customer—boosting conversion rates and customer satisfaction. Immersive Shopping Experiences with AR and VR: Augmented and virtual reality tools are transforming product visualization and engagement. Customers can now preview how furniture fits in a room or how a garment looks on them—without setting foot in a store. Eco-Conscious Consumer Demands: Sustainability is no longer a bonus; it’s a business imperative. eCommerce platforms that prioritize eco-friendly packaging, carbon-neutral shipping, and ethical sourcing are capturing the loyalty of a new generation of socially conscious shoppers. Conversational Commerce and Voice Technology: Voice assistants and chat-based shopping are simplifying online transactions. Duarte Camacho believes brands must optimize for voice commerce and natural language processing to remain competitive in the evolving customer interface.
Sakana's Continuous Thought Machine (CTM) AI model architecture gives each neuron a short-term memory of its previous states and lets neural synchronization emerge organically, aiming to mirror brain-like intelligence
AI startup Sakana has unveiled a new type of AI model architecture called Continuous Thought Machines (CTM). Rather than relying on fixed, parallel layers that process inputs all at once, as Transformer models do, CTMs unfold computation over internal steps within each input/output unit, known as an artificial "neuron." Each neuron retains a short history of its previous activity and uses that memory to decide when to activate again, so each neuron operates on its own internal timeline. These decisions unfold over internal steps known as "ticks," which lets the model adjust the depth and duration of its reasoning dynamically depending on the complexity of the task; as a result, each neuron is far more informationally dense and complex than in a typical Transformer model. The number of ticks varies with the input, and can differ even when the input is identical, because each neuron decides how many ticks to take before producing an output (or not producing one at all). This represents both a technical and philosophical departure from conventional deep learning, moving toward a more biologically grounded model. Sakana has framed CTMs as a step toward more brain-like intelligence: systems that adapt over time, process information flexibly, and engage in deeper internal computation when needed. Sakana's goal is "to eventually achieve levels of competency that rival or surpass human brains." The CTM is built around two key mechanisms. First, each neuron maintains a short "history," or working memory, of when it activated and why, and uses this history to decide when to fire next. Second, neural synchronization — how and when groups of a model's artificial neurons "fire," or process information together — is allowed to happen organically: groups of neurons decide when to fire together based on internal alignment, not external instructions or reward shaping. These synchronization events are used to modulate attention and produce outputs; attention is directed toward the areas where more neurons are firing. The model isn't just processing data, it's timing its thinking to match the complexity of the task. Together, these mechanisms let CTMs reduce computational load on simpler tasks while applying deeper, prolonged reasoning where needed.
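The following toy sketch illustrates the two mechanisms described above (per-neuron short-term memory and a variable number of internal ticks); it is not Sakana's actual CTM implementation, and the thresholds, history length, noise term and halting rule are invented for the example.

```python
import random
from collections import deque

random.seed(0)

class TickingNeuron:
    """A neuron with a short-term memory of its own recent activity."""

    def __init__(self, history_len: int = 5, fire_threshold: float = 0.5):
        self.history = deque(maxlen=history_len)  # 1.0 if it fired on that tick
        self.fire_threshold = fire_threshold

    def step(self, drive: float) -> bool:
        """One internal tick: fire (or not) based on the current input drive
        plus the memory of this neuron's previous states."""
        recent = sum(self.history) / len(self.history) if self.history else 0.0
        activation = 0.7 * drive + 0.3 * recent + random.uniform(0.0, 0.25)
        fired = activation > self.fire_threshold
        self.history.append(1.0 if fired else 0.0)
        return fired

def think(neurons: list, drive: float, max_ticks: int = 32) -> int:
    """Run ticks until most neurons fire on the same tick (a crude stand-in
    for synchronization) or the tick budget runs out; return ticks used."""
    for tick in range(1, max_ticks + 1):
        fired = sum(n.step(drive) for n in neurons)
        if fired >= 0.8 * len(neurons):
            return tick
    return max_ticks

# A stronger (easier) input settles in fewer ticks than a weaker (harder) one.
print("ticks on easy input:", think([TickingNeuron() for _ in range(100)], 0.9))
print("ticks on hard input:", think([TickingNeuron() for _ in range(100)], 0.45))
```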
Jenius Bank surpassed $2 billion in deposits with its no-fee ‘evolved banking’ approach, centered on providing personalized financial insights from account aggregation
Jenius Bank has surpassed $2 billion in deposits by focusing on "evolved banking" — providing personalized financial insights through account aggregation while eliminating fees to help customers gain financial confidence and make better decisions. John Rosenfeld, president of Jenius Bank, a division of SMBC MANUBANK, said: "We developed two concepts within a paradigm, if you will. There's core banking, which is what every bank does: it allows you to put money with them, go online, see how much you have and how much you're earning, move money in and out, review your statements, read your terms and conditions; all that stuff that every bank does. We call that core banking. We developed the concept of evolved banking, which encompasses everything beyond core features that not every bank offers. And we grouped all this into something we call the Jenius views. So, if you download our mobile app, you'll find this tab at the bottom. And within this space, you're able to link your accounts from other banks, other brokerages. You can view credit cards and your entire financial picture in one place. Again, this allows the consumer to give us access to their other information, enabling us to consolidate and provide them with valuable insights. While there are some banks that are doing this, what we call aggregation services, many of them are doing it to gain a view of the customer's financial situation and then potentially use that information to try to figure out what else they can sell them. We took a different approach. We said, what if we used all that information to actually give customers insights and help them avoid fees, making smarter and more confident financial decisions? Now, why would a bank do such a thing that's not necessarily going to bolster their profits? We thought about this and concluded that if we could establish a new level of trust with consumers, the next time they have a financial need, we hope they'll come back to us first. The idea is that money is such an emotional driver; it has nothing to do with how much you make or don't make, but rather whether you are making good decisions. With the capabilities that are evolving in the data space and analytics and machine learning and AI, if a consumer gives someone full access to every penny that they have, and not access to move the money but access to the information, think of how much you can do with technology to identify those things that you may not have noticed. The lack of fees on our savings or a loan product was really driven by wanting to create something better and more compelling than what's available in the industry. We created a bank that's incredibly efficient because we don't have buildings, we don't have paper, we don't mail things. So, we don't spend any money on postage. The target was really what we call high-potential digital optimizers. And we call it that because high potential means they're going somewhere and are ambitious. They want to progress in building a better lifestyle and achieving more."
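As a hypothetical sketch of the aggregation-to-insight idea Rosenfeld describes, the code below scans transactions pulled from a customer's linked external accounts and surfaces fees the customer may not have noticed. The Transaction data model and the keyword rule are invented for illustration and are not Jenius Bank's implementation.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str       # which linked external account the charge came from
    description: str
    amount: float      # negative = money out

# Keyword rule standing in for a real fee-classification model.
FEE_KEYWORDS = ("overdraft fee", "monthly maintenance fee", "atm fee")

def fee_insights(transactions: list) -> list:
    """Turn aggregated transactions into plain-language insights about fees."""
    insights = []
    for t in transactions:
        if any(k in t.description.lower() for k in FEE_KEYWORDS):
            insights.append(
                f"You paid ${abs(t.amount):.2f} in fees on {t.account} "
                f"({t.description.title()})."
            )
    return insights

linked = [
    Transaction("External Checking ••1234", "OVERDRAFT FEE", -35.00),
    Transaction("External Checking ••1234", "Grocery Store", -82.17),
    Transaction("Brokerage ••9876", "Monthly Maintenance Fee", -6.95),
]
for message in fee_insights(linked):
    print(message)
```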
