Quick commerce (q-commerce) has become the new benchmark for convenience in retail, offering delivery times as short as 15 minutes from mouse-click to doorstep. Venture capitalists have poured money into q-commerce startups, raising questions about the model’s sustainability and about how omnichannel retailers can compete. Uber recently announced a major partnership with Sephora, making the beauty retailer the first prestige brand to launch on the Uber Eats platform across the US and Canada. Customers can now shop Sephora’s full assortment of beauty, skincare, fragrance, haircare, and wellness products directly through the Uber Eats app, and Uber has committed to in-store price parity, ensuring no mark-ups for customers. The partnership may see Uber breaking ground into super app territory, much as Sephora’s existing partnership with Instacart does, and the future of q-commerce is likely to combine in-store fulfillment with online delivery services.

Super apps such as WeChat are increasingly important in China, where smartphones are the norm. These apps offer a range of features, including messaging, video, and storefronts, and can be used for outbound marketing. The rise of WeChat is linked to the QR code, which has become the magic sauce of mobile commerce in China: QR codes perform various functions within the app, including payments and links to brand accounts. Rappi, a Latin American super app, offers meals, groceries, pharmacy essentials, pet supplies, and gifts, along with turbo delivery, financial services, and travel services. Mercado Libre, another Latin American super app, offers e-commerce, payment, and logistics services. Noon, the leading e-commerce platform in Saudi Arabia, the UAE, and Egypt, offers shopping, delivery, fashion, food, and payments across the Middle East. Grab, operating in Southeast Asia, offers ride-hailing, food delivery, grocery delivery, package delivery, and financial services.

Retail media is a central pillar in the growth strategy of super apps, as every app screen serves as a digital shelf for brands to place ads, promos, or featured products. Super apps own the user’s attention and transactions, holding rich first-party data that powers precision targeting, measurement, and real-time marketing.
Tokenized assets require unified golden record systems with provable time stamps, cross-chain event logic, and selective disclosure protocols to achieve legal enforceability and real-world financial asset functionality
Progress in real-world asset (RWA) adoption has largely been attributed to regulation, but the industry is increasingly seeing the need for synchronization: aligning on-chain state with off-chain reality across time, jurisdictions, and systems. Tokenizing a bond or a building is now relatively straightforward, but making that digital claim behave like a real financial asset, with enforceable rights, deadlines, and dispute resolution, is still a challenge. Synchronization is essential for RWAs to function, because it means maintaining a single, immutable ‘golden record’ for legally significant events such as property deeds, invoice payments, or NAV updates. Legal enforceability often fails in tokenized markets, however, because many teams rush to sell tokens before completing the legal stack. Synchronization also requires respecting the cadence of property law, lien registration, and dispute processes. Privacy is another crucial aspect: public blockchains were built on transparency, but in regulated markets full transparency is often a blocker. In traditional finance, synchronized ledgers already exist; DeFi has no equivalent yet. Tokenization will remain a surface-level achievement until it builds them, and the promise of programmable assets will stall not on regulation but on coordination. The irony is that regulation may now be the easy part, while synchronization of provable time, cross-chain event logic, enforceable legal state, and selective disclosure is the harder frontier. Solving this will allow RWAs to move from being tokenized to being truly usable.
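What the ‘golden record’ requirement can look like in practice is sketched below, purely for illustration and not as any platform’s actual design: an append-only event log in which each legally significant event (a deed transfer, an invoice payment, a NAV update) carries a timestamp and is hash-chained to the previous entry, so its ordering is provable after the fact. A real system would anchor these hashes on-chain and layer selective disclosure on top for regulators; all names here are hypothetical.

```python
# Illustrative sketch of an append-only "golden record" event log with provable ordering.
# All names are hypothetical; real systems would anchor these hashes on-chain and pair
# them with selective-disclosure mechanisms for regulated counterparties.
import hashlib, json, time

class GoldenRecord:
    def __init__(self):
        self.events = []  # append-only list of legally significant events

    def append(self, event_type: str, payload: dict) -> dict:
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        body = {
            "type": event_type,        # e.g. "DEED_TRANSFER", "INVOICE_PAID", "NAV_UPDATE"
            "payload": payload,
            "timestamp": time.time(),  # in practice, a trusted or notarized time source
            "prev_hash": prev_hash,    # hash-chaining makes reordering detectable
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.events.append(body)
        return body

record = GoldenRecord()
record.append("INVOICE_PAID", {"invoice_id": "INV-001", "amount": 25000})
record.append("NAV_UPDATE", {"fund": "FUND-A", "nav": 101.37})
```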
AI-powered programming technique transforms natural language prompts into executable code through pattern-matching algorithms trained on billions of GitHub repository lines, eliminating traditional syntax requirements
Currently, vibe coding and being a vibe coder are an ad hoc, seat-of-the-pants foray. Whether the practice matures into something more formalized and studious is an open question. The betting line is that it won’t go through any stringent formalization: the assumption is that, by and large, most vibe coding will be undertaken by a hands-off chunk of the world’s population and mainly be performed on an off-the-cuff basis. Maybe we will eventually end up with two classes of vibe coders, professional versus amateur, of which only a tiny proportion will be in the professional bucket. Generative AI normally takes as input a series of prompts from a user and then tries to answer questions or generate stories and responses based on what the user asked for. The underlying mechanism is immense pattern-matching across vast arrays of human writing: an LLM is set up by explicitly training on human-written content that the AI patterns on. The result is an amazingly fluent-seeming AI that interacts in a human-like, conversational way. The same can be done for the writing of programming code. If a vibe coder happens to also be a proficient software builder, they likely can indeed look at the generated code and fix it; that is the circumstance of the vibe coder doing both the code generation via prompts and the debugging of the generated code. The thing is that vibe coding is presumably supposed to be a widely adoptable approach to producing programs. A vital assumption is that end-users with near-zero programming knowledge will use AI to produce programs, so the widest possible use of vibe coding will be by people who aren’t practically able to tackle the code that has been generated. The future suggests that either the AI will be improved such that the code is perfect at the get-go, or the AI will be proficient at squaring away the code it has generated.
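As a hedged illustration of the loop described above (not any particular product’s workflow), the sketch below shows a natural-language prompt going to an LLM, code coming back, and a human deciding whether to accept it. The generate_code helper is a hypothetical stand-in for whatever model API is actually used.

```python
# Hypothetical sketch of a prompt-to-code "vibe coding" loop.
# generate_code() stands in for any LLM call (hosted API, local model, etc.);
# the key point is that the AI only proposes code and the human accepts or rejects it.
def generate_code(prompt: str) -> str:
    """Placeholder for an LLM call that returns source code for the prompt."""
    raise NotImplementedError("wire this to the model of your choice")

def vibe_code(task_description: str, max_attempts: int = 3) -> str:
    prompt = f"Write a Python function that does the following:\n{task_description}"
    for attempt in range(max_attempts):
        code = generate_code(prompt)
        print(f"--- attempt {attempt + 1} ---\n{code}")
        if input("Accept this code? [y/N] ").strip().lower() == "y":
            return code
        feedback = input("Describe what is wrong, in plain language: ")
        prompt += f"\nThe previous attempt was rejected because: {feedback}"
    raise RuntimeError("No acceptable code produced; a proficient reviewer is still needed")
```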
Meta’s tiny MobileLLM-R1 hits 74% accuracy on the MATH benchmark with just 950M parameters, while Google’s Gemma 3 270M runs 25 on-device conversations on roughly 0.75% of a phone’s battery
Meta’s MobileLLM-R1, a family of sub-billion-parameter models, delivers specialized reasoning. Its release is part of a wider industry push to develop compact, powerful models that challenge the “bigger is better” narrative. MobileLLM-R1 comes in 140M, 360M, and 950M parameter sizes and is purpose-built for math, coding, and scientific reasoning (the models are not suitable for general chat applications). Their efficiency stems from design choices Meta laid out in the original MobileLLM work, optimized specifically for sub-one-billion-parameter architectures. The 950M model slightly outperforms Alibaba’s Qwen3-0.6B on the MATH benchmark (74.0 vs. 73.0) and establishes a clear lead on the LiveCodeBench coding test (19.9 vs. 14.9). This makes it well suited to applications requiring reliable, offline logic, such as on-device code assistance in developer tools. While MobileLLM-R1 pushes the performance boundary, the broader SLM landscape offers commercially viable alternatives tailored to different enterprise needs. Google’s Gemma 3 270M, for instance, is an ultra-efficient workhorse: at just 270 million parameters, it is designed for extreme power savings, and internal tests showed 25 conversations consumed less than 1% of a phone’s battery. Its permissive license makes it a strong choice for companies looking to fine-tune a fleet of tiny, specialized models for tasks like content moderation or compliance checks. Instead of paying per API call, you can license a model once and use it indefinitely on-device. This approach also solves for privacy and reliability, as processing sensitive data locally enhances compliance and ensures applications work without a constant internet connection. The potential impact is significant, with a “trillion-dollar opportunity in the small model regime by 2035.” The availability of capable SLMs enables a new architectural playbook: instead of relying on one massive, general-purpose model, organizations can deploy a fleet of specialist models.
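A minimal sketch of what running a sub-billion-parameter reasoning model locally looks like with the Hugging Face transformers library. The model identifier below is an assumption about where the 950M checkpoint would be published; substitute whatever ID Meta actually uses.

```python
# Minimal sketch: running a small on-device reasoning model with Hugging Face transformers.
# The model ID below is an assumption; replace it with the identifier Meta publishes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/MobileLLM-R1-950M"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Solve step by step: what is the greatest common divisor of 48 and 180?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights stay on the device, this pattern avoids per-call API costs and keeps sensitive prompts local, which is the privacy and reliability argument made above.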
Enterprise security platform consolidation leverages unified governance frameworks with centralized identity management and automated threat response playbooks, reducing alert fatigue while maintaining architectural standards across diverse operations
Enterprises with diverse operations often face a chaotic mix of tools and policies, which is why many security vendors position platform consolidation as the path to stronger resilience. Without a central hub, even minute changes in business initiatives can create policy drift that leaves critical systems exposed. “As you deploy tools or you bring on products, the big win [would be] with a centralized core; everyone should have some sort of centralized core,” said Stephen Harrison, chief information security officer of MGM Resorts International. “A centralized core is important because that lets you address policy drift as you buy new companies, sell new companies, have new business initiatives, bring on new marketing firms [and] different campaigns.” Centralization also allows security teams to maintain visibility across diverse environments, such as stadiums, hotels and gaming platforms, and that kind of consistency helps enforce architectural standards across the enterprise, according to Harrison. “When you think about the centralized security stack for CrowdStrike, that’s one of the big advantages of it: being able to address policy drift at its core inside the platform.” “With agentic AI, the identity issue becomes compounded,” he added, and platform consolidation helps counter that sprawl by grounding decisions in unified governance and shared principles. NG SIEM ties the strategy together, according to Harrison: by combining automation, analytics and AI-driven insights, it helps teams cut through alert fatigue and focus on what matters most.
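Purely as an illustration of what “addressing policy drift at a centralized core” can mean in practice (not CrowdStrike’s or MGM’s actual tooling), the sketch below compares each business unit’s deployed security settings against a central baseline and flags deviations for the SIEM or a remediation playbook. All setting names and units are hypothetical.

```python
# Hypothetical sketch: detect policy drift by diffing deployed settings against a central baseline.
CENTRAL_BASELINE = {
    "mfa_required": True,
    "max_session_minutes": 30,
    "endpoint_agent_version": "7.2",
}

deployed = {
    "hotels":   {"mfa_required": True,  "max_session_minutes": 30, "endpoint_agent_version": "7.2"},
    "stadiums": {"mfa_required": True,  "max_session_minutes": 60, "endpoint_agent_version": "7.0"},
    "gaming":   {"mfa_required": False, "max_session_minutes": 30, "endpoint_agent_version": "7.2"},
}

for unit, settings in deployed.items():
    # Map each drifted key to (expected value, deployed value)
    drift = {k: (v, settings.get(k)) for k, v in CENTRAL_BASELINE.items() if settings.get(k) != v}
    if drift:
        print(f"[DRIFT] {unit}: {drift}")  # feed into the SIEM or an automated remediation playbook
```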
AI networking startup Upscale AI builds open-standard infrastructure using the Switch Abstraction Interface, Ultra Ethernet protocols, and a unified SONiC-based operating system to scale GPU clusters with vendor-agnostic hardware freedom
Startup Upscale AI is targeting an AI networking infrastructure market that’s already valued at more than $20 billion annually, and it is quietly confident it will be able to take a huge bite out of that segment. According to Upscale AI Chief Executive Barun Kar, AI networks require a full-stack redesign, and that means developing specialized auxiliary processing units, or XPUs, ultra-low-latency interconnects and a more power-efficient operating system that can scale to support enormous clusters of thousands of graphics processing units. Upscale AI is racing to build this, with one of its core components being a new kind of AI Network Fabric that it says is designed to enhance the performance of XPU clusters. XPUs are critical because they take care of the specialized infrastructure tasks in AI systems, so that GPUs can be used exclusively for computation. The startup has also built an all-new unified network operating system based on open standards such as the Switch Abstraction Interface and Software for Open Networking in the Cloud, known as SAI/SONiC, which enables infrastructure to scale through in-service network upgrades that maximize uptime. Another key development is Upscale AI’s novel AI networking rack platform, which gives network operators the freedom to choose networking hardware from any vendor. Although Upscale AI does not provide any numbers, it’s confident that its redesigned networks will deliver “breakthrough performance” for AI training, inference and edge AI deployments, and its long list of financial backers suggests that many believe its claims. The startup’s network brings together a host of different open standards, enabling complete freedom of choice for data center operators: in addition to SAI/SONiC, it’s built on standards such as Ultra Accelerator Link and Ultra Ethernet.
M&T Bank is making its data AI-ready with software that speeds up the production of data lineage, provides a single repository and enables interrogation and analysis that would not previously have been possible
“Data and AI come very tightly coupled, because it’s quite hard often for AI deployment to be successful without the trusted data that you need for it to be successful,” Andrew Foster, chief data officer at M&T Bank in Buffalo, told American Banker. Like some other data chiefs in the industry, Foster’s remit includes defining and executing both an AI strategy and a data strategy for the bank. He chose Microsoft Copilot as the bank’s generative AI tool. Today, 16,000 of the bank’s 22,000 employees use the gen AI model for first drafts of emails and reports, and to summarize call center conversations. “For anything involving capturing and using and interrogating text, it’s a starting point,” Foster said. Generative AI can also interrogate SQL databases, he noted. M&T’s software developers use GitLab to help generate code. In most such use cases, “gen AI gets you 60% of the way, then a human reviews it and takes it the other 40%,” Foster said. The benefit is an “uplift in human efficiency, which is obviously useful,” Foster said. “It makes everyone’s work better, faster, stronger.” Having generative AI summarize calls, for instance, saves about six minutes per call. Employees quickly grow fond of the tools, according to Foster. At one point, M&T ran a pilot with 800 people, then got pushback when it considered shutting down the gen AI model. “People say, ‘It’s transcendent, I can’t go back to the way things were,’” Foster said. But he also noted one challenge of large language models: the problem of having multiple right answers. “If you ask Copilot, help me craft an email or help me craft a press release, you could get three different versions, and each of them is right for its own version of rightness,” he said. “So we’ve put human decision-making, critical thinking, at the center of AI adoption. You’re not deferring your own judgment to the machine through the adoption of Copilot. It’s giving you more tools to be effective, but the human being retains that accountability.” When Foster arrived at M&T in March 2023, after 12 years in a similar job at Deutsche Bank, he started a data academy providing in-person and remote training on data governance; so far, 2,000 people have gone through the training. He also began a data lineage initiative. “This wasn’t in response to gen AI,” Foster said. “I saw it as a core capability: Do we know where our data comes from and how we use it, how do we bring it to a level where we can interrogate it, how all the data goes from point A to point B?” His team created a repository called Edison that contains authoritative documents and data on all bank policies. The bank deployed data lineage software from Solidatus and from Monte Carlo. The Solidatus software speeds up the production of data lineage, Foster said. It also provides a single repository for the bank’s data, which enables interrogation and analysis that would not previously have been possible, helping to make M&T’s data AI-ready. Solidatus integrates with databases and applications and retrieves metadata and lineage from within them, explained Tina Chace, vice president of product at Solidatus.
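As an illustrative sketch of what interrogable data lineage enables (this is not Solidatus’ actual API), lineage can be modeled as a directed graph so that questions like “which systems feed this report field?” become simple graph queries. The node and transform names below are hypothetical.

```python
# Illustrative sketch: representing data lineage as a directed graph so it can be
# interrogated, e.g. "where does this report field come from?" (not Solidatus' API).
import networkx as nx

lineage = nx.DiGraph()
# Hypothetical hops: source system -> warehouse table -> regulatory report field
lineage.add_edge("core_banking.accounts", "warehouse.dim_account", transform="nightly ETL")
lineage.add_edge("warehouse.dim_account", "reports.call_report.rc_b", transform="aggregation")

# Trace every upstream source feeding a given report field
upstream = nx.ancestors(lineage, "reports.call_report.rc_b")
print(sorted(upstream))  # ['core_banking.accounts', 'warehouse.dim_account']
```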
Minnesota CU intends to launch its own stablecoin as “crypto platforms are pulling deposits and activity away from community institutions”; Metallicus’ Metal Blockchain product enables an offering that is regulated and compatible with existing banking rails
St. Cloud Financial Credit Union is issuing its own white-label stablecoin by the end of this year. The Minnesota-based credit union is working with distributed ledger technology company Metallicus and DaLand CUSO to issue a “Cloud Dollar” stablecoin as part of a larger-scale digital asset vault launch in Q4 2025. The company asserts that its upcoming stablecoin will be the first credit-union-issued stablecoin in the U.S. “Our decision to launch a white-label stablecoin is adding to our arsenal of the use cases we have the ability to offer as the digital asset industry continues to emerge,” Jed Meyer, CEO of St. Cloud Financial, told American Banker. “We believe it’s going to emerge faster than most are expecting it to.” The stablecoin, labeled as $CLDUSD, will be launched through DaLand’s Coin2Core software product, which currently integrates with St. Cloud’s core banking systems for posting, reconciliation and reporting. St. Cloud Financial is the largest shareholder of DaLand, a credit union service organization also owned by two other credit unions and a handful of private investors, according to Meyer. The stablecoin will be issued via Metallicus’ Metal Blockchain product. “Outside crypto platforms are pulling deposits and activity away from community institutions,” said Marshall Hayner, CEO of Metallicus. “By issuing $CLDUSD on a compliance-first foundation and connecting it to its core, St. Cloud makes stablecoins useful on day one: regulated, member-facing and compatible with existing banking rails.” Metallicus launched a stablecoin pilot program in June of this year, which St. Cloud Financial joined, but the credit union had its own processes in the works with Metallicus since the beginning of this year. “St. Cloud’s path has been different from our traditional Stablecoin Pilot Program members,” Metallicus Director of Marketing Will Cleaver told American Banker. “Metallicus, St. Cloud and DaLand CUSO together have been exploring ways to collaborate for well over a year. Both DaLand and the St. Cloud team already have a strong understanding of blockchain and digital assets, and are frequent advocates.” “This milestone is the natural progression of the digital asset vault strategy SCFCU has been advancing with DaLand over the past four years,” said Chase Larson, St. Cloud Financial’s chief lending officer. “The core-integrated platform ensures members can securely vault approved digital assets. Our strategic approach supports multiple use cases and gives us the flexibility to evolve as member and market needs change.” J.W. Verret, associate professor of law at George Mason Law School, believes there shouldn’t be any major barriers to entry for smaller financial institutions that want to start issuing their own stablecoins. “Community banks and credit unions have deeper loyalty relationships with their customers than mega-banks,” Verret told American Banker. “They can be a trusted onboarding partner for this new technology that some of their customers don’t know or don’t trust yet.”
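Purely for illustration, and not Metallicus’, DaLand’s, or St. Cloud’s actual implementation: the bookkeeping a core-integrated, fully reserved stablecoin implies can be reduced to a mint/redeem ledger that never lets tokens outstanding exceed dollars held in reserve, which is the invariant that posting and reconciliation against the core are meant to enforce. All names below are hypothetical.

```python
# Purely illustrative mint/redeem ledger for a 1:1 reserve-backed stablecoin.
# Real issuance would run on a blockchain with compliance checks; this only shows
# the reserve-parity invariant that core-system reconciliation enforces.
class StablecoinLedger:
    def __init__(self):
        self.reserve_usd = 0.0         # dollars held at the credit union
        self.tokens_outstanding = 0.0  # tokens in member wallets
        self.balances = {}

    def mint(self, member: str, usd: float) -> None:
        self.reserve_usd += usd        # member deposit posts to the core first
        self.tokens_outstanding += usd
        self.balances[member] = self.balances.get(member, 0.0) + usd
        assert self.tokens_outstanding <= self.reserve_usd  # parity invariant

    def redeem(self, member: str, tokens: float) -> None:
        if self.balances.get(member, 0.0) < tokens:
            raise ValueError("insufficient token balance")
        self.balances[member] -= tokens
        self.tokens_outstanding -= tokens
        self.reserve_usd -= tokens     # dollars leave the reserve back to the member
```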
Senators introduce legislation to shield seniors, allowing companies to postpone payments for up to 15 business days and to delay redemptions when exploitation is suspected
On Thursday, Sens. Ruben Gallego (D-Ariz.) and Bill Hagerty (R-Tenn.) introduced the Financial Exploitation Prevention Act. The legislation would give the financial services industry better tools to address suspected financial exploitation and abuse of seniors, as well as those with mental and physical disabilities. “Far too many of Arizona’s seniors fall victim to scammers who target their hard-earned life savings,” Gallego said in a statement. “Once the money is gone from their accounts, it’s almost impossible to get back. I’m proud to lead this legislation to strengthen the financial industry’s ability to step in when they suspect fraud and give seniors a critical safeguard against bad actors. This bill will protect Arizona’s retirees and keep their accounts safe when they need them most.” “Older Americans are being exploited through financial scams, losing billions of dollars each year,” Hagerty said. “I’m pleased to reintroduce this legislation with Senator Gallego to provide financial professionals the ability to address the growing issue of financial exploitation and abuse of vulnerable investors.” The Financial Exploitation Prevention Act would give the financial services industry new tools to address suspected abuse of “specified adults,” defined by the bill as anyone 65 or older, or adults with physical or mental impairments that make them unable to protect their own interests. Companies could postpone payments for up to 15 business days, with the option to extend for an additional 10 business days if they notify designated contacts, conduct an internal review and maintain funds in a demand deposit account. The bill directs the Securities and Exchange Commission (SEC) to recommend legislative and regulatory changes to Congress, and it allows mutual funds and their transfer agents to delay redemptions if exploitation is suspected. The proposal calls for the SEC to consult with the Federal Reserve, the Commodity Futures Trading Commission, the Consumer Financial Protection Bureau, banking regulators and state securities officials. The measure comes as 10,000 Americans turn 65 each day, with seniors projected to make up roughly 20% of the U.S. population by 2030. Recent data from the Federal Trade Commission shows that the 60-and-older population lost more than $1.9 billion to financial scams in 2023.
Enterprises achieving data sovereignty deliver 5x higher AI ROI, with front-runners deploying 2x more agentic GenAI, according to an EDB study spanning 15,000+ simulations and 13 countries
EnterpriseDB (EDB) has released a report revealing a significant divide between enterprises successfully leveraging agentic and generative AI and the 87% at risk of falling behind. The report, Sovereignty Matters: A Global Blueprint for Sovereign, Agentic, and Generative AI, is based on in-depth interviews with 2,050 senior executives across 13 countries and over 15,000 simulations. The findings show that organizations that prioritize data and AI sovereignty are reaping rewards on an unprecedented scale, achieving up to 5x higher ROI in terms of innovation, efficiency, and long-term competitive value. These front-runners are 90% more likely to achieve transformative AI results and deploy twice as much mainstream agentic and GenAI as their peers. The report also identifies over 200 companies that embody the qualities of the “Deeply Committed” and are winning the agentic and GenAI race. While over 95% of enterprises globally aim to become their own AI and data platforms by 2028, only 13%, which EDB calls the “Deeply Committed,” have successfully navigated the tension between accessing fragmented data and AI infrastructures and maintaining compliance and cybersecurity. These leaders have embraced true sovereignty: the ability to access, govern, and secure all data wherever it resides, free of silos and compliant by design. The Deeply Committed achieve 5x greater ROI from their AI initiatives, are 2.5x more confident in their ability to evolve from mainstream players to industry leaders, and deliver 2x more mainstream agentic and GenAI deployments than their peers. According to the report, “Success hinges on AI and data that is sovereign by design—available anywhere, in any form you need. Early adopters are showing that hybrid environments and technologies like Postgres® offer a strong foundation. Organizations that fail to prioritize sovereignty over AI and data as mission critical risk being left behind.”
