Sezzle has introduced features it says are designed to help customers weather increased financial pressure. “With record-low consumer confidence — the Conference Board’s index recently plunged to its lowest level since May 2020 amid fears of tariffs and recession — the current climate makes budgeting tools more essential than ever,” Sezzle said. Among the new features are Sezzle Balance, designed to simplify the repayment process for consumers through a pre-loadable digital wallet. There are two additional products in beta: “Express Checkout,” described as a “streamlined flow that reduces friction for returning shoppers,” and Browser Extension, a tool that automatically prompts shoppers to earn Sezzle Spend and save with available coupons. “When shoppers see real savings, they come back — it’s that simple,” said Charlie Youakim, Sezzle Chairman and CEO. “That’s why we’re focused on delivering value at every touchpoint, whether it’s through smarter discovery, seamless checkout, or transparent pricing. The more we help consumers feel in control and save money, the more they trust and choose Sezzle as their go-to way to shop.”
Volcano Exchange’s financial RWA digital asset leverages blockchain tech to transform high-threshold private banking services into divisible and tradable digital assets lowering the entry barrier for retail investors
Volcano Exchange (VEX), a global digital trading platform for Real World Assets (RWA), today announced the official launch of its first financial RWA digital asset, HL (Morgan Stanley Private Wealth RWA Token). Backed by the future returns of Morgan Stanley Private Bank’s premium wealth management products, HL has a total issuance of $20 million, corresponding to 200 million HL tokens, with an initial subscription price of $0.10 per token. This issuance marks a deep integration of traditional finance and blockchain technology, offering global investors more transparent, efficient, and flexible digital asset trading and staking services. HL is VEX’s first RWA digital asset underpinned by the revenue rights of a traditional financial institution, with its value directly pegged to the future earnings of Morgan Stanley Private Bank products. Leveraging blockchain technology, VEX transforms high-threshold private banking services into divisible and tradable digital assets, lowering the entry barrier for retail investors while maintaining the stability and compliance of traditional finance. HL is available for subscription on the VEX platform; after the subscription period concludes, it will be officially listed for trading, becoming VEX’s first RWA digital financial trading pair. Holders can freely trade on secondary markets or participate in value-added services such as staking and lending through VEX’s digital finance sector, maximizing asset liquidity. VEX describes the launch of HL as a major milestone in the RWA space: by digitizing traditional financial assets through blockchain technology, the company says it enhances their liquidity and composability, and it plans to continue onboarding premium assets from top-tier institutions to build a global RWA financial infrastructure.
Kraken and Bybit listing their tokenized U.S. stocks just two hours apart indicates growing momentum behind tokenized finance and the broader ambition to decentralize access to traditional markets
In a sign of growing momentum behind tokenized finance, two major crypto exchanges, Kraken and Bybit, unveiled their listings of tokenized U.S. stocks just two hours apart. Kraken is launching 60 tokenized equities under the xStocks brand, powered by Swiss issuer Backed. The offering includes prominent names like Apple, Tesla, and ETFs such as SPY. Two hours later, Bybit, currently the second-largest exchange by crypto trading volume, announced the same product integration on its Spot platform. Kraken’s launch signals a broader ambition to decentralize access to traditional markets. Its xStocks are built on the Solana blockchain and allow users not only to trade them on the exchange but also to withdraw them to self-custody wallets. From there, users can deploy them as collateral across decentralized finance protocols, something conventional stocks can’t match. The exchange plans to expand access to xStocks across more than 185 countries in the coming weeks, with support for additional blockchains to follow. Bybit’s listing supports Ethereum (ERC-20) and Solana (SPL) versions of xStocks, and includes the same basket of high-demand equities. Emily Bao, Bybit’s Head of Spot, said the exchange aims to provide users with more control and choice while remaining within the crypto ecosystem. xStocks offer features that traditional equities can’t match: fractional ownership, on-chain mobility, and round-the-clock trading. By listing them nearly simultaneously, Kraken and Bybit are positioning themselves at the frontier of financial infrastructure. Meanwhile, Robinhood also announced the launch of tokenized versions of U.S.-listed stocks and ETFs, alongside a blockchain network of its own.
Crusoe’s modular data centers enable rapid deployments with diverse power sources for edge inference by integrating all necessary infrastructure into a single, portable unit
Crusoe has launched Crusoe Spark™, a prefabricated modular AI factory designed to bring powerful, low-latency compute to the network’s edge. The modular data centers integrate all necessary infrastructure, including power, cooling, remote monitoring, fire suppression, and racks supporting the latest GPUs, into a single, portable unit. Crusoe Spark enables rapid deployments with diverse power sources for on-prem AI, edge inference, and AI capacity expansion needs, with units delivered as fast as three months. AI at the edge is transforming industries by enabling real-time decision-making and intelligence directly where data is generated, without the latency and bandwidth limitations of a remote cloud system. This capability is critical for applications including autonomous vehicles needing instant reactions, real-time patient monitoring in healthcare, predictive maintenance in manufacturing, and smart city infrastructure optimizing traffic flow and public safety. This rapidly expanding market is driven by the explosive growth of IoT devices and the demand for immediate, localized AI insights.
Zerve and Arcee AI solution to enable users to automate AI model selection within their existing workflows by intelligently selecting between SLMs and LLMs based on input complexity, cost, domain relevance, and other variables
Zerve, the agent-driven operating system for Data & AI teams, announced a partnership with Arcee AI, a language model builder, to bring model optimization and automation capabilities to the Zerve platform, enabling data science and AI professionals to build faster, smarter, and more efficient AI workflows at scale. Through the new partnership and integration, Zerve and Arcee AI enable users to automate AI model selection within their existing workflows using an OpenAI-compatible API, without incurring infrastructure overhead. Arcee Conductor enhances AI pipeline efficiency by intelligently selecting between small language models (SLMs) and large language models (LLMs) based on input complexity, cost, domain relevance, and other variables. This collaboration allows data science and AI engineering teams to: optimize model usage by routing tasks to the most appropriate model, improving accuracy and runtime performance; enhance automation by combining Conductor’s routing with the Zerve Agent’s dynamic workflow control; maintain seamless integration through plug-and-play compatibility with existing Zerve environments; and cut costs by deploying lightweight, lower-cost models where applicable.
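The routing idea described here can be illustrated with a minimal sketch. The model names, thresholds, and complexity heuristic below are assumptions for illustration only, not Arcee Conductor’s actual logic:

```python
# Hypothetical sketch of complexity-based model routing in the spirit of
# Arcee Conductor. Model names, thresholds, and the scoring heuristic are
# illustrative assumptions, not the real product's behavior.

def complexity_score(prompt: str) -> float:
    """Crude proxy for input complexity: length plus structural cues."""
    score = len(prompt.split()) / 100.0
    # Multi-step or code-related requests usually need a larger model.
    for cue in ("step by step", "explain why", "refactor", "prove"):
        if cue in prompt.lower():
            score += 0.5
    return score

def route(prompt: str, cost_sensitive: bool = True) -> str:
    """Pick a model tier for the request.

    Returns the name of a hypothetical OpenAI-compatible model endpoint:
    a small language model for simple inputs, a large one otherwise.
    """
    threshold = 0.6 if cost_sensitive else 0.3
    return "arcee-slm" if complexity_score(prompt) < threshold else "arcee-llm"

print(route("What is the capital of France?"))                    # -> arcee-slm
print(route("Refactor this module and explain why, step by step."))  # -> arcee-llm
```

In a real deployment, the returned model name would simply be passed as the `model` field of an OpenAI-compatible chat-completions request, which is what makes this kind of routing a drop-in addition to existing pipelines.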
Anysphere’s new agent orchestration tools allow developers to send natural language prompts from a mobile or web-based browser directly to the background agents, instructing them to perform tasks like writing new features or fixing bugs
Well-funded AI startup Anysphere Inc. is expanding beyond its viral generative AI code editor and into “agentic AI” with the launch of new web and mobile browser-based orchestration tools for coding agents. With its new application, developers can send natural language prompts from a mobile or web-based browser directly to the background agents, instructing them to perform tasks like writing new features or fixing bugs. Using the web app, developers can also monitor fleets of agents that are busy working on different tasks, check their progress and register those that have been completed within the underlying codebase. Anysphere explained that developers can instruct its AI agents to complete tasks via the web app, and if an agent is unable to finish a task, they can seamlessly switch to the IDE to take over and see what caused it to come unstuck. Each of its agents has its own shareable link, which developers can click on to see its progress.
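The dispatch-monitor-share flow described above can be modeled with a toy in-memory sketch. The class, status values, and URL scheme are purely illustrative assumptions; Anysphere’s actual API is not described in this text:

```python
# Toy in-memory model of the orchestration flow: dispatch a natural-language
# task to a background agent, check its status, and get a shareable link.
# All names, statuses, and the URL scheme are illustrative assumptions.
import uuid

class AgentFleet:
    def __init__(self):
        self.tasks: dict[str, dict] = {}

    def dispatch(self, prompt: str) -> str:
        """Start a background agent on a natural-language task; return its id."""
        task_id = uuid.uuid4().hex[:8]
        self.tasks[task_id] = {"prompt": prompt, "status": "running"}
        return task_id

    def status(self, task_id: str) -> str:
        return self.tasks[task_id]["status"]

    def complete(self, task_id: str) -> None:
        """Mark a task done (in reality the agent reports back on its own)."""
        self.tasks[task_id]["status"] = "completed"

    def share_link(self, task_id: str) -> str:
        """Each agent gets a shareable progress link (hypothetical scheme)."""
        return f"https://agents.example.com/{task_id}"

fleet = AgentFleet()
tid = fleet.dispatch("Fix the null-pointer bug in checkout handling")
fleet.complete(tid)
print(fleet.status(tid))  # prints "completed"
```

The point of the sketch is the shape of the workflow, not the mechanics: prompts go in from any browser, agents run in the background, and per-task links let anyone inspect progress.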
‘Solvers’ can help address fragmented liquidity in DeFi by enabling interoperability across chains through a unified execution layer that allows for invisible bridging and composability at the infrastructure layer
Decentralized finance (DeFi) is facing a significant challenge as the proliferation of new blockchains has fragmented its once-unified liquidity, threatening its core advantage of composability. The fragmentation of DeFi’s liquidity across dozens of L1s, rollups, and appchains creates fundamental inefficiencies, such as thinner markets, higher slippage, and weaker user and protocol incentives. The shift to multichain has been necessary for scaling, but without a way to emulate composability across chains, it risks undermining DeFi’s success. The lack of a unified execution layer in DeFi systems has led to inconsistent interfaces, fragmented pricing, and uncertain outcomes. Solvers, sophisticated actors that execute user intents, can help address this issue by enabling interoperability across chains. When a user expresses an intent, solvers execute it across chains, abstracting away the complexity underneath. This approach allows for invisible bridging: one-click swaps, deposits, or interactions that move across chains without the user needing to manage the complexity. Multichain is no longer theoretical; it is the environment in which DeFi operates today. Without solving for composability at the infrastructure layer, DeFi may not scale with it. The risk is not dramatic collapse, but slow erosion: thinner liquidity, weaker incentives, and fewer things that work across chains. Solver infrastructure offers a way out by mimicking the experience of synchrony across fragmented chains, preserving DeFi’s power and unlocking what comes next.
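The intent/solver split described above can be made concrete with a small sketch: the user states *what* they want, and a solver computes *how* to achieve it across chains. Chain names, the USDC routing leg, and the bridge-fee table below are illustrative assumptions, not any specific protocol’s design:

```python
# Minimal sketch of the intent/solver pattern. The user expresses an intent
# ("swap X on chain A for Y on chain B"); a naive solver turns it into an
# execution plan the user never has to see. All names and fees are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Intent:
    """What the user wants, not how to get it."""
    sell_token: str
    sell_chain: str
    buy_token: str
    buy_chain: str
    amount: float

def solve(intent: Intent, bridges: dict[tuple[str, str], float]) -> list[str]:
    """Produce a cross-chain execution plan: swap, bridge, swap."""
    if intent.sell_chain == intent.buy_chain:
        return [f"swap {intent.sell_token}->{intent.buy_token} on {intent.sell_chain}"]
    hop = (intent.sell_chain, intent.buy_chain)
    if hop not in bridges:
        raise ValueError(f"no route for {hop}")
    return [
        f"swap {intent.sell_token}->USDC on {intent.sell_chain}",
        f"bridge USDC {intent.sell_chain}->{intent.buy_chain} (fee {bridges[hop]:.2%})",
        f"swap USDC->{intent.buy_token} on {intent.buy_chain}",
    ]

plan = solve(
    Intent("ETH", "ethereum", "SOL", "solana", 1.0),
    bridges={("ethereum", "solana"): 0.001},
)
print(plan)  # three steps: swap, bridge, swap
```

This is what “invisible bridging” means in practice: the three-step plan executes behind a single user action, so the swap feels like a one-click, same-chain interaction.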
Better Mortgage’s voice AI loan assistant can seamlessly hand off to human originators by surfacing key borrower insights, tracking outstanding questions and anticipating next steps, elevating loan officers rather than replacing them
Better Mortgage’s technological advancements are anchored by two proprietary tools: Tinman, the company’s end-to-end loan origination system, and Betsy, a voice-based AI loan assistant. At the core of Better Mortgage’s AI strategy is a clear conviction: automation should elevate, not eliminate, human expertise. Betsy was built to work in tandem with loan officers—not in place of them. Every interaction she has with a borrower is fully visible within the Tinman dashboard, giving loan officers complete transparency and the ability to jump in with full context at any point. Her warm hand-off capabilities, including real-time summaries and status notes, ensure a seamless transition from machine to human. The shift from AI to human feels intuitive, not abrupt, reinforcing the trust borrowers place in their loan officer while still benefiting from around-the-clock digital support. Importantly, loan officers aren’t being sidelined by this technology — they’re being elevated. Betsy surfaces key borrower insights, tracks outstanding questions or documents, and anticipates next steps, allowing originators to step into each conversation already informed. Betsy allows loan officers to focus their energy on building relationships and driving decisions forward. The scalability of this hybrid model is already visible through Better’s NEO Powered by Better initiative. Partner companies like NEO Home Loans are now able to serve significantly more families without increasing headcount—proof that tech-human collaboration isn’t just efficient, it’s expansive. Ultimately, Betsy and Tinman aren’t replacements. They’re reinforcements. Together, they enable a concierge-level mortgage experience where accuracy, speed, and human empathy converge.
OpenLedger enables deploying thousands of fine-tuned models on a single GPU without preloading them, by dynamically merging and inferring on demand using quantization, flash attention, and tensor parallelism, offering up to 90% savings in deployment costs
OpenLedger has launched OpenLoRA, a new open protocol that enables developers to deploy thousands of LoRA fine-tuned models using a single GPU, saving up to 90% of deployment costs. Built on cutting-edge research and an open-source foundation, OpenLoRA allows developers to serve thousands of LoRA models on one GPU without preloading them, dynamically merging and inferring on demand using quantization, flash attention, and tensor parallelism. This means builders can now scale AI deployment without bloating compute bills. Deployed as a SaaS platform, OpenLoRA makes it radically easier for startups and enterprises alike to launch AI products across verticals — marketing, legal, education, crypto, customer service, and beyond — without having to replicate the entire model architecture for each use case. It’s a paradigm shift in how fine-tuned intelligence can be deployed at scale. Ram, Core Contributor at OpenLedger, said: “With OpenLoRA, we’re redefining the economics of AI deployment, offering the first protocol where developers can serve massive fleets of fine-tuned models with minimal cost and maximum performance.”
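Why does avoiding preloading save so much? A base model is shared once, while small per-task LoRA adapters are paged in and out of a bounded memory budget on demand. The pure-Python sketch below models that idea with a least-recently-used adapter cache; the sizes, names, and eviction policy are illustrative assumptions, and the real system’s quantization, flash attention, and tensor parallelism are not modeled:

```python
# Pure-Python sketch of dynamic adapter serving: keep the base model
# resident, load only the requested LoRA adapters into a bounded memory
# budget, and evict least-recently-used adapters when the budget is hit.
# Sizes, names, and policy are illustrative assumptions only.
from collections import OrderedDict

class AdapterCache:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.loaded = OrderedDict()  # adapter name -> size in MB (LRU order)

    def used_mb(self) -> int:
        return sum(self.loaded.values())

    def serve(self, adapter: str, size_mb: int = 50) -> str:
        """Ensure the adapter is resident, evicting LRU entries if needed,
        then 'run' inference with base model + adapter merged on the fly."""
        if adapter in self.loaded:
            self.loaded.move_to_end(adapter)  # mark as recently used
        else:
            while self.used_mb() + size_mb > self.budget_mb and self.loaded:
                self.loaded.popitem(last=False)  # evict least recently used
            self.loaded[adapter] = size_mb
        return f"inference with base+{adapter}"

cache = AdapterCache(budget_mb=100)  # room for two 50 MB adapters
cache.serve("legal-v1")
cache.serve("marketing-v2")
cache.serve("edu-v3")                # evicts legal-v1, the LRU adapter
print(list(cache.loaded))            # prints "['marketing-v2', 'edu-v3']"
```

The economics follow directly: thousands of adapters can share one resident base model, so the cost per fine-tuned “model” collapses to the cost of caching a small adapter rather than a full model replica.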
Zango Global’s AI agents can read and interpret regulations with a high degree of accuracy, integrate them directly into a company’s day-to-day operations, and respond to inquiries or draft consulting reviews complete with citations
Zango Global raised $4.8 million in seed funding led by Nexus Venture Partners to provide artificial intelligence agents to financial firms and banks, with an aim to transform how they deal with regulatory compliance. Zango uses AI agents, a type of artificial intelligence software that can make decisions, do research and achieve specific goals with a degree of autonomy. Agents are designed to carry out tasks with minimal or no human oversight, while adapting to changing circumstances. This allows them to continuously integrate knowledge, including regulatory information, so they can respond to inquiries or draft consulting reviews complete with citations. The company said its large language models and AI agents don’t just read and interpret regulations with a high degree of accuracy; they can also integrate directly into a company’s day-to-day operations. In one example given by Zango, a regulatory process at a bank that would have taken 48 hours was reduced to under four hours using the agentic AI platform. With the platform, the company said, remaining compliant while launching a new product or service can be as simple as spinning up an agent and asking: “I want to launch a lending product in X market. What do I need to do?” The agents will go to work, track down all the necessary resources and produce research, compliance requirements, records, citations, an impact assessment and a gap analysis helpful for future-proofing the product.
