Membrane Labs has launched an institutional-grade risk engine purpose-built for lenders operating in digital asset markets. Built in partnership with Bitpulse, a leader in crypto underwriting and quantitative risk infrastructure, the system delivers real-time VaR analysis, position stress testing, portfolio exposure modeling, and other tools that help institutions thrive in the open economy. “Institutions can’t afford blind spots when billions move at blockchain speed,” said Carson Cook, Founder & CEO of Membrane Labs. “Our risk engine brings clarity and actionability to risk management—without requiring clients to build a quant team from scratch.” The risk engine equips institutional credit and risk teams with the tools to quantify VaR, simulate stress scenarios, and analyze exposures with greater speed and precision. Fully integrated into Membrane’s loan and collateral management infrastructure, these capabilities provide real-time visibility across the entire credit lifecycle, streamlining decision-making and improving operational efficiency. The engine is powered by Bitpulse’s Risk Engine API suite, which brings advanced analytics directly into Membrane’s institutional workflows.
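To make the VaR capability concrete, here is a minimal historical-simulation sketch. The data and function name are illustrative assumptions; Membrane and Bitpulse have not published their methodology.

```python
import random

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss level exceeded only
    (1 - confidence) of the time in the historical sample."""
    losses = sorted(-r for r in returns)      # losses, ascending
    idx = int(confidence * len(losses)) - 1   # e.g. the 99th percentile loss
    return losses[idx]

random.seed(0)
# Hypothetical daily returns for a crypto collateral portfolio (~4% daily vol)
returns = [random.gauss(0.0, 0.04) for _ in range(1000)]

var_99 = historical_var(returns)
print(f"1-day 99% VaR: {var_99:.1%} of portfolio value")
```

A production engine would layer stress scenarios and exposure aggregation on top of this kind of primitive, but the core question is the same: how much can the portfolio lose on a bad day at a given confidence level?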
Alleviate’s AI dashboard helps people visualize their journey out of debt and toward wealth by offering real-time insights into their account status and progress, upcoming settlements, and action steps through intuitive visuals and curated financial wellness content
Alleviate has launched its new Client Dashboard — a breakthrough AI-powered platform designed to help Americans take control of their debt and chart a smarter path toward lasting wealth. The dashboard gives clients real-time insights into their progress, upcoming settlements, and action steps. It’s designed to increase confidence and engagement through intuitive visuals, helpful automation, and a mobile-first experience — all while reinforcing Alleviate’s core promise: to turn paying off debt into a launchpad for lifelong financial transformation. Michael Barsoum, CEO of Alleviate, said: “Our Client Dashboard helps people visualize their journey out of debt and toward wealth, one milestone at a time. It’s built to give every client confidence, control, and a real chance to move from debt to wealth.” Key features include: All New Client Onboarding – Fast, intuitive and customized with AI; Visual Progress Tracking – Instantly see how far you’ve come and what’s ahead; Live Settlement Updates – Real-time visibility into account status and timing; Integrated Financial Wellness Content – Curated articles, videos, and tools to build lasting money habits; AI-Powered Personalization & Payment Optimization – Smart content and guidance tailored to each user’s journey; Access to Exclusive Financial Products – Products built for the debt-to-wealth journey, every step of the way; and Secure Support Access – Direct communication with in-house support teams, built into the platform. Looking ahead, the Client Dashboard will serve as a gateway to a new generation of exclusive financial products and services – accessible only to Alleviate members.
Meta’s new Llama API to use Cerebras ultra-fast inference tech, allowing developers to build apps that chain multiple LLM calls while offering generation speeds up to 18X faster than traditional GPU-based solutions
Meta announced a partnership with Cerebras Systems to power its new Llama API, offering developers access to inference speeds up to 18 times faster than traditional GPU-based solutions. What sets Meta’s offering apart is the dramatic speed increase provided by Cerebras’ specialized AI chips. The Cerebras system delivers over 2,600 tokens per second for Llama 4 Scout, compared to approximately 130 tokens per second for ChatGPT and around 25 tokens per second for DeepSeek, according to benchmarks from Artificial Analysis. This speed advantage enables entirely new categories of applications that were previously impractical, including real-time agents, conversational low-latency voice systems, interactive code generation, and instant multi-step reasoning — all of which require chaining multiple large language model calls that can now be completed in seconds rather than minutes. The Llama API represents a significant shift in Meta’s AI strategy, transitioning from primarily being a model provider to becoming a full-service AI infrastructure company. By offering an API service, Meta is creating a revenue stream from its AI investments while maintaining its commitment to open models. The API will offer tools for fine-tuning and evaluation, starting with the Llama 3.3 8B model, allowing developers to generate data, train on it, and test the quality of their custom models. Meta emphasizes that it won’t use customer data to train its own models, and models built using the Llama API can be transferred to other hosts—a clear differentiation from some competitors’ more closed approaches. Cerebras will power Meta’s new service through its network of data centers located throughout North America, including facilities in Dallas, Oklahoma, Minnesota, Montreal, and California. By combining the popularity of its open-source models with dramatically faster inference capabilities, Meta is positioning itself as a formidable competitor in the commercial AI space.
For Cerebras, this partnership represents a major milestone and validation of its specialized AI hardware approach.
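A back-of-the-envelope calculation shows why throughput, not just model quality, decides whether chained agent workflows are practical. The throughput figures come from the benchmarks cited above; the step and token counts are illustrative assumptions.

```python
def chain_latency(steps, tokens_per_step, tokens_per_second):
    """Total generation time for an agent that chains sequential
    LLM calls, ignoring network and orchestration overhead."""
    return steps * tokens_per_step / tokens_per_second

STEPS, TOKENS = 5, 800  # hypothetical 5-call reasoning chain

gpu_time = chain_latency(STEPS, TOKENS, 130)        # ~ChatGPT throughput
cerebras_time = chain_latency(STEPS, TOKENS, 2600)  # Llama 4 Scout on Cerebras

print(f"GPU baseline: {gpu_time:.0f}s, Cerebras: {cerebras_time:.1f}s")
```

At 130 tokens/second, a five-step chain of 800-token generations takes about half a minute — too slow for a voice agent; at 2,600 tokens/second the same chain completes in under two seconds.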
Anthropic’s new Integrations feature enables Claude to incorporate data from SaaS applications into its prompt responses, while its upgraded Research tool prepares detailed reports on user-specified topics with more thorough analysis
Anthropic updated Claude with a feature called Integrations that will enable the chatbot to access data from third-party cloud services. The company rolled out the capability alongside an enhanced version of Research, a tool it introduced last month. The latter feature enables Claude to prepare detailed reports about user-specified topics. Research can now perform the task more thoroughly than before. The new Integrations capability will enable Claude to incorporate data from software-as-a-service applications into its prompt responses. If customers wish to connect Claude to an application for which a prepackaged integration isn’t available, they can build their own. Anthropic estimates that the process takes as little as 30 minutes. According to the company, developers can further speed up the workflow by using a set of tools that Cloudflare introduced in March to ease such projects. Claude’s new connectors are powered by MCP, a data transfer technology that Anthropic open-sourced. It provides software building blocks that reduce the amount of work involved in connecting an LLM to external applications. OpenAI, Anthropic’s top competitor, rolled out MCP support to its Agents SDK last month. Anthropic added MCP to Claude immediately after open-sourcing the technology last year. Until now, however, the chatbot only supported connections to applications installed on the user’s computer, which limited the feature’s usefulness.
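MCP’s value is in standardizing how a model discovers and invokes external tools. The following is a schematic of that server-side pattern in plain Python — not the real MCP SDK; the registry, tool name, and handler here are all illustrative assumptions.

```python
import json

# Schematic of an MCP-style tool registry: the server advertises tools
# with JSON-schema inputs, and the model invokes them by name.
TOOLS = {}

def tool(name, description, schema):
    """Decorator that registers a function as an advertised tool."""
    def register(fn):
        TOOLS[name] = {"description": description,
                       "inputSchema": schema, "handler": fn}
        return fn
    return register

@tool("search_tickets", "Search support tickets by keyword",
      {"type": "object", "properties": {"query": {"type": "string"}}})
def search_tickets(query):
    # Stand-in for a real SaaS API call
    return [{"id": 1, "subject": f"Ticket matching '{query}'"}]

def list_tools():
    """What the client sees when it connects: names and input schemas."""
    return [{"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]} for n, t in TOOLS.items()]

def call_tool(name, arguments):
    """Dispatch a model-issued tool call to its registered handler."""
    return TOOLS[name]["handler"](**arguments)

print(json.dumps(list_tools(), indent=2))
result = call_tool("search_tickets", {"query": "refund"})
```

Because discovery (`list_tools`) and invocation (`call_tool`) follow one contract, any compliant client can use any compliant server — which is why a custom integration can reportedly be built in about 30 minutes.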
Salesforce’s new benchmark for tackling ‘jagged intelligence’ in CRM scenarios shows leading agents succeed less than 65% of the time at function-calling for the use cases of three key personas: service agents, analysts, and managers
To tackle “jagged intelligence,” one of AI’s most persistent challenges for business applications (the gap between an AI system’s raw intelligence and its ability to perform consistently in unpredictable enterprise environments), Salesforce revealed several new benchmarks, models, and frameworks designed to make future AI agents more intelligent, trusted, and versatile for enterprise use. Among them is the SIMPLE dataset, a public benchmark featuring 225 straightforward reasoning questions designed to measure how jagged an AI system’s capabilities really are. Perhaps the most significant innovation is CRMArena, a novel benchmarking framework designed to simulate realistic customer relationship management scenarios. It enables comprehensive testing of AI agents in professional contexts, addressing the gap between academic benchmarks and real-world business requirements. The framework evaluates agent performance across three key personas: service agents, analysts, and managers. Early testing revealed that even with guided prompting, leading agents succeed less than 65% of the time at function-calling for these personas’ use cases. Among the technical innovations announced, Salesforce highlighted SFR-Embedding, a new model for deeper contextual understanding that leads the Massive Text Embedding Benchmark (MTEB) across 56 datasets. A specialized version, SFR-Embedding-Code, was also introduced for developers, enabling high-quality code search and streamlining development. Salesforce also announced xLAM V2 (Large Action Model), a family of models specifically designed to predict actions rather than just generate text. These models start at just 1 billion parameters—a fraction of the size of many leading language models. To address enterprise concerns about AI safety and reliability, Salesforce introduced SFR-Guard, a family of models trained on both publicly available data and CRM-specialized internal data.
These models strengthen the company’s Trust Layer, which provides guardrails for AI agent behavior. The company also launched ContextualJudgeBench, a novel benchmark for evaluating LLM-based judge models in context—testing over 2,000 challenging response pairs for accuracy, conciseness, faithfulness, and appropriate refusal to answer. Salesforce unveiled TACO, a multimodal action model family designed to tackle complex, multi-step problems through chains of thought-and-action (CoTA). This approach enables AI to interpret and respond to intricate queries involving multiple media types, with Salesforce claiming up to 20% improvement on the challenging MMVet benchmark.
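The sub-65% function-calling figure is easier to interpret with a sketch of how such benchmarks typically score agents: each test case specifies the expected call, and credit requires matching both the function name and its arguments. CRMArena’s actual harness and cases are not public; everything below is illustrative.

```python
def score(agent_calls, expected_calls):
    """Fraction of cases where the agent emitted exactly the
    expected function call (name and arguments both match)."""
    hits = sum(1 for got, want in zip(agent_calls, expected_calls)
               if got["name"] == want["name"] and got["args"] == want["args"])
    return hits / len(expected_calls)

# Hypothetical cases spanning the service-agent, analyst, and manager personas
expected = [
    {"name": "get_case_status", "args": {"case_id": "C-104"}},
    {"name": "top_agents_by_csat", "args": {"quarter": "Q1"}},
    {"name": "escalate_case", "args": {"case_id": "C-104"}},
]
agent = [
    {"name": "get_case_status", "args": {"case_id": "C-104"}},  # correct
    {"name": "top_agents_by_csat", "args": {"quarter": "1"}},   # wrong argument
    {"name": "escalate_case", "args": {"case_id": "C-104"}},    # correct
]

print(f"function-calling success: {score(agent, expected):.0%}")
```

The middle case shows why scores stay low: the agent picked the right function but malformed one argument, and under exact-match scoring that counts as a full failure — the “jaggedness” the benchmark is built to expose.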
AI agents could usher in a paradigm of DeFAI wherein a blockchain-powered, verifiable trust-centric model could enable secure, free and compliant AI interactions between autonomous agents across DeFi ecosystems
As AI agents take on more responsibility, and especially as the convergence between crypto and TradFi accelerates, worries around transparency and market manipulation will grow. DLT offers a solution. The Identity Management Institute reported that companies that integrated blockchain identity systems have already cut fraud by 40% and identity theft by 50%. Applying these guardrails to AI-driven finance can counter manipulation and promote fairness. Moreover, the use of DLTs with fair ordering is growing rapidly, ensuring transactions are sequenced fairly and unpredictably, addressing MEV concerns and promoting trust in decentralized systems. A blockchain-powered, trust-centric model could unlock a new paradigm, “DeFAI”, in which autonomous agents can operate freely without sacrificing oversight. Open-source protocols like ElizaOS, which have blockchain plugins, are already enabling secure and compliant AI interactions between agents across DeFi ecosystems. As AI agents take on more complex roles, verifiable trust becomes non-negotiable. Verifiable compute solutions are already being built by firms like EQTY Lab, Intel and Nvidia to anchor trust on-chain. DLT ensures transparency, accountability and traceability. This is already in motion; on-chain agents are now operating that offer services ranging from trade execution to predictive analytics.
Sony’s Soneium taps Plume’s real world asset tokenization blockchain to offer tokenized Treasuries and private credit via cross-chain bridges
Soneium has announced a collaboration with real world asset (RWA) tokenization blockchain Plume, aiming to make tokenized Treasuries and private credit available to Soneium users. At a practical level, there’s Sony Bank, a digital only bank that has already offered its customers NFTs as rewards as well as tokenized assets, mainly in the form of real estate to date. Hence, apart from direct users of the Soneium blockchain, Sony Bank customers would make a good audience. But before that happens, the Plume network needs to launch, which is rumored to be imminent. Plume is a permissionless Layer 1 blockchain that’s compatible with Ethereum and dedicated to tokenization. “The ability to offer access to real-world yield through tokenized assets is a major step forward in making blockchain services relevant to mainstream financial use cases,” said Ryohei Suzuki, Director of Sony Block Solutions Labs. “This partnership with Plume unlocks a compelling new layer of value for our ecosystem and users.” One of the challenges with the proliferation of blockchains is the need to move assets between chains. Plume has a solution it refers to as SkyLink which uses LayerZero. For this partnership it would involve either burning or locking a token on the Plume network and simultaneously minting or unlocking a token on the Soneium network.
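The lock-and-mint flow described above can be sketched in a few lines. This is a schematic of the accounting invariant only — the class and function names are illustrative, and the messaging layer (LayerZero, in SkyLink’s case) that proves the lock to the destination chain is omitted.

```python
class Chain:
    """Toy ledger for one chain: free balances plus a bridge escrow."""
    def __init__(self, name):
        self.name = name
        self.balances = {}   # address -> spendable balance
        self.locked = 0      # escrowed tokens backing mints elsewhere

    def deposit(self, addr, amount):
        self.balances[addr] = self.balances.get(addr, 0) + amount

def bridge(src, dst, addr, amount):
    """Lock on the source chain, mint 1:1 on the destination,
    so circulating supply across both chains is conserved."""
    if src.balances.get(addr, 0) < amount:
        raise ValueError("insufficient balance to bridge")
    src.balances[addr] -= amount
    src.locked += amount       # escrowed on the source chain
    dst.deposit(addr, amount)  # minted on the destination chain

plume, soneium = Chain("Plume"), Chain("Soneium")
plume.deposit("alice", 100)
bridge(plume, soneium, "alice", 40)
print(plume.balances["alice"], plume.locked, soneium.balances["alice"])
# → 60 40 40
```

Burn-and-mint variants destroy the source tokens instead of escrowing them; either way, the minted tokens on the destination are always backed one-for-one.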
Splitit expanding its orchestration service to let processors participate in the transaction while giving them issuer channels through which to make direct offers to the consumer and also adding digital wallets to the mix
Splitit’s approach in the service economy is to construct an orchestration layer that lets customers pay for purchases over time using cards. “We’re expanding our service offering with more capabilities via the processor and the issuer based on the demand by these various players,” John Beisner, head of client success at Splitit, said. Among the near-term initiatives is the ability to let processors participate in the transaction and give the issuer channels through which to make direct offers to the consumer during a merchant interaction. As to the changing dynamics in the competitive arena of installment payments, Beisner said, “you’ve got the typical buy now, pay laters. You also have bank financing offers and other FinTechs involved in making financing offers to consumers.” “We think that by orchestrating that, bringing it into a single experience… we’re doing that at a level where it’s not just eCommerce, but it’s also for in-store transactions,” he said. “So, we’re trying to bring all of that together and provide a very focused capability to enhance the consumer experience. We’re also making sure that we maintain the relationship between the merchant and the consumers.” Consumers, in turn, discover that they can manage their funding more adroitly and find the spending power to “upgrade” their purchases to bigger-ticket choices as they don’t have to take out new loans to do so, he said. “We’re spending time getting out front of the transactions so that the consumer understands that they have options and that these are not loan-based options,” Beisner said. The checkout experience remains the same, as consumers enter their card details (or if they are already registered with a merchant, one-click checkout is an option).
Splitit is also adding digital wallets to the mix, including Google Pay, Apple Pay and Samsung Pay, he said, “where the merchant does not even need to be signed up, where the customer can walk in with their wallet into any storefront and make a purchase — and then decide how they want to split those payments up,” he said. Splitit will also be rolling out a service where the merchant and the consumer share in the cost — “and we’ll still be using the ‘open to pay’ on a card to make that decision,” rather than a new loan, he said.
LendingClub is buying AI-powered spending intelligence platform Cushion that ingests users’ bank transactions and purchase data to help them track their bills, make on-time payments, manage subscriptions, build credit, and monitor BNPL loans
LendingClub announced the acquisition of intellectual property and select talent behind Cushion, an AI-powered spending intelligence platform, providing a natural complement to LendingClub’s suite of mobile financial products and experiences. Cushion’s AI-powered technology ingests users’ bank transactions and purchase information to help them track their bills, make on-time payments, manage subscriptions, build credit, and monitor BNPL loans. Scott Sanborn, CEO of LendingClub, said, “Cushion’s technology complements our DebtIQ experience to provide our members with the tools and information they need to take control of their debt and spending. With credit card balances and interest rates at historic highs and consumers seeking ways to keep more of what they earn, the need for our solution has never been greater.” Adopting Cushion’s technology will eventually allow LendingClub to provide much-needed visibility into a consumer’s financial obligations beyond traditional credit monitoring. It builds on LendingClub’s acquisition of Tally in Q4 2024, which will simplify credit card management, help users optimize payments, reduce interest, and improve credit health.
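Bill tracking of the kind Cushion provides usually starts with detecting recurring transactions in a bank feed. A minimal heuristic sketch follows — the thresholds, field names, and sample data are illustrative assumptions, not Cushion’s actual models.

```python
from collections import defaultdict

def find_recurring(transactions, min_occurrences=3, day_tolerance=3):
    """Flag merchants that charge a similar amount on a similar day
    of the month — a simple recurring-bill heuristic."""
    by_merchant = defaultdict(list)
    for t in transactions:
        by_merchant[t["merchant"]].append(t)
    recurring = []
    for merchant, txns in by_merchant.items():
        if len(txns) < min_occurrences:
            continue
        days = [t["day"] for t in txns]
        amounts = [t["amount"] for t in txns]
        if (max(days) - min(days) <= day_tolerance
                and max(amounts) - min(amounts) <= 0.05 * max(amounts)):
            recurring.append(merchant)
    return recurring

txns = [  # hypothetical bank feed (day = day of month charged)
    {"merchant": "Streamly", "day": 4, "amount": 15.99},
    {"merchant": "Streamly", "day": 5, "amount": 15.99},
    {"merchant": "Streamly", "day": 4, "amount": 15.99},
    {"merchant": "CoffeeCo", "day": 2, "amount": 4.50},
]
print(find_recurring(txns))  # ['Streamly']
```

Once recurring charges are identified, features like upcoming-bill reminders, subscription management, and BNPL installment monitoring become projections over the same detected series.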
Edward Jones partners with CAIS to enable HNWIs and advisors to manage alt investments in addition to mutual funds and ETFs through unified managed accounts while also letting them do intelligent rebalancing, overall tax management, and tax transitions
Edward Jones plans to provide a variety of private investment options through a deal with CAIS, which provides technology designed to unlock alternative investments for advisors and their clients. The service will be offered starting on May 5 through a business line called Edward Jones Generations, which is open to investors with $10 million or more in assets, but will eventually be extended to more of the firm’s clientele. Russ Tipper, principal and head of products at Edward Jones, said it’s too early to say what the criteria for investing in alternatives will eventually be. Rather than choosing a fixed investable asset threshold for all accounts, Edward Jones is more likely to look at every client’s portfolio individually and decide if alts have a place. “We’re going to make sure it’s an appropriate portion of a client’s portfolio, which could be as little as zero to as high as 20% depending on the objective they’re trying to solve for.” Edward Jones has roughly 9 million clients but doesn’t disclose how many have more than $10 million in investable assets.