Bilt Rewards now enables students to earn rewards on their student housing payments and redeem their rewards toward student loan payments. The first of these new capabilities results from the expansion of the Bilt Rewards network of homes to include student housing properties, beginning with those of its launch partner American Campus Communities (ACC). This partnership with ACC, which is a student housing company and a Blackstone portfolio company, will begin in late May at two properties at Baylor University and then expand in the coming months to the broader ACC portfolio that serves nearly 140,000 students. The collaboration extends to student housing properties the Bilt Rewards payments and commerce network that transforms housing and neighborhood spending into rewards and benefits. The other new feature announced — the ability for Bilt members to redeem their Bilt Points toward eligible student loan payments — is the first of its kind for Bilt Rewards. Starting Wednesday, Bilt members can redeem Bilt Points on student loans with five servicers: Nelnet, MOHELA, Sallie Mae, Aidvantage and Navient.
OpenAI is planning a truly ‘open reasoning’ AI system with a ‘handoff’ feature that would enable it to make calls to the OpenAI API to access other, larger models for a substantial computational lift
OpenAI is gearing up to release an AI system that’s truly “open,” meaning it’ll be available for download at no cost and not gated behind an API. Beyond its benchmark performance, OpenAI may have a key feature up its sleeve — one that could make its open “reasoning” model highly competitive. Company leaders have been discussing plans to enable the open model to connect to OpenAI’s cloud-hosted models to better answer complex queries. OpenAI CEO Sam Altman described the capability as a “handoff.” If the feature — as sources describe it — makes it into the open model, it will be able to make calls to the OpenAI API to access the company’s other, larger models for a substantial computational lift. It’s unclear if the open model will have the ability to access some of the many tools OpenAI’s models can use, like web search and image generation. The idea for the handoff feature was suggested by a developer during one of OpenAI’s recent developer forums, according to a source. The suggestion appears to have gained traction within the company. OpenAI has been hosting a series of community feedback events with developers to help shape its upcoming open model release. A local model that can tap into more powerful cloud systems brings to mind Apple Intelligence, Apple’s suite of AI capabilities that uses a combination of on-device models and models running in “private” data centers. OpenAI stands to benefit in obvious ways. Beyond generating incremental revenue, a handoff could rope more members of the open source community into the company’s premium ecosystem.
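The "handoff" idea described above can be pictured as a simple routing policy: a local model answers what it can and escalates the rest to a larger hosted model via an API call. The sketch below is a minimal illustration of that pattern under assumed names and an assumed confidence heuristic; it is not OpenAI's actual design, and the stub models stand in for real local and cloud inference.

```python
# A minimal sketch of a local-to-cloud "handoff": serve most queries locally,
# escalate low-confidence ones to a larger hosted model (a paid API call).
# All names and the confidence heuristic are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class LocalResult:
    answer: str
    confidence: float  # 0.0-1.0, the local model's self-estimate

def handoff_answer(
    query: str,
    local_model: Callable[[str], LocalResult],
    cloud_model: Callable[[str], str],
    threshold: float = 0.7,
) -> tuple[str, str]:
    """Return (answer, source); escalate when local confidence is low."""
    local = local_model(query)
    if local.confidence >= threshold:
        return local.answer, "local"
    # Low confidence: hand off to the larger cloud-hosted model.
    return cloud_model(query), "cloud"

# Stub models for demonstration only.
def tiny_model(q: str) -> LocalResult:
    # Pretend short queries are easy and long reasoning chains are hard.
    conf = 0.9 if len(q.split()) < 8 else 0.3
    return LocalResult(answer=f"local answer to: {q}", confidence=conf)

def big_model(q: str) -> str:
    return f"cloud answer to: {q}"

print(handoff_answer("capital of France?", tiny_model, big_model)[1])
print(handoff_answer(
    "prove that the sum of the first n odd numbers equals n squared, step by step",
    tiny_model, big_model)[1])
```

In a real deployment the cloud branch would be a metered API call, which is why the pattern generates the incremental revenue mentioned above.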
Amazon’s new benchmark to evaluate AI coding agents’ ability to navigate and understand complex codebases and GitHub issues
Amazon has introduced SWE-PolyBench, billed as the first industry benchmark to evaluate AI coding agents' ability to navigate and understand complex codebases. It follows SWE-Bench, the benchmark that measures agent performance on GitHub issues, spurred the development of capable coding agents, and became the de-facto standard for coding agent benchmarking. SWE-PolyBench contains over 2,000 curated issues in four languages, plus a stratified subset of 500 issues for rapid experimentation, and aims to advance AI performance in real-world scenarios. Key features of SWE-PolyBench at a glance:
- Multi-language support: Java (165 tasks), JavaScript (1,017 tasks), TypeScript (729 tasks), and Python (199 tasks).
- Extensive dataset: 2,110 instances from 21 repositories, ranging from web frameworks to code editors and ML tools; the same scale as full SWE-Bench but spanning more repositories.
- Task variety: bug fixes, feature requests, and code refactoring.
- Faster experimentation: SWE-PolyBench500, a stratified subset of 500 issues for efficient experimentation.
- Leaderboard: a leaderboard with a rich set of metrics for transparent benchmarking.
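The published per-language task counts can be sanity-checked against the stated total. The snippet below uses only the figures quoted above and shows how JavaScript- and TypeScript-heavy the benchmark is:

```python
# Sanity-check SWE-PolyBench's published task counts: the four per-language
# figures should sum to the stated 2,110 instances.

TASKS = {"Java": 165, "JavaScript": 1017, "TypeScript": 729, "Python": 199}

total = sum(TASKS.values())
print(f"total tasks: {total}")  # 2110, matching the stated instance count

for lang, n in sorted(TASKS.items(), key=lambda kv: -kv[1]):
    print(f"{lang:>10}: {n:4d} tasks ({n / total:5.1%})")
```

JavaScript and TypeScript together account for over 80% of tasks, a notable skew for anyone comparing agent scores across languages.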
Corporate treasuries are exploring yield-bearing strategies such as staking, lending and liquidity pools following the maturation of decentralized finance protocols and tokenized products
Major firms and new entities are amassing crypto holdings for treasury operations, reflecting a broader adoption and diversification of crypto assets beyond bitcoin. Corporate treasuries are exploring yield-bearing strategies such as staking, lending and providing liquidity, driven by the maturation of decentralized finance protocols and tokenized products. In the simplest sense, yield farming at the enterprise level means actively putting digital assets to work through yield-bearing instruments such as staking, lending and liquidity pools. For example, in proof-of-stake (PoS) blockchain networks, holders can "stake" their assets to help secure the network, earning rewards (usually paid in the native token). For CFOs, staking offers a yield-generating mechanism somewhat akin to a dividend, albeit with technical, liquidity and regulatory considerations. Crypto lending platforms, for their part, can enable asset holders to lend their tokens in exchange for interest payments. This lets treasuries deploy idle crypto assets and earn returns, similar to money-market strategies in traditional finance. By supplying their digital assets to decentralized exchanges or automated market makers (AMMs), corporate treasuries can earn fees from trading activity. While this method is potentially lucrative, it exposes providers to unique risks such as impermanent loss, a phenomenon where the value of supplied assets diverges from simply holding them.
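The impermanent-loss risk mentioned above has a standard closed form for constant-product (x·y = k) AMM pools: given a price ratio r between deposit and withdrawal, the loss versus simply holding is 2·√r / (1 + r) − 1. The sketch below computes it; this is the textbook formula, not tied to any specific protocol:

```python
import math

# Impermanent loss for a constant-product (x*y=k) AMM pool: how the value of
# assets supplied as liquidity compares to simply holding them, as a function
# of the price ratio r = new_price / old_price. Zero at r=1, negative otherwise.

def impermanent_loss(price_ratio: float) -> float:
    """Fractional loss vs. holding the assets outright."""
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

for r in (1.0, 1.5, 2.0, 4.0):
    print(f"price moves {r:>4}x -> IL = {impermanent_loss(r):+.2%}")
```

A 2x price move costs the liquidity provider about 5.7% versus holding, and a 4x move about 20%, so trading fees earned must exceed those figures for liquidity provision to pay off.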
Research projects that by June 2030, the leading AI data center may have 2 million AI chips, cost $200 billion, and require 9 GW of power, roughly the output of nine nuclear reactors
Data centers to train and run AI may soon contain millions of chips, cost hundreds of billions of dollars, and require power equivalent to a large city’s electricity grid, if the current trends hold, according to a new study from researchers at Georgetown, Epoch AI, and Rand. The co-authors compiled and analyzed a dataset of over 500 AI data center projects and found that, while the computational performance of data centers is more than doubling annually, so are the power requirements and capital expenditures. The findings illustrate the challenge in building the necessary infrastructure to support the development of AI technologies in the coming decade. According to the Georgetown, Epoch, and Rand study, the hardware costs for AI data centers like xAI’s Colossus, which has a price tag of around $7 billion, increased 1.9x each year between 2019 and 2025, while power needs climbed 2x annually over the same period. The study also found that data centers have become much more energy efficient in the last five years, with one key metric — computational performance per watt — increasing 1.34x each year from 2019 to 2025. Yet these improvements won’t be enough to make up for growing power needs. By June 2030, the leading AI data center may have 2 million AI chips, cost $200 billion, and require 9 GW of power — roughly the output of nine nuclear reactors.
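The study's annual multipliers can be compounded to see what they imply over the five years to mid-2030. The arithmetic below uses only the rates quoted above (costs 1.9x/yr, power 2.0x/yr, performance per watt 1.34x/yr); extending 2019-2025 trends to 2030 is a naive extrapolation, not the study's own methodology:

```python
# Compound the study's annual growth multipliers over mid-2025 -> mid-2030.

def compound(annual_factor: float, years: float) -> float:
    return annual_factor ** years

YEARS = 5  # mid-2025 to mid-2030

power_growth = compound(2.0, YEARS)            # 32x
cost_growth = compound(1.9, YEARS)             # ~24.8x
efficiency_gain = compound(1.34, YEARS)        # ~4.3x
compute_growth = compound(2.0 * 1.34, YEARS)   # ~138x: "more than doubling annually"

print(f"power: {power_growth:.0f}x, cost: {cost_growth:.1f}x, perf/W: {efficiency_gain:.1f}x")
# Working back from the projected 9 GW (9,000 MW) flagship facility:
print(f"implied leading-facility power in 2025: ~{9_000 / power_growth:.0f} MW")
```

Note how a 4.3x efficiency gain is swamped by a 32x rise in power draw, which is the study's point: efficiency improvements alone will not close the gap.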
UiPath’s agentic AI platform to use Redis semantic routing, enabling AI agents to leverage the best LLM or LLM provider for the context, intent, and use case the customer is trying to solve
Data platform Redis and UiPath have expanded their collaboration to further agentic automation solutions for customers. Under the extended partnership, Redis and UiPath will explore ways to leverage the Redis vector database, Semantic Caching, and Semantic Routing to support UiPath Agent Builder, a secure, simple way to build, test, and launch agents and the agentic automations they execute. With Redis powering these solutions, UiPath agents will understand the meaning behind user queries, making data access faster and system responses smarter, and delivering greater speed and cost efficiency to enterprise developers looking to take advantage of automation. Additionally, through semantic routing, UiPath agents will be able to leverage the best LLM or LLM provider depending on the context, intent, and use case the customer is trying to solve. UiPath Agent Builder builds on the RPA capabilities and orchestration of UiPath Automation Suite and Orchestrator to deliver unmatched agentic capabilities. Agent Builder will utilize a sophisticated memory architecture that enables agents to retrieve relevant information only from permissioned, governed knowledge bases and maintain context across planning and execution. This architecture will enable developers to create, customize, evaluate, and deploy specialized enterprise agents that can understand context, make decisions, and execute complex processes while maintaining enterprise-grade security and governance.
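Semantic routing, as described above, amounts to classifying a query's intent and dispatching it to the model best suited for that intent. The sketch below illustrates the idea only: production systems such as Redis's use vector embeddings and a vector database, whereas the word-overlap similarity here is a toy stand-in, and the route and model names are hypothetical rather than UiPath's actual configuration:

```python
# Toy semantic router: match a query against per-route example phrases and
# dispatch to the LLM assigned to the best-matching route. Real routers use
# vector embeddings; Jaccard word overlap is a dependency-free stand-in.

ROUTES = {
    "code-generation": {
        "examples": ["write a python function", "generate code for", "fix this bug"],
        "model": "large-code-model",   # hypothetical model name
    },
    "document-qa": {
        "examples": ["summarize this contract", "what does the policy say",
                     "extract the invoice total"],
        "model": "fast-cheap-model",   # hypothetical model name
    },
}

def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def route(query: str) -> str:
    """Return the model name of the best-matching route."""
    best = max(
        ROUTES,
        key=lambda r: max(similarity(query, ex) for ex in ROUTES[r]["examples"]),
    )
    return ROUTES[best]["model"]

print(route("please write a python function to parse dates"))  # large-code-model
print(route("summarize this contract for the legal team"))     # fast-cheap-model
```

Routing cheap intents to smaller models and hard intents to larger ones is where the speed and cost efficiency claimed above would come from.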
New algorithm reduces quantum data preparation time by 85% by using advanced graph analytics and clique partitioning to compress and organize massive datasets
Researchers at Pacific Northwest National Laboratory have developed a new algorithm, Picasso, that reduces quantum data preparation time by 85%, addressing a key bottleneck in hybrid quantum-classical computing. The algorithm uses advanced graph analytics and clique partitioning to compress and organize massive datasets, making it feasible to prepare quantum inputs from problems 50 times larger than previous tools allowed. The PNNL team was able to lighten the computational load substantially by developing new graph analytics methods to group the Pauli operations, slashing the number of Pauli strings included in the calculation by about 85 percent. Altogether, the algorithm solved a problem with 2 million Pauli strings and a trillion-plus relationships in 15 minutes. Compared to other approaches, the team’s algorithm can process input from nearly 50 times as many Pauli strings, or vertices, and more than 2,400 times as many relationships, or edges. The scientists reduced the computational load through a technique known as clique partitioning. Instead of pulling along all the available data through each stage of computation, the team created a way to use a much smaller amount of the data to guide its calculations by sorting similar items into distinct groupings known as “cliques.” The goal is to sort all data into the smallest number of cliques possible and still enable accurate calculations. By combining sparsification techniques with AI-guided optimization, Picasso enables efficient scaling toward quantum systems with hundreds or thousands of qubits.
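The clique-partitioning idea can be shown in miniature. Pauli strings that are qubit-wise compatible (at every position the operators match, or one is the identity I) can be grouped and measured together, so the goal is to cover all strings with as few groups as possible. The greedy first-fit pass below is a simple illustration of the concept, not PNNL's actual algorithm, which applies graph analytics and AI-guided optimization at vastly larger scale:

```python
# Greedy clique partitioning of Pauli strings: place each string into the
# first existing clique whose members are all qubit-wise compatible with it,
# otherwise open a new clique.

def compatible(p: str, q: str) -> bool:
    """Qubit-wise compatibility of two equal-length Pauli strings."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_cliques(paulis: list[str]) -> list[list[str]]:
    cliques: list[list[str]] = []
    for p in paulis:
        for clique in cliques:                         # first-fit scan
            if all(compatible(p, q) for q in clique):  # p fits the whole clique
                clique.append(p)
                break
        else:
            cliques.append([p])                        # no fit: new clique
    return cliques

groups = greedy_cliques(["XX", "XI", "ZZ", "IZ", "YY"])
print(groups)  # [['XX', 'XI'], ['ZZ', 'IZ'], ['YY']]: 5 strings -> 3 groups
```

Fewer cliques means fewer distinct measurement settings, which is why shrinking the partition translates directly into less quantum data-preparation work.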
Pinwheel’s pre-integration with the SDK of Q2’s digital banking platform enables 1-click instant direct deposit switching within the account onboarding journey
Pinwheel announced an integration with Q2’s Digital Banking Platform via the Q2 Partner Accelerator Program. Q2 Holdings is a leading provider of digital transformation solutions for banking and lending. With this integration, banks and credit unions can offer consumers instant direct deposit switching within their account onboarding journey, and all Q2 customers can embed 1-click deposit switching. The Q2 Partner Accelerator program, part of the Q2 Innovation Studio, allows in-demand financial services companies leveraging the Q2 SDK to pre-integrate their technology with the Q2 Digital Banking Platform. This enables financial institutions to work with these partners, purchase their solutions and rapidly deploy their standardized integrations to their customers. “Removing friction from the deposit switching process is critical for financial institutions to boost activation rates and secure primacy,” said Brian Karimi-Pashaki, Head of Revenue at Pinwheel. “Q2 customers can take advantage of Pinwheel Deposit Switch by making it available through Q2’s Partner Accelerator Program.”
Amount’s platform enhancements give banks and credit unions greater control over deposit growth, risk management, compliance, and credit decisioning, and let them pend certain applications for manual review instead of automatically declining them, supporting long-term customer relationships
Amount announced a suite of platform enhancements designed to give banks and credit unions greater control over deposit growth, risk management, compliance, and credit decisioning. Each enhancement reflects direct feedback from Amount’s banking and credit union clients and addresses real-world challenges, including adapting credit strategy and navigating regulatory and risk pressures. The enhancements also support relationship-driven banking models by allowing institutions to pend certain applications for manual review instead of automatically declining them. This flexibility is particularly valuable for credit unions and community banks that prioritize long-term customer relationships even in cases where traditional data sources fall short. New capabilities in this release include:
- a real-time custom rule builder;
- customizable pricing and credit line strategies;
- faster, more secure bank account verification.
With self-service controls embedded directly in the platform, banks and credit unions can rapidly update risk, eligibility, and pricing logic without filing a change request or waiting on implementation cycles.
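The pend-for-review behavior described above can be sketched as a three-outcome decision rule: pass, fail, or route to a human. The rule names and thresholds below are illustrative assumptions, not Amount's actual rule set:

```python
# Toy decisioning flow with a "pend" outcome: borderline or data-poor
# applications go to a manual-review queue instead of being auto-declined.

from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    PEND = "pend for manual review"
    DECLINE = "decline"

def decide(credit_score: int, thin_file: bool) -> Decision:
    if credit_score >= 700:
        return Decision.APPROVE
    if credit_score >= 620 or thin_file:
        # Borderline score, or too little bureau data to decide automatically:
        # a human reviews instead of the system auto-declining.
        return Decision.PEND
    return Decision.DECLINE

print(decide(720, False))  # Decision.APPROVE
print(decide(640, False))  # Decision.PEND
print(decide(580, True))   # Decision.PEND (thin file: a human takes a look)
print(decide(580, False))  # Decision.DECLINE
```

The third outcome is the whole point: it converts what would be a lost applicant into a relationship-preserving manual review.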
European startup Two plans to target US businesses with a B2B BNPL solution that can be embedded directly into merchants’ checkouts, providing instant trade credit at the point of sale and enabling businesses to consolidate multiple purchases into grouped monthly statements
A new wave of European startups is actively pursuing expansion into the US B2B BNPL sector as part of broader international growth strategies. Among them is Two, a B2B BNPL firm currently running a pilot program in North America as it prepares for a full-scale launch into the region’s BNPL market. While Two doesn’t yet have a physical footprint in the United States, it maintains an online presence through partnerships with Santander and Allianz, whose operations span the Americas, including the US. Two’s current pilot offers trade credit through its white-labeled Buy Now, Pay Later service and its installment product. These solutions are integrated into merchants’ checkouts, existing financial workflows, and payment systems, enabling buyers to spread large purchases over up to 24 months. Andreas Mjelde, CEO and co-founder of Two, says the company is preparing to roll out its full product lineup in North America by the second quarter of 2025. The full lineup will include:
1) B2B BNPL embedded directly into guest checkouts, providing instant trade credit to businesses at the point of sale (currently, BNPL is embedded in certain open and closed-wall checkouts);
2) Trade Accounts that let businesses consolidate multiple purchases into grouped monthly statements, simplifying payment management and cash flow oversight;
3) Extended installment plans, with repayment options of up to 36 months for businesses managing larger transactions.
Mjelde explains that the firm’s localized approach incorporates regional credit risk assessments and personalized repayment experiences tailored to the needs of US-based companies. “Our proprietary risk models, Frida and Delphi, deliver high acceptance rates while minimizing friction for buyers,” he says. This automation speeds up onboarding and boosts approval rates without the need for hard credit checks. The firm has built its machine learning infrastructure and AI models in-house.
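The installment mechanics above boil down to splitting an invoice into equal monthly payments whose sum reconciles exactly with the invoice. The sketch below shows one common way to handle the cent-level rounding, using an assumed zero-fee plan; the amounts and terms are illustrative, not Two's actual pricing:

```python
# Split an invoice into equal monthly installments, letting the final payment
# absorb the rounding remainder so the schedule sums exactly to the invoice.

from decimal import Decimal, ROUND_DOWN

def installment_schedule(amount: Decimal, months: int) -> list[Decimal]:
    base = (amount / months).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    payments = [base] * months
    payments[-1] += amount - base * months  # fix cent-level rounding drift
    return payments

plan = installment_schedule(Decimal("10000.00"), 24)
print(plan[0], plan[-1], sum(plan))  # 416.66 416.82 10000.00
```

Using Decimal rather than float keeps the schedule exact to the cent, which matters when statements must reconcile against invoices.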
Two is pursuing a phased expansion strategy, starting in US states with high digital payment adoption with the support of global banking partnerships, before scaling further.