Fnality International, the institutional settlement solution that uses tokenized central bank reserves, has announced its latest feature, the earmarking of funds. At an institutional level, if a securities transaction fails because of insufficient funds, that can trigger penalties on top of the loss of the transaction. Hence, Fnality now supports earmarking, so that an amount of tokenized currency can be set aside for a particular transaction at a specified time. The banks involved in developing the programmable functionality were Lloyds Bank, Santander and UBS, the first three banks to go live on the sterling Fnality payment system (£FnPS). The latest feature adds to the 24/7 instant payments previously showcased for margin payments, FX swaps and repo transactions. “One of the promises of institutional blockchain-based applications is ‘atomicity’ in that all the legs of a transaction either fulfil together or they all fail,” said John Whelan, Managing Director of Digital Assets at Banco Santander. “There is no leg-risk. The concept of ‘earmarking’ as introduced by Fnality helps enable this feature in a way that can be truly interoperable with other DLT systems. Again, bringing us one step closer to the utilization of this tech at scale in the banking industry.” In recently filed accounts, Fnality International disclosed it issued a £20 million convertible loan note on 30 September 2024. The note will convert to equity on the completion of its Series C funding, which is in progress. One of Fnality’s biggest goals was always FX settlement. Once a second currency comes online, the range of use cases will expand significantly.
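To make the atomicity idea concrete, here is a minimal, purely illustrative Python sketch of earmarking logic: funds set aside for a transaction are released to the seller only if the securities leg completes, and returned otherwise. The class, method and account names are hypothetical and do not represent Fnality's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class EarmarkLedger:
    """Toy ledger: funds earmarked for a transaction are locked until both
    legs settle, or returned if the other leg fails (hypothetical sketch)."""
    balances: dict = field(default_factory=dict)   # bank -> free balance
    earmarks: dict = field(default_factory=dict)   # tx_id -> (bank, amount)

    def earmark(self, tx_id: str, bank: str, amount: float) -> None:
        # A transaction cannot be scheduled without sufficient free funds.
        if self.balances.get(bank, 0.0) < amount:
            raise ValueError("insufficient funds to earmark")
        self.balances[bank] -= amount
        self.earmarks[tx_id] = (bank, amount)

    def settle(self, tx_id: str, seller: str, securities_delivered: bool) -> None:
        bank, amount = self.earmarks.pop(tx_id)
        if securities_delivered:
            # Cash leg completes only when the securities leg completes.
            self.balances[seller] = self.balances.get(seller, 0.0) + amount
        else:
            # Either both legs settle or neither does: return the earmark.
            self.balances[bank] += amount

ledger = EarmarkLedger(balances={"BuyerBank": 100.0})
ledger.earmark("TX-1", "BuyerBank", 40.0)
ledger.settle("TX-1", "SellerBank", securities_delivered=True)
print(ledger.balances)  # {'BuyerBank': 60.0, 'SellerBank': 40.0}
```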
CompoSecure and MetaMask’s metal payment card powered by tap-to-authenticate tech enables spending stablecoins directly from a self-custodied wallet
CompoSecure and MetaMask launched a new metal payment card enabling users to spend stablecoins directly from self-custodied wallets, combining traditional payment convenience with Web3 asset control. The new offering, powered by CompoSecure’s Arculus tap-to-authenticate technology, serves three primary functions: a traditional payments card, a secure authentication device, and a crypto wallet interface. The card targets crypto-native users, offering full control over private keys and on-chain transactions without relying on centralized custodians — mirroring the freedom of holding cash. The card’s real-time blockchain integration allows for features like NFT-based loyalty rewards, staking and yield, pushing crypto toward mainstream adoption. “The issuer or platform can instantly mint an NFT, you can gamify the purchase — and it can all be automated and real time,” said Adam Lowe, PhD, chief product and innovation officer at CompoSecure. “For the consumer, there’s also the opportunity for staking and yield. If you have a dollar in your wallet, in your pocket, it’s not doing anything for you. If you have a dollar in a yield-bearing stablecoin, every moment it’s earning you 4%-plus yield.” At the same time, CompoSecure is actively testing direct on-chain payments, which would bypass traditional rails altogether. The goal is to enable the same simplicity and ubiquity of tap-to-pay while maintaining the flexibility and efficiency of blockchain settlements. “There’s no reason you can’t directly work paying on-chain,” Lowe said. “The stablecoin goes directly to a merchant wallet. We skip everything in the middle.” In cases where a merchant prefers fiat, Lowe said the solution can handle real-time asset conversion.
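As a rough illustration of the direct on-chain payment Lowe describes, the Python sketch below uses web3.py to send an ERC-20 stablecoin straight to a merchant wallet. The RPC endpoint, token address, merchant address and private key are placeholders, and this is a generic pattern rather than CompoSecure's or MetaMask's actual flow.

```python
from web3 import Web3

# Placeholders: RPC endpoint, token contract, merchant address and key are hypothetical.
w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))

ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]
stablecoin = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001", abi=ERC20_ABI
)

payer = w3.eth.account.from_key("0x" + "11" * 32)   # self-custodied key (placeholder)
amount = 25 * 10**6                                  # $25 for a 6-decimal stablecoin

# Build, sign and broadcast the transfer: the stablecoin moves wallet-to-wallet,
# with no card network or acquirer in between.
tx = stablecoin.functions.transfer(
    "0x0000000000000000000000000000000000000002",    # merchant wallet (placeholder)
    amount,
).build_transaction({
    "from": payer.address,
    "nonce": w3.eth.get_transaction_count(payer.address),
    "gas": 100_000,
    "gasPrice": w3.to_wei(5, "gwei"),
})
signed = payer.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)  # .rawTransaction on web3.py v6
print("payment sent:", tx_hash.hex())
```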
Bolt’s one-click crypto, payments and checkout superapp to enable real-time order tracking, purchase and sale of cryptos, and processing of peer payments in-app with just a single click
Bolt CEO Ryan Breslow is unveiling a new “superapp” that he describes as “one-click crypto and everyday payments” in a single platform. Breslow hopes the new consumer app will reshape Bolt’s revenue, ambitiously positioning it as “a centralized and personalized hub for financial services.” The app competes at once with companies such as crypto exchange Coinbase, payments platform Zelle, and PayPal; its advantage is the ability to do what all of these do from one place on mobile. For example, the app will allow users to buy, sell, send, and receive major cryptocurrencies such as Bitcoin, Ethereum, USDC, Solana, and Polygon directly within the app. Users are provisioned an on-chain balance powered by Zero Hash and will be able to see their balance in real time. Breslow is also hoping to pick up where Zelle left off after the shutdown of its standalone app. With Bolt’s new offering, users can process peer payments “with just a single click” within its app, whereas Zelle now only lets users send payments to peers through banking apps. On top of that, Bolt has partnered with Midland States Bank to offer a debit card with a rewards program, including up to 3% direct cash back on eligible purchases and up to 7% in Love.com store credits. Because Bolt doesn’t offer banking services, users will have to transfer money from another bank account to fund purchases with the debit card. Lastly, the new app also provides real-time order tracking for users — something other companies, such as Klarna, offer in their apps as well.
CPI Card adds Web Push Provisioning feature that allows card issuers to issue payment credentials to a digital wallet through a simple push of a button on their website
CPI Card has added Web Push Provisioning (WPP), giving card issuers more options to connect payment cards with cardholders’ digital wallets. Card issuers can now push payment credentials to a cardholder’s digital wallet through a simple button on their website. The new functionality is a simplified step forward, integrating directly with the cardholder’s wallet. WPP also gives card issuers an alternative to relying solely on a mobile app by enabling direct integration with a digital wallet, expanding onboarding options and allowing issuers to offer multiple implementation choices to their cardholders. As a result, cardholders gain instant access to and use of their digital card while waiting for the physical card to arrive. WPP gives issuers more options and functionality to serve cardholders who open new accounts or request replacement cards, and cardholders can choose to provision multiple devices during the process. The functionality brings card issuers closer to a frictionless digital experience that attracts and retains tech-forward cardholders.
eBay to leverage Checkout.com’s technology, data and global acquiring expertise to maximize payment acceptance and deliver frictionless payments experiences to shoppers
eBay announced a strategic partnership with Checkout.com, a leading global digital payments platform. Through the partnership, eBay expands its global payment platform to enhance customer experience and drive operational efficiencies. Avritti Khandurie Mittal, VP & General Manager of Global Payments and Financial Services at eBay, said: “eBay operates at a significant global scale, and our customers value speed, convenience, and safety while shopping on our marketplace. Our strategic partnership with Checkout.com enables us to continue delivering fast, reliable, and frictionless payments experiences to millions of customers globally.” With more than 2.3 billion live listings, eBay is one of the world’s largest online marketplaces, with millions of customers across 190 markets buying and selling hard-to-find collectibles, pre-loved fashion, electronics, car parts, and more. Guillaume Pousaz, CEO at Checkout.com, said: “Payments performance is critical at this enterprise-level scale, and our technology, data, and acquiring expertise will help eBay maximize acceptance in global markets and drive efficiency across its platform. Together, we’re shaping the future of the digital economy.”
Aitium’s AI-powered solution for Amazon B2B sellers identifies high-value customers, tracks repeat purchase trends, predicts demand and provides data-driven recommendations to maximize sales
The Global AI Internet Freedom Fund (GAIIFF) announced the launch of Aitium, a new sales intelligence and inventory planning solution for Amazon Business Marketplace sellers. Designed to maximize B2B sales success, Aitium provides AI-powered inventory forecasting, corporate customer insights, and global marketplace analytics across 23 Amazon Business regions. Aitium offers: Corporate Customer Analytics – identifies high-value B2B customers and tracks repeat purchase trends; Global Marketplace Expansion – supports Amazon Business sellers in 23 international regions, including Amazon.com, Amazon.de, and Amazon.co.uk; AI-Driven Inventory Forecasting – helps sellers predict demand and optimize stock levels to prevent stockouts and reduce excess inventory; and Conversion Optimization Insights – provides data-driven recommendations to help sellers maximize visibility and increase sales.
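Aitium's forecasting models are proprietary, but a simple moving-average demand forecast paired with a reorder-point check conveys the inventory-planning idea. The Python sketch below is purely illustrative, with made-up numbers, and is not Aitium's algorithm.

```python
from statistics import mean

def forecast_and_reorder(weekly_sales, on_hand, lead_time_weeks=2, safety_stock=20):
    """Naive demand forecast: average of the last four weeks of sales,
    followed by a reorder-point check (illustrative only)."""
    forecast = mean(weekly_sales[-4:])                       # expected weekly demand
    reorder_point = forecast * lead_time_weeks + safety_stock
    reorder_qty = max(0, round(reorder_point - on_hand))     # units to order now
    return forecast, reorder_qty

weekly_sales = [120, 135, 150, 160, 155, 170]                # units sold per week
forecast, reorder_qty = forecast_and_reorder(weekly_sales, on_hand=180)
print(f"forecast demand: {forecast:.0f}/week, suggested reorder: {reorder_qty} units")
```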
Google Cloud partners with NVIDIA to give on-premises data centers secure access to the Gemini family of AI models and to protect the data used for fine-tuning
NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks. “By bringing our Gemini models on premises with NVIDIA Blackwell’s breakthrough performance and confidential computing capabilities, we’re enabling enterprises to unlock the full potential of agentic AI,” said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models’ application programming interface — as well as the data they used for fine-tuning — remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.
Yugabyte’s agentic AI app allows developers to identify performance issues, analyze the root causes, and understand the impact using a structured, query-centric view instead of the voluminous metrics and alerts of traditional monitoring
Yugabyte announced the first of its next-generation agentic AI apps, Performance Advisor for YugabyteDB Aeon, its SaaS offering. Yugabyte also announced an extensible indexing framework designed to support the seamless integration of state-of-the-art vector indexing libraries and algorithms, augmenting the capabilities offered by pgvector. By infusing AI into Performance Advisor and delivering an extensible framework for vector support, YugabyteDB enhances database performance monitoring, improving AI application resilience and enabling greater AI innovation. The Performance Advisor agentic AI application allows developers to detect potential issues before an application is deployed and offers timely insights to SREs and platform engineers to help with performance optimization. Traditional monitoring often relies on voluminous metrics and alerts, which are difficult to interpret, causing data overload, false positives, and alert fatigue. Equipped with AI-powered anomaly detection, Performance Advisor helps users identify performance issues, analyze the root causes, and understand the impact. A structured, query-centric view enables teams to pinpoint the queries consuming resources, highlight where performance bottlenecks occur, and monitor overall system load and potential anomalies. Yugabyte also enhanced its pgvector support, adding extensible and future-proof vector indexing to YugabyteDB. This approach is designed to support seamless integration of state-of-the-art vector indexing libraries and algorithms such as USearch, HNSWLib, and Faiss, taking YugabyteDB’s vector search capabilities beyond pgvector. Combining the popular open-source pgvector extension with YugabyteDB’s inherently distributed architecture, YugabyteDB provides a robust foundation for building intelligent, data-driven applications that demand high-performance vector search. Built-in resilience and geo-distribution ensure continuous availability of vector search functionality and low-latency retrieval across geographic regions.
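Because YugabyteDB is PostgreSQL-compatible, a pgvector-style similarity search looks like ordinary SQL. The Python sketch below, using psycopg2, assumes a cluster with the vector extension available; the connection details are placeholders and the three-dimensional embeddings are toy values.

```python
import psycopg2

# Placeholder connection details; YugabyteDB's default YSQL port is 5433.
conn = psycopg2.connect(host="127.0.0.1", port=5433, dbname="yugabyte",
                        user="yugabyte", password="yugabyte")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id SERIAL PRIMARY KEY,
        content TEXT,
        embedding vector(3)   -- toy 3-dimensional embeddings for illustration
    );
""")
cur.execute(
    "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector), (%s, %s::vector);",
    ("doc a", "[0.1, 0.9, 0.0]", "doc b", "[0.8, 0.1, 0.1]"),
)

# Nearest-neighbour search by Euclidean distance (pgvector's <-> operator).
cur.execute("""
    SELECT content, embedding <-> %s::vector AS distance
    FROM documents
    ORDER BY distance
    LIMIT 1;
""", ("[0.2, 0.8, 0.0]",))
print(cur.fetchone())

conn.commit()
cur.close()
conn.close()
```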
OpenAI’s new GPT-4.1 models are optimized for real-world software engineering tasks such as frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and using tools consistently
OpenAI launched a new family of models called GPT-4.1. Yes, “4.1” — as if the company’s nomenclature wasn’t confusing enough already. There’s GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says “excel” at coding and instruction following. Available through OpenAI’s API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go. OpenAI’s grand ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing. GPT-4.1 is a step in this direction. “We’ve optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more,” says OpenAI. “These improvements enable developers to build agents that are considerably better at real-world software engineering tasks.” OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks, including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its speediest — and cheapest — model ever. According to OpenAI’s internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure the ability of a model to “understand” content in videos. GPT-4.1 reached a chart-topping 72% accuracy on the “long, no subtitles” video category, claims OpenAI.
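For developers, calling the new models through the OpenAI Python SDK looks like any other chat completion request. The sketch below assumes an OPENAI_API_KEY in the environment and uses an illustrative code-review prompt of the kind the models are tuned for.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",   # smaller variants: gpt-4.1-mini, gpt-4.1-nano
    messages=[
        {"role": "system", "content": "You are a careful code reviewer. "
                                      "Return only the minimal fix, no extra edits."},
        {"role": "user", "content": "Fix the off-by-one error:\n"
                                    "def last_item(xs):\n    return xs[len(xs)]"},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```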
TheStage AI’s tech enables developers to optimize and fine-tune their AI models to meet their exact requirements in terms of performance, size and latency
TheStage AI, a startup that optimizes AI models to run better at lower cost, has announced $4.5 million in funding. Its flagship technology, ANNA (Automatic NNs Analyzer), uses AI and discrete math to automatically adjust PyTorch models with techniques such as quantization, pruning and sparsification. In essence, it trims the fat to make AI models leaner and more performant. Using ANNA, developers can fine-tune their AI models to meet their exact requirements in terms of performance, size and latency. In addition, the company provides access to so-called “Elastic models,” pre-fine-tuned open-source models available in a range of sizes, so customers can select the most appropriate model based on the required quality, speed and cost. TheStage AI’s main goal is to help companies reduce the cost of deploying and running AI applications. Within its Model Library, TheStage AI currently lists dozens of optimized models, including various fine-tuned versions of the popular image generation model Stable Diffusion, letting customers strike exactly the right balance between cost and performance. Customers can also bring their own models and use ANNA to optimize them. In its collaboration with Recraft, TheStage AI says it was able to double the performance of that company’s most powerful models while reducing processing times by 20%.
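ANNA itself is proprietary, but the techniques it automates, such as quantization, are standard PyTorch operations. The sketch below shows post-training dynamic quantization applied to a toy model as a generic illustration, not TheStage AI's pipeline.

```python
import torch
import torch.nn as nn

# Toy model standing in for a customer network (illustrative only).
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time, shrinking the
# model and typically speeding up CPU inference at a small accuracy cost.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print("fp32 output:", model(x)[0, :3])
    print("int8 output:", quantized(x)[0, :3])
```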