Banking-as-a-Service bank Griffin is opening up access to a Model Context Protocol (MCP) server, providing a way for AI agents to autonomously perform tasks on behalf of customers. Griffin, which secured a full banking licence in March last year, says the initiative is the beginning of a massive technological platform shift, which will see people delegating more and more of their work to AI. “We think there is much further to go…but to get there, the financial system has to be fundamentally rewired to accommodate a world in which agents can freely transact — while still retaining appropriate safeguards.” Potential use cases cited include end-to-end wealth management, payment admin and transactional capabilities. “This is early for us – we’re in beta – but it shows the power of what’s possible,” says the bank. “You can use the Griffin MCP server to have an agent open accounts, make payments, and analyse historic events. You can also use it to build complete prototypes of your own fintech applications on top of the Griffin API – which we’re already seeing customers doing in real time.”
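For a sense of what “an agent making payments over MCP” looks like in practice, here is a minimal client-side sketch using the open-source MCP Python SDK. The server URL, tool name and argument schema are hypothetical placeholders for illustration, not Griffin’s documented API:

```python
# Hypothetical sketch of an agent invoking a payments tool over MCP.
# The URL, tool name and arguments are placeholders, not Griffin's real API.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Connect to an MCP server over streamable HTTP (URL is a placeholder).
    async with streamablehttp_client("https://mcp.example-bank.test/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes (accounts, payments, history, ...).
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Ask the server to execute a payment on the agent's behalf
            # (tool name and argument schema are invented for illustration).
            result = await session.call_tool(
                "create_payment",
                {"amount_minor_units": 2500, "currency": "GBP", "payee_id": "acc_123"},
            )
            print(result.content)


asyncio.run(main())
```

The point of the pattern is that the agent never needs bespoke glue code per bank or per tool: it discovers what is available, then calls it through one protocol.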
Model Context Protocol: to become a true open industry standard, MCP should move away from single-vendor control towards an independent governance model and a formal consortium
Anthropic’s Model Context Protocol (MCP) proposes a clean, stateless protocol for how large language models (LLMs) can discover and invoke external tools with consistent interfaces and minimal developer friction. This has the potential to transform isolated AI capabilities into composable, enterprise-ready workflows and, in turn, to make integrations simpler and more standardized. MCP is not yet a formal industry standard. Despite its open nature and rising adoption, it is still maintained and guided by a single vendor and primarily designed around the Claude model family. A true standard requires more than just open access: an independent governance group, representation from multiple stakeholders and a formal consortium to oversee its evolution, versioning and any dispute resolution. None of these elements are in place for MCP today. While MCP presents a promising direction, mission-critical systems demand predictability, stability and interoperability, which are best delivered by mature, community-driven standards. Protocols governed by a neutral body ensure long-term investment protection, safeguarding adopters from unilateral changes or strategic pivots by any single vendor. The idea behind MCP is that models should speak a consistent language to tools. Prima facie, this is not just a good idea but a necessary one: it is a foundational layer for how future AI systems will coordinate, execute and reason in real-world workflows. Even so, the road to widespread adoption is neither guaranteed nor without risk.
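To make that “consistent language” concrete, here is a minimal sketch of exposing a tool through MCP using the FastMCP helper from the official Python SDK; the server name and example tool are invented for illustration:

```python
# Minimal MCP server exposing one tool (names and logic are illustrative only).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")


@mcp.tool()
def convert_currency(amount: float, rate: float) -> float:
    """Convert an amount using a caller-supplied exchange rate."""
    return amount * rate


if __name__ == "__main__":
    # Any MCP-capable client can now discover and invoke convert_currency
    # through the same protocol, regardless of which model is driving it.
    mcp.run()
```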
Token Monster’s AI chatbot platform uses a third-party service to connect to multiple LLMs and routes user prompts to the LLMs and linked tools that are best suited to answer them to deliver enhanced output leveraging the strengths of multiple models
Token Monster, a new AI chatbot platform, has launched its alpha preview, aiming to change how users interact with LLMs. It was developed by Matt Shumer, co-founder and CEO of OthersideAI, the company behind the hit AI writing assistant HyperWrite. Token Monster’s key selling point is its ability to route user prompts to the best available LLMs for the task at hand, delivering enhanced outputs by leveraging the strengths of multiple models. Seven major LLMs are presently available through Token Monster. Once a user types something into the prompt entry box, Token Monster uses pre-prompts developed through iteration by Shumer himself to automatically analyze the user’s input, decide which combination of the available models and linked tools is best suited to answer it, and then provide a combined response that leverages the strengths of those models. Unlike other chatbot platforms, Token Monster automatically identifies which LLM is best for a given task — as well as which LLM-connected tools would be helpful, such as web search or coding environments — and orchestrates a multi-model workflow. The alpha preview, which is currently free to sign up for at tokenmonster.ai, allows users to upload a range of file types, including Excel, PowerPoint, and Docs.
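Token Monster’s internal routing logic is not public; the sketch below is only a generic illustration of the pattern described above, in which a classification step picks a model and tool set before a prompt is dispatched. The model names, task categories and keyword heuristics are all assumptions:

```python
# Generic prompt-routing sketch (not Token Monster's actual implementation).
from dataclasses import dataclass


@dataclass
class Route:
    model: str          # which LLM to dispatch the prompt to
    tools: list[str]    # auxiliary tools to attach (e.g. web search, code runner)


# Hypothetical routing table: task category -> model + tools.
ROUTES = {
    "code":     Route(model="model-a", tools=["code_interpreter"]),
    "research": Route(model="model-b", tools=["web_search"]),
    "writing":  Route(model="model-c", tools=[]),
}


def classify(prompt: str) -> str:
    """Stand-in for the pre-prompt/classifier step: pick a task category."""
    p = prompt.lower()
    if any(k in p for k in ("bug", "function", "refactor", "code")):
        return "code"
    if any(k in p for k in ("latest", "news", "compare", "sources")):
        return "research"
    return "writing"


def route(prompt: str) -> Route:
    """Return the model/tool combination judged best suited to the prompt."""
    return ROUTES[classify(prompt)]


print(route("Refactor this function to be async"))  # -> Route(model='model-a', ...)
```

In a production router, the keyword heuristic would be replaced by a cheap classifier model, and the responses of several dispatched models could then be merged into the combined answer the article describes.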
ElevenLabs enterprise multimodal AI voice agents can access external knowledge bases and retrieve relevant information instantly, using built-in Retrieval-Augmented Generation (RAG) system
ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation. Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration. One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections. In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
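ElevenLabs has not published the internals of its RAG system; the toy sketch below only illustrates the retrieval step the paragraph describes, with a bag-of-words similarity standing in for a real embedding model and vector index:

```python
# Toy sketch of the retrieval step in RAG (illustrative; not ElevenLabs' implementation).
import math
from collections import Counter

DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat and phone.",
    "Enterprise plans include a dedicated account manager.",
]


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a vector model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base passages most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


# The retrieved passages would be injected into the agent's prompt before generation,
# which is what keeps the voice agent's answers grounded in the knowledge base.
print(retrieve("how long do refunds take?"))
```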
Oxford researchers’ new technique identifies and verifies intrinsic topological superconductivity in materials that could be used to build scalable, fault-tolerant quantum computers
Oxford researchers have developed a powerful new technique to identify materials capable of supporting stable quantum states, marking a major step toward scalable, fault-tolerant quantum computing. In this new study, the Oxford researchers verified that the known superconductor uranium ditelluride (UTe₂) is an intrinsic topological superconductor. The researchers used a scanning tunneling microscope (STM), which uses an atomically sharp superconducting probe to obtain ultra-high-resolution images at the atomic scale, without using light or electron beams. The experiments used an entirely new operating mode invented by Professor Séamus Davis (called the Andreev STM technique). This method is specifically attuned only to electrons in a special quantum state (topological surface state) that is predicted to cover the surface of intrinsic topological superconductors. When implemented, the method performed exactly as theory suggested, enabling the researchers to not only detect the topological surface state but also to identify the intrinsic topological superconductivity of the material. The results indicated that UTe₂ is indeed an intrinsic topological superconductor, but not exactly the kind physicists have been searching for. Although, based on the reported phenomena, Majorana quantum particles are believed to exist in this material, they occur in pairs and cannot be separated from each other. The technique now enables researchers to efficiently screen other materials for topological superconductivity, potentially replacing complex and costly synthetic quantum circuits with simpler, crystalline alternatives.
Nord Quantique’s multimode encoding uses bosonic qubit technology that can detect and correct leakage errors, which remove the qubit from the encoding space, on the path to building fault-tolerant quantum computers at utility scale
Nord Quantique has successfully developed bosonic qubit technology with multimode encoding, outlining a path to a significant reduction in the number of qubits required for quantum error correction. This provides the system protection against many common types of errors, including bit flips, phase flips, and control errors. Another key advantage over single-mode encoding is that leakage errors, which remove the qubit from the encoding space, can now be detected and corrected. The Tesseract code allows for increased error detection, and it is expected that this will translate into additional quantum error correction benefits as more modes are added. These results are therefore a key stepping stone in the development of this hardware-efficient approach. The core concept of the multimode approach centres on simultaneously using multiple quantum modes to encode individual qubits. Each mode represents a different resonance frequency inside an aluminium cavity and offers additional redundancy, which protects quantum information.
Shopify imagines an interface “where you can quickly shift between talking, typing, clicking, and even drawing to instruct software, like moving around a whiteboard in a dynamic conversation”
Shopify’s new chief design officer Carl Rivera believes that, in the very near future, the e-commerce platform’s user experience is going to feel like sci-fi and designers are at the center of it. His new position directly responds to industry skepticism about design’s relevance in an AI-driven landscape. “Imagine an interface where you can quickly shift between talking, typing, clicking, and even drawing to instruct software, like moving around a whiteboard in a dynamic conversation,” Carl Rivera says. An experience in which users are not presented with a barrage of nested menus, but with a blank canvas that invites creativity aided by an artificial intelligence that knows everything there is to know about online and brick-and-mortar retail and marketing. A fluid interface that adapts and anticipates your needs, automating tasks and recommending actions like the most brilliant partner you could dream of.
Donor-advised funds (DAFs) could emerge as a popular form of giving among the ultrawealthy amid proposed tax hikes on private foundations, coupled with the added benefits of convenience, lower cost, enhanced donor privacy and the ability to contribute non-cash assets
A provision in Trump’s tax bill could make donor-advised funds an even more popular form of giving. Donor-advised funds, or DAFs, are accounts where donors can contribute funds, immediately get a tax deduction, and “advise” on where to donate — and they are becoming increasingly popular. As Daniel Heist, a professor at Brigham Young University and a lead researcher on the 2025 National Survey of DAF Donors, put it, “they’re growing like crazy.” Donors can contribute non-cash assets, like appreciated securities or crypto, to DAFs, and the funds grow over time. Technically, donors don’t control the funds in their DAF, but practically speaking, they can direct the money to any accredited charity. “As long as you’re following the rules of the DAF provider, you should always have those recommendations honored,” Mitch Stein, the head of strategy at Chariot, a technology company focused on DAFs, said. Private foundations have to distribute at least 5% of their assets annually for charitable purposes, but DAFs don’t have payout requirements. Donors also don’t report their gifts to individual organizations on their taxes, and instead report that they gave to the DAF.
GoCardless’s platform enables merchants to manage both account-to-account collections and customer payouts within a single platform with built-in payments security through reuse of stored bank details and Confirmation of Payee
Bank payment company GoCardless announces the launch of Outbound Payments, a significant expansion of its platform that will enable merchants to send money directly to customers, suppliers, and third parties via GoCardless. Merchants will be able to use GoCardless to manage both account-to-account collections and payouts within a single platform, streamlining operations, simplifying reconciliation and enhancing payment visibility. The introduction of payouts will also help merchants save time and money by eliminating costly setup, maintenance, and contractual processes, since both collections and payouts are managed in the same place. Outbound Payments provides built-in payment security: merchants can reuse stored bank details from payment collections to reduce manual errors when paying out. In addition, Confirmation of Payee helps ensure that payouts reach the right recipient by confirming that the payee’s name matches the registered bank account details before funds are transferred, reducing the risk of accidental or fraudulent transfers. Outbound Payments is a direct result of the strategic acquisition and rapid integration of Nuapay, which has a proven track record of processing billions in payout volume.
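As a rough illustration of the flow described above, the sketch below checks a payee name before releasing a payout that reuses stored bank details. The base URL, endpoint paths, field names and response shapes are hypothetical placeholders, not the GoCardless API:

```python
# Hypothetical confirm-then-pay flow for outbound payments.
# Endpoints, fields and responses are illustrative assumptions, not GoCardless's API.
import requests

BASE = "https://api.payments.example.test"   # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}


def confirm_payee(name: str, sort_code: str, account_number: str) -> bool:
    """Ask the provider whether the payee name matches the registered account."""
    r = requests.post(f"{BASE}/payee-confirmations", headers=HEADERS, json={
        "name": name, "sort_code": sort_code, "account_number": account_number,
    })
    return r.json().get("match") == "full"


def send_payout(amount_minor_units: int, stored_bank_details_id: str) -> dict:
    """Reuse bank details captured during collection to pay the customer out."""
    r = requests.post(f"{BASE}/payouts", headers=HEADERS, json={
        "amount": amount_minor_units,
        "currency": "GBP",
        "bank_details": stored_bank_details_id,
    })
    return r.json()


# Only release funds once the name check passes, mirroring the Confirmation of Payee step.
if confirm_payee("A. Customer", "20-00-00", "55779911"):
    print(send_payout(2500, "BD123"))
```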
Cube Dev’s AI agent, built on top of a semantic layer, provides self-serve, natural language-driven analytics for any user by generating SQL queries that surface contextual insights and presenting them in interactive visualizations
Cube Dev Inc., the creator of an open-source semantic layer that simplifies access to data from disparate systems, is launching an “agentic analytics” platform that uses AI to automate data analytics tasks. With D3, Cube says, it can scale up the productivity of business workers and enable them to explore data independently, without needing to seek help from data professionals first. The platform introduces the concept of “AI data co-workers” that can automate and enhance analytics tasks, with support for natural language queries, full explainability for every insight, and comprehensive governance. With Cube’s platform, developers can perform calculations on many different datasets in real time, without the integration hassles that usually come with querying disparate systems. It also provides an in-memory cache that saves the results of frequent calculations, so users don’t have to rerun them constantly, meaning lower computing costs. Now, Cube is adding AI agents into the mix. At launch, Cube D3 features two AI agents. The first is an AI Data Analyst, which provides self-serve, natural language-driven analytics for any user. Users ask about their data in plain language, and the agent generates a semantic SQL (Structured Query Language) query that digs up the insights they need, presenting them in easily digestible, interactive visualizations. It can also perform tasks such as refining existing reports. The biggest advantage of building AI agents on top of a semantic layer is that they gain more context, allowing them to perform tasks for users more effectively. There is also an AI Data Engineer for more advanced users, which can automate the development of semantic models that draw on disparate data sources, enabling higher velocity and flexibility for the semantic data layer.
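The value of anchoring an agent to a semantic layer is easiest to see in a sketch: instead of emitting arbitrary SQL, the agent can only reference governed measures and dimensions, which the layer then compiles into queries. The entity names and the toy “LLM step” below are assumptions for illustration, not Cube’s implementation:

```python
# Illustrative pattern: an agent over a semantic layer builds queries from governed
# measures and dimensions rather than raw SQL. Names here are invented, not Cube's.
SEMANTIC_MODEL = {
    "measures": {"orders.count", "orders.total_revenue"},
    "dimensions": {"orders.status", "orders.country", "orders.created_at"},
}


def to_semantic_query(question: str) -> dict:
    """Toy stand-in for the LLM step: map the question to measures/dimensions."""
    q = question.lower()
    measures = ["orders.total_revenue"] if "revenue" in q else ["orders.count"]
    dimensions = ["orders.country"] if "country" in q else ["orders.status"]
    return {"measures": measures, "dimensions": dimensions}


def validate(query: dict) -> dict:
    """Governance check: the agent may only reference entities the model defines."""
    assert set(query["measures"]) <= SEMANTIC_MODEL["measures"]
    assert set(query["dimensions"]) <= SEMANTIC_MODEL["dimensions"]
    return query


# The validated query would be handed to the semantic layer, which compiles it to SQL
# against the underlying data sources and returns results for visualization.
print(validate(to_semantic_query("What is revenue by country this quarter?")))
```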
