Mobey Forum, a global association for banks, has released a report titled “Embedded Finance: A Strategic Roadmap for Banks,” offering actionable insights for banks to succeed in the digital economy by embedding financial services into digital ecosystems, reshaping customer experiences, and delivering integrated, customer-centric solutions. The report explores three core Embedded Finance business models. First, API-Driven Banking positions banks as “producers” that expose their financial products and services through APIs, allowing external platforms to embed these offerings seamlessly. This model emphasizes automation, scalability, and alignment with regulations like PSD2/3 and Open Banking, broadening banks’ distribution channels. Second, the Verticalised Offering model casts banks as “distributors,” integrating third-party services into their offerings while retaining control over branding and customer relationships, thus enriching solutions within a unified experience. Third, Platform Banking represents the most comprehensive model, where banks act as “marketplace orchestrators” facilitating exchanges among producers, consumers, and partners in a centralized ecosystem. Unlike the vertical model, this approach enables onboarding of users who may not initially be banking customers. The report provides practical frameworks and maturity models to guide banks in embedding finance effectively, with strategies such as developing robust API infrastructures and forming strategic third-party partnerships. Real-world case studies from UBS, PostFinance, and SEB Embedded demonstrate how Embedded Finance can drive customer engagement and unlock new revenue. Ultimately, the report issues a call to action, warning that banks must define their Embedded Finance strategies now or risk losing market relevance, customer loyalty, and income streams to faster-moving non-bank competitors.
Aquant’s “retrieval-augmented conversation” feature lets LLMs act as guided domain experts, retrieving information and resolving ambiguities turn by turn instead of delivering knowledge as a single all-in-one answer
Aquant Inc., the provider of an AI platform for service professionals, has introduced “retrieval-augmented conversation” (RAC), a new way for LLMs to retrieve and present information that lets them act natively as guided domain experts rather than delivering knowledge as a single all-in-one answer. RAC can be thought of as an expert technician that is aware of its capacity and capabilities, said Indresh Satyanarayana, vice president of product technology and labs and the father of retrieval-augmented conversation. It helps the AI examine a user’s question and ask follow-up questions to fill knowledge gaps and generate tailored solutions. Unlike RAG, RAC introduces dynamic turn-taking, much more like a human conversation with an expert in the field in question. It is designed to provide “bite-sized actions,” which he says avoids cognitive overload for the user. Beyond that, RAC can incorporate additional data points into its conversational context, depending on the persona developers want to build into their AI app. “It retrieves not only manuals but transactional data, job history, parts catalogs, internet of things readings, and key performance indicator targets, then reasons over that richer context to recommend the action that best balances cost, risk and time,” said Satyanarayana. RAC does not fundamentally replace RAG; it still performs the retrieval-augmented portion. Documents still need to be searched and retrieved, and that retrieval guides the conversation for the user. Developers, in turn, can decide how “chatty” their app acts: it can resolve one ambiguity at a time and provide a final answer once all have been resolved, or it can tackle multiple questions at once, the way some people hold several threads of conversation simultaneously, like many open tabs in Chrome while researching, before resolving the problem.
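The turn-taking loop described above can be pictured as a small slot-filling sketch. Everything here is hypothetical, since Aquant has not published an API; the retrieval (RAG) portion that would supply the required slots and the final recommendation from documents is elided to keep the sketch short.

```python
# Illustrative sketch of a turn-taking retrieval-augmented conversation (RAC)
# loop. All names are hypothetical; this is not Aquant's implementation.

def missing_slots(context, required_slots):
    """Identify knowledge gaps the agent must fill before it can answer."""
    return [s for s in required_slots if s not in context]

def rac_turn(context, required_slots):
    """One turn: either ask a clarifying question or give a bite-sized answer."""
    gaps = missing_slots(context, required_slots)
    if gaps:
        # Resolve one ambiguity at a time to avoid cognitive overload.
        return {"type": "question", "ask": f"What is the {gaps[0]}?"}
    return {"type": "answer",
            "action": f"Check the {context['component']} (error {context['error_code']})."}

# Simulated dialogue: the agent asks follow-ups until every gap is filled.
required = ["component", "error_code"]
user_replies = {"component": "compressor", "error_code": "E42"}
context, transcript = {}, []
while True:
    turn = rac_turn(context, required)
    transcript.append(turn)
    if turn["type"] == "answer":
        break
    slot = missing_slots(context, required)[0]
    context[slot] = user_replies[slot]  # the user answers the follow-up
```

A "chattier" variant could ask about all gaps in one turn instead of one at a time; that choice is exactly the per-app dial the article describes.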
Reasoner reveals the thoughts people keep hidden: the ability to see the gap between what people say and what they truly mean, from decision confidence and power dynamics to unspoken sentiment
Reasoner has launched its first application, Mind Reasoner, a tool built on a new fact-based AI architecture that gives users a previously impossible superpower: the ability to see the gap between what people say and what they truly mean, with every insight backed by verifiable proof. The company’s core technology, the Precision Intelligence Engine, a battle-tested engine that operates on facts rather than approximations, is now being made available to everyone through Mind Reasoner. Key capabilities of Mind Reasoner include:
- Deep self-awareness: see objective proof of your own blind spots, like the gap between the confidence you project and the hesitation in your language, or the real triggers for your frustration in meetings.
- Uncover hidden realities: understand the unspoken concern behind a colleague’s “yes,” or the true meaning behind a loved one’s “I’m fine.”
- Decode the room: analyze over 100 dimensions of communication, from decision confidence and power dynamics to unspoken sentiment, to understand what’s really happening.
- Provable, traceable insights: unlike other AI tools, every insight comes with an unbreakable evidence chain, allowing users to see the exact words and phrases that led to a conclusion.
- Rigorous privacy safeguards: Mind Reasoner is delivered via a lightweight, secure desktop application that is SOC 2, HIPAA, and GDPR compliant, ensuring user privacy and data protection.
Heroku app development platform is adapting to enable developers to work smarter within increasingly agent-augmented workflows
Heroku is offering a developer-centric approach that blends simplicity, scalability and strong architectural foundations to support fast, sustainable innovation, according to Vish Abrams, chief architect of Heroku from Salesforce Inc. Heroku’s foundational principle has always been to minimize the undifferentiated heavy lifting for developers. That ethos continues today, even as new paradigms such as AI agents, multicloud integration and “vibe coding” reshape the software landscape. The pressure to adapt quickly doesn’t mean complexity has to scale in parallel, according to Betty Junod, chief marketing officer of Heroku. Vibe coding introduces a new way to build software by turning natural language into code through AI. Heroku is responding with a modernized architecture approach, updating the Twelve-Factor App methodology to guide both developers and AI agents in creating scalable, maintainable applications, according to Abrams. This renewed focus on the developer experience is also driving a cultural shift — one that emphasizes human creativity alongside machine efficiency. As AI agents become more capable of generating and deploying code, developers are being asked to step further into roles that require design thinking, exception handling and strategic oversight. The future isn’t about replacing developers with automation but enabling them to work smarter within increasingly agent-augmented workflows.
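The Twelve-Factor App methodology Heroku is updating is concrete rather than abstract; its third factor, for instance, mandates storing configuration in the environment rather than in code, so the same artifact runs unchanged across deploys. A minimal, generic sketch (not Heroku-specific code, though `DATABASE_URL` and `WEB_CONCURRENCY` are conventional Heroku variable names):

```python
# Factor III of the Twelve-Factor App methodology: config lives in the
# environment, not in the codebase. Generic illustration, not Heroku internals.
import os

def load_config():
    """Read deploy-specific settings from environment variables, with safe defaults."""
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "web_concurrency": int(os.environ.get("WEB_CONCURRENCY", "2")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

# Only the environment differs between dev, staging, and production.
os.environ["WEB_CONCURRENCY"] = "8"
config = load_config()
```

The same discipline is what makes machine-generated code reviewable: an AI agent that hard-codes credentials or concurrency settings fails a check a human or linter can apply mechanically.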
Stablecoins ‘perform poorly’ as money and could face an uphill payments battle: lack of unified standards, a pseudonymous nature that requires external tooling for identity verification, and the cumbersome user experience of signing transactions, managing private keys, and navigating gas fees are key barriers
Mastercard, Visa, Fiserv, and Stripe are adding stablecoin capabilities, showing that traditional finance sees stablecoins as useful for global payments despite the challenges. The Bank for International Settlements (BIS) criticized stablecoins in its 2025 Economic Report, stating that they do not perform well as a currency due to issues such as price instability, lack of trust, and vulnerability to criminal use. The BIS also noted that stablecoins lack the flexibility of credit essential to modern financial systems. On one hand, they have gone from niche crypto tools to serious considerations for legacy financial institutions. On the other, they continue to fail the basic tests of stability, acceptability, trust, and utility. The success of stablecoins as a form of money requires overcoming significant challenges in infrastructure, compliance, and economics, which may take years. “The biggest problem in crypto is not adoption; it’s the user experience,” said Mesh CEO and co-founder Bam Azizi. While traditional payment systems are governed by unified standards, stablecoins operate on fragmented blockchains, each with its own set of protocols. Bridging tokens across these chains can be clunky at best and introduce security risks at worst. Stablecoins also introduce new wrinkles in compliance, particularly around KYB and KYC requirements: most blockchains are pseudonymous, meaning identity verification requires external, often cumbersome, tooling. This lack of embedded identity has made stablecoins a popular tool for money laundering and illicit finance. Another critical barrier to mainstream adoption is user experience: signing transactions, managing private keys, and navigating gas fees make stablecoin payments a chore for the uninitiated. Still looming large over the entire stablecoin ecosystem is the question of regulation.
This lingering, but potentially waning, uncertainty has hampered adoption by banks and merchants who don’t want to navigate compliance ambiguity.
Tray.ai’s platform addresses incomplete data in AI deployments by integrating smart data sources that simplify synchronization of structured and unstructured enterprise knowledge, ensuring agents draw on relevant, reliable information
Tray.ai has released Merlin Agent Builder 2.0, a platform designed to address challenges in AI agent deployment within enterprises. The platform aims to bridge the gap between building and actual usage of AI agents, addressing issues such as lack of complete data, session memory limitations, challenges with large language model (LLM) configuration, and rigid deployment options. The updated solution includes advancements in four key areas: integration of smart data sources for rapid knowledge preparation, built-in memory for maintaining context across sessions, multi-LLM support, and streamlined omnichannel deployment. Smart data sources simplify the connection and synchronization of structured and unstructured enterprise knowledge, ensuring agents are informed with relevant and reliable information. Built-in memory capabilities reduce the need for custom solutions and enhance continuity in user exchanges, improving adoption rates. The platform supports multiple LLM providers, allowing teams to assign specific models to individual agents with tailored configurations. Unified deployment across channels allows teams to build an agent once and deploy it seamlessly across communication and application environments, eliminating the need for repeated setup and technical adjustments for different channels. Tray.ai aims to provide a unified platform that enables IT and business teams to transition from pilot projects to production-ready AI agents that are actively used by employees and customers.
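The multi-LLM support described above, assigning specific models with tailored configurations to individual agents, can be sketched as a simple registry with a fallback. All names here are illustrative assumptions, not Tray.ai’s actual schema or API:

```python
# Hypothetical sketch of per-agent LLM assignment. Agent names, providers,
# and fields are invented for illustration; this is not Tray.ai's schema.

AGENT_MODELS = {
    "support-triage": {"provider": "openai", "model": "gpt-4o", "temperature": 0.2},
    "kb-search": {"provider": "anthropic", "model": "claude-sonnet", "temperature": 0.0},
}
DEFAULT_MODEL = {"provider": "openai", "model": "gpt-4o-mini", "temperature": 0.3}

def resolve_model(agent_name):
    """Return the tailored LLM configuration assigned to an agent, or a default."""
    return AGENT_MODELS.get(agent_name, DEFAULT_MODEL)
```

Keeping the assignment in one registry is what lets teams swap a provider for a single agent without touching the others, which is the portability benefit the release emphasizes.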
Camunda’s agentic orchestration for trade exception management lets clients connect to cloud or on-premises AI models and apply deterministic guardrails, delivering an 86% reduction in manual effort and cutting T+1 delays by 98%
Camunda has highlighted how its agentic orchestration capabilities are enabling organizations to introduce AI at scale into their processes while preserving transparency, compliance, and control. Agentic trade exception management (available on Camunda Marketplace): Camunda’s platform allows clients to connect their preferred AI models, whether hosted in the cloud or internally via EY labs, and apply deterministic guardrails to ensure AI is only triggered when appropriate. This lets clients avoid rebuilding AI from scratch, instead focusing on governance, visibility, and scalable deployment – areas where Camunda’s orchestration brings immediate and measurable value. In one capital markets implementation, EY reduced manual effort by 86%, cut T+1 delays by 98%, and boosted analyst productivity from 6–10 to 41–64 cases per day – a roughly 7x improvement. Agentic AI-assisted quality audit process (available on Camunda Marketplace): Cognizant has created and demonstrated workflows in Camunda that include mandatory human review steps – enabling AI to suggest actions, but requiring manual approval before those actions are executed. This balance allows organizations to benefit from AI-powered insights while also facilitating compliance with regional laws. For example, audit trails, escalation paths, and process visibility are all embedded into the BPMN model, assisting organizations in demonstrating full control over every agentic interaction. This led to significant time savings: the quality audit process was reduced from 138 minutes to just 7–10 minutes, increasing auditor productivity by 20–30% and cutting costs by 30–50%. All activity is fully traceable via embedded audit trails and escalation paths in BPMN. Customer service agent (available on Camunda Marketplace): Replacing standard auto-responses, Incentro built an AI agent that uses an LLM to analyze queries and draft meaningful replies in real time.
The agent accesses the company’s full FAQ and documentation set, enabling specific answers rather than generic acknowledgments. Camunda’s BPMN model structures the logic, with the agent dynamically choosing the best response path via an ad-hoc sub-process. When implementing these systems with Payter, Incentro was able to reduce handling time per inquiry from 24 to 12 minutes, with lead time cut by 58%, helping improve both customer NPS and agent satisfaction without increasing headcount. Compliance agent (available on Camunda Marketplace): BP3 shared how it is integrating agentic AI into decision-heavy workflows in regulated industries like BFSI, pharma, healthcare, and utilities. Its approach uses LLMs alongside DMN (Decision Model and Notation) tables to generate “accept, reject, or refer” outcomes. In ambiguous cases, decisions are escalated to a human, enabling the AI to learn from real-world feedback over time.
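BP3’s “accept, reject, or refer” pattern can be sketched as a small decision function with a human-escalation branch. The thresholds below are invented for illustration; in a real deployment they would live in a DMN decision table rather than in code:

```python
# Illustrative accept/reject/refer decision in the style of a DMN table,
# with ambiguous cases referred to a human reviewer. Thresholds are made up.

def decide(risk_score, confidence):
    """Return 'accept', 'reject', or 'refer' from a risk score in [0, 1]
    and the model's confidence in its own assessment."""
    if confidence < 0.7:
        return "refer"   # the model is unsure: escalate to a human reviewer
    if risk_score < 0.3:
        return "accept"
    if risk_score > 0.8:
        return "reject"
    return "refer"       # mid-range risk also goes to a human
```

The "refer" branch is where the feedback loop lives: human decisions on escalated cases become labeled examples the AI can learn from over time, as BP3 describes.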
Tines’ workflow automation agents let enterprises apply the right level of automation, with flexibility from manual to fully autonomous, within a single platform, and run entirely within the platform’s secure infrastructure
Tines announced autonomous AI capabilities within its workflow automation platform via the launch of agents. Agents mark a significant evolution of Tines’ platform, enabling customers to automate workflows with maximum control and flexibility, whether with deterministic logic, human-in-the-loop copilots, or full AI autonomy. Agents enable Tines customers to build intelligent, context-aware workflows that can act independently, suggest next steps, and collaborate with users in real time. The addition of agents allows customers to choose the right level of AI involvement for every workflow, ensuring organizations can implement AI automation that aligns with their specific security requirements, levels of complexity, and operational needs. Unlike traditional AI implementations that require external data sharing or compromise on security, Tines’ agents run entirely within the platform’s secure infrastructure. This ensures no customer data leaves the environment, is logged, or is used for training, delivering the privacy and governance assurances that enterprise teams demand. Tines capabilities:
- Full-spectrum automation and orchestration: apply the right level of automation, from manual to fully autonomous, within a single platform.
- Enterprise-grade security: built by security professionals, Tines keeps all automation and data within its own infrastructure.
- Seamless system integration: connect any tool, LLM, or proprietary app to build, augment, and orchestrate intelligent workflows.
- Intuitive no-code interface: easily design complex, mission-critical workflows with drag-and-drop tools and built-in collaboration features.
- User-friendly adoption: deploy apps, chatbots, and integrations with popular tools such as Slack to boost usage and maximize ROI on AI initiatives.
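The spectrum from manual to fully autonomous can be pictured as a per-step mode switch. The sketch below is a generic illustration of the idea, not Tines’ actual API; the mode names and return shape are invented:

```python
# Hypothetical sketch of choosing the level of AI involvement per workflow
# step: manual, human-in-the-loop copilot, or fully autonomous.

def run_step(action, mode, approver=None):
    """Execute an action according to its configured automation mode."""
    if mode == "manual":
        # Deterministic path: the action waits for a human operator.
        return {"status": "queued_for_human", "action": action}
    if mode == "copilot":
        # Agent suggests; a human must approve before execution.
        approved = approver(action) if approver else False
        return {"status": "executed" if approved else "rejected", "action": action}
    if mode == "autonomous":
        # Full autonomy: the agent acts without waiting.
        return {"status": "executed", "action": action}
    raise ValueError(f"unknown mode: {mode}")
```

Making the mode an explicit per-step setting is what allows a single workflow to mix fully automated steps with gated ones, matching each step to its risk profile.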
Tokenized private credit fund managed by Apollo Global Management and Securitize reaches $100 million in on-chain assets indicative of the growing acceptance of blockchain tech in traditional finance
Apollo Global Management’s tokenized private credit fund, managed by Apollo and Securitize, has reached $100 million in on-chain assets, highlighting the growing acceptance of blockchain technology in traditional finance. The fund targets key investment areas such as Corporate Direct Lending, Asset-Backed Lending, Performing Credit, Dislocated Credit, and Structured Credit. The $100 million milestone aligns with industry projections, which estimate the global tokenization market will grow from $2.3 billion in 2021 to $5.6 billion by 2026. Securitize’s technology enables the fund to operate across multiple blockchains, enhancing accessibility for institutional investors. The partnership builds on a growing collaboration between traditional finance and digital asset firms, and Securitize’s platform streamlines private market efficiency, offering improved transparency and potentially lower interest rates. However, challenges remain, such as the decentralized nature of blockchain platforms.
SECU integrates MANTL’s deposit origination tech to onboard members on any device or channel, automating over 85% of application decisions, including KYC, AML, Bank Secrecy Act checks, product service ordering, funding, and core booking
State Employees’ Credit Union of Maryland (SECU), a $5.7B credit union with 23 financial centers across Maryland, has partnered with MANTL, an Alkami solution, to improve its in-branch and online account opening processes for businesses and retail members. The partnership will allow SECU to open new member accounts on any banking channel, at any time, demonstrating SECU’s commitment to providing the best possible banking experiences. SECU will leverage MANTL’s Consumer Deposit Origination to transform the online account opening experience and streamline the in-branch experience for members and employees. The Business Deposit Origination will allow SECU to better attract, serve, and deepen relationships with businesses across its target markets. By integrating MANTL with its core processing system, SECU will automate over 85% of application decisions, including Know Your Customer, Anti-Money Laundering, Bank Secrecy Act, product service ordering, funding, and core booking.
