A posting by the Atlanta Fed cautions that invisible and embedded transactions may lead to overspending. When payments fade into the background, losing track of spending, overlooking transactions, or feeling less in control of financial decisions becomes much easier. On average, respondents find cash more helpful than electronic, and especially contactless, payment methods for preventing overspending and tracking their expenditures. When paying with cash, respondents are more aware of the exact amount that they pay. But as PYMNTS Intelligence has found, consumers and businesses across all verticals have been enthusiastically embracing frictionless commerce. Embedded transactions are fast becoming “table stakes” for providers, with 54% of independent software providers and 74% of marketplaces enabling digital payment experiences, a necessary step “to remain competitive.” “Embedded is the prefix for most of the innovations we talk about now in payments. We embed payments into software (something we’ve been doing ever since the dawn of eCommerce), identity into payments, lending into checkout flows, banking into virtual accounts, point solutions inside of tech stacks, GenAI into software, offers into banking apps, and networks into networks … A lot of what was called invisible at the dawn of the 2010s with the introduction of Uber is now described as embedded … But it’s not enough to just ‘embed’ something into something else. Embedding should be almost invisible and frictionless.”
Capital One’s first multi-agentic workflow, Chat Concierge, deployed through its auto business, has improved customer engagement metrics significantly — up to 55% in some cases
Milind Naphade, SVP of Technology, AI Foundations at Capital One, offered best practices and lessons learned from real-world experiments and applications for deploying and scaling an agentic workflow. Capital One recently launched a production-grade, state-of-the-art multi-agent AI system to enhance the car-buying experience. In this system, multiple AI agents work together not only to provide information to the car buyer, but to take specific actions based on the customer’s preferences and needs. With over 100 million customers and a wide range of other potential Capital One use cases, the agentic system is built for scale and complexity. Capital One’s applications involve a number of complex processes in which customers raise issues and queries through conversational tools. These two factors made the design process especially complex, requiring a holistic view of the entire journey — including how both customers and human agents respond, react, and reason at every step. “The main breakthrough for us was realizing that this had to be dynamic and iterative,” Naphade said. “If you look at how a lot of people are using LLMs, they’re slapping the LLMs as a front end to the same mechanism that used to exist. They’re just using LLMs for classification of intent. But we realized from the beginning that that was not scalable.” Based on their intuition of how human agents reason while responding to customers, researchers at Capital One developed a framework in which a team of expert AI agents, each with different expertise, come together to solve a problem. Capital One also incorporated robust risk frameworks into the development of the agentic system. As a regulated institution, the company pairs its range of internal risk mitigation protocols and frameworks with independent oversight, Naphade noted. “Within Capital One, to manage risk, other entities that are independent observe you, evaluate you, question you, audit you,” he said.
The evaluator determines whether the earlier agents were successful and, if not, rejects the plan and asks the planning agent to correct its results based on its judgment of where the problem was. This happens iteratively until an appropriate plan is reached. It has also proven to be a huge boon to the company’s agentic AI approach. “We have multiple iterations of experimentation, testing, evaluation, human-in-the-loop, all the right guardrails that need to happen before we can actually come into the market with something like this,” Naphade said. In terms of models, Capital One is keenly tracking academic and industry research, presenting at conferences and staying abreast of the state of the art. For this use case, the team used open-weights models rather than closed ones, because that allowed significant customization. That’s critical, Naphade asserts, because competitive advantage in AI strategy relies on proprietary data. The technology stack itself combines in-house technology, open-source tool chains, and NVIDIA’s inference stack. Working closely with NVIDIA has helped Capital One get the performance it needs, collaborate on industry-specific opportunities in NVIDIA’s library, and prioritize features for the Triton Inference Server and TensorRT-LLM. Capital One continues to deploy, scale, and refine AI agents across its business. Its first multi-agentic workflow was Chat Concierge, deployed through the company’s auto business and designed to support both auto dealers and customers with the car-buying process. With rich customer data, dealers are identifying serious leads, which has improved their customer engagement metrics significantly — up to 55% in some cases. “They’re able to generate much better serious leads through this natural, easier, 24/7 agent working for them,” Naphade said.
New Liquid Foundation Models can be deployed on edge devices without the extended infrastructure of connected systems, and are claimed to beat transformer-based LLMs on cost, performance and operational efficiency
If you can simply run operations locally on a hardware device, that creates all kinds of efficiencies, including some related to energy consumption and fighting climate change. Enter the rise of new Liquid Foundation Models (LFMs), which depart from the traditional transformer-based LLM design. The new LFMs already boast superior performance to transformer-based models of comparable size, such as Meta’s Llama 3.1-8B and Microsoft’s Phi-3.5 3.8B. The models are engineered to be competitive not only on raw performance benchmarks but also in terms of operational efficiency, making them ideal for a variety of use cases, from enterprise-level applications in fields such as financial services, biotechnology, and consumer electronics, to deployment on edge devices. These post-transformer models can run on devices such as cars, drones, and planes, and support applications such as predictive finance and predictive healthcare. LFMs can reportedly do the job of a GPT while running locally on devices. If they run offline on a device, you don’t need the extended infrastructure of connected systems: no data center, no cloud services, none of that. In essence, these systems can be low-cost and high-performance, and that’s just one aspect of how people talk about applying a “Moore’s law” concept to AI: systems are getting cheaper, more versatile, and easier to manage – quickly.
KnowBe4’s Just-in-Time security training analyzes the existing security stack and delivers real-time, context-sensitive “nudges” based on users’ current actions to mitigate risky behavior before it escalates
AI-driven cybersecurity empowers organizations with proactive defenses, accelerated response times and more robust protection. One breakthrough in this space is Just-in-Time AI training, a transformative method that enhances cybersecurity awareness. By delivering real-time, context-sensitive “nudges” based on users’ current actions, KnowBe4 Inc. uses this approach to mitigate risky behavior before it escalates, according to Javvad Malik, lead security awareness advocate at KnowBe4. “The Just-in-Time training or the nudges is where AI can integrate with your existing security stack,” Malik said. “You have firewalls, you have network monitoring controls, you have some [endpoint detection and response], you have some gateway controls, you have a lot of visibility into what people are doing. What AI can do is pull all of that out and analyze it and say, ‘Okay, this user’s now plugged in a USB drive. It’s not a corporate-approved one.’” AI-driven cybersecurity significantly enhances awareness training and user behavior, supporting stronger risk mitigation by leveraging real-time analytics, personalization and automation. KnowBe4 delivers this through its comprehensive training platform, which combines behavioral science, AI-driven analytics and interactive training tools. The approach transforms employees from potential security liabilities into proactive defenders, greatly strengthening an organization’s human layer of defense against cyber threats, according to Malik.
BNY is giving AI-powered ‘digital employees’ that clean up code and validate payment instructions their own logins to access apps, and will provide them with email accounts
Bank of New York Mellon has given dozens of AI agent ‘digital employees’ their own logins and will soon provide them with email accounts. BNY chief information officer Leigh-Ann Russell says the bank’s AI hub has created two worker personas: one that cleans up code and another that validates payment instructions. The agents have direct managers and, because they have their own logins to access apps like their human colleagues, can work autonomously. Each instance of an agent works in a defined, narrow team to avoid giving it access to too much information. BNY plans to give the digital employees their own email accounts and possibly access to Microsoft Teams so that they can contact their human colleagues with issues. The bank also intends to build agents for other tasks but stresses that it is still hiring humans.
Sephora is offering Lyft ride credits to shoppers, enabling them to be “delivered” to a participating store and receive a personalized ‘skin scan’, exclusive sampling and expert guidance from beauty advisors
Sephora U.S. has announced its first-ever “Delivered to Beauty” activation, in partnership with Lyft Media. From July 7-10, the beauty retailer is offering Lyft ride credits (up to $20 off) to shoppers in New York City, Los Angeles, San Francisco, Chicago and Seattle, enabling them to be “delivered” to a participating Sephora location. Once they arrive at the store, shoppers can receive guidance from Sephora’s beauty advisors, along with a personalized “skin scan,” exclusive product sampling and $10 off any order over $50 at checkout. The activation is part of Sephora’s new “Get Beauty from People Who Get Beauty” campaign, which aims to showcase the value of “trusted and personalized expertise provided by Sephora.” As part of the activation, select vehicles will be custom-wrapped with Sephora branding, transforming the journey into an extension of the beauty experience itself, the company said. “At Lyft, we want to connect people with the places they love, and our partnership with Sephora really leans into that,” said Suzie Reider, executive VP of Lyft Media and Business. “It’s a natural collaboration: a rider steps out of their Lyft, transported by a driver who knows their way around their communities, and enters Sephora’s best-in-class shopping experience that offers expert guidance, too.”
Savvy Wealth is embedding an AI financial advisor inside its core CRM and advisor-facing tech stack to enable human advisors to offer predictive, real-time insights tailored to individual client financial profiles
Savvy Wealth, a digital-first platform for financial advisors centered around modernizing human financial advice, announced the successful close of a $72 million Series B funding round, led by Industry Ventures, a venture capital firm focused on private technology investments. Savvy will leverage the fresh funding to accelerate its core technology offering, hire top technical talent and expand recruitment of independent advisors and advisory teams to its affiliate registered investment advisor (RIA), Savvy Advisors. The firm will also accelerate the development of artificial intelligence (AI) solutions that build personalized knowledge bases on each client, providing predictive, real-time intelligence tailored to individual financial profiles and needs. “At Savvy, we’re embedding AI inside the core of our CRM and advisor-facing tech stack to ‘10x’ their capabilities – unlocking predictive, real-time insights that strengthen human relationships,” said Ritik Malhotra, Founder and CEO of Savvy Wealth. “As modern advisors continue to choose independence, Savvy’s boutique culture, cutting-edge technology and full-service platform offer them a welcome home where their voice matters.” Recently surpassing $2 billion in assets under management, Savvy continues to build upon the sophistication of its offering, which includes solutions designed to meet the complex needs of high-net-worth investors. As Savvy expands its client base, it plans to evolve into a modern wealth management platform that offers more premium services to vertically integrate all of an individual or family’s financial needs.
Ripple taps OpenPayd’s global fiat infrastructure, including real-time payment rails, multicurrency accounts and virtual IBANs to offer a rail-agnostic and fully interoperable cross-border payments solution; applies for a national banking charter
Financial services infrastructure provider OpenPayd launched a partnership with blockchain company Ripple. The collaboration will see OpenPayd’s global fiat infrastructure, including real-time payment rails, multicurrency accounts and virtual IBANs, support Ripple Payments into euros and British pounds. “By combining Ripple Payments with OpenPayd’s rail-agnostic and fully interoperable fiat infrastructure, we are delivering a unified platform that bridges traditional finance and blockchain,” OpenPayd CEO Iana Dimitrova said. “This partnership enables businesses to move and manage money globally, access stablecoin liquidity at scale, and simplify cross-border payments, treasury flows and dollar-based operations.” Ripple Payments is Ripple’s cross-border payment solution, employing blockchain, digital assets and a network of payout partners to deliver cross-border payments and on/off ramps for banks, FinTechs and cryptocurrency firms. The partnership is part of OpenPayd’s efforts to expand its newly launched stablecoin infrastructure, with the company providing direct minting and burning capabilities for Ripple USD (RLUSD). Businesses will be able to convert between fiat and RLUSD, accessing OpenPayd’s suite of services using a single API.
Visa and Mastercard are casting themselves as the connective tissue of digital payments, playing to their strengths of global scale, trusted rails, built-in fraud protection, and tokenization tech to defend card swipe fee revenue against the shift to stablecoin payments
A major turf war is heating up in the payments world, and Visa and Mastercard suddenly find themselves on defense. Stablecoins like USDC are gaining traction, with companies like Shopify, Coinbase, and Stripe quietly rerouting payments around traditional card networks. For merchants, the pitch is irresistible: faster settlement, fewer fees, and no middlemen. With U.S. businesses spending roughly $187 billion a year on card swipe fees, even a small shift could redraw the map. Treasury Secretary Scott Bessent has hinted the stablecoin market, now at $253 billion, could reach $2 trillion in the next few years. That’s not a side bet. That’s a direct hit. Visa and Mastercard aren’t sitting still. They’re flipping the narrative, casting themselves as the connective tissue for all things digital, stablecoins included. Visa is letting banks issue digital tokens and pilot stablecoin settlement directly on its network. Mastercard, meanwhile, just teamed up with Paxos to mint and redeem USDG, the Paxos-issued fiat-backed stablecoin. The two networks are leaning into their edge: global scale, trusted rails, built-in fraud protection, and tokenization tech that masks sensitive data at checkout. That’s not just defense. It’s a strategic pivot.
Precisely’s code-light conversational interface uses MCP to connect APIs with LLMs through natural language prompts and enables instant access to location intelligence tools and rich datasets without requiring any code
Precisely has developed a lightweight setup using the Model Context Protocol (MCP) to connect APIs with large language model (LLM) interfaces like Claude Desktop. This approach eliminates the need for writing boilerplate code and allows for intuitive exploration of services through conversational interfaces. MCP offers a standardized method for AI applications to connect with APIs, data, and tools, enabling LLMs to dynamically decide which functions to invoke in response to user prompts. This aligns with Precisely’s goal of making it easier to integrate high-integrity data with applications and workflows. An MCP server was built to wrap all available endpoints from Precisely APIs, resulting in a code-light environment where Claude Desktop can execute API calls automatically based on a user’s request. The MCP server supports natural language prompts and enables instant access to location intelligence tools and rich datasets without requiring any code. It also helps scale the impact of data programs across the organization without adding to developer workload.
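The pattern behind this setup — wrap each API endpoint as a named, schema-described tool that an LLM client can discover and invoke — can be shown in a self-contained sketch. The real server is built on the MCP protocol and Precisely's actual APIs; the stdlib-only code below is only an illustration of the shape, and the `geocode_address` tool, its parameters, and its stubbed response are all hypothetical.

```python
import json

# Illustrative tool registry in the spirit of an MCP server: each API
# endpoint becomes a named tool with a description and parameter schema
# that the LLM reads, and model-issued tool calls are dispatched by name.
TOOLS = {}

def tool(name, description, parameters):
    """Decorator that registers a function as an LLM-invokable tool."""
    def register(fn):
        TOOLS[name] = {"description": description,
                       "parameters": parameters,
                       "handler": fn}
        return fn
    return register

@tool("geocode_address",
      "Resolve a street address to latitude/longitude.",
      {"address": "string"})
def geocode_address(address: str) -> dict:
    # Stub: a real server would call the location-intelligence API here.
    return {"address": address, "lat": 40.7128, "lon": -74.0060}

def list_tools() -> str:
    """The schema the LLM sees when deciding which function to invoke."""
    return json.dumps({name: {k: v for k, v in t.items() if k != "handler"}
                       for name, t in TOOLS.items()})

def dispatch(tool_call: str) -> dict:
    """Execute a model-issued tool call, e.g. {"name": ..., "arguments": ...}."""
    call = json.loads(tool_call)
    return TOOLS[call["name"]]["handler"](**call["arguments"])

result = dispatch('{"name": "geocode_address", "arguments": {"address": "1 Main St"}}')
```

Because the client (here, an LLM interface like Claude Desktop) selects tools from the published schema at runtime, adding a new endpoint means registering one more handler — which is what makes the approach "code-light" for the teams consuming the data.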