Virtual assistants aren’t providing the boost in customer satisfaction that they once did, according to analysis across the firm’s banking and credit card mobile app studies. The average percentage of responding consumers who used a virtual assistant fell from 33% to 30%, and average overall satisfaction among those users fell from 691 to 687. Sean Gelles, senior director of banking and payments intelligence, believes the reason behind these declines goes beyond the virtual assistants themselves, which major institutions have been developing continuously. He points to OpenAI’s ChatGPT, Microsoft Copilot, Meta AI and other GenAI tools. Virtual assistants had been looked to as the next step up, but consumers are getting used to more versatile GenAI assistants like ChatGPT and Copilot. He believes things are getting to the point where virtual assistants that were good, or good enough, in the past are no longer much of a differentiator. J.D. Power’s analysis found that customer satisfaction with apps and sites has been improving, but chiefly because of increased speed and other technical enhancements. Gelles says institutions have to begin assessing what’s missing from their digital experiences in general, as well as how GenAI could improve their offerings. Overall, ease of use of digital channels remains a sticking point for customer satisfaction, according to Gelles: across the studies, only 28% of users on average say they find the tools easy to use. In a financial context, instead of presenting static choices, Gelles says, financial players could personalize things from the get-go, drawing on what services the consumer has tapped before.
Issuing deposit tokens that are fully insured, backed by fiat deposits and remain on-balance-sheet can help banks fend off the deposit displacement threat posed by branded stablecoins from tech-native startups and retailers
Stablecoins could divert significant transaction volume and core deposits away from banks as retailers, fintechs, and Big Techs issue branded stablecoins that lead consumers to move cash into them for convenience, rewards, or programmability. This scenario could result in stablecoins becoming functional equivalents of bank deposits without the FDIC insurance, relationship ties, or regulatory protections banks provide. Deposit displacement has been happening for years, with $2.15 trillion leaving banks for fintech investment accounts, 65% of which has come from Gen Xers and Baby Boomers. JPMorgan’s deposit token is initially designed for institutional clients and will be issued on Coinbase’s Base blockchain, targeting on-chain settlements and cross-border B2B transfers. JPMorgan’s blockchain arm, Kinexys, markets it as a “deposit token”: a fully insured, interest-bearing digital representation of bank deposits that is easy to reconcile with existing banking operations. By issuing tokenized bank money instead of a traditional stablecoin, JPMorgan safeguards its deposit base, ensuring it remains on-balance-sheet and insured. The GENIUS Act provides banks with regulatory clarity, defense against shadow banking, new revenue channels, and increased oversight from prudential regulators, the SEC, FinCEN, and the Federal Reserve. Banks should prepare by defining a strategic position, identifying relevant use cases, investing in infrastructure, educating the board and C-suite, partnering with fintechs, blockchain infrastructure firms, or consortiums, and advocating for smart regulation. They must weigh the trade-off between the opportunity to innovate and the risk of disintermediation, as the cost of inaction may not be merely reputational but may be financial erosion as tech-native alternatives capture consumer funds.
Studies show that LLMs using RAG produce more “unsafe” outputs such as misinformation, can open a gateway through firewalls that allows data leakage, and suffer significant declines in accuracy as tasks become more complex
Retrieval-Augmented Generation (RAG), a method used by generative AI tools like OpenAI’s ChatGPT, is becoming a cornerstone for genAI tools, providing implementation flexibility, enhanced explainability, and composability with Large Language Models (LLMs). However, recent research suggests that RAG may be making genAI models less safe and reliable. Alan Nichol, CTO at Rasa, criticized RAG as “just a buzzword” that “just means adding a loop around large language models and data retrieval.” Two studies, by Bloomberg and The Association for Computational Linguistics (ACL), found that using RAG with LLMs can reduce their safety, even when both the models and the documents they access are sound. Both studies found that “unsafe” outputs such as misinformation or privacy risks increased under RAG. RAG therefore needs strong guardrails and red-teaming: researchers actively trying to find flaws, vulnerabilities, or weaknesses in a system, often by thinking like an adversary. To fully unlock RAG’s potential, enterprises also need to include fragmented structured data, such as customer information. RAG can also create a gateway through firewalls, allowing for data leakage, so security and data governance become critical with RAG architecture. Separately, Apple’s research paper on Large Reasoning Models (LRMs) found that as tasks became more complex, both standard LLMs and LRMs experienced a significant decline in accuracy, reaching near-zero performance.
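The studies concern the standard RAG pattern: retrieve documents relevant to a query, then feed them into the model’s prompt. A minimal, self-contained sketch of that pattern (the toy corpus, overlap-based scorer and prompt template are illustrative assumptions, not any vendor’s implementation) shows why retrieval guardrails matter: whatever is retrieved flows straight into the model’s context.

```python
# Minimal sketch of the RAG pattern: score documents against a query,
# retrieve the top-k, and build an augmented prompt for the LLM.
# The corpus, scorer and template below are illustrative assumptions.
from collections import Counter

DOCUMENTS = [
    "USDC is a regulated stablecoin issued by Circle.",
    "Retrieval-Augmented Generation grounds model answers in retrieved text.",
    "Coding agents can accrue technical debt without supervision.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: word overlap between query and document."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt that would be sent to the LLM.

    Every retrieved passage flows straight into the model's context,
    which is why weak retrieval guardrails can surface unsafe content
    or leak data the model would otherwise never see.
    """
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("What is Retrieval-Augmented Generation?"))
```

In production the scorer would typically be a vector similarity search and the assembled prompt would go to an LLM; the safety findings above arise because the model treats all retrieved text as trusted context.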
Coding agents, by integrating with vast codebases, drastically reduce time-to-code from days to hours, while elevating engineers to a managerial role focused on system architecture, maintainability and security
Augment Code firmly believes that coding agents drastically reduce time-to-code from days to hours, while freeing engineers to focus on higher-level decisions such as system architecture, maintainability and security. Rather than removing humans from the loop, coding agents place them in a managerial role, curating output and guiding development, according to Guy Gur-Ari, co-founder of Augment Code. The value of coding agents lies in speed, breadth of knowledge and the agent’s ability to integrate with vast codebases and libraries. “We’re not setting out to replace software engineers; we’re setting out to augment them and help them be more productive,” Gur-Ari said. “And that’s certainly what we’re seeing with the agent now. Things that used to take a developer hours, days or weeks to perform can now be done in hours or less. But to us, what that means is developers can ultimately then be more productive and produce more and build even more amazing products than they could before.” Importantly, the success of these tools hinges on supervision. Left unchecked, agents can accrue technical debt quickly, according to Gur-Ari. But with the right oversight, they produce high-quality, scalable code quickly. “We work on developing an AI assistant for professional developers that works on large code bases, large teams for performing real software development tasks. To me, everything that’s happening now with large language models and code is just incredibly exciting.”
AI is transforming traditional spreadsheet models into structured, controlled and web-accessible applications with well-defined inputs and outputs, consistent formatting and clear documentation that can enable secure interaction with AI-based tools
AI and machine learning systems rely on structured, high-quality data to function effectively, along with well-defined inputs and outputs, consistent formatting, clear documentation, and reliable access to the underlying logic. The problem is that spreadsheets very rarely meet these standards. When proprietary models and sensitive financial data are introduced into external AI tools, the risk that this data could be exposed or shared with third parties is extremely high. Instead of abandoning their spreadsheet models altogether, a more practical solution for companies is to modernise them. By updating their software, finance departments can transform traditional spreadsheet models into structured, controlled and web-accessible applications. Companies can also retain the institutional knowledge embedded in their existing tools while removing the restrictions that currently prevent AI integration. This includes wrapping spreadsheet logic within a web-based interface that enforces consistent input and output formats, automates validation, and manages access through user permissions. More importantly, this strategic approach enables seamless interaction with AI-based tools. These web-based applications can remove the complications that accompany spreadsheet use by applying structured data, standardised formats, and transparent workflows, creating a setting where AI can be used effectively without manual intervention. By transforming traditional spreadsheet models into secure and structured web applications, companies can modernise their workflows without disruption, leading to a secure, scalable and AI-ready finance function.
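As a concrete illustration of this modernisation pattern, the sketch below wraps a spreadsheet-style calculation (a PMT-like loan payment formula, chosen as a stand-in example) in a function with validated inputs and a structured, documented output that an AI tool could call safely. The field names and formula choice are illustrative assumptions, not any specific product’s API.

```python
# Spreadsheet logic rehosted behind a well-defined input/output contract:
# typed inputs, automated validation, and a consistent structured result.
from dataclasses import dataclass

@dataclass
class LoanInputs:
    principal: float      # amount borrowed
    annual_rate: float    # e.g. 0.06 for 6%
    months: int           # repayment term

    def validate(self) -> None:
        """Automated validation replacing ad-hoc spreadsheet checks."""
        if self.principal <= 0:
            raise ValueError("principal must be positive")
        if not 0 < self.annual_rate < 1:
            raise ValueError("annual_rate must be a fraction between 0 and 1")
        if self.months <= 0:
            raise ValueError("months must be positive")

def monthly_payment(inputs: LoanInputs) -> dict:
    """Standard amortization (PMT) formula with a structured contract."""
    inputs.validate()
    r = inputs.annual_rate / 12
    payment = inputs.principal * r / (1 - (1 + r) ** -inputs.months)
    # Consistent keys and units, ready for a web API or an AI tool to consume.
    return {"monthly_payment": round(payment, 2), "currency": "USD"}

print(monthly_payment(LoanInputs(principal=10_000, annual_rate=0.06, months=36)))
```

The same contract can then be exposed over a web interface with per-user permissions, giving AI tools the consistent formatting and documented logic the passage describes, without handing them the raw spreadsheet.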
Boomi’s AI solution offers low-code integration tools, a visual design interface and scalable agent orchestration to enable secure and adaptive deployment of agents across hybrid environments
Boomi Agentstudio, a secure AI management solution, is changing how organizations handle data integration and automation across hybrid environments by bridging the gap between cloud agility and on-premises control. With Agentstudio, developers gain access to Boomi’s renowned low-code integration tools, a visual design interface and scalable agent orchestration, making hybrid deployments easier and more adaptive, according to Mani Gill, vice president of product management AI and data at Boomi LP. As part of Boomi Agentstudio, the Boomi Agent Control Tower serves as a centralized dashboard for monitoring, scaling and orchestrating agent activity across distributed environments. Together, Agentstudio and the Control Tower enable a streamlined hybrid integration strategy, delivering both development agility and deployment robustness, according to Gill. Boomi Agentstudio takes a platform-based approach, centralizing the design, deployment and management of integration agents across hybrid environments. This approach is enabled by the Boomi Enterprise Platform, which provides a comprehensive suite of tools for integration, API management, data quality and workflow automation, according to Gill.
Waymo robotaxis can now be hailed in Atlanta via Uber through “Waymo on Uber” service that splits the responsibilities of owning and operating a fleet of driverless vehicles; users can now set their preferences to increase the chances of being matched with a Waymo
Waymo robotaxis can now be hailed in Atlanta via Uber. The two companies, which already offer the “Waymo on Uber” service in Austin, said the commercial service will initially cover about 65 square miles in Atlanta. The launch, if successful, is poised to propel the businesses of both companies. Uber, which has locked in partnerships with 18 autonomous vehicle companies, said it has an annual run rate of 1.5 million mobility and delivery AV trips on its network. Meanwhile, Waymo said it provides 250,000 paid robotaxi rides every week across Austin, Los Angeles, Phoenix, and San Francisco. The addition of Atlanta to that list should push those numbers up. Waymo’s fleet in Atlanta numbers in the “dozens” and will be expanded over time; the companies have previously said the fleet shared between Atlanta and Austin will grow to the hundreds. The “Waymo on Uber” service is a hybrid of sorts where robotaxis and human-driven vehicles intermingle. Uber users can set their preferences in the app to increase the chances of being matched with a Waymo. That structure differs from the other markets where Waymo operates; in those cities, riders use the Waymo One app to hail robotaxis. The “Waymo on Uber” service splits the responsibilities of owning and operating a fleet of driverless vehicles: Uber handles the charging, maintenance, and cleaning of the autonomous vehicles, and manages access to the robotaxis via its app, while Waymo monitors the tech and the autonomous operations, including roadside assistance and certain aspects of rider support.
OpenAI ramps up office productivity features: ChatGPT can now record and transcribe any meeting, brainstorming session or voice note, pull out key points and turn them into follow-ups, plans and code, orchestrating tasks rather than just automating them
OpenAI is busy rolling out a suite of office productivity features on ChatGPT that puts it in direct competition with its main investor and partner, Microsoft, and key rival, Google. Since early June, OpenAI has buffed up ChatGPT to do office work:
- Record Mode: Record and transcribe any meeting, brainstorming session or voice note. ChatGPT will pull out key points and turn them into follow-ups, plans and code.
- Enhanced Projects: Projects now have deep research, voice, improved memory, file-uploading capability and model selection.
- Advanced Voice: Voice now offers live translation and smoother interaction.
- Connectors: ChatGPT can pull data from Microsoft Outlook, Microsoft Teams, Microsoft OneDrive, Microsoft SharePoint, Google Drive, Gmail, Google Calendar, Dropbox and more.
- Updated Canvas: The side-by-side editing capability can now export documents in PDF, docx or markdown formats.
AI-native workflows are the future. Read.ai, Otter.ai and Microsoft Copilot are “now in ChatGPT’s competitive crosshairs. The difference? ChatGPT isn’t just automating tasks; it’s orchestrating them, end-to-end, with context and language-level intelligence.” We’re seeing the beginning of the ‘invisible app era’ where productivity doesn’t live in documents; it lives in dynamic, AI-mediated interactions.
Kognitos platform combines the reasoning of symbolic logic with AI to transform tribal and system knowledge into documented, automated processes, shrinking the automation lifecycle while ensuring hallucination-free output and full governance
Kognitos launched its groundbreaking neurosymbolic AI platform, the industry’s first to uniquely combine the reasoning of symbolic logic with the learning power of modern AI. This unified platform empowers enterprises to address hundreds of business automation use cases, consolidate their AI tools and reduce technology sprawl. Kognitos uniquely transforms tribal and system knowledge into documented, automated processes, establishing a new, dynamic system of record for business operations. Using English as code, businesses can achieve automation in minutes with pre-configured workflows and a free community edition. “With Kognitos, we’re automating processes we thought were out of reach, thanks to hallucination-free AI and natural language capabilities,” said customer Christina Jalaly at Boost Mobile. “The agility and speed to value are game-changing, consistently delivering roughly 23x ROI and tangible results. Kognitos is a key partner in transforming our operations.” Kognitos also addresses complex “long tail” automation challenges. Its patented Process Refinement Engine keeps documented automation current and optimized using AI. This shrinks the automation lifecycle, where testing, deployment, monitoring and changes are all English-based and AI-accelerated. Key innovations launched today include: The Kognitos Platform Community Edition; Hundreds of pre-built workflows; Built-in document and Excel processing; Automatic agent regression testing; Browser use.
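The “English as code” idea can be illustrated with a deliberately simple toy: plain-English steps matched to registered handlers and executed in order. This is an assumption-laden sketch of the general concept only, not Kognitos’s actual interpreter or its natural language engine.

```python
# Toy "English as code" dispatcher: each English step is matched against
# registered regex patterns and routed to the corresponding handler.
import re

HANDLERS = {}

def step(pattern: str):
    """Register a handler for English steps matching a regex pattern."""
    def register(fn):
        HANDLERS[re.compile(pattern, re.IGNORECASE)] = fn
        return fn
    return register

@step(r"add (\d+) and (\d+)")
def add(a, b):
    return int(a) + int(b)

@step(r"uppercase the word (\w+)")
def upper(word):
    return word.upper()

def run(script: str) -> list:
    """Execute each English line by dispatching to the matching handler."""
    results = []
    for line in script.strip().splitlines():
        for pattern, fn in HANDLERS.items():
            m = pattern.fullmatch(line.strip())
            if m:
                results.append(fn(*m.groups()))
                break
        else:
            raise ValueError(f"no handler understands: {line!r}")
    return results

print(run("Add 2 and 3\nUppercase the word invoice"))
```

A real system would use an LLM to interpret steps rather than regexes, with the symbolic layer constraining execution to known operations, which is one way the hallucination-free claim in the passage can be understood.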
Fiserv-Circle partnership to offer banks and fintechs seamless access to regulated and interoperable digital dollar infrastructure for launching stablecoin-based solutions with real-time settlement and expanded global reach
Circle Internet Group announced a strategic collaboration with Fiserv to jointly explore and develop stablecoin-enabled solutions for financial institutions and merchants within the Fiserv ecosystem. This collaboration will bring together Circle’s comprehensive stablecoin platform, including its regulated USDC infrastructure and Circle Payments Network, with Fiserv’s industry-leading digital banking and payment capabilities. The initiative aims to equip banks and fintechs leveraging Fiserv’s digital asset platform and branded stablecoin with seamless access to digital dollar infrastructure, enabling enhanced payment experiences, real-time settlement, and expanded global reach. Through this integration, Fiserv clients adopting stablecoin-based solutions will gain access to the power of interoperable, regulated digital dollars. Leveraging Circle’s infrastructure, Fiserv intends to offer capabilities that connect domestic and cross-border payment use cases to a modern internet-native financial layer, providing high-speed, low-cost transactions that move at the speed of the internet. By integrating with Circle’s infrastructure, Fiserv is positioned to extend the benefits of stablecoin-based payments and open internet finance to thousands of financial institutions. This effort builds on Circle’s longstanding commitment to partnering with a wide range of fintechs, payment processors, and regulated banks to unlock new opportunities in the digital asset economy. As part of this initiative, the companies will work together to design and deploy scalable solutions that harness the liquidity, interoperability, and compliance frameworks that underpin USDC and the Circle platform.
