DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.

Google and partners are launching a “responsible AI sandbox” for banks to test gen AI use cases safely

July 22, 2025 // by Finnovate

Banks have been cautiously exploring generative AI, using it internally for call centers, software development, and investment research, but hesitating to deploy it directly to customers because of risks like hallucinations, toxicity, and misinformation. In response, Google, Oliver Wyman, and Corridor Platforms are launching a “responsible AI sandbox” in which banks can test gen AI use cases safely.

“The idea is, in a three month period, they really get a good sense of exactly what needs to be done for their use case to go live and what governance is needed,” said Manish Gupta, Corridor’s CEO. The sandbox includes bias, accuracy, and stability tests, and those tests are portable across models. “Tier 1 banks have been using sandboxes with good results – for example, HSBC and JPMorganChase,” said Alenka Grealish of Celent. Consultant Dov Haselkorn added, “It gets them probably three years ahead on the journey… And speed is really of the essence.” He also emphasized data risk: “Data provenance is a major topic in this field… you have to make sure that those controls are in place to make sure that none of our customer data accidentally leaks to third parties.”

The sandbox, which initially uses a version of Google’s Gemini model trained for customer service, lets banks test gen AI with internal or external data. “The industry needs to learn how to control” the adoption of AI, said Oliver Wyman’s Michael Zeltkevic; Oliver Wyman will help banks operationalize the models.

Similar efforts are underway in the U.K., where the Financial Conduct Authority’s “Supercharged Sandbox” lets firms test AI safely. “This collaboration will help those that want to test AI ideas but who lack the capabilities to do so,” said the FCA’s Jessica Rusu. Karan Jain, CEO of NayaOne, which powers the FCA sandbox, explained, “They brought the data and the AI model into NayaOne, we locked it down, and we provided the GPUs.” Jain noted that sandbox use cases include fraud detection, cybersecurity, code assistance, and compliance, but warned of lagging adoption: “The speed of adoption of technology is 10 times slower than the speed of technology that’s entering the market and the employees want it.”
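The article does not say how the sandbox’s bias, accuracy, and stability tests are implemented, so the Python sketch below is purely illustrative of the shape such pre-deployment checks could take. Every name in it (`accuracy_check`, `stability_check`, `bias_check`, the stub model) is hypothetical rather than Corridor’s or Google’s API, and `model` stands in for whatever prompt-to-response endpoint a bank is testing.

```python
# Hypothetical sketch of sandbox-style pre-deployment checks: accuracy,
# stability across repeated runs, and side-by-side bias comparison.
# Not the Corridor/Google implementation, which is proprietary.
from collections import Counter
from typing import Callable

# Any callable that maps a prompt to a response, e.g. a thin
# wrapper around the bank's model endpoint under test.
Model = Callable[[str], str]

def accuracy_check(model: Model, cases: list[tuple[str, str]]) -> float:
    """Fraction of test prompts whose response contains the expected answer."""
    hits = sum(expected.lower() in model(prompt).lower()
               for prompt, expected in cases)
    return hits / len(cases)

def stability_check(model: Model, prompt: str, runs: int = 5) -> float:
    """Share of repeated runs that agree with the most common response."""
    counts = Counter(model(prompt) for _ in range(runs))
    return counts.most_common(1)[0][1] / runs

def bias_check(model: Model, template: str, groups: list[str]) -> dict[str, str]:
    """Collect responses across group substitutions for side-by-side review."""
    return {g: model(template.format(group=g)) for g in groups}

if __name__ == "__main__":
    # A deterministic stub; a real run would wrap the model under test.
    stub = lambda prompt: "Your dispute was filed."
    print(accuracy_check(stub, [("How do I dispute a charge?", "dispute")]))
    print(stability_check(stub, "What is my card's APR?"))
    print(bias_check(stub, "Explain overdraft fees to a {group} customer.",
                     ["new", "long-standing"]))
```

Keeping the model behind a plain callable means the same checks can be rerun unchanged when a bank swaps in a different model, which is one way to read the portability the article describes.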

Read Article

Category: Additional Reading
