
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Lessons learned from agentic AI leaders: enterprise leaders now report more complex ROI patterns that demand different technical architectures

June 30, 2025 //  by Finnovate

On day two of VB Transform 2025, a panel moderated by Joanne Chen, general partner at Foundation Capital, brought together Shawn Malhotra, CTO at Rocket Companies, Shailesh Nalawadi, head of product at Sendbird, and Thys Waanders, SVP of AI transformation at Cognigy. Their shared discovery: companies that build evaluation and orchestration infrastructure first succeed, while those that rush powerful models into production fail at scale.

A key part of engineering AI agents for success is understanding the return on investment (ROI). Early AI agent deployments focused on cost reduction. While that remains a key component, enterprise leaders now report more complex ROI patterns that demand different technical architectures. For Cognigy, Waanders noted that cost per call is a key metric: if AI agents automate parts of those calls, the average handling time per call can be reduced. Saving is one thing; making more revenue is another. Malhotra reported that his team has seen conversion improvements: as clients get answers to their questions faster and have a good experience, they convert at higher rates. Nalawadi highlighted entirely new revenue capabilities through proactive outreach; his team enables proactive customer service, reaching out before customers even realize they have a problem.

While there are solid ROI opportunities for enterprises that deploy agentic AI, production deployments also bring challenges. Nalawadi identified the core technical failure: companies build AI agents without evaluation infrastructure. He noted that it is simply not possible to predict every possible input or write comprehensive test cases for natural language interactions. His team learned this through customer service deployments across retail, food delivery and financial services, where standard quality assurance approaches missed edge cases that emerged in production. "We have a feature that we're releasing soon that is about simulating potential conversations," Waanders explained. "So it's essentially AI agents testing AI agents." The approach tests demographic variations, emotional states and edge cases that human QA teams can't cover comprehensively.
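The "AI agents testing AI agents" idea can be made concrete with a short sketch of a simulation-style evaluation harness: a tester agent plays scripted personas (demographic variations, emotional states, scenarios) against the agent under test, and simple checks score each conversation. Cognigy has not published its implementation, so everything below is an assumption: the persona lists, the simulate_user_turn and production_agent stand-ins, and the toy pass criteria are all illustrative, with the LLM calls replaced by canned responses so the sketch runs on its own.

    """Illustrative sketch: simulation-based evaluation where one agent tests another.

    The simulate_user_turn and production_agent functions are stand-ins for real
    LLM calls; nothing here reflects Cognigy's actual implementation.
    """

    import itertools

    # Persona dimensions the panel mentioned: demographic variations and emotional states.
    DEMOGRAPHICS = ["new customer", "long-time customer", "non-native speaker"]
    EMOTIONS = ["calm", "frustrated", "confused"]
    SCENARIOS = ["billing dispute", "order not delivered", "account locked"]


    def simulate_user_turn(persona: dict, turn: int) -> str:
        """Hypothetical 'tester' agent: produces the next user utterance for a persona.

        A real system would call an LLM conditioned on the persona and the
        conversation history; canned text keeps this sketch self-contained.
        """
        if turn == 0:
            return (f"Hi, I'm a {persona['demographic']} and I'm {persona['emotion']}: "
                    f"{persona['scenario']}.")
        return "That didn't fully answer my question, can you clarify?"


    def production_agent(message: str) -> str:
        """Stand-in for the agent under test (the one that would ship to production)."""
        if "billing" in message:
            return "I can help with billing. I've opened a dispute ticket for you."
        return "I'm sorry you're having trouble. Could you share your account email?"


    def evaluate_conversation(persona: dict, max_turns: int = 3) -> dict:
        """Run one simulated conversation and apply a simple pass/fail check."""
        transcript = []
        for turn in range(max_turns):
            user_msg = simulate_user_turn(persona, turn)
            agent_msg = production_agent(user_msg)
            transcript.append((user_msg, agent_msg))

        # Toy success criterion; real evaluations would use rubric-based or LLM judges.
        resolved = any("ticket" in agent_msg for _, agent_msg in transcript)
        return {"persona": persona, "resolved": resolved, "turns": len(transcript)}


    if __name__ == "__main__":
        results = []
        for demographic, emotion, scenario in itertools.product(DEMOGRAPHICS, EMOTIONS, SCENARIOS):
            persona = {"demographic": demographic, "emotion": emotion, "scenario": scenario}
            results.append(evaluate_conversation(persona))

        pass_rate = sum(r["resolved"] for r in results) / len(results)
        print(f"Simulated {len(results)} personas, resolution rate: {pass_rate:.0%}")

In a real deployment the canned responses would be replaced by LLM calls and the pass/fail check by rubric- or judge-based scoring, but the structure, enumerating personas and replaying whole conversations rather than single prompts, is the part of the approach the panel was describing.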


Category: Members, Additional Reading


