On day two of VB Transform 2025, Joanne Chen, general partner at Foundation Capital, moderated a panel that included Shawn Malhotra, CTO at Rocket Companies; Shailesh Nalawadi, head of product at Sendbird; and Thys Waanders, SVP of AI transformation at Cognigy. The panelists shared a common discovery: companies that build evaluation and orchestration infrastructure first succeed, while those that rush to production with powerful models fail at scale.

A key part of engineering AI agents for success is understanding the return on investment (ROI). Early AI agent deployments focused on cost reduction. While that remains a key component, enterprise leaders now report more complex ROI patterns that demand different technical architectures.

For Cognigy, Waanders noted that cost per call is a key metric. If AI agents are used to automate parts of those calls, he said, it’s possible to reduce the average handling time per call.

Saving is one thing; generating more revenue is another. Malhotra reported that his team has seen conversion improvements: as clients get answers to their questions faster and have a good experience, they convert at higher rates. Nalawadi highlighted entirely new revenue capabilities through proactive outreach. His team enables proactive customer service, reaching out before customers even realize they have a problem.

While there are solid ROI opportunities for enterprises that deploy agentic AI, there are also challenges in production deployments. Nalawadi identified the core technical failure: companies build AI agents without evaluation infrastructure. It’s simply not possible, he noted, to predict every input or to write comprehensive test cases for natural language interactions. His team learned this through customer service deployments across retail, food delivery and financial services, where standard quality assurance approaches missed edge cases that emerged in production.

“We have a feature that we’re releasing soon that is about simulating potential conversations,” Waanders explained. “So it’s essentially AI agents testing AI agents.” The approach tests demographic variations, emotional states and edge cases that human QA teams can’t cover comprehensively.
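To make the idea concrete, here is a minimal sketch of what such a simulation harness could look like. It assumes a setup in which a simulator agent plays scripted customer personas against the production agent and an evaluator scores each transcript; all names here (Persona, run_simulation, the stub agents) are hypothetical illustrations, and the panel did not describe Cognigy’s actual implementation.

```python
"""Sketch of "AI agents testing AI agents": a simulator agent plays
customer personas against the production agent; an evaluator scores
the transcripts. Hypothetical names, not Cognigy's implementation."""

from dataclasses import dataclass
from typing import Callable

# An agent is any function mapping the conversation so far to its next reply.
Agent = Callable[[list[str]], str]

@dataclass
class Persona:
    """One simulated caller: a demographic/emotional profile plus a goal."""
    description: str   # e.g. "frustrated caller, billing dispute"
    opening_line: str  # how the simulated customer starts the conversation
    max_turns: int = 6

def run_simulation(production_agent: Agent, simulator_agent: Agent,
                   persona: Persona) -> list[str]:
    """Alternate turns between the simulated customer and the real agent."""
    transcript = [f"CUSTOMER: {persona.opening_line}"]
    for _ in range(persona.max_turns):
        transcript.append(f"AGENT: {production_agent(transcript)}")
        transcript.append(f"CUSTOMER: {simulator_agent(transcript)}")
    return transcript

def evaluate(transcript: list[str]) -> bool:
    """Toy evaluator: did the agent ever claim the issue was resolved?
    In practice this judge would itself typically be an LLM."""
    return any("resolved" in turn.lower() for turn in transcript)

if __name__ == "__main__":
    # Stub agents so the sketch runs without any model calls; in a real
    # harness both would wrap calls to language models.
    production_agent = lambda history: "Your issue has been resolved."
    simulator_agent = lambda history: "Are you sure? I still see the charge."

    personas = [
        Persona("frustrated caller, billing dispute",
                "I was charged twice and I want a refund now."),
        Persona("confused first-time user",
                "I don't understand this fee on my statement."),
    ]
    for p in personas:
        transcript = run_simulation(production_agent, simulator_agent, p)
        print(p.description, "->", "PASS" if evaluate(transcript) else "FAIL")
```

The design choice this illustrates is the one the panel emphasized: because no team can enumerate every natural language input by hand, coverage comes from generating many persona variations (demographics, emotional states, edge cases) and running them automatically, rather than from a fixed suite of human-written test cases.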