
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


New report shows while 85% of organizations trust their BI dashboards, only 58% say the same for their AI/ML model outputs, implying trust in AI remains elusive

June 20, 2025 //  by Finnovate

Ataccama’s new report, produced in partnership with BARC, finds that while 58% of organizations have implemented or optimized data observability programs – systems that monitor, detect, and resolve data quality and pipeline issues in real time – 42% still say they do not trust the outputs of their AI/ML models.

The findings reflect a critical shift: adoption is no longer the barrier. Most organizations have tools in place to monitor pipelines and enforce data policies, but trust in AI remains elusive. While 85% of organizations trust their BI dashboards, only 58% say the same for their AI/ML model outputs. The gap is widening as models rely increasingly on unstructured data and inputs that traditional observability tools were never designed to monitor or validate.

51% of respondents cite skills gaps as a primary barrier to observability maturity, followed by budget constraints and lack of cross-functional alignment. Leading teams, however, are pushing further, embedding observability into how data is designed, delivered, and maintained across domains. When observability is deeply connected to automated data quality, teams gain more than visibility: they gain confidence that the data powering their models can be trusted.

The report also underscores how unstructured data is reshaping observability strategies. Kevin Petrie, Vice President at BARC, said: “We’re seeing a shift: leading enterprises aren’t just monitoring data; they’re addressing the full lifecycle of AI/ML inputs. That means automating quality checks, embedding governance controls into data pipelines, and adapting their processes to observe dynamic unstructured objects. This report shows that observability is evolving from a niche practice into a mainstream requirement for Responsible AI.”
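To make the idea concrete, here is a minimal sketch of the kind of automated data-quality check the report describes: observing each pipeline batch against declarative rules and flagging issues before bad records reach downstream BI dashboards or AI/ML models. All names, fields, and thresholds are illustrative assumptions, not taken from the report or any vendor's product.

```python
# Illustrative data-observability check: validate a batch of records against
# simple declarative rules (required fields, numeric ranges) and report issues.
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    rows_checked: int = 0
    null_violations: int = 0
    range_violations: int = 0
    issues: list = field(default_factory=list)

    @property
    def passed(self) -> bool:
        # A batch passes only if no rule was violated.
        return not self.issues

def observe_batch(rows, required_fields, numeric_ranges):
    """Check one pipeline batch; return a QualityReport of all violations."""
    report = QualityReport()
    for i, row in enumerate(rows):
        report.rows_checked += 1
        for f in required_fields:
            if row.get(f) is None:
                report.null_violations += 1
                report.issues.append(f"row {i}: missing required field '{f}'")
        for f, (lo, hi) in numeric_ranges.items():
            v = row.get(f)
            if v is not None and not (lo <= v <= hi):
                report.range_violations += 1
                report.issues.append(f"row {i}: '{f}'={v} outside [{lo}, {hi}]")
    return report

# Hypothetical batch with one missing field and one out-of-range value.
batch = [
    {"account_id": "a1", "balance": 120.0},
    {"account_id": None, "balance": 50.0},
    {"account_id": "a3", "balance": -999999.0},
]
report = observe_batch(
    batch,
    required_fields=["account_id"],
    numeric_ranges={"balance": (-10000.0, 1e9)},
)
print(report.passed)   # False: the batch would be quarantined, not served
```

In a real observability program these checks would run continuously inside the pipeline, with failing batches quarantined and alerts routed to data owners; the structure above only shows the rule-evaluation step.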


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188
