New data observability solutions are addressing the full lifecycle of AI/ML inputs as 42% of enterprises still don’t trust AI model outputs

June 18, 2025 // by Finnovate

Ataccama’s new report, produced in partnership with BARC, finds that while 58% of organizations have implemented or optimized data observability programs – systems that monitor, detect, and resolve data quality and pipeline issues in real time – 42% still say they do not trust the outputs of their AI/ML models. The findings reflect a critical shift: adoption is no longer the barrier. Most organizations have tools in place to monitor pipelines and enforce data policies, but trust in AI remains elusive. While 85% of organizations trust their BI dashboards, only 58% say the same of their AI/ML model outputs. The gap is widening as models rely increasingly on unstructured data and inputs that traditional observability tools were never designed to monitor or validate.

51% of respondents cite skills gaps as a primary barrier to observability maturity, followed by budget constraints and a lack of cross-functional alignment. Leading teams, however, are pushing further, embedding observability into how data is designed, delivered, and maintained across domains. When observability is tightly connected to automated data quality, teams gain more than visibility: they gain confidence that the data powering their models can be trusted. The report also underscores how unstructured data is reshaping observability strategies.

Kevin Petrie, Vice President at BARC, said: “We’re seeing a shift: leading enterprises aren’t just monitoring data; they’re addressing the full lifecycle of AI/ML inputs. That means automating quality checks, embedding governance controls into data pipelines, and adapting their processes to observe dynamic unstructured objects. This report shows that observability is evolving from a niche practice into a mainstream requirement for Responsible AI.”
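The report discusses automated quality checks only at a conceptual level. As a rough illustration, the sketch below shows one way such a gate might sit in a pipeline: a batch of records is profiled for completeness, freshness, and validity before it is allowed to feed a model. This is a minimal, hypothetical example in plain Python; the field names, thresholds, and pass/fail behavior are assumptions made for illustration, not anything drawn from Ataccama’s product or the BARC report.

"""Minimal sketch of an automated data quality gate in a pipeline.

Hypothetical example: field names, thresholds, and the gate's
behavior are illustrative assumptions, not a vendor implementation.
"""
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class QualityReport:
    total: int
    complete: int  # rows with all required fields present
    fresh: int     # rows updated within the freshness window
    valid: int     # rows whose values pass range checks

    def passes(self, threshold: float = 0.95) -> bool:
        """The batch passes only if every metric clears the threshold."""
        if self.total == 0:
            return False
        return all(n / self.total >= threshold
                   for n in (self.complete, self.fresh, self.valid))


def profile_batch(rows: list, max_age: timedelta = timedelta(days=1)) -> QualityReport:
    """Profile one batch of records before it reaches model training."""
    now = datetime.now(timezone.utc)
    complete = fresh = valid = 0
    for row in rows:
        # Completeness: required fields are populated.
        if row.get("customer_id") and row.get("amount") is not None:
            complete += 1
        # Freshness: record was updated within the allowed window.
        updated = row.get("updated_at")
        if updated is not None and now - updated <= max_age:
            fresh += 1
        # Validity: value falls inside an expected range.
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and 0 <= amount < 1_000_000:
            valid += 1
    return QualityReport(len(rows), complete, fresh, valid)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    batch = [
        {"customer_id": "c1", "amount": 120.0, "updated_at": now},
        {"customer_id": "c2", "amount": None, "updated_at": now},   # incomplete
        {"customer_id": "c3", "amount": -5.0, "updated_at": now},   # out of range
    ]
    report = profile_batch(batch)
    # A failing gate would quarantine the batch rather than feed the model.
    print("batch passes quality gate:", report.passes())

In a production setting the same pattern would typically run as an orchestrator task, with failing batches routed to a quarantine table and an alert rather than a print statement.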

