Ataccama’s new report, produced in partnership with BARC, finds that while 58% of organizations have implemented or optimized data observability programs – systems that monitor, detect, and resolve data quality and pipeline issues in real time – 42% still say they do not trust the outputs of their AI/ML models.

The findings reflect a critical shift: adoption is no longer the barrier. Most organizations have tools in place to monitor pipelines and enforce data policies, but trust in AI remains elusive. While 85% of organizations trust their BI dashboards, only 58% say the same of their AI/ML model outputs. The gap is widening as models rely increasingly on unstructured data and other inputs that traditional observability tools were never designed to monitor or validate.

Just over half of respondents (51%) cite skills gaps as a primary barrier to observability maturity, followed by budget constraints and a lack of cross-functional alignment. Leading teams, however, are pushing further, embedding observability into how data is designed, delivered, and maintained across domains. When observability is tightly connected to automated data quality checks, teams gain more than visibility: they gain confidence that the data powering their models can be trusted.

The report also underscores how unstructured data is reshaping observability strategies. Kevin Petrie, Vice President at BARC, said: “We’re seeing a shift: leading enterprises aren’t just monitoring data; they’re addressing the full lifecycle of AI/ML inputs. That means automating quality checks, embedding governance controls into data pipelines, and adapting their processes to observe dynamic unstructured objects. This report shows that observability is evolving from a niche practice into a mainstream requirement for Responsible AI.”
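
To make the “automating quality checks” idea concrete, here is a minimal sketch of the kind of check a team might embed in a pipeline step before data reaches a model. It is illustrative only: the names (`check_batch`, `MAX_NULL_RATE`, `MAX_STALENESS`) and thresholds are hypothetical and do not come from the report or any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds; in practice these would come from a governance policy.
MAX_NULL_RATE = 0.02                 # at most 2% missing values per column
MAX_STALENESS = timedelta(hours=1)   # data must be fresher than one hour

@dataclass
class QualityReport:
    passed: bool
    issues: list[str]

def check_batch(rows: list[dict], loaded_at: datetime) -> QualityReport:
    """Run basic observability checks on a batch before it reaches a model."""
    issues: list[str] = []

    # Freshness: stale inputs are a common silent cause of model drift.
    if datetime.now(timezone.utc) - loaded_at > MAX_STALENESS:
        issues.append(f"batch is stale (loaded at {loaded_at.isoformat()})")

    # Completeness: per-column null rate against the policy threshold.
    if rows:
        for col in rows[0].keys():
            null_rate = sum(r.get(col) is None for r in rows) / len(rows)
            if null_rate > MAX_NULL_RATE:
                issues.append(
                    f"column {col!r} null rate {null_rate:.1%} exceeds threshold"
                )
    else:
        issues.append("batch is empty")

    return QualityReport(passed=not issues, issues=issues)
```

In a pipeline, a step like this would run before model scoring, routing failed batches to alerting or quarantine instead of letting them flow downstream; that is the monitor, detect, and resolve loop the report describes.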