Relyance AI, a data governance platform provider that secured $32.1 million in Series B funding last October, is launching a new solution aimed at solving one of the most pressing challenges in enterprise AI adoption: understanding exactly how data moves through complex systems. The company’s new Data Journeys platform addresses a critical blind spot for organizations implementing AI: tracking not just where data resides, but how and why it’s being used across applications, cloud services, and third-party systems. Data Journeys provides a comprehensive view, showing the complete data lifecycle from original collection through every transformation and use case. The system starts with code analysis rather than simply connecting to data repositories, giving it context about why data is being processed in specific ways. Data Journeys delivers value in four critical areas. First, compliance and risk management: the platform enables organizations to prove the integrity of their data practices when facing regulatory scrutiny. Second, precise bias detection: rather than just examining the immediate dataset used to train a model, companies can trace potential bias to its source. Third, explainability and accountability: for high-stakes AI decisions like loan approvals or medical diagnoses, understanding the complete data provenance becomes essential. Finally, regulatory compliance: the platform provides a “mathematical proof point” that companies are using data appropriately, helping them navigate increasingly complex global regulations. Customers have seen 70-80% time savings in compliance documentation and evidence gathering.
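To make the lifecycle-tracking idea concrete, the sketch below shows what a per-field provenance chain might look like in code. This is purely illustrative, assuming a simple event-log model; it is not Relyance’s actual data model, and all system and field names are invented.

```python
# Illustrative sketch of a data-provenance chain; NOT Relyance's actual
# data model. Each field carries an ordered log of lifecycle events.
from dataclasses import dataclass, field


@dataclass
class LineageEvent:
    system: str     # e.g. "signup-service", "etl-pipeline", "vendor-api"
    operation: str  # "collected", "transformed", "shared", "used-for-training"
    purpose: str    # the *why*: "account-creation", "analytics", ...


@dataclass
class DataJourney:
    field_name: str                                   # e.g. "customer.email"
    events: list[LineageEvent] = field(default_factory=list)

    def record(self, system: str, operation: str, purpose: str) -> None:
        self.events.append(LineageEvent(system, operation, purpose))

    def audit_trail(self) -> str:
        """Render the full lifecycle for a compliance review."""
        return " -> ".join(
            f"{e.operation}@{e.system} ({e.purpose})" for e in self.events
        )


journey = DataJourney("customer.email")
journey.record("signup-service", "collected", "account-creation")
journey.record("etl-pipeline", "transformed", "analytics")
journey.record("ml-platform", "used-for-training", "churn-model")
print(journey.audit_trail())
```

A chain like this is what lets a team answer both compliance questions (“who shared this field, and why?”) and bias questions (“which upstream source fed this training set?”) from a single record.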
Apache Airflow 3.0’s event-driven data orchestration makes real-time, multi-step inference processes possible at scale across various enterprise use cases
The Apache Airflow community is out with its biggest update in years with the debut of the 3.0 release. Apache Airflow 3.0 addresses critical enterprise needs with an architectural redesign that could improve how organizations build and deploy data applications. Unlike previous versions, this release breaks away from a monolithic package, introducing a distributed client model that provides flexibility and security. The new architecture allows enterprises to execute tasks across multiple cloud environments, implement granular security controls, support diverse programming languages, and enable true multi-cloud deployments. Airflow 3.0’s expanded language support is also notable. While previous versions were primarily Python-centric, the new release natively supports multiple programming languages: Airflow 3.0 launches with Python and Go, with support for Java, TypeScript and Rust planned. This means data engineers can write tasks in their preferred programming language, reducing friction in workflow development and integration. The release also brings event-driven scheduling: instead of running a data processing job every hour, Airflow can now start the job automatically when a specific data file is uploaded or when a particular message appears, such as data loaded into an Amazon S3 cloud storage bucket or a streaming data message in Apache Kafka.
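As a rough illustration of that event-driven pattern, here is a minimal sketch using Airflow 3.0’s asset-based scheduling (assets replace the Dataset concept from 2.x). Exact imports and required arguments may vary slightly between releases, and the S3 URI and DAG names are hypothetical.

```python
# A minimal sketch of Airflow 3.0's event-driven (asset-based) scheduling.
# Assumes the Airflow 3.x task SDK; the S3 URI is hypothetical.
from airflow.sdk import DAG, Asset, task

# An asset representing a file landing in object storage; a task that
# declares it as an outlet marks it "updated" when the task succeeds.
raw_events = Asset("s3://example-bucket/raw/events.json")

with DAG(dag_id="ingest_producer"):
    @task(outlets=[raw_events])
    def ingest():
        """Write the file; Airflow records an update to raw_events."""

    ingest()

# No cron schedule here: this DAG runs whenever raw_events is updated,
# e.g. by the upload above or an external event mapped to the asset.
with DAG(dag_id="inference_consumer", schedule=[raw_events]):
    @task
    def run_inference():
        """Multi-step inference on the newly arrived data."""

    run_inference()
```

The design shift is that the consumer DAG reacts to the data itself rather than to the clock, which is what enables the real-time, multi-step inference pipelines described above.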
Datadog unifies observability across data and applications, combining AI with column-level lineage to detect, resolve and prevent data quality problems
Cloud security and application monitoring giant Datadog is looking to expand the scope of its data observability offerings after acquiring a startup called Metaplane. By adding Metaplane’s tools to its own suite, Datadog said, it will enable its users to identify and take instant action to remedy any data quality issues affecting their most critical business applications. Metaplane has built an end-to-end data observability platform that combines AI with column-level lineage to detect, resolve and prevent data quality problems. It’s an important capability for any company trying to make data-driven decisions, since “bad” data means those decisions are being made based on the wrong insights. The platform notifies customers of data quality issues through the tools they already use, such as Slack, PagerDuty and the like. Datadog vice president Michael Whetten said Metaplane’s offerings will help the company unify observability across data and applications so its customers can “build reliable AI systems.” When the acquisition closes, Metaplane will continue to support its existing customers as a standalone product, though it will be rebranded as “Metaplane by Datadog.” Of course, Datadog will also look to integrate Metaplane’s capabilities within its own platform, and will likely do its utmost to bring Metaplane’s customers on board.
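The kind of column-level check such a platform automates can be pictured roughly as follows. This is a generic sketch, not Metaplane’s API; the threshold, table and column names, and sample data are all invented.

```python
# Generic illustration of a column-level data quality monitor; NOT
# Metaplane's API. Thresholds and table/column names are invented.
import statistics


def null_rate(values: list) -> float:
    """Fraction of NULL-like entries in a column sample."""
    return sum(v is None for v in values) / len(values)


def check_column(name: str, today: list,
                 history: list[float], sigmas: float = 3.0) -> None:
    """Alert when today's null rate drifts far from its historical mean,
    the style of per-column anomaly test an observability tool might learn."""
    rate = null_rate(today)
    mean, stdev = statistics.mean(history), statistics.stdev(history)
    if abs(rate - mean) > sigmas * max(stdev, 1e-9):
        # In production this would page Slack/PagerDuty and walk the
        # column-level lineage graph to flag downstream dashboards at risk.
        print(f"ALERT {name}: null rate {rate:.1%} vs baseline {mean:.1%}")


check_column(
    "orders.customer_id",
    today=[1, None, 3, None, None, 6],
    history=[0.01, 0.02, 0.015, 0.01],
)
```

Column-level lineage is what turns an alert like this from noise into action: it tells you which reports and models downstream actually consume the broken column.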
Candescent and Ninth Wave’s integrated open data solution to facilitate secure, API-based, consumer-permissioned data sharing for banks and credit unions of all sizes and enable compliance with US CFPB Rule 1033
US digital banking platform Candescent has expanded its partnership with Ninth Wave to launch an integrated open data solution for banks and credit unions. The new offering is designed to facilitate secure, API-based, consumer-permissioned data sharing for banks and credit unions of all sizes. The development aims to support institutions in enhancing customer experience, operational efficiency, and regulatory compliance, including adherence to the US Consumer Financial Protection Bureau’s Rule 1033. The expanded collaboration seeks to replace traditional data-sharing practices, such as screen scraping and manual uploads, with modern, transparent alternatives. The new solution offers seamless integration with third-party applications used by both retail and business banking customers. Candescent chief product officer Gareth Gaston said: “With our integrated solution, banks and credit unions will be able to access Ninth Wave open data capabilities from within the Candescent digital banking platform.” By adopting this model, financial institutions are expected to gain improved control over shared data, as well as stronger compliance with evolving regulatory standards. Ninth Wave founder and CEO George Anderson said: “This partnership will allow financial institutions of all sizes to gain the operational efficiencies, reliability, and scalability of a single point of integration to open finance APIs and business applications.”
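Schematically, the consumer-permissioned pattern that replaces screen scraping looks like the sketch below: the consumer grants a scoped token, and the third party calls a documented API the bank controls. All endpoints, scopes, and field names here are hypothetical, not Candescent’s or Ninth Wave’s actual API.

```python
# Schematic sketch of consumer-permissioned data sharing; endpoints,
# scopes, and field names are hypothetical, not the Candescent /
# Ninth Wave API. Requires the third-party `requests` library.
import requests

API = "https://api.example-bank.com"  # hypothetical open-finance endpoint


def exchange_consent_for_token(auth_code: str) -> str:
    """OAuth-style exchange: consumer's one-time consent code -> scoped token."""
    resp = requests.post(f"{API}/oauth/token", data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "scope": "accounts:read transactions:read",  # consumer-permissioned scope
    })
    resp.raise_for_status()
    return resp.json()["access_token"]


def fetch_transactions(token: str, account_id: str) -> list[dict]:
    """The third-party app reads only what the consumer permitted, over an
    API the bank can audit and revoke, instead of scraping credentials."""
    resp = requests.get(
        f"{API}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["transactions"]
```

Compared with screen scraping, the token can be scoped and revoked, which is the control and auditability that Rule 1033-style compliance turns on.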
Reducto’s ingestion platform turns unstructured data that’s locked in complex documents into accurate LLM-ready inputs for AI pipelines
Reducto, the most accurate ingestion platform for unlocking unstructured data for AI pipelines, has raised a $24.5M Series A round of funding led by Benchmark, alongside existing investors First Round Capital, BoxGroup and Y Combinator. “Reducto’s unique technology enables companies of all sizes to leverage LLMs across a variety of unstructured data, regardless of scale or complexity,” said Chetan Puttagunta, General Partner at Benchmark. “The team’s incredibly fast execution on product development further underscores their commitment to delivering state-of-the-art software to customers.” Reducto turns complex documents into accurate, LLM-ready inputs, allowing AI teams to reliably use the vast data that’s locked in PDFs and spreadsheets. Ingestion is a core bottleneck for AI teams today because traditional approaches fail to extract and chunk unstructured data accurately. These input errors lead to inaccurate and hallucinated outputs, making LLM applications unreliable for many real-world use cases such as processing medical records and financial statements. In benchmark studies, Reducto has proven significantly more accurate than legacy providers like AWS, Google and Microsoft, in some cases by a margin of 20+ percent, alongside significant processing speed improvements. This is critical for high-stakes, production AI use cases.
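The ingestion bottleneck described above can be made concrete with a minimal parse-and-chunk baseline. This sketch is a generic pipeline, not Reducto’s method; it assumes the open-source pypdf library, and the file name is hypothetical.

```python
# Generic parse-and-chunk ingestion baseline; NOT Reducto's method.
# Assumes pypdf (pip install pypdf). The chunking is naive on purpose:
# errors introduced here propagate into every downstream LLM answer.
from pypdf import PdfReader


def extract_text(path: str) -> str:
    """Flatten a PDF to plain text. Tables, multi-column layouts, and
    figures are exactly where naive extraction like this goes wrong."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def chunk(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
    """Fixed-size chunks with overlap, a common baseline. Splitting a
    table row or clause across chunks is one way bad inputs become
    hallucinated outputs."""
    step = max_chars - overlap
    return [text[start:start + max_chars]
            for start in range(0, len(text), step)]


text = extract_text("financial_statement.pdf")  # hypothetical file
for i, piece in enumerate(chunk(text)):
    print(i, piece[:60].replace("\n", " "))
```

The accuracy gap the article cites lives in these two functions: layout-aware extraction and structure-aware chunking are precisely what a naive baseline like this lacks.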