FICO has introduced FICO Marketplace, a digital hub designed to connect organizations with data and analytics providers. The new Marketplace offers easy access to data, artificial intelligence (AI) models, optimization tools, decision rulesets, and machine learning models that help enterprises turn AI into business outcomes. With FICO Marketplace, FICO® Platform users can fast-track their journey to becoming an intelligent enterprise, because they will be able to:
Unlock Value from Data Faster: by experimenting with new data sources and decision assets to determine their predictive power and business value. Users can expect to cut the time required to access, validate and integrate new data sources by half.
Leverage Decision Agents Across Multiple Use Cases, Improving Collaboration: the Marketplace's open API architecture allows any decision asset, data service, analytics model, software agent or third-party solution to address a wide range of use cases, including customer management, fraud, originations, and marketing. Reusing decision agents across multiple departments breaks down silos and improves collaboration.
Drive Better Customer Experiences: by enabling a holistic view of each individual customer, and by building new intelligent solutions and analytic capabilities that come from industry collaboration.
"FICO Marketplace will facilitate the type of collaboration across the industry that drives the next generation of intelligent solutions," said Nikhil Behl, president, Software, FICO.
Bigeye introduces the first platform for governing AI data usage, combining enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data with observability and enforcement
Bigeye announced the industry's first AI Trust Platform for agent data usage, defining a new technology category built for enterprise AI trust and governance. Bigeye is enabling safe adoption of agentic AI by developing a comprehensive platform that supports the governance, observability, and enforcement of AI systems interacting with enterprise data. Without visibility into agent behavior, lineage between data sources and outputs, or controls over sensitive data access, organizations are left exposed to compliance risks, bad decisions, and reputational damage. Delivering on this framework requires a new approach to managing and securing AI agent data. An AI Trust Platform meets these requirements through three foundational capabilities:
Governance: enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data.
Observability: real-time lineage, classification, and anomaly detection that verify the quality, security, and compliance posture of data before it powers critical AI decisions.
Enforcement: monitoring, guiding, or steering every agent's data access based on enterprise policy.
Bigeye's AI Trust Platform brings these capabilities together to give enterprises complete control over how agents access and act on data. The first version will be released in late 2025.
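To make those three capabilities concrete, here is a minimal, hypothetical sketch of policy-gated agent data access. It is not Bigeye's actual API; the policy fields, agent names, and audit log are illustrative assumptions showing how governance (a policy), enforcement (a check before data is returned), and observability (a decision log) fit together.

```python
# Hypothetical sketch of policy-gated agent data access (not Bigeye's actual API).
from dataclasses import dataclass
from typing import List


@dataclass
class DataAssetPolicy:
    """Governance: an enterprise policy attached to a dataset."""
    asset: str
    allowed_agents: List[str]
    requires_certified: bool = True
    contains_pii: bool = False


@dataclass
class AccessDecision:
    agent: str
    asset: str
    allowed: bool
    reason: str


AUDIT_LOG: List[AccessDecision] = []  # observability: every decision is recorded


def enforce(agent: str, policy: DataAssetPolicy, asset_is_certified: bool) -> AccessDecision:
    """Enforcement: allow or block an agent's read based on the policy."""
    if agent not in policy.allowed_agents:
        decision = AccessDecision(agent, policy.asset, False, "agent not authorized for this asset")
    elif policy.requires_certified and not asset_is_certified:
        decision = AccessDecision(agent, policy.asset, False, "asset has not passed certification checks")
    elif policy.contains_pii:
        decision = AccessDecision(agent, policy.asset, False, "PII assets are blocked for autonomous agents")
    else:
        decision = AccessDecision(agent, policy.asset, True, "policy checks passed")
    AUDIT_LOG.append(decision)
    return decision


policy = DataAssetPolicy(asset="warehouse.orders", allowed_agents=["support-copilot"])
print(enforce("support-copilot", policy, asset_is_certified=True))   # allowed
print(enforce("marketing-agent", policy, asset_is_certified=True))   # blocked
```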
TensorStax’s data engineering AI agents can design and deploy data pipelines through structured and predictable orchestration using a deterministic control layer that sits between the LLM and the data stack
Startup TensorStax is applying AI agents, which can perform tasks on behalf of users with minimal intervention, to the challenge of data engineering. The startup addresses the reliability problems of LLM-driven automation by creating a purpose-built abstraction layer that ensures its AI agents can design, build and deploy data pipelines dependably. Its proprietary LLM Compiler acts as a deterministic control layer that sits between the LLM and the data stack to facilitate structured and predictable orchestration across complex data systems. Among other things, it validates syntax, normalizes tool interfaces and resolves dependencies ahead of time. According to internal testing, this boosts the success rates of its AI agents from 40%-50% to as high as 90% across a variety of data engineering tasks. The result is far fewer broken data pipelines, giving teams the confidence to offload complicated engineering tasks to AI agents. TensorStax says its AI agents can help mitigate the operational complexities involved in data engineering, freeing up engineers to focus on more complex and creative tasks, such as modeling business logic, designing scalable architectures and enhancing data quality. By integrating directly with each customer's existing data stack, TensorStax makes it possible to introduce AI agent data engineers without disrupting workflows or rebuilding data infrastructure. The agents are designed to work with dozens of common data engineering tools and respond to simple commands. Constellation Research Inc. analyst Michael Ni said TensorStax appears to be architecturally different from others, with its LLM compiler, its integration with existing tools and its no-customer-data-touch approach.
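The sketch below illustrates the general idea of a deterministic control layer between an LLM and the data stack: validate syntax, normalize tool names, and resolve dependencies before anything runs. It is an illustrative assumption in the spirit of what TensorStax describes, not the company's actual LLM Compiler.

```python
# Illustrative sketch of a deterministic check layer for LLM-proposed pipelines.
import json
from graphlib import TopologicalSorter  # stdlib: orders dependencies, detects cycles

TOOL_ALIASES = {"s3": "amazon_s3", "amazon_s3": "amazon_s3", "bq": "bigquery",
                "bigquery": "bigquery", "dbt": "dbt"}


def compile_pipeline(llm_output: str) -> list[dict]:
    """Validate and normalize an LLM-proposed pipeline before anything is deployed."""
    # 1. Syntax validation: reject anything that is not well-formed JSON.
    spec = json.loads(llm_output)

    # 2. Tool-interface normalization: map free-form tool names to known connectors.
    for step in spec["steps"]:
        tool = step["tool"].lower()
        if tool not in TOOL_ALIASES:
            raise ValueError(f"unknown tool '{step['tool']}'")
        step["tool"] = TOOL_ALIASES[tool]

    # 3. Dependency resolution: fail fast on missing references or cycles.
    graph = {s["name"]: set(s.get("depends_on", [])) for s in spec["steps"]}
    for name, deps in graph.items():
        missing = deps - graph.keys()
        if missing:
            raise ValueError(f"step '{name}' depends on unknown steps: {sorted(missing)}")
    order = TopologicalSorter(graph).static_order()
    by_name = {s["name"]: s for s in spec["steps"]}
    return [by_name[name] for name in order]


llm_output = """{"steps": [
  {"name": "load_raw", "tool": "S3"},
  {"name": "transform", "tool": "dbt", "depends_on": ["load_raw"]}
]}"""
for step in compile_pipeline(llm_output):
    print(step["name"], "->", step["tool"])
```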
RavenDB's new feature allows developers to run GenAI tasks directly inside the database and use any LLM on their own terms, without requiring middleware, external orchestration, or third-party services
RavenDB, a high-performance NoSQL document database trusted by developers and enterprises worldwide, has launched a new feature that brings native GenAI capabilities directly into its core database engine, eliminating the need for middleware, external orchestration, or costly third-party services. The feature supports any LLM, open-source or commercial, allowing teams to run GenAI tasks directly inside the database. Moving from prototype to production traditionally requires complex data pipelines, vendor-specific APIs, external services, and significant engineering effort. With this feature, RavenDB removes those barriers and bridges the gap between experimentation and production, giving developers complete control over cost, performance, and compliance. The result is a seamless transition from idea to implementation, making the leap to production almost as effortless as prototyping. What sets RavenDB apart is its fully integrated, flexible approach: developers can use any LLM on their own terms. The feature is optimized for cost and performance through smarter caching and fewer API calls, and includes enterprise-ready capabilities such as governance, monitoring, and built-in security, designed to meet the demands of modern, intelligent applications. By collapsing multiple infrastructure layers into a single intelligent operational database, RavenDB's native GenAI capabilities significantly upgrade its data layer and remove complexity for engineering leaders, accelerating innovation. Whether classifying documents, summarizing customer interactions, or automating workflows, teams can build powerful features directly from the data they already manage, with no dedicated AI team required.
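The pattern the feature enables can be pictured as a GenAI task declared against data the database already holds, rather than wired up through separate pipelines and middleware. The sketch below is purely illustrative, not RavenDB's API; the task shape, field names, and stubbed model are assumptions used to show the document-classification example mentioned above.

```python
# Purely illustrative sketch of an in-data-layer GenAI task (not RavenDB's API).
from dataclasses import dataclass
from typing import Callable


@dataclass
class GenAITask:
    """A declarative task the data layer runs against documents it already stores."""
    source_collection: str        # e.g. "SupportTickets"
    prompt_template: str          # instruction sent to whichever LLM is configured
    output_field: str             # where the model's answer is written back
    llm: Callable[[str], str]     # pluggable: any open-source or commercial model


def run_task(task: GenAITask, document: dict) -> dict:
    """Apply the task to one document and store the result alongside the source data."""
    prompt = task.prompt_template.format(**document)
    document[task.output_field] = task.llm(prompt)
    return document


# Hypothetical usage: classify a support ticket with a stubbed model.
classify = GenAITask(
    source_collection="SupportTickets",
    prompt_template="Classify the urgency of this ticket: {body}",
    output_field="urgency",
    llm=lambda prompt: "high",  # stand-in for a real LLM call
)
print(run_task(classify, {"body": "Production database is down."}))
```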
Data governance platform Relyance AI allows organizations to detect bias precisely, not just by examining the immediate dataset used to train a model, but by tracing potential bias to its source
Relyance AI, a data governance platform provider that secured $32.1 million in Series B funding last October, is launching a new solution aimed at solving one of the most pressing challenges in enterprise AI adoption: understanding exactly how data moves through complex systems. The company's new Data Journeys platform addresses a critical blind spot for organizations implementing AI: tracking not just where data resides, but how and why it's being used across applications, cloud services, and third-party systems. Data Journeys provides a comprehensive view of the complete data lifecycle, from original collection through every transformation and use case. The system starts with code analysis rather than simply connecting to data repositories, giving it context about why data is being processed in specific ways. Data Journeys delivers value in four critical areas. First, compliance and risk management: the platform enables organizations to prove the integrity of their data practices when facing regulatory scrutiny. Second, precise bias detection: rather than just examining the immediate dataset used to train a model, companies can trace potential bias to its source. Third, explainability and accountability: for high-stakes AI decisions like loan approvals or medical diagnoses, understanding the complete data provenance becomes essential. Finally, regulatory compliance: the platform provides a "mathematical proof point" that companies are using data appropriately, helping them navigate increasingly complex global regulations. Customers have seen 70-80% time savings in compliance documentation and evidence gathering.
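The lineage idea behind tracing bias to its source amounts to walking "derived from" edges backwards from a training dataset to its original collection points. Here is a hedged sketch of that traversal; the dataset names and graph are illustrative assumptions, not Relyance AI's data model.

```python
# Sketch: trace a training dataset back to its original sources via a lineage graph.
from collections import deque

# dataset -> the upstream datasets or systems it was derived from (illustrative)
LINEAGE = {
    "loan_model_training_set": ["credit_bureau_extract", "applications_clean"],
    "applications_clean": ["applications_raw"],
    "applications_raw": ["web_application_form"],          # original collection point
    "credit_bureau_extract": ["third_party_credit_feed"],  # original collection point
}


def trace_to_sources(dataset: str) -> list[str]:
    """Breadth-first walk upstream until datasets with no recorded parents are reached."""
    sources, seen, queue = [], {dataset}, deque([dataset])
    while queue:
        current = queue.popleft()
        parents = LINEAGE.get(current, [])
        if not parents:
            sources.append(current)  # nothing upstream: treat as an original source
        for parent in parents:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return sources


# A bias audit of the training set can now start from its true origins.
print(trace_to_sources("loan_model_training_set"))
# ['third_party_credit_feed', 'web_application_form']
```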
Apache Airflow 3.0's event-driven data orchestration makes real-time, multi-step inference possible at scale across various enterprise use cases
The Apache Airflow community is out with its biggest update in years with the debut of the 3.0 release. Apache Airflow 3.0 addresses critical enterprise needs with an architectural redesign that could improve how organizations build and deploy data applications. Unlike previous versions, this release breaks away from a monolithic package, introducing a distributed client model that provides flexibility and security. This new architecture allows enterprises to: execute tasks across multiple cloud environments; implement granular security controls; support diverse programming languages; and enable true multi-cloud deployments. Airflow 3.0's expanded language support is also notable. While previous versions were primarily Python-centric, the new release natively supports multiple programming languages. Airflow 3.0 is set to support Python and Go, with planned support for Java, TypeScript and Rust. This approach means data engineers can write tasks in their preferred programming language, reducing friction in workflow development and integration. The release also introduces event-driven scheduling: instead of running a data processing job every hour, Airflow can now automatically start the job when a specific data file is uploaded or when a particular message appears, such as data loaded into an Amazon S3 cloud storage bucket or a streaming data message in Apache Kafka.
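A minimal sketch of that asset-triggered pattern is shown below: the DAG runs when the upstream file lands rather than on a timer. The import paths follow the Airflow 3.0 Task SDK as documented (in Airflow 2.x the equivalent construct is airflow.datasets.Dataset), and the bucket and path are illustrative placeholders.

```python
# Sketch: an asset-triggered DAG instead of an hourly schedule (Airflow 3.0 style).
import pendulum
from airflow.sdk import Asset, dag, task

raw_events = Asset("s3://example-bucket/raw/events.json")  # hypothetical upstream file


@dag(
    schedule=[raw_events],                     # run when the asset is updated, not hourly
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
)
def process_new_events():
    @task
    def transform():
        # Placeholder for the processing step that used to run on an hourly cron.
        print("new events file detected, processing...")

    transform()


process_new_events()
```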
Datadog unifies observability across data and applications, combining AI with column-level lineage to detect, resolve and prevent data quality problems
Cloud security and application monitoring giant Datadog is looking to expand the scope of its data observability offerings after acquiring a startup called Metaplane. By adding Metaplane's tools to its own suite, Datadog said, it will enable its users to identify and take instant action to remedy any data quality issues affecting their most critical business applications. Metaplane has built an end-to-end data observability platform that combines AI with column-level lineage to detect, resolve and prevent data quality problems. It's an important tool for any company that's trying to make data-driven decisions, since "bad" data means those decisions are being made based on the wrong insights. The platform can notify customers of data issues through the tools their teams already use, such as Slack and PagerDuty. Datadog Vice President Michael Whetten said Metaplane's offerings will help the company unify observability across data and applications so its customers can "build reliable AI systems." When the acquisition closes, Metaplane will continue to support its existing customers as a standalone product, though it will be rebranded as "Metaplane by Datadog." Datadog will also look to integrate Metaplane's capabilities within its own platform, and will likely do its utmost to bring Metaplane's customers on board.
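The combination of anomaly detection and column-level lineage can be pictured with a simple example: a statistical check flags a suspicious column value, and lineage shows which downstream assets it feeds. This is an illustrative sketch, not Metaplane's implementation; the column names, thresholds, and lineage map are assumptions.

```python
# Sketch: a column-level anomaly check plus lineage-based impact lookup.
from statistics import mean, stdev

# column -> downstream columns derived from it (illustrative)
COLUMN_LINEAGE = {
    "orders.amount": ["daily_revenue.total", "finance_dashboard.revenue"],
}


def z_score_anomaly(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Flag the latest value if it sits more than `threshold` standard deviations from history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > threshold


history = [10_250.0, 9_980.0, 10_400.0, 10_120.0, 10_300.0]  # daily sums of orders.amount
latest = 1_150.0                                             # today's suspiciously low sum

if z_score_anomaly(history, latest):
    impacted = COLUMN_LINEAGE["orders.amount"]
    # In a real deployment this would alert the owning team, e.g. via Slack or PagerDuty.
    print(f"orders.amount anomaly detected; downstream impact: {impacted}")
```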
Candescent and Ninth Wave's integrated open data solution to facilitate secure, API-based, consumer-permissioned data sharing for banks and credit unions of all sizes and enable compliance with US CFPB Rule 1033
US digital banking platform Candescent has expanded its partnership with Ninth Wave to launch an integrated open data solution for banks and credit unions. The new offering is designed to facilitate secure, API-based, consumer-permissioned data sharing for banks and credit unions of all sizes. The development aims to support institutions in enhancing customer experience, operational efficiency, and regulatory compliance, including adherence to the US Consumer Financial Protection Bureau's Rule 1033. The expanded collaboration seeks to replace traditional data-sharing practices, such as screen scraping and manual uploads, with modern, transparent alternatives. The new solution offers seamless integration with third-party applications used by both retail and business banking customers. Candescent chief product officer Gareth Gaston said: "With our integrated solution, banks and credit unions will be able to access Ninth Wave open data capabilities from within the Candescent digital banking platform." By adopting this model, financial institutions are expected to gain improved control over shared data, as well as stronger compliance with evolving regulatory standards. Ninth Wave founder and CEO George Anderson said: "This partnership will allow financial institutions of all sizes to gain the operational efficiencies, reliability, and scalability of a single point of integration to open finance APIs and business applications."
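The contrast with screen scraping can be summed up in a small, hypothetical sketch: the third-party app presents a consent record carrying only the scopes the consumer granted, and the institution's API returns just that data. The scope names, consent structure, and payloads are illustrative assumptions, not Candescent's or Ninth Wave's actual interfaces.

```python
# Hypothetical sketch of consumer-permissioned, scope-limited data sharing.
from dataclasses import dataclass


@dataclass
class ConsumerConsent:
    """What the account holder explicitly agreed to share, and with whom."""
    consumer_id: str
    third_party: str
    granted_scopes: set[str]   # e.g. {"accounts:read", "transactions:read"}


def get_shared_data(consent: ConsumerConsent, requested_scope: str) -> dict:
    """Return data only for scopes the consumer has permissioned."""
    if requested_scope not in consent.granted_scopes:
        raise PermissionError(f"{consent.third_party} was not granted '{requested_scope}'")
    # Placeholder payloads; a real implementation would query core banking systems.
    payloads = {
        "accounts:read": {"accounts": [{"id": "chk-001", "type": "checking"}]},
        "transactions:read": {"transactions": [{"amount": -42.15, "merchant": "Grocer"}]},
    }
    return payloads[requested_scope]


consent = ConsumerConsent("cust-123", "budgeting-app", {"accounts:read"})
print(get_shared_data(consent, "accounts:read"))       # permitted
# get_shared_data(consent, "transactions:read")        # would raise PermissionError
```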
Reducto’s ingestion platform turns unstructured data that’s locked in complex documents into accurate LLM-ready inputs for AI pipelines
Reducto, the most accurate ingestion platform for unlocking unstructured data for AI pipelines, has raised a $24.5M Series A round of funding led by Benchmark, alongside existing investors First Round Capital, BoxGroup and Y Combinator. "Reducto's unique technology enables companies of all sizes to leverage LLMs across a variety of unstructured data, regardless of scale or complexity," said Chetan Puttagunta, General Partner at Benchmark. "The team's incredibly fast execution on product development further underscores their commitment to delivering state-of-the-art software to customers." Reducto turns complex documents into accurate LLM-ready inputs, allowing AI teams to reliably use the vast amounts of data locked in PDFs and spreadsheets. Ingestion is a core bottleneck for AI teams today because traditional approaches fail to extract and chunk unstructured data accurately. These input errors lead to inaccurate and hallucinated outputs, making LLM applications unreliable for many real-world use cases such as processing medical records and financial statements. In benchmark studies, Reducto has proven significantly more accurate than legacy providers such as AWS, Google and Microsoft, in some cases by a margin of 20+ percent, alongside significant processing speed improvements. This is critical for high-stakes, production AI use cases.
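The extract-then-chunk bottleneck can be illustrated with a small sketch: fixed-size splitting cuts tables and sentences in half, while a section-aware split keeps each extracted unit intact. This is a generic illustration of the problem space, not Reducto's API; the section data and sizes are assumptions.

```python
# Generic sketch of naive vs. section-aware chunking of extracted document content.
def naive_chunks(text: str, size: int = 500) -> list[str]:
    """Fixed-size splitting: cheap, but can cut tables and sentences in half."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def section_aware_chunks(sections: list[tuple[str, str]], max_chars: int = 500) -> list[str]:
    """Keep each extracted section (heading + body) intact, splitting only oversized ones."""
    chunks = []
    for heading, body in sections:
        unit = f"{heading}\n{body}"
        if len(unit) <= max_chars:
            chunks.append(unit)
        else:
            chunks.extend(naive_chunks(unit, max_chars))  # fall back only when necessary
    return chunks


# Hypothetical output of a layout-aware parser for a financial statement.
sections = [
    ("Balance Sheet", "Total assets: $1.2M ..."),
    ("Notes", "Note 4 describes revenue recognition ..."),
]
for chunk in section_aware_chunks(sections):
    print(repr(chunk[:60]))
```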