FICO has introduced FICO Marketplace, a digital hub designed to connect organizations with data and analytics providers. The new Marketplace offers easy access to data, artificial intelligence (AI) models, machine learning models, optimization tools, and decision rulesets that help deliver enterprise business outcomes from AI. With FICO Marketplace, FICO® Platform users can fast-track their journey to becoming an intelligent enterprise because they will be able to: Unlock Value from Data Faster: by experimenting with new data sources and decision assets to determine their predictive power and business value; users can expect to cut the time required to access, validate, and integrate new data sources in half. Leverage Decision Agents Across Multiple Use Cases, Improving Collaboration: the Marketplace's open API architecture allows any decision asset, data service, analytics model, software agent, or third-party solution to address a wide range of use cases, including customer management, fraud, originations, and marketing; reusing decision agents across departments breaks down silos and improves collaboration. Drive Better Customer Experiences: by enabling a holistic view of each individual customer and by building new intelligent solutions and analytic capabilities that come from industry collaboration. “FICO Marketplace will facilitate the type of collaboration across the industry that drives the next generation of intelligent solutions,” said Nikhil Behl, president, Software, FICO.
Bigeye introduces the first platform for governing AI agent data usage, combining enforceable policies that control how agents access and use high-quality, sensitive, and certified data with observability and enforcement
Bigeye announced the industry’s first AI Trust Platform for agent data usage, defining a new technology category built for enterprise AI trust and governance. Bigeye is enabling safe adoption of agentic AI by developing a comprehensive platform that supports the governance, observability, and enforcement of AI systems interacting with enterprise data. Without visibility into agent behavior, lineage between data sources and outputs, or controls over sensitive data access, organizations are left exposed to compliance risks, bad decisions, and reputational damage. Addressing these risks requires a new approach to managing and securing AI agent data. An AI Trust Platform meets these requirements through three foundational capabilities: Governance: enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data. Observability: real-time lineage, classification, and anomaly detection that verify the quality, security, and compliance posture of data before it powers critical AI decisions. Enforcement: monitoring, guiding, or steering of every agent’s data access based on enterprise policy. Bigeye’s AI Trust Platform brings these capabilities together to give enterprises complete control over how agents access and act on data. The first version will be released in late 2025.
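The enforcement capability can be pictured as a policy check that sits between an agent and the data it requests. The sketch below is only an illustration of that idea; the policy fields, classifications, and function names are assumptions for this example, not Bigeye's published API.

```python
from dataclasses import dataclass

# Illustrative policy record: which data classifications an agent may read.
# Field names are assumptions for this sketch, not Bigeye's schema.
@dataclass
class AgentPolicy:
    agent_id: str
    allowed_classifications: set
    log_only: bool = False  # observe-and-audit mode vs. active blocking

def enforce(policy: AgentPolicy, dataset_classification: str) -> bool:
    """Return True if the agent's read should proceed under the policy."""
    permitted = dataset_classification in policy.allowed_classifications
    if permitted:
        return True
    if policy.log_only:
        print(f"[audit] {policy.agent_id} read {dataset_classification} data")
        return True  # observability only: record the access, do not block it
    print(f"[block] {policy.agent_id} denied access to {dataset_classification} data")
    return False

# Usage: an agent cleared only for public and certified data requests PII.
policy = AgentPolicy("support-agent-7", {"public", "certified"})
assert enforce(policy, "certified") is True
assert enforce(policy, "pii") is False
```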
TensorStax’s data engineering AI agents can design and deploy data pipelines through structured and predictable orchestration using a deterministic control layer that sits between the LLM and the data stack
Startup TensorStax is applying AI agents, which can perform tasks on behalf of users with minimal intervention, to the challenge of data engineering. LLM-driven agents are often unreliable on complex, multi-step engineering work, and the startup gets around this by creating a purpose-built abstraction layer to ensure its AI agents can design, build, and deploy data pipelines with a high degree of reliability. Its proprietary LLM Compiler acts as a deterministic control layer that sits between the LLM and the data stack to facilitate structured and predictable orchestration across complex data systems. Among other things, it validates syntax, normalizes tool interfaces, and resolves dependencies ahead of time. According to internal testing, this boosts the success rate of its AI agents from between 40% and 50% to as high as 90% across a variety of data engineering tasks. The result is far fewer broken data pipelines, giving teams the confidence to offload complicated engineering tasks to AI agents. TensorStax says its AI agents can help mitigate the operational complexities involved in data engineering, freeing engineers to focus on more complex and creative work, such as modeling business logic, designing scalable architectures, and enhancing data quality. By integrating directly with each customer’s existing data stack, TensorStax makes it possible to introduce AI agent data engineers without disrupting workflows or rebuilding data infrastructure. The agents are designed to work with dozens of common data engineering tools and respond to simple commands. Constellation Research Inc. analyst Michael Ni said TensorStax appears to be architecturally different from others, with its LLM Compiler, its integration with existing tools, and its no-customer-data-touch approach.
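TensorStax has not published the LLM Compiler's interface, but the general idea of a deterministic check that validates an LLM-proposed pipeline before anything is deployed can be sketched as follows. The spec format, step fields, and tool list here are illustrative assumptions only.

```python
# Hedged sketch of a deterministic validation pass over an LLM-proposed
# pipeline spec: verify tools, then resolve dependencies ahead of time.
from graphlib import TopologicalSorter, CycleError

KNOWN_TOOLS = {"airflow", "dbt", "spark"}  # assumed set of normalized tool names

def compile_pipeline(spec: dict) -> list:
    """Return the pipeline steps in a safe execution order, or raise before
    anything touches the data stack."""
    steps = {s["name"]: s for s in spec["steps"]}
    graph = {}
    for name, step in steps.items():
        if step["tool"] not in KNOWN_TOOLS:
            raise ValueError(f"step {name!r} uses unknown tool {step['tool']!r}")
        missing = [d for d in step.get("depends_on", []) if d not in steps]
        if missing:
            raise ValueError(f"step {name!r} depends on undefined steps {missing}")
        graph[name] = set(step.get("depends_on", []))
    try:
        return list(TopologicalSorter(graph).static_order())
    except CycleError as err:
        raise ValueError(f"dependency cycle in pipeline: {err}") from err

order = compile_pipeline({
    "steps": [
        {"name": "extract_orders", "tool": "airflow"},
        {"name": "model_revenue", "tool": "dbt", "depends_on": ["extract_orders"]},
    ],
})
print(order)  # ['extract_orders', 'model_revenue']
```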
RavenDB’s new feature allows developers to run GenAI tasks directly inside the database and use any LLM on their terms without requiring any middleware, external orchestration, or third-party services
RavenDB, a high-performance NoSQL document database trusted by developers and enterprises worldwide, has launched its new feature, bringing native GenAI capabilities directly into its core database engine, eliminating the need for middleware, external orchestration, or costly third-party services. RavenDB’s new feature supports any LLM (open-source or commercial), allowing teams to run GenAI tasks directly inside the database. Moving from prototype to production traditionally requires complex data pipelines, vendor-specific APIs, external services, and significant engineering effort. With this feature, RavenDB removes those barriers and bridges the gap between experimentation and production, giving developers complete control over cost, performance, and compliance. The result is a seamless transition from idea to implementation, making the leap to production almost as effortless as prototyping. What sets RavenDB apart is its fully integrated, flexible approach: developers can use any LLM on their terms. It’s optimized for cost and performance with smarter caching and fewer API calls, and includes enterprise-ready capabilities such as governance, monitoring, and built-in security, designed to meet the demands of modern, intelligent applications. By collapsing multiple infrastructure layers into a single intelligent operational database, RavenDB’s native GenAI capabilities significantly upgrade its data layer. This enhancement accelerates innovation by removing complexity for engineering leaders. Whether classifying documents, summarizing customer interactions, or automating workflows, teams can build powerful features directly from the data they already manage, with no dedicated AI team required.
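For context on what “no middleware” replaces: the traditional path the announcement refers to looks roughly like the sketch below, where application code pulls documents out of the database, calls a vendor-specific LLM API, and writes the results back. The fetch_documents and save_summary callables are hypothetical stand-ins for that glue code, and the OpenAI SDK is used only as one example of an external service; RavenDB's own in-database GenAI task configuration is not shown here.

```python
# Hedged sketch of the external orchestration that in-database GenAI tasks
# are meant to remove: read documents, call an outside LLM API, write back.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_interactions(fetch_documents, save_summary):
    """fetch_documents() yields dicts with 'id' and 'text'; save_summary()
    persists the result. Both are hypothetical application glue code."""
    for doc in fetch_documents():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": f"Summarize this customer interaction: {doc['text']}",
            }],
        )
        save_summary(doc["id"], response.choices[0].message.content)
```

With the native feature, the announcement suggests this loop runs inside RavenDB itself against the documents teams already manage, rather than in a separate service.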
Kong’s platform enables enterprises to securely manage both their APIs and Apache Kafka-powered real-time data streams by regulating how workloads interact with them, encrypting records in transit and requiring applications to authenticate
Kong introduced Kong Event Gateway, a new tool for managing real-time data streams powered by Apache Kafka. According to the company, customers can now use Konnect to manage both their APIs and Kafka-powered data streams, removing the need for two separate sets of management tools and easing day-to-day maintenance. Kafka organizes data into streams called topics: a topic connects to an application, detects when the application generates a new record, and collects that record; other workloads can subscribe to a topic to receive the records it collects. Kong Event Gateway acts as an intermediary between an application and the Kafka data streams to which it subscribes. Before data reaches the application, it passes through the gateway, which lets the gateway regulate how workloads access the information. Using Kong Event Gateway, a company can require that applications authenticate before accessing a Kafka data stream, and the tool encrypts the records sent over the stream to prevent unauthorized access. According to Kong, it also doubles as an observability tool that enables administrators to monitor how workloads interact with the information transmitted by Kafka. Kafka transmits data using a custom network protocol; according to Kong, Kong Event Gateway allows applications to access data via standard HTTPS APIs instead, sparing software teams from having to familiarize themselves with Kafka’s information streaming mechanism. Kong Event Gateway also allows multiple workloads to share the same data stream without creating copies, with separate data access permissions for each workload. Another feature, Virtual Clusters, allows multiple software teams to share the same Kafka cluster without gaining access to one another’s data.
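Because the gateway sits between applications and the Kafka streams they subscribe to, one plausible usage pattern is to point a standard Kafka client at a gateway endpoint and authenticate with credentials the gateway enforces. The sketch below uses the confluent-kafka Python client; the hostname, credentials, and topic name are illustrative assumptions, not Kong-documented values.

```python
# Hedged sketch: subscribing to a Kafka topic through an authenticating
# gateway endpoint. Host, credentials, and topic are illustrative only.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "event-gateway.example.com:9092",  # gateway, not the brokers
    "group.id": "payments-dashboard",
    "security.protocol": "SASL_SSL",   # authenticate before touching the stream
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "payments-app",
    "sasl.password": "app-secret",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])  # the topic other workloads publish records to

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to one second for a new record
        if msg is None or msg.error():
            continue
        print(msg.key(), msg.value())
finally:
    consumer.close()
```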
Palantir to embed CData’s open, standardized SQL interface and metadata layer across its analytics and AI platforms, enabling connections to data sources without the need to learn individual APIs or formats
CData Software announced an expanded partnership with Palantir Technologies that integrates CData’s connectivity technology deeply into Palantir’s analytics platforms. The deal lets Palantir customers connect to data sources ranging from traditional databases to enterprise applications and development platforms without the need to learn individual application programming interfaces (APIs) or formats. CData’s technology provides a standardized SQL interface and consistent metadata layer across all connections. Palantir is licensing the technology across its Foundry, Gotham, and Artificial Intelligence Platform (AIP) products. Foundry is a data integration and analytics platform for commercial and industrial use; Gotham is primarily used by government and defense agencies; and AIP is used to build and manage AI applications. CData says its approach is based on two architectural pillars: open standards and uniform behavior. Each of its connectors operates like a virtual database, translating SQL into native API calls under the hood. This abstraction not only simplifies development but also improves reliability and performance across platforms, said CData CEO Amit Sharma. The partnership will also extend Palantir’s AI ambitions: using CData’s technology in AIP allows AI models to query structured and unstructured data sources in real time using SQL. “We’re powering the data layer of their agent infrastructure,” Sharma said. “AI needs access to trusted, secure data, and that’s what we provide.”
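The “virtual database” behavior means application data is queried with ordinary SQL through a standard driver rather than each system's native API. A minimal sketch, assuming a CData ODBC connector configured as a DSN named "CData Salesforce Source"; the DSN name, table, and columns are illustrative, not part of the announced integration.

```python
# Hedged sketch: querying a SaaS application through a SQL connector exposed
# as an ODBC data source. DSN name, table, and columns are illustrative.
import pyodbc

conn = pyodbc.connect("DSN=CData Salesforce Source")  # connector translates SQL to API calls
cursor = conn.cursor()
cursor.execute(
    "SELECT Name, AnnualRevenue FROM Account WHERE AnnualRevenue > ?",
    1_000_000,
)
for name, revenue in cursor.fetchall():
    print(name, revenue)
conn.close()
```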
Alation helps data teams turn messy, raw data into trusted, reusable data products for AI
Alation has launched its Data Products Builder Agent, an AI-powered tool that helps data teams turn messy, raw data into trusted, reusable data products. It removes busywork for data teams, enabling them to deliver the data products that business users and AI need. The Data Products Builder Agent transforms raw data into productized, AI-ready assets that are easy to find and use in the Alation Data Products Marketplace. By automating the data product lifecycle, the agent streamlines curation, packaging, and publishing. Based on a user’s prompt, the agent identifies the right data to answer the user’s business question, then auto-generates and documents the data product design specification and ensures the data product meets marketplace and governance standards, all while keeping a human in the loop. This enables data teams to focus on strategic work while empowering the business with trusted, ready-to-use data products. Alation’s data product definitions build on the Open Data Product Specification (ODPS), a YAML-based standard that enables open, portable, and extensible metadata for data products. Key capabilities of the Alation Data Products Builder Agent include effortless data product creation, built-in trust, and business-aligned relevance.
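Because ODPS descriptors are plain YAML, a generated data product can be treated as portable metadata by downstream tools. The fragment below gives a rough sense of the shape of such a descriptor, parsed with PyYAML; the field names are simplified for illustration and are not the exact ODPS schema.

```python
# Hedged sketch: a simplified, ODPS-style data product descriptor in YAML.
# Field names are illustrative; consult the ODPS standard for the real schema.
import yaml  # PyYAML

descriptor = """
schema: odps-style-example
product:
  name: Customer Churn Features
  status: published
  description: Curated, documented features for churn models
  outputPorts:
    - name: churn_features_v1
      format: parquet
  sla:
    freshness: daily
"""

product = yaml.safe_load(descriptor)["product"]
print(product["name"], "->", [port["name"] for port in product["outputPorts"]])
```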
Coralogix’s AI agent simplifies access to deep observability data by translating natural language queries into detailed, system-level answers via a conversational platform
Data analytics platform Coralogix nearly doubled its valuation to over $1 billion in its latest funding round, co-founder and CEO Ariel Assaraf said, as AI-driven enterprise offerings continue to pique investor interest. Coralogix raised $115 million in a round led by California-based venture growth firm NewView Capital. The fundraise comes three years after Coralogix’s previous external funding in 2022, when it raised $142 million. Valuations have faced downward pressure since then, as investors continue to sit on dry powder amid elevated interest rates and geopolitical tensions. Coralogix’s revenue has increased sevenfold since 2022, Assaraf said. Coralogix also unveiled its new AI agent, Olly, which aims to simplify data monitoring via a conversational platform. “Olly makes deep observability data accessible to every team. Whether you ask, ‘What is wrong with the payment flow?’ or ‘Which service is frustrating our users the most?’ Olly translates those questions into detailed, system-level answers,” the company wrote on its blog.
Archive360’s cloud-native archiving platform provides governed data for AI and analytics by simplifying the process of connecting to and ingesting data from any enterprise application and offering full access controls
Archive360 has released the first modern archive platform that provides governed data for AI and analytics. The Archive360 Platform enables enterprises and government agencies to unlock the full potential of their archival assets with extensive data governance, security, and compliance capabilities, priming that data for intelligent insights. The Archive360 Modern Archiving Platform lets organizations control how AI and analytics consume information from the archive and simplifies the process of connecting to and ingesting data from any application, so organizations can start realizing value faster. This reduces the risk AI can pose to organizations by inadvertently exposing regulated data or company trade secrets, or by ingesting faulty and irrelevant data. The Archive360 AI & Data Governance Platform is deployed as a cloud-native, class-based architecture that provides each customer with a dedicated SaaS environment, allowing them to completely segregate their data, retain administrative access and entitlements, and integrate the platform into their own security protocols. It allows organizations to shift from application-centric to data-centric archiving; protect, classify, and retire enterprise data; and activate data for AI.
Qlik launches Open Lakehouse offering 2.5x–5x faster query performance and up to 50% lower infrastructure costs, while maintaining full compatibility with the most widely used analytics and machine learning engines
Qlik announced the launch of Qlik Open Lakehouse, a fully managed Apache Iceberg solution built into Qlik Talend Cloud. Designed for enterprises under pressure to scale faster and spend less, Qlik Open Lakehouse delivers real-time ingestion, automated optimization, and multi-engine interoperability — without vendor lock-in or operational overhead. Qlik Open Lakehouse offers a new path: a fully managed lakehouse architecture powered by Apache Iceberg that delivers 2.5x–5x faster query performance and up to 50% lower infrastructure costs, while maintaining full compatibility with the most widely used analytics and machine learning engines. Qlik Open Lakehouse combines real-time ingestion, intelligent optimization, and true ecosystem interoperability in a single, fully managed platform: Real-time ingestion at enterprise scale; Intelligent Iceberg optimization, fully automated; Open by design, interoperable by default; Your compute, your cloud, your rules; One platform, end to end. As AI workloads demand faster access to broader, fresher datasets, open formats like Apache Iceberg are becoming the new foundation. Qlik Open Lakehouse responds to this shift by making it effortless to build and manage Iceberg-based architectures — without the need for custom code or pipeline babysitting. It also runs within the customer’s own AWS environment, ensuring data privacy, cost control, and full operational visibility.
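The interoperability claim rests on Iceberg itself: tables written by the managed lakehouse remain readable by any Iceberg-aware engine. As a hedged illustration, the sketch below reads such a table with the open-source pyiceberg library and an AWS Glue catalog; the catalog configuration, namespace, and table name are assumptions rather than Qlik-specific details.

```python
# Hedged sketch: reading an Iceberg table with an open engine (pyiceberg).
# Catalog type, namespace, and table name are illustrative assumptions.
from pyiceberg.catalog import load_catalog

catalog = load_catalog("lakehouse", **{"type": "glue"})    # e.g. an AWS Glue catalog
table = catalog.load_table("sales.orders")                 # namespace.table
scan = table.scan(row_filter="order_ts >= '2025-01-01'")   # filter pushed to Iceberg metadata
df = scan.to_pandas()                                      # or scan.to_arrow()
print(len(df), "rows")
```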