Monte Carlo has launched unstructured data monitoring, a new capability that enables organizations to ensure trust in their unstructured data assets across documents, chat logs, images, and more, without writing a single line of SQL. With this release, Monte Carlo becomes the first data + AI observability platform to provide AI-powered support for monitoring both structured and unstructured data types. Users can now apply customizable, AI-powered checks to unstructured fields and monitor the quality metrics relevant to their unique use case. Beyond standard quality metrics, customers can define custom prompts and classifications to make monitoring truly meaningful. Monte Carlo continues its strategic partnership with Snowflake, the AI Data Cloud company, to support Snowflake Cortex Agents, Snowflake’s AI-powered agents that orchestrate across structured and unstructured data to deliver more reliable AI-driven decisions. In addition, Monte Carlo is extending its partnership with Databricks to include observability for Databricks AI/BI – a compound AI system built into Databricks’ platform that generates rich insights from across the data + AI lifecycle, including ETL pipelines, lineage, and other queries. By supporting Snowflake Cortex Agents and Databricks AI/BI, Monte Carlo helps data teams ensure their foundational data is reliable and trustworthy enough to support real-time, AI-driven business insights.
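As a rough illustration of what a custom-prompt check over an unstructured field involves (Monte Carlo's actual configuration is not shown in the announcement, so every name below is hypothetical), a monitor can pass each record to an LLM judge along with the user's prompt and alert when the failure rate crosses a threshold:

```python
# Hypothetical sketch of an AI-powered check on an unstructured field.
# Class and function names are illustrative, not Monte Carlo's API.
from dataclasses import dataclass

@dataclass
class UnstructuredCheck:
    name: str
    prompt: str               # the custom prompt an LLM judge would receive
    failure_threshold: float  # max share of failing records before alerting

def run_check(check: UnstructuredCheck, records: list, judge) -> bool:
    """Return True if the field passes, False if too many records fail the prompt."""
    failures = sum(0 if judge(check.prompt, text) else 1 for text in records)
    return failures / max(len(records), 1) <= check.failure_threshold

# Stand-in judge; a real monitor would send the prompt and record to an LLM.
def toy_judge(prompt: str, text: str) -> bool:
    return "refund" not in text.lower()

chat_logs = ["Thanks, issue resolved!", "I want a refund immediately."]
check = UnstructuredCheck("no_refund_escalations",
                          "Does this chat log contain a refund demand?", 0.25)
print(run_check(check, chat_logs, toy_judge))  # False: 50% of records fail, above 25%
```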
Structify’s AI platform combines a visual language model with human oversight to simplify data preparation, letting users create custom datasets by specifying the data schema, selecting sources, and deploying AI agents that extract the data by navigating the web
Startup Structify is taking aim at one of the most notorious pain points in the world of artificial intelligence and data analytics: the painstaking process of data preparation. The company’s platform uses a proprietary visual language model called DoRa to automate the gathering, cleaning, and structuring of data — a process that typically consumes up to 80% of data scientists’ time. At its core, Structify allows users to create custom datasets by specifying the data schema, selecting sources, and deploying AI agents to extract that data. The platform can handle everything from SEC filings and LinkedIn profiles to news articles and specialized industry documents. What sets Structify apart is its in-house model DoRa, which navigates the web like a human would. This approach allows Structify to support a free tier, helping democratize access to structured data. Structify’s vision is to “commoditize data” — making it something that can be easily recreated if lost. Finance teams use it to extract information from pitch decks, construction companies turn complex geotechnical documents into readable tables, and sales teams gather real-time organizational charts for their accounts. A key differentiator is Structify’s “quadruple verification” process, which combines AI with human oversight to address a critical concern in AI development: ensuring accuracy. According to CEO Alex Reichenbach, what ultimately distinguishes Structify is its combination of speed and accuracy; Reichenbach claimed the company has sped up its agent “10x while cutting cost ~16x” through model optimization and infrastructure improvements.
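A minimal sketch of that workflow follows, using a hypothetical client API since Structify's real SDK is not shown here: the user declares a schema and sources, and an agent (standing in for a model like DoRa) would fill rows by browsing each source.

```python
# Hypothetical workflow sketch; Structify's real client API is not shown in the article.
from dataclasses import dataclass, field

@dataclass
class DatasetSpec:
    name: str
    schema: dict            # column name -> type, e.g. {"company": "str"}
    sources: list = field(default_factory=list)

def extract(spec: DatasetSpec) -> list:
    """Placeholder for web-browsing agents (in Structify's case, its DoRa model)
    that would visit each source and return rows matching the requested schema."""
    return [{column: None for column in spec.schema} for _ in spec.sources]

spec = DatasetSpec(
    name="org_charts",
    schema={"company": "str", "employee": "str", "title": "str"},
    sources=["https://example.com/team", "https://example.com/leadership"],
)
rows = extract(spec)
print(f"{len(rows)} placeholder rows prepared for dataset {spec.name!r}")
```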
Electron AI, the agentic assistant for data teams and analysts, generates precise, context-aware mapping logic across source systems, semantic models, and destination schemas
Reactor Data announced the production launch and immediate availability of Electron AI – the embedded conversational AI assistant designed to help data teams and analysts create powerful data mappings, transformations and pipelines. Electron acts as an intelligent co-pilot, enabling data analysts and teams to generate precise, context-aware mapping logic across source systems, semantic models, and destination schemas – all through simple conversational interactions. As a natural-language assistant, Electron is familiar with all aspects of a company’s data pipelines: sources, source schemas, multi-step transformations (including complex data combinations), output configurations and destination tables. Whether a business is normalizing product titles, mapping transactional IDs, or aligning common fields across disparate sources, Electron helps brands go from request to result faster, with less friction and fewer mistakes. Key capabilities of Reactor Data’s Electron AI include conversational, multi-language coding (ask Electron to write complex data transformations and it returns both Python code and simple natural-language expressions); pipeline and context awareness (Electron is tightly integrated with Reactor’s modular pipeline tools for source, semantic, and destination processing, and understands source and destination schemas and rules to offer precise, pre-validated mappings); and iterative authoring (Electron translates natural language into mapping expressions with null handling, coalescing and formatting, and refines them based on feedback).
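To make the idea concrete, here is a plausible example of the kind of mapping code such an assistant might return for a request like "normalize product titles and coalesce the two transaction ID fields"; the column names and data are invented for illustration, not Reactor Data's actual output.

```python
# Invented example of assistant-generated mapping logic: normalize product titles and
# coalesce two transaction ID columns. Column names and data are made up for illustration.
import pandas as pd

def normalize_title(raw):
    # Null handling: keep missing or blank titles missing rather than guessing.
    if pd.isna(raw) or not str(raw).strip():
        return None
    return " ".join(str(raw).split()).title()

source = pd.DataFrame({
    "product_title": ["  deluxe WIDGET ", None, "widget   mini"],
    "txn_id": [None, "B-2", None],
    "legacy_txn_id": ["A-1", None, "C-3"],
})

mapped = pd.DataFrame({
    "title": source["product_title"].map(normalize_title),
    # Coalescing: prefer the new transaction ID, fall back to the legacy one.
    "transaction_id": source["txn_id"].combine_first(source["legacy_txn_id"]),
})
print(mapped)
```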
FICO’s new Marketplace to connect enterprises to providers of data, AI/ML models, optimization tools and decision rulesets, and to cut the time required to access, validate and integrate new data sources
FICO has introduced a digital hub designed to connect organizations with data and analytics providers. The new Marketplace offers easy access to data, artificial intelligence (AI) models, optimization tools, decision rulesets, and machine learning models that deliver enterprise business outcomes from AI. With FICO Marketplace, FICO® Platform users can fast-track their journey to becoming an intelligent enterprise because they will be able to: unlock value from data faster, by experimenting with new data sources and decision assets to determine predictive power and business value (users can expect to cut the time required to access, validate and integrate new data sources by half); leverage decision agents across multiple use cases, since the open API architecture allows any decision asset, data service, analytics model, software agent or third-party solution to address a wide range of use cases including customer management, fraud, originations, and marketing, while the reusability of decision agents across departments breaks down silos and improves collaboration; and drive better customer experiences, by enabling a holistic view of each individual customer and building innovative intelligent solutions and analytic capabilities that come from industry collaboration. “FICO Marketplace will facilitate the type of collaboration across the industry that drives the next generation of intelligent solutions,” said Nikhil Behl, president, Software, FICO.
TensorStax’s data engineering AI agents can design and deploy data pipelines through structured and predictable orchestration using a deterministic control layer that sits between the LLM and the data stack
Startup TensorStax is bringing AI agents that can perform tasks on behalf of users with minimal intervention to the challenge of data engineering. The startup addresses that challenge by creating a purpose-built abstraction layer that ensures its AI agents can design, build and deploy data pipelines with a high degree of reliability. Its proprietary LLM Compiler acts as a deterministic control layer that sits between the LLM and the data stack to facilitate structured and predictable orchestration across complex data systems. Among other things, it validates syntax, normalizes tool interfaces and resolves dependencies ahead of time. According to internal testing, this boosts the success rates of its AI agents from 40%–50% to as high as 90% across a variety of data engineering tasks. The result is far fewer broken data pipelines, giving teams the confidence to offload various complicated engineering tasks to AI agents. TensorStax says its AI agents can help mitigate the operational complexities involved in data engineering, freeing up engineers to focus on more complex and creative tasks, such as modeling business logic, designing scalable architectures and enhancing data quality. By integrating directly with each customer’s existing data stack, TensorStax makes it possible to introduce AI agent data engineers without disrupting workflows or rebuilding data infrastructure. The agents are designed to work with dozens of common data engineering tools and respond to simple commands. Constellation Research Inc. analyst Michael Ni said TensorStax appears to be architecturally different from others, with its LLM compiler, its integration with existing tools and its no-customer-data-touch approach.
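TensorStax's LLM Compiler is proprietary, but the idea of a deterministic control layer can be sketched: take the pipeline steps an LLM proposes, validate tool names and dependencies, and compute a fixed execution order before anything runs. The step format and toolset below are illustrative only.

```python
# Conceptual sketch only: validate and order LLM-proposed pipeline steps deterministically
# before execution. Tool names and step structure are illustrative, not TensorStax's format.
from graphlib import TopologicalSorter

TOOL_ALIASES = {"spark_sql": "spark", "apache_airflow": "airflow"}  # normalize tool interfaces
KNOWN_TOOLS = {"spark", "airflow", "dbt"}

def compile_plan(steps: dict) -> list:
    for name, step in steps.items():
        tool = TOOL_ALIASES.get(step["tool"], step["tool"])
        if tool not in KNOWN_TOOLS:
            raise ValueError(f"step {name!r} uses unknown tool {step['tool']!r}")
        step["tool"] = tool
        missing = [dep for dep in step.get("depends_on", []) if dep not in steps]
        if missing:
            raise ValueError(f"step {name!r} depends on undefined steps {missing}")
    # Resolve dependencies ahead of time so the execution order is fixed and repeatable.
    graph = {name: set(step.get("depends_on", [])) for name, step in steps.items()}
    return list(TopologicalSorter(graph).static_order())

llm_proposed = {
    "extract_orders": {"tool": "apache_airflow"},
    "transform_orders": {"tool": "spark_sql", "depends_on": ["extract_orders"]},
    "publish_marts": {"tool": "dbt", "depends_on": ["transform_orders"]},
}
print(compile_plan(llm_proposed))  # ['extract_orders', 'transform_orders', 'publish_marts']
```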
Kong’s platform enables enterprises to securely manage both their APIs and Apache Kafka-powered real-time data streams by regulating how workloads interact with the data, encrypting records and requiring applications to authenticate
Kong introduced Kong Event Gateway, a new tool for managing real-time data streams powered by Apache Kafka. According to the company, customers can now use Konnect to manage both their APIs and Kafka-powered data streams. That removes the need to use two separate sets of management tools, which can ease day-to-day maintenance tasks. Kafka makes it possible to create data streams called topics that connect to an application, detect when the application generates a new record and collect the record. Other workloads can subscribe to a topic to receive the records it collects. Kong Event Gateway acts as an intermediary between an application and the Kafka data streams to which it subscribes. Before data reaches the application, it goes through the Kong Event Gateway. Because information is routed through the gateway, it can regulate how workloads access it. Using Kong Event Gateway, a company can require that applications perform authentication before accessing a Kafka data stream. The tool encrypts the records that are sent over the data stream to prevent unauthorized access. According to Kong, it doubles as an observability tool that enables administrators to monitor how workloads interact with the information transmitted by Kafka. Kafka transmits data using a custom network protocol. According to Kong, Kong Event Gateway allows applications to access data via standard HTTPS APIs instead of the custom protocol. That eases development by sparing software teams the need to familiarize themselves with Kafka’s information streaming mechanism. Kong Event Gateway allows multiple workloads to share the same data stream without the need for copies. Administrators can create separate data access permissions for each workload. Another feature, Virtual Clusters, allows multiple software teams to share the same Kafka cluster without gaining access to one another’s data.
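The HTTPS access pattern can be illustrated with a hypothetical gateway endpoint (the URL, path, query parameters and token below are placeholders, not Kong's documented API): a workload authenticates with a token and pulls records over plain HTTP instead of speaking the Kafka wire protocol.

```python
# Hypothetical consumer: read Kafka records through an HTTP gateway endpoint instead of
# the Kafka wire protocol. The URL, path, parameters and token are placeholders.
import requests

GATEWAY_URL = "https://events.example.com/topics/orders/records"  # placeholder endpoint
headers = {"Authorization": "Bearer <token>"}  # the gateway enforces authentication

resp = requests.get(GATEWAY_URL, headers=headers,
                    params={"max_records": 10}, timeout=10)
resp.raise_for_status()
for record in resp.json():  # iterate over the returned records
    print(record)
```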
Alation helps data teams turn messy, raw data into trusted, reusable data products for AI
Alation has launched its Data Products Builder Agent, an AI-powered tool that helps data teams turn messy, raw data into trusted, reusable data products. It removes busywork for data teams, enabling them to deliver the data products that business users and AI need. The Data Products Builder Agent transforms raw data into productized, AI-ready assets that are easy to find and use in the Alation Data Products Marketplace. By automating the data product lifecycle, the agent streamlines curation, packaging, and publishing. Based on user prompting, it identifies the right data to answer the user’s business question, then auto-generates and documents the data product design specification and ensures data products meet marketplace and governance standards, all while keeping a human in the loop. This lets data teams focus on strategic work while empowering the business with trusted, ready-to-use data products. Alation’s data product definitions build on the Open Data Product Specification (ODPS), a YAML-based standard that enables open, portable, and extensible metadata for data products. Key capabilities of the Alation Data Products Builder Agent include effortless data product creation, built-in trust, and business-aligned relevance.
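Since ODPS descriptors are plain YAML, the sketch below shows how a generated data product definition could be assembled programmatically; the field names are simplified placeholders and may not match the ODPS schema or Alation's actual output.

```python
# Simplified, ODPS-inspired data product descriptor built programmatically; field names
# are approximations and may not match the ODPS schema or Alation's output. Requires PyYAML.
import yaml

data_product = {
    "name": "customer_churn_features",
    "description": "Curated, governed features for churn analysis.",
    "owner": "data-products@company.example",
    "inputPorts": ["warehouse.analytics.customers"],
    "sla": {"freshness": "24h"},
    "tags": ["marketplace", "ai-ready"],
}

print(yaml.safe_dump({"dataProduct": data_product}, sort_keys=False))
```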
Archive360’s cloud-native archiving platform provides governed data for AI and analytics by simplifying the process of connecting to and ingesting data from any enterprise application and offering full access controls
Archive360 has released the first modern archive platform that provides governed data for AI and analytics. The Archive360 Platform enables enterprises and government agencies to unlock the full potential of their archival assets with extensive data governance, security and compliance capabilities, priming that data for intelligent insights. The Archive360 Modern Archiving Platform enables organizations to control how AI and analytics consume information from the archive and simplifies the process of connecting to and ingesting data from any application, so organizations can start realizing value faster. This capability reduces the risk AI can pose to organizations by inadvertently exposing regulated data or company trade secrets, or simply ingesting faulty and irrelevant data. The Archive360 AI & Data Governance Platform is deployed as a cloud-native, class-based architecture. It provides each customer with a dedicated SaaS environment, enabling them to completely segregate data and retain administrative access, entitlements, and the ability to integrate into their own security protocols. It allows organizations to shift from application-centric to data-centric archiving; protect, classify and retire enterprise data; and activate data for AI.
Qlik launches Open Lakehouse offering 2.5x–5x faster query performance and up to 50% lower infrastructure costs, while maintaining full compatibility with the most widely used analytics and machine learning engines
Qlik announced the launch of Qlik Open Lakehouse, a fully managed Apache Iceberg solution built into Qlik Talend Cloud. Designed for enterprises under pressure to scale faster and spend less, Qlik Open Lakehouse delivers real-time ingestion, automated optimization, and multi-engine interoperability — without vendor lock-in or operational overhead. Qlik Open Lakehouse offers a new path: a fully managed lakehouse architecture powered by Apache Iceberg that delivers 2.5x–5x faster query performance and up to 50% lower infrastructure costs, while maintaining full compatibility with the most widely used analytics and machine learning engines. Qlik Open Lakehouse combines real-time ingestion, intelligent optimization, and true ecosystem interoperability in a single, fully managed platform: Real-time ingestion at enterprise scale; Intelligent Iceberg optimization, fully automated; Open by design, interoperable by default; Your compute, your cloud, your rules; One platform, end to end. As AI workloads demand faster access to broader, fresher datasets, open formats like Apache Iceberg are becoming the new foundation. Qlik Open Lakehouse responds to this shift by making it effortless to build and manage Iceberg-based architectures — without the need for custom code or pipeline babysitting. It also runs within the customer’s own AWS environment, ensuring data privacy, cost control, and full operational visibility.
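As a general illustration of the interoperability argument (not Qlik's own API), an Iceberg table managed this way can be read by any engine with an Iceberg client; the PyIceberg snippet below assumes a Glue catalog named "analytics" and a table "sales.orders", both placeholders for a real environment.

```python
# Generic PyIceberg read, illustrating the interoperability point: any engine with an
# Iceberg client can query the same tables. Catalog type, name and table are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog("analytics", **{"type": "glue"})  # e.g. an AWS Glue catalog
table = catalog.load_table("sales.orders")

# Materialize the scan as an Arrow table that analytics or ML engines can consume directly.
arrow_table = table.scan().to_arrow()
print(arrow_table.num_rows, "rows read from sales.orders")
```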
Elastic’s new plugin to accelerate open-source vector search index build times and queries on Nvidia GPUs; integrates with Nvidia validated designs to enable on-premises AI agents
Elastic announced that Elasticsearch integrates with the new NVIDIA Enterprise AI Factory validated design to provide a recommended vector database for enterprises to build and deploy their own on-premises AI factories. Elastic will use NVIDIA cuVS to create a new Elasticsearch plugin that will accelerate vector search index build times and queries. NVIDIA Enterprise AI Factory validated designs enable Elastic customers to unlock faster, more relevant insights from their data. Elasticsearch is used throughout the industry for vector search and AI applications, with a thriving open source community. Elastic’s investment to accelerate vector search on GPUs builds upon previous longstanding efforts to optimize its vector database performance through hardware-accelerated CPU SIMD instructions, new vector data compression innovations like Better Binary Quantization and making Filtered HNSW faster. With Elasticsearch and the NVIDIA Enterprise AI Factory reference design, enterprises can unlock deeper insights and deliver more relevant, real-time information to AI agents and generative AI applications.
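For context, the snippet below shows a plain CPU-side Elasticsearch dense-vector index and kNN query using the official Python client; the cuVS-based GPU acceleration Elastic describes is not part of this example, and the endpoint, index name and toy vectors are placeholders.

```python
# Baseline (CPU) Elasticsearch dense-vector index and kNN query with the official Python
# client; endpoint, index name and toy vectors are placeholders. GPU/cuVS is not shown here.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Mapping with a dense_vector field indexed for approximate kNN (HNSW-based) search.
es.indices.create(index="docs", mappings={"properties": {
    "text": {"type": "text"},
    "embedding": {"type": "dense_vector", "dims": 4, "index": True, "similarity": "cosine"},
}})

es.index(index="docs", refresh=True,
         document={"text": "hello world", "embedding": [0.1, 0.2, 0.3, 0.4]})

resp = es.search(index="docs", knn={
    "field": "embedding",
    "query_vector": [0.1, 0.2, 0.3, 0.4],
    "k": 5,
    "num_candidates": 50,
})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```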