Indico Data has expanded its Data Enrichment Agents, enhancing document workflows with deeper, native access to proprietary and third-party datasets. The enrichment capabilities combine Indico’s growing library of proprietary data catalogs with seamless integration to trusted third-party providers. The available data spans commercial, personal, and property domains, and includes enriched details such as business credit and risk scores, crime statistics, driver safety and motor vehicle violations, VIN and registration data, proximity-based risk, co-tenancy exposure, property characteristics, permit activity, and more. The Data Enrichment Agents are now generally available to all Indico platform customers and can be activated across workflows including submission ingestion, underwriting clearance, claims first notice of loss (FNOL), and policy servicing. By transforming unstructured submissions, claims, and policy documents into structured, decision-ready data, Indico enables insurers to act faster on high-value opportunities, streamline triage and intake, and improve the consistency and transparency of underwriting and claims decisions. Key benefits of Indico’s Data Enrichment Agents include: Embedded data access; Auto-fill missing data; Flexible provider ecosystem; and Proprietary data at a lower cost.
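To make the “auto-fill missing data” idea concrete, the sketch below shows the general pattern under stated assumptions: fields extracted from an unstructured submission are topped up from a third-party provider lookup. The function and field names are hypothetical illustrations, not Indico’s API.

```python
# Hypothetical sketch of the "auto-fill missing data" pattern: fields extracted from an
# unstructured submission are enriched from a provider lookup. Names are illustrative only.
from typing import Callable


def enrich_submission(extracted: dict, lookup_business_profile: Callable[[str], dict]) -> dict:
    """Fill gaps in an extracted submission using a provider lookup keyed on the insured's name."""
    enriched = dict(extracted)
    profile = lookup_business_profile(extracted["insured_name"])  # e.g., a commercial data provider
    for field in ("credit_score", "property_characteristics", "permit_activity"):
        if enriched.get(field) is None:          # only auto-fill what the document lacked
            enriched[field] = profile.get(field)
    return enriched


# Example: a stubbed provider returns a risk profile for the named insured.
demo = enrich_submission(
    {"insured_name": "Acme Logistics", "credit_score": None, "property_characteristics": None},
    lambda name: {"credit_score": 72, "property_characteristics": {"construction": "masonry"}},
)
print(demo)
```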
HighByte’s Industrial MCP Server exposes data pipelines as “tools,” complete with descriptions and parameters, enabling AI agents to securely access all connected industrial systems and make real-time or historical data requests against them
HighByte has released HighByte Intelligence Hub version 4.2 with an embedded Industrial Model Context Protocol (MCP) Server that powers Agentic AI and new LLM-assisted data contextualization via native connections to Amazon Bedrock, Azure OpenAI, Google Gemini, OpenAI, and local LLMs. HighByte Intelligence Hub provides the first Industrial MCP Server to expose data pipelines as “tools” to AI agents, including descriptions and parameters. With the Intelligence Hub, AI agents can securely access all connected industrial systems and make real-time or historical data requests against them. John Harrington, Chief Product Officer at HighByte, said “The Intelligence Hub is an Industrial DataOps solution that contextualizes and standardizes industrial data from diverse sources for diverse targets. Agentic AI clients on the factory floor are a natural extension of this approach. We’re enabling DataOps to feed AI, and AI to assist and scale DataOps.” The latest release also introduces Git integration and OpenTelemetry (OTel) support to scale and manage deployments using DevOps tooling for version control and observability. Users will also have access to new Databricks and TimescaleDB connectors and enhanced connectivity with Apache Kafka and Amazon S3 for cloud-to-edge use cases. Furthermore, the Oracle Database connection has been enhanced to support Change Data Capture (CDC), the Snowflake SQL connection now supports write operations, and the AVEVA PI System connection supports enhanced PI point metadata reads. These capabilities optimize bi-directional connectivity for the many disparate data services found in the cloud, the data center, and on the factory floor.
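The pattern of exposing a pipeline as a described, parameterized tool can be sketched with the open-source MCP Python SDK. This is a minimal illustration of the protocol’s tool model, not HighByte’s embedded implementation; the server name, tool, and data source are assumptions.

```python
# Illustrative only: HighByte's embedded Industrial MCP Server is proprietary. This sketch
# uses the open-source MCP Python SDK (`pip install mcp`) to show the general pattern of
# exposing a pipeline as a "tool" whose description and parameters are advertised to agents.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("industrial-dataops")          # hypothetical server name


@mcp.tool()
def read_line_temperatures(line_id: str, last_minutes: int = 5):
    """Return recent temperature readings for a production line (hypothetical pipeline)."""
    # In a real deployment this would run a pipeline against a historian or PLC gateway.
    return [{"line": line_id, "sensor": "T1", "value_c": 71.4}]


if __name__ == "__main__":
    mcp.run()   # serves the tool description and parameters to connected AI agents
```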
Bright Data’s AI-native browser for autonomous agents runs in the cloud, supports natural language prompts, bypasses CAPTCHAs, scripts, and bot defenses, and mimics real user behavior to access and interact with the web at scale
Bright Data, the world’s #1 web data infrastructure company for AI & BI, has launched a powerful set of AI-powered web search and discovery tools designed to give LLMs and autonomous agents frictionless access to the open web. Deep Lookup (Beta): a natural language research engine that answers complex, multi-layered questions in real time with structured insight. It lets users query across petabytes of structured and unstructured web data simultaneously, surfacing high-confidence answers without code. Unlike general-purpose LLMs that hallucinate or struggle with context, Deep Lookup delivers verified, web-sourced insights with links to cited sources and structured outputs that can be acted on immediately, across thousands of verticals. Browser.ai: the industry’s first unblockable, AI-native browser. Designed specifically for autonomous agents, Browser.ai mimics real user behavior to access and interact with the web at scale. It runs in the cloud, supports natural language prompts, and bypasses CAPTCHAs, scripts, and bot defenses, making it ideal for scaling agent-based tasks like scraping, monitoring, and dynamic research. MCP Servers: a low-latency control layer that lets agents search, crawl, and extract live data in real time. Built to power agentic workflows, MCP is designed for developers building Retrieval-Augmented Generation (RAG) pipelines, autonomous tools, and multi-agent systems that need to act in context, not just passively read.
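A minimal sketch of the agentic workflow these tools target, assuming hypothetical tool names and a stand-in client: the agent searches the live web, fetches a chosen page in an agent-readable form, and returns a structured result. None of the identifiers below are Bright Data’s published interface.

```python
# Hypothetical sketch of an agent using MCP-style web tools. `call_tool` and the tool
# names ("search_engine", "scrape_as_markdown") are assumptions, not a vendor API.
from typing import Any


def call_tool(name: str, **params: Any) -> Any:
    """Stand-in for an MCP client invocation; canned responses keep the sketch self-contained."""
    canned = {
        "search_engine": [{"url": "https://example.com/report", "title": "Example report"}],
        "scrape_as_markdown": "# Example report\nKey figure: 42",
    }
    return canned[name]


def research(question: str) -> dict:
    hits = call_tool("search_engine", query=question, max_results=5)   # live web search
    page = call_tool("scrape_as_markdown", url=hits[0]["url"])         # fetch in agent-readable form
    return {"question": question, "source": hits[0]["url"], "content": page}


print(research("What did the latest example report find?"))
```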
Structify’s AI platform combines a visual language model with human oversight to simplify data preparation, letting users create custom datasets by specifying the data schema, selecting sources, and deploying AI agents that extract the data by navigating the web
Startup Structify is taking aim at one of the most notorious pain points in the world of artificial intelligence and data analytics: the painstaking process of data preparation. The company’s platform uses a proprietary visual language model called DoRa to automate the gathering, cleaning, and structuring of data, a process that typically consumes up to 80% of data scientists’ time. At its core, Structify allows users to create custom datasets by specifying the data schema, selecting sources, and deploying AI agents to extract that data. The platform can handle everything from SEC filings and LinkedIn profiles to news articles and specialized industry documents. What sets Structify apart is its in-house model DoRa, which navigates the web as a human would. This approach also allows Structify to support a free tier, which will help democratize access to structured data. Structify’s vision is to “commoditize data,” making it something that can be easily recreated if lost. Finance teams use it to extract information from pitch decks, construction companies turn complex geotechnical documents into readable tables, and sales teams gather real-time organizational charts for their accounts. Another differentiator is Structify’s “quadruple verification” process, which combines AI with human oversight and addresses a critical concern in AI development: ensuring accuracy. According to CEO Alex Reichenbach, it is this combination of speed and accuracy that sets the platform apart; Reichenbach claimed the company had sped up its agent “10x while cutting cost ~16x” through model optimization and infrastructure improvements.
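The workflow described above, specify a schema, select sources, and deploy extraction agents, might look roughly like the sketch below; the dataset, columns, and request shape are illustrative assumptions rather than Structify’s actual SDK.

```python
# Hypothetical sketch of a dataset request: define a schema, point at sources, and hand the
# job to web-navigating extraction agents. Field names and structure are illustrative only.
from dataclasses import dataclass, asdict


@dataclass
class Column:
    name: str
    dtype: str
    description: str


schema = [
    Column("company", "string", "Legal name as it appears in the filing"),
    Column("filing_type", "string", "e.g. 10-K, 8-K"),
    Column("filing_date", "date", "Date the document was filed with the SEC"),
]

sources = ["https://www.sec.gov/cgi-bin/browse-edgar"]   # example public source

# A real run would hand `schema` and `sources` to extraction agents and collect
# verified rows; here we only show the shape of the request.
job = {"dataset": "sec_filings", "schema": [asdict(c) for c in schema], "sources": sources}
print(job)
```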
Electron AI, the agentic assistant for data teams and analysts, generates precise, context-aware mapping logic across source systems, semantic models, and destination schemas
Reactor Data announced the production launch and immediate availability of Electron AI – the embedded conversational AI assistant designed to help data teams and analysts create powerful data mappings, transformations and pipelines. Electron acts as an intelligent co-pilot, enabling data analysts and teams to generate precise, context-aware mapping logic across source systems, semantic models, and destination schemas – all through simple conversational interactions. It is a natural-language assistant familiar with all aspects of a company’s data pipelines, including sources, source schemas, multi-step transformations (including complex data combinations), output configurations, and destination tables. Whether a business is normalizing product titles, mapping transactional IDs, or aligning common fields across disparate sources, Electron helps brands go from request to result faster, with less friction and fewer mistakes. Key capabilities of Reactor Data’s Electron AI: Conversational and Multi-language Coding: ask Electron to write complex data transformations, and it returns both Python code and simple natural language expressions. Pipeline and Context-Aware: Electron is tightly integrated with Reactor’s modular pipeline tools for source, semantic, and destination processing; it understands source and destination schemas and rules to offer precise, pre-validated mappings. Iterative Authoring: Electron translates natural language into mapping expressions with null handling, coalescing, formatting, and refinement based on feedback.
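As an illustration of the kind of mapping logic described (not output captured from Electron itself), the following sketch normalizes a product title with coalescing and null handling, the sort of expression a request like “normalize product titles across vendors” might produce.

```python
# A sketch of the kind of mapping logic described above. It coalesces vendor and internal
# titles, handles nulls and empty strings, and normalizes spacing and case.
def map_product_title(record: dict) -> str | None:
    """Coalesce vendor and internal titles, trim whitespace, and title-case the result."""
    raw = record.get("vendor_title") or record.get("internal_title")   # coalesce across sources
    if raw is None or not raw.strip():                                  # null / empty handling
        return None
    return " ".join(raw.split()).title()                                # normalize spacing and case


assert map_product_title({"vendor_title": None, "internal_title": "  acme WIDGET  pro "}) == "Acme Widget Pro"
```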
FICO’s new Marketplace connects enterprises to providers of data, AI/ML models, optimization tools, and decision rulesets, cutting the time required to access, validate and integrate new data sources
FICO has introduced a digital hub designed to connect organizations with data and analytics providers. The new Marketplace offers easy access to data, artificial intelligence (AI) and machine learning models, optimization tools, and decision rulesets that deliver enterprise business outcomes from AI. With FICO Marketplace, FICO® Platform users can fast-track their journey to becoming an intelligent enterprise, because they will be able to: Unlock Value from Data Faster: by experimenting with new data sources and decision assets to determine predictive power and business value. Users can expect to cut the time required to access, validate and integrate new data sources by half. Leverage Decision Agents Across Multiple Use Cases, Improving Collaboration: the Marketplace’s open API architecture allows any decision asset, data service, analytics model, software agent or third-party solution to address a wide range of use cases, including customer management, fraud, originations, and marketing. The reusability of decision agents across multiple departments breaks down silos and improves collaboration. Drive Better Customer Experiences: by enabling a holistic view of each individual customer, as well as building innovative new intelligent solutions and analytic capabilities that come from industry collaboration. “FICO Marketplace will facilitate the type of collaboration across the industry that drives the next generation of intelligent solutions,” said Nikhil Behl, president, Software, FICO.
TensorStax’s data engineering AI agents can design and deploy data pipelines through structured and predictable orchestration using a deterministic control layer that sits between the LLM and the data stack
Startup TensorStax is applying AI agents that can perform tasks on behalf of users with minimal intervention to the challenge of data engineering. Because LLM-driven agents are often unreliable in this domain, the startup has created a purpose-built abstraction layer to ensure its AI agents can design, build and deploy data pipelines with a high degree of reliability. Its proprietary LLM Compiler acts as a deterministic control layer that sits between the LLM and the data stack to facilitate structured and predictable orchestration across complex data systems. Among other things, it validates syntax, normalizes tool interfaces and resolves dependencies ahead of time. According to internal testing, this boosts the success rates of its AI agents from 40%-50% to as high as 90% across a variety of data engineering tasks. The result is far fewer broken data pipelines, giving teams the confidence to offload various complicated engineering tasks to AI agents. TensorStax says its AI agents can help to mitigate the operational complexities involved in data engineering, freeing up engineers to focus on more complex and creative tasks, such as modeling business logic, designing scalable architectures and enhancing data quality. By integrating directly within each customer’s existing data stack, TensorStax makes it possible to introduce AI agent data engineers without disrupting workflows or rebuilding data infrastructure. The agents are designed to work with dozens of common data engineering tools and respond to simple commands. Constellation Research Inc. analyst Michael Ni said TensorStax appears to be architecturally different from others, with its LLM compiler, its integration with existing tools and its no-customer-data-touch approach.
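A deterministic control layer of this kind can be sketched in a few lines. The example below is a simplified stand-in for the LLM Compiler idea, validating that each LLM-proposed step parses and that dependencies resolve before anything runs; the plan format is an assumption.

```python
# A minimal sketch of the control-layer idea (not TensorStax's LLM Compiler): an LLM proposes
# pipeline steps, and a deterministic layer validates syntax and checks dependencies before
# anything is executed.
import ast
from graphlib import TopologicalSorter


def validate_step(step: dict) -> None:
    if step["language"] == "python":
        ast.parse(step["code"])                      # raises SyntaxError on invalid code
    # a real compiler would also parse SQL, dbt models, Spark jobs, etc.


def compile_plan(steps: list[dict]) -> list[str]:
    """Return an execution order only if every step parses and dependencies resolve."""
    for step in steps:
        validate_step(step)
    graph = {s["name"]: set(s.get("depends_on", [])) for s in steps}
    return list(TopologicalSorter(graph).static_order())   # raises CycleError on circular plans


plan = [
    {"name": "load_orders", "language": "python", "code": "rows = read('orders')"},
    {"name": "daily_totals", "language": "python", "code": "agg(rows)", "depends_on": ["load_orders"]},
]
print(compile_plan(plan))   # ['load_orders', 'daily_totals']
```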
Kong’s platform enables enterprises to securely manage both their APIs and Apache Kafka-powered real-time data streams, regulating how workloads interact with data by encrypting records and requiring applications to authenticate
Kong introduced Kong Event Gateway, a new tool for managing real-time data streams powered by Apache Kafka. According to the company, customers can now use Konnect to manage both their APIs and Kafka-powered data streams. That removes the need to use two separate sets of management tools, which can ease day-to-day maintenance tasks. Kafka makes it possible to create data streams, called topics, that connect to an application, detect when the application generates a new record, and collect that record. Other workloads can subscribe to a topic to receive the records it collects. Kong Event Gateway acts as an intermediary between an application and the Kafka data streams to which it subscribes. Before data reaches the application, it goes through the Kong Event Gateway, and because information is routed through the gateway, Kong can regulate how workloads access it. Using Kong Event Gateway, a company can require that applications perform authentication before accessing a Kafka data stream. The tool encrypts the records that are sent over the data stream to prevent unauthorized access. According to Kong, it doubles as an observability tool that enables administrators to monitor how workloads interact with the information transmitted by Kafka. Kafka transmits data using a custom network protocol. According to Kong, Kong Event Gateway allows applications to access data via standard HTTPS APIs instead of the custom protocol. That eases development by sparing software teams the need to familiarize themselves with Kafka’s information streaming mechanism. Kong Event Gateway allows multiple workloads to share the same data stream without the need for copies, and administrators can create separate data access permissions for each workload. Another feature, Virtual Clusters, allows multiple software teams to share the same Kafka cluster without gaining access to one another’s data.
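The HTTPS access pattern might look something like the sketch below for a consuming application; the gateway host, path, and token are placeholders, and Kong’s actual endpoints and authentication flow may differ.

```python
# Hypothetical sketch of consuming Kafka records over plain HTTPS through an event gateway,
# instead of speaking Kafka's native protocol. Host, path, and token are placeholders.
import requests

GATEWAY = "https://events.example.com"            # placeholder gateway address
TOPIC = "orders"

resp = requests.get(
    f"{GATEWAY}/topics/{TOPIC}/records",
    headers={"Authorization": "Bearer <token>"},  # gateway enforces authentication
    params={"limit": 10},
    timeout=10,
)
resp.raise_for_status()
for record in resp.json():                        # records were protected in transit by the gateway
    print(record)
```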
Alation’s Data Products Builder Agent helps data teams turn messy, raw data into trusted, reusable data products for AI
Alation has launched its Data Products Builder Agent, an AI-powered tool that helps data teams turn messy, raw data into trusted, reusable data products. It removes busywork for data teams, enabling them to deliver the data products that business users and AI need. The Data Products Builder Agent transforms raw data into productized, AI-ready assets that are easy to find and use in the Alation Data Products Marketplace. By automating the data product lifecycle, the agent streamlines curation, packaging, and publishing. Based on user prompting, the agent identifies the right data to answer the user’s business question. It then auto-generates and documents the data product design specification and ensures data products meet marketplace and governance standards, all while keeping a human in the loop. This enables data teams to focus on strategic work while empowering the business with trusted, ready-to-use data products. The Alation data product definitions build on the Open Data Products Specification (ODPS), a YAML-based standard that enables open, portable, and extensible metadata for data products. Key capabilities of the Alation Data Products Builder Agent include: Effortless data product creation; Built-in trust; and Business-aligned relevance.
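A data product descriptor in this spirit can be pictured as a small YAML document. The sketch below builds a simplified, ODPS-style descriptor in Python, with field names that approximate rather than reproduce the official specification; the agent described above would generate and validate the real document automatically.

```python
# Illustrative only: a minimal, ODPS-style data product descriptor. Field names are a
# simplified approximation of the Open Data Products Specification, not its full schema.
import yaml   # pip install pyyaml

product = {
    "schema": "https://opendataproducts.org",     # placeholder reference to the spec
    "product": {
        "name": "Monthly Churn by Segment",
        "description": "Curated churn metrics for marketing and finance consumers",
        "owner": "data-products@company.example",
        "outputPorts": [{"type": "table", "location": "analytics.churn_by_segment"}],
        "sla": {"freshness": "daily"},
    },
}

print(yaml.safe_dump(product, sort_keys=False))
```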
Archive360’s cloud-native archiving platform provides governed data for AI and analytics by simplifying the process of connecting to and ingesting data from any enterprise application and offering full access controls
Archive360 has released the first modern archive platform that provides governed data for AI and analytics. The Archive360 Platform enables enterprises and government agencies to unlock the full potential of their archival assets with extensive data governance, security and compliance capabilities, leaving data primed for intelligent insights. The Archive360 Modern Archiving Platform enables organizations to control how AI and analytics consume information from the archive, and simplifies the process of connecting to and ingesting data from any application, so organizations can start realizing value faster. This capability reduces the risk that AI inadvertently exposes regulated data or company trade secrets, or simply ingests faulty and irrelevant data. The Archive360 AI & Data Governance Platform is deployed as a cloud-native, class-based architecture. It provides each customer with a dedicated SaaS environment that completely segregates data and lets the customer retain administrative access, entitlements, and the ability to integrate with their own security protocols. It allows organizations to: Shift from application-centric to data-centric archiving; Protect, classify and retire enterprise data; and Activate data for AI.
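The kind of governance gate described, deciding what archived content may be released to an AI or analytics consumer, can be pictured with a short sketch; the classifications and policy below are illustrative assumptions, not Archive360’s actual model.

```python
# Hypothetical sketch of a governance gate: archived records pass a classification and
# entitlement check before being released to an AI or analytics consumer.
BLOCKED_CLASSIFICATIONS = {"regulated", "trade_secret"}


def release_for_ai(records: list[dict], consumer_entitlements: set[str]) -> list[dict]:
    """Return only records the consumer is entitled to and that are safe to expose."""
    released = []
    for rec in records:
        if rec["classification"] in BLOCKED_CLASSIFICATIONS:
            continue                                        # never leaves the archive for AI use
        if rec["classification"] in consumer_entitlements:
            released.append(rec)
    return released


archive = [
    {"id": 1, "classification": "public"},
    {"id": 2, "classification": "regulated"},
    {"id": 3, "classification": "internal"},
]
print(release_for_ai(archive, {"public", "internal"}))   # records 1 and 3 only
```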