Striim has launched Sherlock AI and Sentinel AI – two governance AI agents powered by Snowflake Cortex AI – that help organizations detect, tag, and protect sensitive upstream data in transit, minimizing exposure risks, preventing compliance penalties, and safeguarding corporate reputation through continuous, near real-time monitoring. “The new Sherlock AI identifies blind spots by discovering sensitive data prior to data sharing or movement,” said Alok Pareek, Co-Founder and Executive Vice President of Engineering and Products at Striim. “Since data doesn’t stay in one place, Striim’s Sentinel AI agent complements Sherlock by protecting sensitive information in real time as it moves through enterprise data pipelines. This upstream application of AI-driven intelligence not only helps prevent sensitive data leaks but also enables auditing of the detection measures in place, significantly lowering costs and saving time for both organizations and regulators.” Sherlock AI delivers transparency by pinpointing sensitive information within datasets before they’re shared or transferred through data pipelines across on-premises or cloud-based enterprise data repositories, third-party databases, and SaaS environments. This helps organizations assess potential risks upstream and proactively apply appropriate governance measures.
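To make the idea of in-flight protection concrete, here is a minimal conceptual sketch of detecting, tagging, and masking sensitive fields as events pass through a pipeline. It is not Striim’s API or Sentinel AI’s implementation; the regex rules and field names are assumptions for illustration, and a real agent would use AI-driven classification rather than two patterns.

```python
# Conceptual sketch only -- not Striim's API. It illustrates the general idea of
# tagging and masking sensitive fields in events while they are in transit.
import re

# Hypothetical detection rules; a production agent would use ML/LLM classification.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def tag_and_mask(event: dict) -> dict:
    """Return a copy of the event with sensitive values masked and tagged."""
    out = {}
    for key, value in event.items():
        hits = [name for name, rx in PATTERNS.items() if rx.search(str(value))]
        if hits:
            out[key] = {"value": "***REDACTED***", "tags": hits}
        else:
            out[key] = value
    return out

if __name__ == "__main__":
    print(tag_and_mask({"user": "jane", "contact": "jane@example.com"}))
```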
Bigeye introduces the first platform for governing AI data usage, combining enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data with observability and enforcement
Bigeye announced the industry’s first AI Trust Platform for agent data usage, defining a new technology category built for enterprise AI trust and governance. Bigeye is enabling safe adoption of agentic AI by developing a comprehensive platform that supports the governance, observability, and enforcement of AI systems interacting with enterprise data. Without visibility into agent behavior, lineage between data sources and outputs, or controls over sensitive data access, organizations are left exposed to compliance risks, bad decisions, and reputational damage. Delivering on this framework requires a new approach to managing and securing AI agent data. An AI Trust Platform meets these requirements and includes three foundational capabilities: Governance: enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data. Observability: real-time lineage, classification, and anomaly detection that verify the quality, security, and compliance posture of data before it powers critical AI decisions. Enforcement: monitoring, guiding, or steering every agent’s data access based on enterprise policy. Bigeye’s AI Trust Platform brings these capabilities together to give enterprises complete control over how agents access and act on data. The first version will be released in late 2025.
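Bigeye has not published an API for the platform (the first version ships in late 2025), so the following is purely a hypothetical sketch of what an enforceable agent data-access policy might look like: governance expressed as declarative rules, and enforcement as a check applied at access time. All names here are invented for illustration.

```python
# Hypothetical illustration only -- not Bigeye's API. Governance is modeled as a
# declarative policy; enforcement is a check performed when an agent requests data.
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_classifications: set  # e.g. {"certified"}
    blocked_classifications: set  # e.g. {"pii", "restricted"}

def enforce(policy: Policy, agent: str, dataset_tags: set) -> bool:
    """Return True if the agent may read a dataset carrying these tags."""
    if dataset_tags & policy.blocked_classifications:
        print(f"deny: {agent} blocked from {sorted(dataset_tags)}")
        return False
    if not dataset_tags & policy.allowed_classifications:
        print(f"deny: {agent} requires certified data")
        return False
    return True

policy = Policy(allowed_classifications={"certified"}, blocked_classifications={"pii"})
enforce(policy, "pricing-agent", {"certified", "revenue"})  # allowed
enforce(policy, "pricing-agent", {"certified", "pii"})      # denied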
RavenDB’s new feature allows developers to run GenAI tasks directly inside the database and use any LLM on their terms without requiring any middleware, external orchestration, or third-party services
RavenDB, a high-performance NoSQL document database trusted by developers and enterprises worldwide, has launched its new feature, bringing native GenAI capabilities directly into its core database engine, eliminating the need for middleware, external orchestration, or costly third-party services. RavenDB’s new feature supports any LLM (open-source or commercial), allowing teams to run GenAI tasks directly inside the database. Moving from prototype to production traditionally requires complex data pipelines, vendor-specific APIs, external services, and significant engineering effort. With this feature, RavenDB removes those barriers and bridges the gap between experimentation and production, giving developers complete control over cost, performance, and compliance. The result is a seamless transition from idea to implementation, making the leap to production almost as effortless as prototyping. What sets RavenDB apart is its fully integrated, flexible approach: developers can use any LLM on their terms. It’s optimized for cost and performance with smarter caching and fewer API calls, and includes enterprise-ready capabilities such as governance, monitoring, and built-in security, designed to meet the demands of modern, intelligent applications. By collapsing multiple infrastructure layers into a single intelligent operational database, RavenDB’s native GenAI capabilities significantly upgrade its data layer. This enhancement accelerates innovation by removing complexity for engineering leaders. Whether classifying documents, summarizing customer interactions, or automating workflows, teams can build powerful features directly from the data they already manage, with no dedicated AI team required.
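As a rough illustration of the shape of an in-database GenAI task described above, the sketch below defines a task over a document collection: a prompt template, a chosen model, and a field to write results back to. This is not RavenDB’s actual client API; the class and field names are hypothetical and stand in for whatever definition the database engine registers and executes itself.

```python
# Hypothetical sketch -- not RavenDB's client API. It only illustrates the shape of
# an in-database GenAI task: collection + prompt + model, with results written back
# to the documents, and no external orchestration layer in between.
from dataclasses import dataclass

@dataclass
class GenAiTask:                 # hypothetical name
    collection: str              # documents the task runs over
    prompt: str                  # template applied to each document
    model: str                   # any open-source or commercial LLM
    output_field: str            # where the result is stored on the document

support_summaries = GenAiTask(
    collection="SupportTickets",
    prompt="Summarize this ticket in one sentence: {{body}}",
    model="gpt-4o-mini",         # could equally be a locally hosted open-source model
    output_field="summary",
)
# Conceptually, the database registers a definition like this and applies the LLM to
# documents as they are created or changed, so no middleware is required.
```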
Palantir to embed CData’s open, standardized SQL interface and metadata layer across its analytics and AI platforms, enabling connections to data sources without the need to learn individual APIs or formats
CData Software announced an expanded partnership with Palantir Technologies that integrates CData’s connectivity technology deeply into Palantir’s analytics platforms. The deal lets Palantir customers connect to data sources ranging from traditional databases to enterprise applications and development platforms without the need to learn individual application programming interfaces (APIs) or formats. CData’s technology provides a standardized SQL interface and consistent metadata layer across all connections. Palantir is licensing the technology across its Foundry, Gotham and Artificial Intelligence Platform (AIP) offerings. Foundry is a data integration and analytics platform for commercial and industrial use. Gotham is primarily used by government and defense agencies. AIP is used to build and manage AI applications. CData says its approach is based on two architectural pillars: open standards and uniform behavior. Each of its connectors operates like a virtual database, translating SQL into native API calls under the hood. This abstraction not only simplifies development but also improves reliability and performance across platforms, according to CData’s Sharma. The partnership will also extend Palantir’s AI ambitions. Using CData’s technology in its AIP allows AI models to query structured and unstructured data sources in real time using SQL. “We’re powering the data layer of their agent infrastructure,” Sharma said. “AI needs access to trusted, secure data, and that’s what we provide.”
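The “SQL over APIs” pattern is easiest to see in code. Below is a minimal sketch using pyodbc against a CData ODBC driver; the DSN name, table, and columns are assumptions for illustration. The point is that an API-backed source is queried like a relational database, with the connector translating standard SQL into the source’s native API calls.

```python
# Minimal sketch of querying an API-backed SaaS source through a standardized SQL
# interface. The DSN name below is hypothetical; it assumes a CData ODBC driver has
# been configured for the source.
import pyodbc

conn = pyodbc.connect("DSN=CData Salesforce Source")
cursor = conn.cursor()

# Plain SQL; under the hood the connector issues the source's native API calls.
cursor.execute(
    "SELECT Name, AnnualRevenue FROM Account WHERE AnnualRevenue > ?", 1_000_000
)
for name, revenue in cursor.fetchall():
    print(name, revenue)

conn.close()
```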
Coralogix’s AI agent simplifies access to deep observability data by translating natural language queries into detailed, system-level answers via a conversational platform
Data analytics platform Coralogix nearly doubled its valuation to over $1 billion in its latest funding round, co-founder and CEO Ariel Assaraf said, as AI-driven enterprise offerings continue to pique investor interest. Coralogix raised $115 million in a round led by California-based venture growth firm NewView Capital. The fundraise comes three years after Coralogix’s previous external funding in 2022, when it raised $142 million. Valuations have faced downward pressure since then, as investors continue to sit on dry powder amid elevated interest rates and geopolitical tensions. Coralogix’s revenue has increased sevenfold since 2022, Assaraf said. Coralogix also unveiled its new AI agent “olly,” aiming to simplify data monitoring via a conversational platform. “Olly makes deep observability data accessible to every team. Whether you ask, ‘What is wrong with the payment flow?’ or ‘Which service is frustrating our users the most?’ Olly translates those questions into detailed, system-level answers,” the company wrote on its blog.
Workato’s AI super app helps employees search across every system and data source, receive personalized intelligent assistance with full enterprise context, take secure, role-based multi-step actions, and launch workflows from a single interface
Workato has launched Workato GO, an AI super app designed to help employees search, act, and orchestrate work across all systems, apps, and data sources. Workato GO connects to all business tools, providing a single starting point for intelligent, secure, and scalable work. It combines three essential capabilities into a single experience: searching across every system and data source, receiving intelligent assistance tailored to each user, and taking secure, role-based actions. Workato GO is built on Workato ONE, the enterprise platform that powers integration, automation, and trusted agent execution. It enables users to complete workflows, kick off processes, and orchestrate results directly from a single interface. With GO, customers will have access to: Enterprise Search: a single place to search across all of the company’s applications, documents, data, and business processes, so employees can find exactly what they need, fast. Employee Assistant: the personalized starting point for every employee, combining intelligent assistance with full enterprise context and the ability to take action without switching between tools. Deep Action™: the move from knowledge to execution, taking multi-step actions across multiple systems, triggering automations, and launching workflows, including human-in-the-loop processes, right from the search results or assistant. Agents at the Core: a flexible, open agent architecture that integrates Workato and third-party agents, enabling automation and orchestration across all systems. Extensibility: the ability to easily extend and customize capabilities with recipes and agents, adapting Workato GO to unique business needs.
TELUS Digital’s off-the-shelf STEM datasets, including coding and reasoning data, are curated by a diverse pool of experts to offer enterprises access to high-quality, AI-ready data that has been cleaned, labeled and formatted
A new TELUS Digital survey of 1,000 U.S. adults found that 87% of respondents (up from 75% in 2023) believe companies should be transparent about how they source data for GenAI models. Additionally, 65% believe that the exclusion of high-quality, verified content, such as information from trusted media sources (e.g. New York Times, Reuters, Bloomberg), can lead to inaccurate and/or biased large language model (LLM) responses. “As AI systems become more specialized and embedded in high-stakes use cases, the quality of the datasets used to optimize outputs is emerging as a key differentiator for enterprises between average performance and having the potential to drive real-world impacts,” said Amith Nair, Global VP and General Manager, Data & AI Solutions, TELUS Digital. “We’re well past the era where general crowdsourced or internet data can meet today’s enterprises’ more complex and specialized use cases. This is reflected in the shift in our clients’ requests from ‘wisdom of the crowd’ datasets to ‘wisdom of the experts’. Experts and industry professionals are helping curate such datasets to ensure they are technically sound, contextually relevant and responsibly built. In high-stakes domains like healthcare or finance, even a single mislabelled data point can distort model behavior in ways that are difficult to detect and costly to correct.” In response to evolving industry dynamics, TELUS Digital Experience has launched 13 off-the-shelf STEM (science, technology, engineering and mathematics) datasets, including coding and reasoning data that is critical for LLM advancements. The datasets have been expertly curated by a diverse pool of contributors, including Ph.D. researchers, professors, graduate students and working professionals from around the world. This gives enterprises access to high-quality data that has been cleaned, labeled and formatted for immediate integration into AI training workflows.
Treasure Data has released its MCP Server, which allows AI assistants like Claude, GitHub Copilot Chat, and Windsurf to interact directly with its intelligent CDP
Treasure Data, the Intelligent Customer Data Platform (CDP) built for enterprise scale and powered by AI, has released its MCP Server, a new open-source connector that allows AI assistants like Claude, GitHub Copilot Chat, and Windsurf to interact directly with your Treasure Data environment. Powered by the open Model Context Protocol (MCP), this solution gives data teams a new superpower: the ability to explore and analyze customer data in an easy and effective way, using plain language and a conversation window. With the Treasure Data MCP Server, teams can query parent segments and segments, explore tables, and analyze data using natural language, making data insights more accessible than ever. The MCP Server acts as a local bridge between your LLM-enabled tools and the Treasure Data platform. Once configured, it allows AI agents to securely interact with your CDP through structured tool calls. Instead of spending an hour writing multi-step SQL and debugging joins, the AI does it for you, writing, refining, and executing the query directly within Treasure Data. The MCP Server handles the permissions, safely limits results, and ensures your API keys and environment variables are managed securely. For most enterprises, the biggest barrier to using AI effectively isn’t the model, it’s the data. If an LLM can’t access high-quality, governed data, it can’t generate useful insights. The Treasure Data MCP Server removes that barrier. The AI accesses the CDP directly, securely and intelligently, so teams can finally start having productive conversations with their customer data.
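To show how an assistant reaches data through MCP, here is a minimal sketch of an MCP server exposing one tool, written with the official `mcp` Python SDK. It is not Treasure Data’s implementation: the tool name and the stubbed catalog are invented, and a real server would run governed queries against the CDP while enforcing permissions and result limits.

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (FastMCP).
# The tool below is a stub for illustration; it stands in for a governed
# segment/query tool that a CDP-backed server would expose.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("cdp-demo")

@mcp.tool()
def list_segments(parent: str) -> list[str]:
    """Return the segments under a parent segment (stubbed data for illustration)."""
    fake_catalog = {"customers": ["high_value", "churn_risk", "new_signups"]}
    return fake_catalog.get(parent, [])

if __name__ == "__main__":
    # Runs over stdio so a desktop assistant (e.g. Claude) can call the tool locally.
    mcp.run()
```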
Elliptic’s platform offers enterprises the ability to ingest data streams directly into internal data lakes, customized workflows and AI models via subscription, enabling them to directly query data
Elliptic has announced an industry first, offering direct access to its market-leading datasets and intelligence, ‘Elliptic Data Fabric,’ via subscription. ‘Elliptic Data Fabric’ offers customers the ability to ingest and subscribe to data streams directly, enabling access to Elliptic’s data and intelligence in the format, schema, and delivery method that best meets their specific needs. Elliptic’s data and intelligence feeds the customer’s internal data lakes, customized workflows and AI models — accelerating decision-making, modernizing connectivity and letting enterprises and agencies directly query the data, run internal analytics and compose leaner data workflows. Elliptic Data Fabric has use cases for multiple industries. Elliptic Blocklist is a direct plug-in data and intelligence subscription service used by exchanges, stablecoin issuers, and payments providers. The Blocklist is regularly updated with the latest intelligence. This enables customers to directly query data to either permit or block withdrawals to unhosted wallets without adding friction to the transaction flow. Elliptic Counterparty Risk is being used by banks and financial institutions to help them easily assess indirect digital asset risk stemming from their customers by enriching their fiat transaction screening workflows with custom intelligence on thousands of VASPs. By seamlessly integrating Elliptic’s VASP data into internal screening workflows, organizations uncover hidden risks by detecting when customers interact with high-risk or unregistered crypto platforms. Government agencies are already leveraging Elliptic Data Fabric to access operation-ready blockchain data and intelligence, seamlessly integrated into their environments, mission-specific use cases, and analyst workflows.
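As a rough illustration of the Blocklist use case, the sketch below shows why ingesting the feed matters: once the data sits in an internal store, a withdrawal check becomes a local lookup rather than an external call in the transaction path. The field names and feed format are assumptions, not Elliptic’s actual schema.

```python
# Hypothetical illustration of querying an ingested blocklist feed locally.
# Addresses and fields are invented; this is not Elliptic's data format.
blocklist_feed = [
    {"address": "bc1qexampleaaaa", "reason": "sanctioned"},
    {"address": "bc1qexamplebbbb", "reason": "stolen_funds"},
]

# Ingest into a fast lookup structure (stand-in for a data lake table or cache).
blocked = {row["address"]: row["reason"] for row in blocklist_feed}

def permit_withdrawal(address: str) -> bool:
    """Allow the withdrawal unless the destination address appears on the blocklist."""
    reason = blocked.get(address)
    if reason:
        print(f"blocked withdrawal to {address}: {reason}")
        return False
    return True

permit_withdrawal("bc1qexampleaaaa")   # blocked
permit_withdrawal("bc1qcleanaddress")  # permitted
```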
Snowflake’s hybrid search capability combines semantic and keyword search for retrieval over unstructured data, with Cortex agents orchestrating across both unstructured text and structured data sets
Snowflake introduced several platform updates designed to expand interoperability, improve performance and reduce operational cost. The focus of these enhancements was artificial intelligence and how Snowflake intends to help customers embrace the agentic revolution. A key focus for Snowflake has been providing tools for unstructured data. The company unveiled new additions in June to its Cortex portfolio, which expanded capabilities to query data across diverse formats, including unstructured image, audio or long-form text files, according to Christian Kleinerman, executive vice president of product at Snowflake. “Cortex Analyst is our ability to do text-to-structured data. Cortex Search is our hybrid search capability that does semantic search and keyword search to do retrieval on unstructured data and to orchestrate it amongst those two multiple data sets of … Cortex agents.” Snowflake is also seeking to improve the ease and speed of structured and unstructured data integration. The company announced Openflow, a managed service that’s designed to reduce the time and effort spent wrangling ingest pipelines while supporting batch and streaming workloads. Openflow can be implemented in multiple environments, according to Kleinerman. “Openflow has two deployment models. One is typical Snowflake; it’s Snowflake-managed resources. It’s in the cloud, but there’s also BYOC, bring your own cloud, which can be deployed in the customer’s virtual private cloud.” Snowflake’s role in the enterprise is evolving from providing data management tools to serving as a platform on which other businesses can be built. Kleinerman cited Capital One Financial Corp. as an example, which recently announced two new features, built on the Snowflake platform, for its enterprise B2B Capital One Software division.
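For a sense of how hybrid retrieval is invoked in practice, here is a minimal sketch that queries a Cortex Search service from Python through the standard Snowflake connector and the SEARCH_PREVIEW SQL function. The connection details, service name, and columns are assumptions for illustration; confirm the function and its arguments against current Snowflake documentation for your account before relying on it.

```python
# Sketch: calling a Cortex Search service via SQL from Python. Service name,
# columns, and credentials below are placeholders, not a working configuration.
import json
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",   # placeholder credentials
    warehouse="my_wh", database="my_db", schema="my_schema",
)

payload = json.dumps({
    "query": "refund policy for damaged items",   # natural-language query
    "columns": ["title", "chunk"],                # columns to return
    "limit": 5,
})

cur = conn.cursor()
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.SEARCH_PREVIEW('my_db.my_schema.support_docs_search', %s)",
    (payload,),
)
print(cur.fetchone()[0])   # JSON string of hybrid (semantic + keyword) results
conn.close()
```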