Regula, a global identity verification solution developer, has added personal data masking functionality to its Regula Forensic Studio (RFS) software. The feature lets document experts blur or hide personally identifiable information (PII) with a single click, directly within forensic workflows, so sensitive data is handled responsibly and global privacy requirements are met without disrupting examinations. With this addition, the Regula ecosystem, from real-time ID verification to in-depth forensic analysis, now supports robust privacy controls natively. Beyond personal data masking, the latest RFS release includes 40+ updates focused on speed, customization, and forensic precision:
- New analysis tools: Yellow dot analysis for tracing document origins and detecting unauthorized duplicates.
- Smarter imaging: Per-light-source gamma correction and full-spectrum HDR imaging (not just UV), improving clarity across all materials (see the sketch after this list).
- Streamlined collaboration: Video screen capture and camera recording capabilities support team training and case reviews.
- Faster insights: Hyperspectral imaging is now 20% faster without compromising detail.
- Improved digital zoom: Expanded up to 16x for detailed inspections.
- Visual reporting: Ability to generate composite images under varied lighting, ideal for expert reports or courtroom presentations.
- Integrated workflows: Automated document searches in the Information Reference System (IRS) after MRZ reading to reduce manual steps.
- Flexible video modes: Three options for different examination tasks: real-time viewing without frame skipping, high-resolution capture, and an expanded A4 field-of-view mode.
- Wider OS compatibility: Now supports Rocky and Debian Linux distributions, expanding deployment options.
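Per-light-source gamma correction is a standard imaging technique: each illumination source (white, UV, IR, and so on) gets its own tone curve so detail survives in dim or washed-out captures. The sketch below shows the general idea in Python with illustrative gamma values; it is not Regula's implementation, whose parameters are not public.

```python
# A minimal sketch of per-light-source gamma correction, assuming 8-bit
# grayscale captures and illustrative gamma values -- not Regula's actual
# implementation.
import numpy as np

# Hypothetical gamma values chosen per illumination source.
GAMMA_BY_SOURCE = {"white": 1.0, "uv": 1.8, "ir": 1.4, "coaxial": 1.2}

def gamma_correct(image: np.ndarray, light_source: str) -> np.ndarray:
    """Apply the gamma curve configured for the given light source."""
    gamma = GAMMA_BY_SOURCE[light_source]
    normalized = image.astype(np.float64) / 255.0
    corrected = np.power(normalized, 1.0 / gamma)  # standard gamma encode
    return (corrected * 255.0).round().astype(np.uint8)

# Example: brighten the midtones of a dim UV capture.
uv_capture = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(gamma_correct(uv_capture, "uv").mean() > uv_capture.mean())  # True
```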
Monte Carlo’s low-code observability solution lets users apply custom prompts and AI-powered checks to unstructured fields, letting them monitor the quality metrics relevant to their unique use case
Monte Carlo has launched unstructured data monitoring, a new capability that enables organizations to ensure trust in their unstructured data assets across documents, chat logs, images, and more, all without writing a single line of SQL. With its latest release, Monte Carlo becomes the first data + AI observability platform to provide AI-powered support for monitoring both structured and unstructured data types. Users can now apply customizable, AI-powered checks to unstructured fields, monitoring for the quality metrics relevant to their unique use case; going beyond standard quality metrics, customers can use custom prompts and classifications to make monitoring truly meaningful. Monte Carlo continues its strategic partnership with Snowflake, the AI Data Cloud company, to support Snowflake Cortex Agents, Snowflake’s AI-powered agents that orchestrate across structured and unstructured data to provide more reliable AI-driven decisions. In addition, Monte Carlo is extending its partnership with Databricks to include observability for Databricks AI/BI, a compound AI system built into Databricks’ platform that generates rich insights from across the data + AI lifecycle, including ETL pipelines, lineage, and other queries. By supporting Snowflake Cortex Agents and Databricks AI/BI, Monte Carlo helps data teams ensure their foundational data is reliable and trustworthy enough to support real-time business insights driven by AI.
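Conceptually, an AI-powered check on an unstructured field runs a custom prompt against each record and aggregates the results into a metric that can be alerted on. The sketch below illustrates the pattern using the OpenAI Python client; the prompt, the `support_tickets` records, and the 5% threshold are all hypothetical, and this is not Monte Carlo's actual API.

```python
# A conceptual sketch of a custom-prompt quality check on an unstructured
# field -- illustrative only, not Monte Carlo's API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHECK_PROMPT = (
    "You are a data quality monitor. Answer YES or NO: does the following "
    "support ticket contain a clear problem description?\n\nTicket: {text}"
)

def run_check(tickets):
    """Score each unstructured record and return the failure rate."""
    failures = 0
    for ticket in tickets:  # assumes a non-empty list of dicts
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable model works here
            messages=[{"role": "user",
                       "content": CHECK_PROMPT.format(text=ticket["body"])}],
        )
        verdict = response.choices[0].message.content.strip().upper()
        if verdict.startswith("NO"):
            failures += 1
    return failures / len(tickets)

# Alert when more than 5% of tickets fail the custom quality check:
# if run_check(support_tickets) > 0.05: page_the_on_call()
```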
Snorkel AI’s platform offers programmatic tooling to create AI-ready data for building fine-grained, domain-specific evaluations of models that go beyond the generic off-the-shelf “LLM-as-a-judge” approach
Snorkel AI has announced the general availability of two new product offerings on the Snorkel AI Data Development Platform:
1) Snorkel Evaluate enables users to build specialized, fine-grained evaluation of models and agents. Powered by Snorkel AI’s unique programmatic approach to curating AI-ready data, the offering allows enterprises to scale their evaluation workflows and confidently deploy AI systems to production. Snorkel Evaluate includes programmatic tooling for benchmark dataset creation, the development of specialized evaluators, and error mode correction. These tools help users go beyond generic datasets and off-the-shelf “LLM-as-a-judge” approaches to efficiently build actionable, domain-specific evaluations (a flavor of the programmatic approach is sketched below).
2) Snorkel Expert Data-as-a-Service is a white-glove solution that delivers expert datasets for frontier AI system evaluation and tuning to enterprises. Leading LLM developers are already partnering with Snorkel AI to create datasets for advanced reasoning, agentic tool use, multi-turn user interaction, and domain-specific knowledge. The offering combines Snorkel’s network of highly trained subject matter experts with its unique programmatic technology platform for data labeling and quality control, enabling efficient delivery of specialized datasets. Snorkel Expert Data-as-a-Service equips enterprises to mix in-house expertise and data with proprietary datasets developed using outsourced expertise.
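The open-source Snorkel library's labeling functions give a flavor of what "programmatic" means here: small, domain-specific heuristics encoded as code rather than one generic judge prompt. The sketch below uses the open-source `snorkel` package; the `x.answer` field and label scheme are hypothetical, and the Snorkel Evaluate product API is not shown in the announcement.

```python
# A generic sketch in the style of the open-source Snorkel library --
# not the Snorkel Evaluate product API.
from snorkel.labeling import labeling_function

GROUNDED, NOT_GROUNDED, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_cites_policy(x):
    """Domain heuristic: an answer to an insurance-claims question
    should reference a policy document to count as grounded."""
    return GROUNDED if "policy" in x.answer.lower() else ABSTAIN

@labeling_function()
def lf_too_short(x):
    """Flag answers too short to be a useful response."""
    return NOT_GROUNDED if len(x.answer.split()) < 5 else ABSTAIN

# Many such functions are combined and denoised to produce fine-grained,
# domain-specific evaluation labels at scale, instead of relying on a
# single generic LLM-as-a-judge prompt.
```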
Snowflake’s acquisition of Crunchy Data enables it to offer enterprise-grade, fully managed and automated PostgreSQL for powering agentic AI at scale
Snowflake Inc. said it’s buying database startup Crunchy Data Solutions Inc. in a $250 million deal expected to close imminently, bolstering its agentic AI capabilities. The startup has developed a cloud-based database platform that makes it simple for businesses and government agencies to use PostgreSQL without having to manage the underlying infrastructure. Executive Vice President of Product Christian Kleinerman and Crunchy Data founder and Chief Executive Paul Laurence said the upcoming Snowflake Postgres platform will “simplify how developers build, deploy and scale agents and apps.” They were referring to AI agents, widely expected to become the next big thing after generative AI, taking actions on behalf of humans to automate complex work with minimal supervision. When it launches as a technology preview in the coming weeks, Snowflake Postgres will be an enterprise-grade PostgreSQL offering that gives developers the full power and flexibility of the original, open-source Postgres database, together with the operational standards, governance and security of Snowflake’s cloud data warehouse. According to Snowflake, it will help developers speed up the development of new AI agents and simplify the way they access data. “Access to a PostgreSQL database directly within Snowflake has the potential to be incredibly impactful for our team and our customers, as it would allow us to securely deploy our Snowflake Native App, LandingLens, into our customers’ account,” said Dan Maloney, CEO of Snowflake customer LandingAI Inc. “This integration is a key building block in making it simpler to build, deploy and run AI applications directly on the Snowflake platform.” For Snowflake, the advantage of a PostgreSQL offering is its flexibility: it can serve as the underlying database for AI agents that leverage data already in the platform.
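What Postgres compatibility buys developers is that standard drivers and plain SQL keep working. The sketch below uses the stock `psycopg2` driver against a hypothetical endpoint (Snowflake Postgres connection details are not yet public) to show an agent persisting working state with ordinary SQL.

```python
# A minimal sketch of what Postgres compatibility implies: standard
# drivers and SQL work unchanged. Connection details are hypothetical
# placeholders, not real Snowflake Postgres endpoints.
import psycopg2

conn = psycopg2.connect(
    host="example-account.postgres.example.com",  # hypothetical
    dbname="agents",
    user="app_user",
    password="...",
)
with conn, conn.cursor() as cur:
    # An agent persisting and querying its working state with plain SQL.
    cur.execute("CREATE TABLE IF NOT EXISTS agent_memory "
                "(id serial PRIMARY KEY, note text)")
    cur.execute("INSERT INTO agent_memory (note) VALUES (%s)",
                ("customer prefers email follow-up",))
    cur.execute("SELECT note FROM agent_memory ORDER BY id DESC LIMIT 1")
    print(cur.fetchone()[0])
```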
Snowflake’s data ingestion service handles the extraction of any type of data directly from source systems, then performs the transform and load processes using prebuilt or custom connectors for rapid AI deployment
Snowflake announced the general availability of Openflow, a fully managed data ingestion service that pulls any type of data from virtually any source, streamlining the process of mobilizing information for rapid AI deployment. Powered by Apache NiFi, Openflow uses prebuilt or custom connectors with Snowflake’s embedded governance and security. Whether it’s unstructured multimodal content from Box or real-time event streams, Openflow plugs in, unifies, and makes all data types readily available in Snowflake’s AI Data Cloud. While Snowflake has offered ingestion options like Snowpipe for streaming or individual connectors, Openflow delivers a “comprehensive, effortless solution for ingesting virtually all enterprise data.” Snowpipe and Snowpipe Streaming remain a key foundation for customers bringing data into Snowflake, focusing on the ‘load’ step of the ETL process. Openflow, on the other hand, handles the extraction of data directly from source systems, then performs the transform and load steps. It is also integrated with Snowflake’s new Snowpipe Streaming architecture, so data can be streamed into Snowflake as soon as it is extracted. This ultimately unlocks new use cases where AI can analyze a complete picture of enterprise data, including documents, images, and real-time events, directly within Snowflake. Once insights are extracted, they can be written back to the source system through the same connector. Openflow currently supports 200+ ready-to-use connectors and processors, and creating new connectors takes just a few minutes, speeding up time to value. Users also get security features such as role-based authorization, encryption in transit, and secrets management to keep data protected end to end. As a next step, Snowflake aims to make Openflow the backbone of real-time, intelligent data movement across distributed systems, powering the age of AI agents.
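The pipeline Openflow manages boils down to the classic extract-transform-load loop, with the load step streaming into the warehouse. The sketch below is a bare-bones conceptual rendering of that loop in plain Python, not the Openflow or NiFi API; the source documents and sink are stand-ins.

```python
# A conceptual extract-transform-load sketch of the pattern Openflow
# automates with managed connectors -- illustrative only.
import json

def extract(source_docs):
    """Pull raw records from a source system (e.g., a document store)."""
    for doc in source_docs:
        yield json.loads(doc)

def transform(record):
    """Normalize a record so structured and unstructured parts land in a
    shape the warehouse can govern and query."""
    return {
        "id": record["id"],
        "text": record.get("body", ""),
        "metadata": {k: v for k, v in record.items()
                     if k not in ("id", "body")},
    }

def load(rows, sink):
    """Stream transformed rows into the warehouse sink."""
    for row in rows:
        sink.append(row)  # stands in for a streaming insert

sink = []
raw = ['{"id": 1, "body": "contract text", "source": "box"}']
load((transform(r) for r in extract(raw)), sink)
print(sink[0]["metadata"]["source"])  # box
```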
Striim launches AI agents for near real-time data governance that continuously analyze live data streams to detect and protect sensitive information as it moves, automating encryption, masking, and compliance enforcement in real time
Striim has launched Sherlock AI and Sentinel AI, two governance AI agents powered by Snowflake Cortex AI that help organizations detect, tag, and protect sensitive upstream data in transit, minimizing exposure risks, preventing compliance penalties, and safeguarding corporate reputation through continuous, near real-time monitoring. “The new Sherlock AI identifies blind spots by discovering sensitive data prior to data sharing or movement,” said Alok Pareek, Co-Founder and Executive Vice President of Engineering and Products at Striim. “Since data doesn’t stay in one place, Striim’s Sentinel AI agent complements Sherlock by protecting sensitive information in real time as it moves through enterprise data pipelines. This upstream application of AI-driven intelligence not only helps prevent sensitive data leaks but also enables auditing of the detection measures in place, significantly lowering costs and saving time for both organizations and regulators.” Sherlock AI delivers transparency by pinpointing sensitive information within datasets before they’re shared or transferred through data pipelines across on-premises or cloud-based enterprise data repositories, third-party databases, and SaaS environments. This helps organizations assess potential risks upstream and proactively apply appropriate governance measures.
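Stripped to its essence, in-flight protection means scanning each event for sensitive patterns, tagging what was found, and masking it before the event moves on. The sketch below does this with simple regular expressions; it is a toy illustration of the concept, not Striim's Cortex AI-powered detection.

```python
# A simplified illustration of in-flight PII detection and masking --
# regex heuristics only, not Striim's actual detection logic.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_event(event: str) -> tuple[str, list[str]]:
    """Mask sensitive spans in a streaming event and tag what was found."""
    tags = []
    for name, pattern in PATTERNS.items():
        if pattern.search(event):
            tags.append(name)
            event = pattern.sub(f"<{name.upper()}_MASKED>", event)
    return event, tags

masked, tags = mask_event("Refund to jane@example.com, SSN 123-45-6789")
print(masked)  # Refund to <EMAIL_MASKED>, SSN <SSN_MASKED>
print(tags)    # ['ssn', 'email']
```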
Bigeye introduces the first platform for governing AI agent data usage, with enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data, plus observability and enforcement
Bigeye announced the industry’s first AI Trust Platform for agent data usage, defining a new technology category built for enterprise AI trust and governance. Bigeye is enabling safe adoption of agentic AI with a comprehensive platform that supports the governance, observability, and enforcement of AI systems interacting with enterprise data. Without visibility into agent behavior, lineage between data sources and outputs, or controls over sensitive data access, organizations are left exposed to compliance risks, bad decisions, and reputational damage. Delivering on this framework requires a new approach to managing and securing AI agent data. An AI Trust Platform meets these requirements through three foundational capabilities (a policy-gating sketch follows this list):
- Governance: Enforceable policies that control how AI agents access and use high-quality, sensitive, and certified data.
- Observability: Real-time lineage, classification, and anomaly detection that verify the quality, security, and compliance posture of data before it powers critical AI decisions.
- Enforcement: Monitoring, guiding, or steering every agent’s data access based on enterprise policy.
Bigeye’s AI Trust Platform brings these capabilities together to give enterprises complete control over how agents access and act on data. The first version will be released in late 2025.
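A minimal sketch of the governance-plus-enforcement idea: a policy that says which data classifications each agent role may touch, checked before any access happens. All names and the policy shape below are invented for illustration; Bigeye has not published its platform's API.

```python
# A hypothetical sketch of policy-gated agent data access -- invented
# names, not Bigeye's API.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    classification: str   # e.g., "public", "sensitive"
    certified: bool

POLICY = {
    # Which classifications each agent role may read, and whether the
    # dataset must be certified before it powers AI decisions.
    "support_agent": {"allowed": {"public"}, "require_certified": True},
    "analyst_agent": {"allowed": {"public", "sensitive"},
                      "require_certified": True},
}

def authorize(agent_role: str, dataset: Dataset) -> bool:
    """Enforce the enterprise policy before the agent touches the data."""
    rule = POLICY[agent_role]
    if dataset.classification not in rule["allowed"]:
        return False
    return dataset.certified or not rule["require_certified"]

claims = Dataset("claims_2024", classification="sensitive", certified=True)
print(authorize("support_agent", claims))  # False: not cleared for sensitive
print(authorize("analyst_agent", claims))  # True
```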
RavenDB’s new feature allows developers to run GenAI tasks directly inside the database and use any LLM on their terms without requiring any middleware, external orchestration, or third-party services
RavenDB, a high-performance NoSQL document database trusted by developers and enterprises worldwide, has launched a new feature that brings native GenAI capabilities directly into its core database engine, eliminating the need for middleware, external orchestration, or costly third-party services. The feature supports any LLM, open-source or commercial, allowing teams to run GenAI tasks directly inside the database. Moving from prototype to production traditionally requires complex data pipelines, vendor-specific APIs, external services, and significant engineering effort. With this feature, RavenDB removes those barriers and bridges the gap between experimentation and production, giving developers complete control over cost, performance, and compliance. The result is a seamless transition from idea to implementation, making the leap to production almost as effortless as prototyping. What sets RavenDB apart is its fully integrated, flexible approach: developers can use any LLM on their terms. The feature is optimized for cost and performance with smarter caching and fewer API calls, and includes enterprise-ready capabilities such as governance, monitoring, and built-in security, designed to meet the demands of modern, intelligent applications. By collapsing multiple infrastructure layers into a single intelligent operational database, RavenDB’s native GenAI capabilities significantly upgrade its data layer and remove complexity for engineering leaders, accelerating innovation. Whether classifying documents, summarizing customer interactions, or automating workflows, teams can build powerful features directly from the data they already manage, with no dedicated AI team required.
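Since the announcement doesn't document the API, the sketch below is purely conceptual: it captures the claim that the prompt, the model choice, and the result handling all live next to the data, with no middleware or orchestration layer. Every name in it is hypothetical.

```python
# A purely conceptual sketch of a database-native GenAI task. All names
# are hypothetical -- this is not RavenDB's actual API -- but it captures
# the shape of the claim: prompt, model, and output handling are defined
# alongside the data rather than in external services.
from dataclasses import dataclass

@dataclass
class GenAITask:
    name: str
    collection: str       # documents the task watches
    model: str            # any LLM, open-source or commercial
    prompt: str           # instruction applied to each document
    output_field: str     # where the result is written back

summarize_tickets = GenAITask(
    name="summarize-support-tickets",
    collection="SupportTickets",
    model="llama-3-8b",   # hypothetical: could be any provider
    prompt="Summarize this ticket in one sentence: {document}",
    output_field="summary",
)

# The engine would evaluate the task per document, cache repeated calls
# to cut API costs, and store the output on the document itself, so no
# external pipeline or orchestration layer is needed.
```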
Palantir to embed CData’s open, standardized SQL interface and metadata layer across its analytics and AI platforms to enable connections to data sources without the need for individual APIs or formats
CData Software announced an expanded partnership with Palantir Technologies that integrates CData’s connectivity technology deeply into Palantir’s analytics platforms. The deal lets Palantir customers connect to data sources ranging from traditional databases to enterprise applications and development platforms without needing to learn individual application programming interfaces (APIs) or formats. CData’s technology provides a standardized SQL interface and consistent metadata layer across all connections. Palantir is licensing the technology across its Foundry, Gotham, and Artificial Intelligence Platform (AIP) products. Foundry is a data integration and analytics platform for commercial and industrial use; Gotham is primarily used by government and defense agencies; AIP is used to build and manage AI applications. CData says its approach is based on two architectural pillars: open standards and uniform behavior. Each of its connectors operates like a virtual database, translating SQL into native API calls under the hood. This abstraction not only simplifies development but also improves reliability and performance across platforms, said CData CEO Amit Sharma. The partnership will also extend Palantir’s AI ambitions: using CData’s technology in AIP allows AI models to query structured and unstructured data sources in real time using SQL. “We’re powering the data layer of their agent infrastructure,” Sharma said. “AI needs access to trusted, secure data, and that’s what we provide.”
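The "virtual database" idea is that one standardized SQL dialect fronts very different backends, with each connector translating queries into the source's native API calls. The sketch below illustrates it with `pyodbc` and hypothetical DSN names; real connection strings come from the driver documentation.

```python
# A sketch of the "virtual database" idea: one SQL dialect over very
# different sources. DSN names below are hypothetical placeholders.
import pyodbc

# Each connector exposes a SaaS app or database as if it were tables.
SOURCES = {
    "crm": "DSN=CDataSalesforce",   # hypothetical DSN names
    "tickets": "DSN=CDataJira",
}

def fetch(source: str, sql: str):
    """Run the same standardized SQL regardless of the backing API."""
    with pyodbc.connect(SOURCES[source]) as conn:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()

# Identical SQL shape against two unrelated systems; the connector
# translates each query into that system's native API calls.
accounts = fetch("crm", "SELECT Name FROM Account LIMIT 10")
issues = fetch("tickets", "SELECT Summary FROM Issues LIMIT 10")
```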
Coralogix’s AI agent simplifies access to deep observability data by translating natural language queries into detailed, system-level answers via a conversational platform
Data analytics platform Coralogix nearly doubled its valuation to over $1 billion in its latest funding round, co-founder and CEO Ariel Assaraf said, as AI-driven enterprise offerings continue to pique investor interest. Coralogix raised $115 million in a round led by California-based venture growth firm NewView Capital. The raise comes three years after Coralogix’s previous external funding in 2022, when it raised $142 million. Valuations have faced downward pressure since then, as investors continue to sit on dry powder amid elevated interest rates and geopolitical tensions. Coralogix’s revenue has increased seven-fold since 2022, Assaraf said. Coralogix also unveiled its new AI agent “Olly,” aiming to simplify data monitoring via a conversational platform. “Olly makes deep observability data accessible to every team. Whether you ask, ‘What is wrong with the payment flow?’ or ‘Which service is frustrating our users the most?’ Olly translates those questions into detailed, system-level answers,” the company wrote on its blog.
