Oracle Corp unveiled MCP Server for Oracle Database, a new Model Context Protocol offering that brings AI-powered interaction directly into its core database platform, helping developers and analysts query and manage data using natural language. The MCP server lets LLMs connect securely to Oracle Database and interact with it contextually while respecting user permissions and roles. Users pose questions in natural language and the server translates them into SQL queries, so they can retrieve insights without writing complex code; tasks such as performance diagnostics, schema summarization and query generation become easier. The integration is designed to simplify working with SQL queries and navigating complex data schemas. With MCP Server for Oracle Database, AI agents can act as copilots for developers and analysts by generating code and analyzing performance. The protocol also supports read and write operations, allowing users to take action through the AI assistant, such as creating indexes, checking performance plans or optimizing workloads. The AI agent operates strictly within the access boundaries of the authenticated user, and a private, dedicated schema isolates the agent’s interactions from production data, allowing it to generate summaries or sample datasets for language models without exposing full records.
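As a rough illustration of the pattern, not Oracle’s actual implementation, the sketch below shows how an MCP-style tool could expose a read-only query capability that runs under the authenticated user’s own credentials, so the agent inherits that user’s permissions. The environment variables, tool name and row limit are invented for the example; it uses the open-source MCP Python SDK and the python-oracledb driver.

```python
# Illustrative sketch only: a minimal MCP-style tool that runs read-only SQL
# under the authenticated user's own credentials, so the agent inherits that
# user's roles and grants. Not Oracle's actual MCP Server implementation.
import os
import oracledb                          # python-oracledb driver
from mcp.server.fastmcp import FastMCP   # official MCP Python SDK

mcp = FastMCP("oracle-db-demo")

def _connect():
    # Connect as the end user, not a privileged service account, so every
    # query is bounded by that user's permissions.
    return oracledb.connect(
        user=os.environ["ORA_USER"],
        password=os.environ["ORA_PASSWORD"],
        dsn=os.environ["ORA_DSN"],
    )

@mcp.tool()
def run_readonly_query(sql: str, max_rows: int = 50) -> list:
    """Execute a SELECT statement and return at most max_rows rows."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed by this tool.")
    with _connect() as conn:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchmany(max_rows)   # sample rows, not full tables

if __name__ == "__main__":
    mcp.run()   # expose the tool to an MCP-compatible LLM client
```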
Ataccama brings AI to data lineage: business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT
Ataccama has released Ataccama ONE v16.2, the latest version of its unified data trust platform. This release makes it easier for business users to understand how data moves and changes across systems without writing a single line of SQL. With intuitive, compact lineage views and improved performance, teams can make better decisions with greater confidence and speed. Business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT. Ataccama shows how data flows through systems and provides plain-language descriptions of the steps behind every number. For example, in a financial services setting, a data steward can immediately see how a risk score was derived or how a flagged transaction passed through a series of enrichment and quality checks. That kind of visibility shortens reviews, streamlines audits, and gives business teams the confidence to act on the data in front of them. Key features include:
AI-powered data lineage: automatically generates readable descriptions of how data was transformed both upstream and downstream, clarifying filters, joins, and calculations, so business users can understand the logic behind each dataset without reading SQL.
Compact lineage diagrams: presents a simplified, high-level view of data flows with the option to drill into details on demand. This makes it easier to identify issues, answer audit questions, and align stakeholders on how data flows through the organization.
Edge processing for secure lineage: enables metadata extraction from on-prem or restricted environments without moving sensitive data to the cloud. Organizations can maintain compliance, minimize risk, and still get full visibility into their data pipelines, regardless of where the data lives.
Expanded pushdown support and performance enhancements: users can now execute profiling and data quality workloads in pushdown mode for BigQuery and Azure Synapse, minimizing data movement and improving performance for large-scale workloads. The release also includes volume support for Databricks Unity Catalog, further optimizing execution within modern cloud platforms.
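To make the lineage idea concrete, here is a toy sketch, not Ataccama’s engine, of how the SQL behind a dataset can be parsed to surface its sources, joins and filters, the raw material for a plain-language description. The query and column names are invented; the example uses the open-source sqlglot parser.

```python
# Toy illustration of the idea behind plain-language lineage descriptions:
# parse the SQL that produced a dataset and summarize its sources, joins,
# and filters. Not Ataccama's lineage engine.
import sqlglot
from sqlglot import exp

sql = """
SELECT c.customer_id, SUM(t.amount) AS total_flagged
FROM transactions t
JOIN customers c ON c.customer_id = t.customer_id
WHERE t.risk_flag = 'HIGH'
GROUP BY c.customer_id
"""

tree = sqlglot.parse_one(sql)
tables = sorted({t.name for t in tree.find_all(exp.Table)})
joins = [j.sql() for j in tree.find_all(exp.Join)]
filters = [w.this.sql() for w in tree.find_all(exp.Where)]

print(f"Sources: {', '.join(tables)}")   # Sources: customers, transactions
print(f"Joins:   {joins or 'none'}")
print(f"Filters: {filters or 'none'}")
# These fragments are the kind of structured facts a model could turn into
# a readable sentence such as "total_flagged sums high-risk transactions
# joined to customers."
```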
StarTree to employ Apache Iceberg as the analytic layer on top of its data lakehouse to enable querying Iceberg directly with subsecond latency, without the need for intermediate pipelines, duplicate storage and external databases for real-time applications
StarTree Inc., which sells a real-time analytics platform and cloud service based on the Apache Pinot open-source online analytical processing database, becomes the latest data analytics provider to announce full support for Apache Iceberg. The StarTree Cloud managed service will employ Iceberg as the analytic and serving layer on top of its data lakehouse. The move creates new use cases for Iceberg in real-time applications requiring high concurrency across thousands of simultaneous users. In particular, it enables Iceberg to be more easily applied to customer-facing scenarios where organizations want to expose data externally without relying on complex, multi-step pipelines. Iceberg is a management layer that sits atop data files in cloud storage to improve consistency, manageability and query performance. It has been rapidly gaining acceptance as a de facto table standard, replacing an assortment of proprietary alternatives. StarTree enables direct querying of Iceberg tables without the need to move or transform the underlying data. The integration supports open formats and leverages performance-enhancing features, including Pinot indexing and materialization, local caching and intelligent prefetching. StarTree enables various indexes and pre-aggregated materializations to be defined directly on Iceberg tables. Indexes for numerical data, text, JavaScript Object Notation, geospatial data and other types can be distributed locally on compute nodes or stored in object storage. Chief Marketing Officer Chad Meley said, “By querying Iceberg directly with subsecond latency, we’re eliminating the need for intermediate pipelines, duplicate storage and external databases.”
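For context, the sketch below shows what reading an Iceberg table in place looks like with the open-source pyiceberg client; the catalog and table names are invented, and StarTree’s engine layers Pinot indexing, local caching and prefetching on top of reads like this to reach subsecond latency.

```python
# Sketch: reading an Iceberg table in place, without copying it into another
# database. Catalog and table names are made up for illustration.
from pyiceberg.catalog import load_catalog

catalog = load_catalog("lakehouse")               # e.g. a REST or Glue catalog
table = catalog.load_table("analytics.page_views")

# Push a filter and column projection down to the Iceberg scan, then
# materialize the result as an Arrow table for downstream serving.
arrow_tbl = (
    table.scan(
        row_filter="event_date >= '2025-01-01'",
        selected_fields=("user_id", "event_date", "views"),
    )
    .to_arrow()
)
print(arrow_tbl.num_rows)
```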
Elastic’s vector search capability enables flexible filter definition at query time, even after documents have been ingested, delivering up to 5X speedups over traditional approaches that apply filters post-search or require pre-indexing
Elastic, the Search AI Company, announced new performance and cost-efficiency breakthroughs with two significant enhancements to its vector search. Users now benefit from ACORN, a smart filtering algorithm, in addition to Better Binary Quantization (BBQ) as the default for high-dimensional dense vectors. These capabilities improve both query performance and ranking quality, providing developers with new tools to build scalable, high-performance AI applications while lowering infrastructure costs. ACORN-1 is a new algorithm for filtered k-Nearest Neighbor (kNN) search in Elasticsearch. It tightly integrates filtering into the traversal of the HNSW graph, the core of Elasticsearch’s approximate nearest neighbor search engine. Unlike traditional approaches that apply filters post-search or require pre-indexing, ACORN enables flexible filter definition at query time, even after documents have been ingested. In real-world filtered vector search benchmarks, ACORN delivers up to 5X speedups, improving latency without compromising result accuracy.
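The example below, with an illustrative index and field names, shows the shape of a filtered kNN query in Elasticsearch where the filter is supplied at query time rather than baked in at indexing; per the announcement, ACORN is applied under the hood during graph traversal.

```python
# Example of a filtered kNN search where the filter is supplied at query time.
# Index, field names, and vector values are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",
    knn={
        "field": "embedding",                        # dense_vector field
        "query_vector": [0.12, -0.4, 0.88, 0.05],    # normally model output
        "k": 10,
        "num_candidates": 100,
        # The filter narrows which documents may become nearest-neighbor
        # candidates, without any re-indexing of the documents.
        "filter": {"term": {"category": "outdoor"}},
    },
    source=["name", "category"],
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["name"])
```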
ParadeDB’s open-source Postgres extension facilitates full-text search and analytics directly in Postgres without the need to transfer data to a separate source and can support heavy workloads that require frequent updating
ParadeDB is an open-source Postgres extension that facilitates full-text search and analytics directly in Postgres without users needing to transfer data to a separate source. The platform integrates with other data infrastructure tools, including Google Cloud SQL, Azure Postgres, and Amazon RDS, among others. “Postgres is becoming the default database of the world, and you still can’t do good search over that information, believe it or not,” said Philippe Noël, the co-founder and CEO of ParadeDB. ParadeDB isn’t the first company to try to solve Postgres search. Noël said that Elasticsearch works by moving data back and forth between itself and Postgres, which can work, but this system isn’t great for heavy workloads or processes that require frequent updating. “That breaks all the time,” Noël said. “The two databases are not meant to work together. There’s a lot of compatibility issues, there’s a lot of latency issues, higher costs, and all of that deteriorates the user experience.” ParadeDB claims to eliminate a lot of those challenges by being built as an extension directly on top of Postgres, with no data transfer required.
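As a sketch of what this looks like in practice, the example below issues ParadeDB-style full-text search from Python over a plain Postgres table; the BM25 index and @@@ operator follow ParadeDB’s documented pattern, but the exact syntax can vary by version, and the table and connection details are invented.

```python
# Sketch of ParadeDB-style full-text search issued from Python. The BM25 index
# and @@@ operator follow ParadeDB's documented pattern, but exact syntax can
# differ by version; table and connection details are illustrative.
import psycopg

with psycopg.connect("dbname=app user=app") as conn:
    with conn.cursor() as cur:
        # Build a BM25 search index directly on the Postgres table.
        cur.execute("""
            CREATE INDEX IF NOT EXISTS items_search_idx
            ON items USING bm25 (id, description)
            WITH (key_field = 'id');
        """)
        # Query it in place: no sync job to an external search engine.
        cur.execute(
            "SELECT id, description FROM items WHERE description @@@ %s LIMIT 5;",
            ("wireless keyboard",),
        )
        for row in cur.fetchall():
            print(row)
```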
MindBridge’s integration of its AI-powered financial decision intelligence with Snowflake enables finance teams to leverage secure data pipelines for automated analysis and real-time risk insights within their existing workflows
MindBridge, the leader in AI-powered financial decision intelligence, announced its integration with Snowflake, enabling finance teams to run continuous, AI-powered analysis of their financial data. With this new integration, organizations can securely connect MindBridge to their data without complicated processes or manual work. By leveraging secure data pipelines within the Snowflake AI Data Cloud, organizations can easily and securely use MindBridge for automated analysis, without adding complexity or risk. Risk scores and insights from MindBridge can also be leveraged in existing workflows, shortening the time to identify and act on findings and minimizing additional training or complex implementations. Every time data is updated, MindBridge automatically runs its analysis, so finance teams always have a consistent, up-to-date view of their financial risk. Key benefits of the integration include:
Simple, scalable integration: MindBridge connects directly to the Snowflake AI Data Cloud, leveraging secure data pipelines to automate analysis within existing governance frameworks.
Real-time financial risk insights.
Enterprise-grade security and control.
Frictionless insights delivery: with automated data delivery and analysis execution, business users can access the latest results in the MindBridge UI or within their existing workflow systems, providing more flexibility to surface insights where and when they’re needed most, without disrupting established processes.
Integrated risk intelligence: risk scores and analysis results are retrieved via API back into the Snowflake platform, enabling continuous risk monitoring, deeper investigations, and integrated reporting alongside other business KPIs.
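As a hypothetical sketch of the pattern described above, not the actual integration, the example below pulls risk scores from a stand-in analytics API and lands them in Snowflake for reporting; the endpoint, token and table names are assumptions.

```python
# Hypothetical sketch of the pattern above: fetch risk scores from an
# analytics API and write them into Snowflake for reporting. The endpoint,
# token, and table names are assumptions, not the real integration.
import os
import requests
import snowflake.connector

# 1. Retrieve analysis results from the (hypothetical) risk-scoring API.
resp = requests.get(
    "https://api.example-mindbridge.test/v1/analyses/latest/risk-scores",
    headers={"Authorization": f"Bearer {os.environ['MB_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
scores = resp.json()   # e.g. [{"entry_id": "JE-1001", "risk_score": 0.92}, ...]

# 2. Land the scores in Snowflake alongside other business data.
conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="FINANCE",
    schema="RISK",
)
cur = conn.cursor()
cur.executemany(
    "INSERT INTO journal_risk_scores (entry_id, risk_score) VALUES (%s, %s)",
    [(s["entry_id"], s["risk_score"]) for s in scores],
)
conn.commit()
conn.close()
```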
CapStorm’s AI solution enables business users to ask complex data questions in plain English and instantly receive real-time dashboards across Salesforce, ERPs, CRMs, and data warehouses, without writing a single line of code
Secure Salesforce data management solutions provider CapStorm has launched CapStorm:AI, an AI-powered solution that lets users “talk to their data” using plain English. Designed for organizations seeking secure, self-hosted insights across their Salesforce and SQL environments, CapStorm:AI enables business users to ask complex data questions and instantly receive real-time dashboards, without writing a single line of code. CapStorm:AI brings together CapStorm’s proven near real-time Salesforce data replication with a powerful AI engine that understands how a business’s data connects. It works with leading SQL databases like SQL Server and PostgreSQL, as well as cloud data warehouses like Snowflake and Amazon Redshift, giving users instant insights across systems, no technical expertise required. Best of all, it keeps everything inside the organization’s own environment, so data stays secure and fully under the user’s control. CapStorm:AI is designed with security and compliance in mind, making it ideal for regulated industries and enterprises that demand full control of their data. CapStorm:AI gives users a faster, easier way to get answers:
Natural Language Dashboards: ask a business question in plain English and receive a real-time dashboard, instantly.
Instant Access Across Systems: understand how data connects across Salesforce, ERPs, CRMs, and data warehouses, without needing custom joins or pipelines.
Near Real-Time Insights: built on CapStorm’s trusted replication technology, ensuring answers are always up to date.
Flexible Deployment Options: CapStorm:AI can be deployed using an organization’s own on-prem database for full control, or hosted in a secure AWS environment managed by CapStorm.
iMerit is building expert-led, high-quality data for fine-tuning generative AI models, using domain-specific experts to generate and evaluate problems for the model to solve alongside human-in-the-loop labeling
AI data platform iMerit believes the next step toward integrating AI tools at the enterprise level is not more data, but better data. The startup has quietly built itself into a trusted data annotation partner for companies working in computer vision, medical imaging, autonomous mobility, and other AI applications that require high-accuracy, human-in-the-loop labeling. Now, iMerit is bringing its Scholars program out of beta. The goal of the program is to build a growing workforce of experts to fine-tune generative AI models for enterprise applications and, increasingly, foundational models. iMerit doesn’t claim to replace Scale AI’s core offering of high-throughput, developer-focused “blitz data.” Instead, it’s betting that now is the right moment to double down on expert-led, high-quality data, the kind that requires deep human judgment and domain-specific oversight. iMerit’s experts are tasked with fine-tuning, or “tormenting,” enterprise and foundational AI models using the startup’s proprietary platform Ango Hub. Ango allows iMerit’s “Scholars” to interact with the customer’s model to generate and evaluate problems for the model to solve. For iMerit, attracting and retaining cognitive experts is key to success because the experts aren’t just doing a few tasks and disappearing; they’re working on projects for multiple years. The goal is to grow across other enterprise applications, including finance and medicine.
Penske Logistics taps Snowflake AI to develop AI-based program that flags drivers at risk of quitting based on work patterns, route history and behavioral signals
Penske Logistics has leveraged Snowflake Inc.’s evolving artificial intelligence capabilities through a strategic partnership that’s reshaping the supply chain landscape. “We have onboard telematics devices inside our fleet that are generating millions of data points, including things like hard braking, following too closely, fuel consumption and so on,” said Vishwa Ram, vice president of data science and analytics at Penske Logistics. “Getting all of that data in one place and adding it up with other sets of data that we have that are contextual is a huge challenge for us.” “We’re accustomed now to disruption being normal, and as a result, organizations see just how important it is to invest in that visibility element so they can see the disruption as it’s coming, or at least be able to react in real time when it does happen,” he said. For Penske, this means leveraging predictive analytics to foresee supplier delays and reroute resources before bottlenecks occur. Additionally, the company applies AI in workforce retention. With drivers making up over half of its workforce, driver satisfaction is key. Penske developed an AI-based program that flags drivers at risk of quitting based on work patterns, route history and behavioral signals. Armed with these insights, frontline managers proactively engage with drivers, often adjusting schedules or simply checking in, according to Ram.
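As a toy illustration of the kind of model described, not Penske’s system, the sketch below scores drivers’ attrition risk from a few work-pattern and behavioral features; the features, data and threshold are invented.

```python
# Toy illustration of the kind of model described above: score drivers'
# attrition risk from work-pattern and behavioral features. The features,
# data, and threshold are invented, not Penske's model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical data: one row per driver, label 1 = driver quit.
history = pd.DataFrame({
    "avg_weekly_hours":   [52, 38, 61, 45, 58, 40],
    "hard_brakes_per_1k": [4.1, 1.2, 6.3, 2.0, 5.5, 1.8],
    "route_changes_90d":  [9, 2, 14, 4, 11, 3],
    "quit":               [1, 0, 1, 0, 1, 0],
})

model = GradientBoostingClassifier().fit(
    history.drop(columns="quit"), history["quit"]
)

# Score the current workforce and flag high-risk drivers for a check-in.
current = pd.DataFrame({
    "avg_weekly_hours":   [57, 41],
    "hard_brakes_per_1k": [5.0, 1.5],
    "route_changes_90d":  [12, 3],
}, index=["driver_017", "driver_042"])

risk = model.predict_proba(current)[:, 1]
flagged = current.index[risk > 0.7]
print(list(flagged))   # drivers a frontline manager might proactively contact
```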
Arctera.io’s backup and storage solutions combine data management, cyber resiliency and data compliance, monitoring customers’ data environments for potential security breaches and keeping track of AI models that, unlike deterministic software, change over time
Arctera.io is going full steam ahead on data and artificial intelligence management. Arctera offers three backup and storage options that originated from Veritas Technologies LLC, covering data management, cyber resiliency and data compliance. All three areas are under a microscope in the era of AI adoption. Arctera focuses on data management that meets regulatory requirements and remains secure against ongoing threats from cyberattackers. AI presents a particular challenge to data resilience and recovery because it’s constantly changing, according to Matt Waxman, chief product officer at Arctera.io. “We have built IT around the notion that software is deterministic,” he explained. “The notion that [this] is static, that the software that you’re acquiring is going to be the same software at least for quite a long period of time. That’s not the case with AI. So what you bring in terms of a model is self-learning, and it’s going to adjust over time.” For this reason, it’s crucial to have multiple ways to back up your data and multiple ways to keep track of it. While Waxman advises against “AI whitewashing,” or applying AI to problems indiscriminately, Arctera has successfully implemented use cases for AI related to data compliance and monitoring. Arctera attempts to kill two birds with one stone by employing its data compliance software to monitor customers’ data environments for potential security breaches. The next step is governing the growing host of AI agents, according to Waxman.