Starburst announced at AI & Datanova a new set of capabilities designed to operationalize the Agentic Workforce, a paradigm in which humans and AI agents collaborate seamlessly across workflows to reason, decide, and act faster and with confidence. With built-in support for model-to-data architectures, multi-agent interoperability, and an open vector store on Iceberg, Starburst says it delivers the first lakehouse platform that equips AI agents with unified enterprise data, governed data products, and metadata, so humans and AI can reason, act, and decide faster while maintaining trust and control. To further strengthen enterprise confidence in AI, Starburst is introducing advanced observability and visualization features for its agent framework. Organizations can now monitor LLM usage, set guardrails with usage limits, and view activity through intuitive dashboards. In addition, Starburst’s agent can render responses as charts and graphs, giving teams not only accurate answers but also clear, actionable insights. These capabilities provide a new level of transparency, governance, and usability as enterprises scale AI adoption. Starburst’s new AI capabilities are built on the core principle of flexibility, giving organizations the freedom to choose between model-to-data and data-to-model architectures. This approach enables enterprises to scale AI securely while preserving sovereignty, reducing infrastructure costs, and ensuring compliance. The enhancements include:
- Multi-Agent Ready Infrastructure: A new MCP server and agent API let enterprises create, manage, and orchestrate multiple AI agents alongside the Starburst agent, so customers can develop multi-agent and AI application solutions geared to tasks of growing complexity.
- Open & Interoperable Vector Access: Starburst unifies access to vector stores, enabling retrieval-augmented generation (RAG) and search tasks across Iceberg, PostgreSQL + PGVector, Elasticsearch, and more. Enterprises gain the flexibility to choose the right vector solution for each workload without lock-in or fragmentation.
- Model Usage Monitoring & Control: Starburst offers enterprise-grade AI model monitoring and governance. Teams can track, audit, and control AI usage across agents and workloads with dashboards, preventing cost overruns and ensuring compliance for confident, scalable AI adoption.
- Deeper Insights & Visualization: An extension of Starburst’s conversational analytics agent lets users ask questions across different data product domains and get back a natural-language response, a visualization, or a combination of the two. The agent interprets the user’s intent, performs data discovery to find the right data, and then processes the query to answer the question.
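To make the vector-access idea concrete, here is a minimal sketch of what a RAG retrieval step might look like when expressed as federated SQL against a Starburst/Trino endpoint, using the open-source `trino` Python client. The host, catalog, table, and `cosine_similarity` usage are illustrative assumptions for this sketch, not Starburst’s documented API.

```python
# Hypothetical sketch: a vector-similarity lookup for a RAG step, issued
# through a Trino-compatible endpoint. The `trino` DB-API client is real;
# the catalog, table, and similarity function below are placeholders.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",  # placeholder coordinator host
    port=443,
    user="analyst",
    catalog="iceberg",             # assumed Iceberg catalog name
    schema="vectors",
)
cur = conn.cursor()

query_embedding = [0.12, -0.04, 0.33]  # toy embedding; real ones have hundreds of dims
cur.execute(
    """
    SELECT doc_id, chunk_text
    FROM document_embeddings           -- hypothetical table of chunk embeddings
    ORDER BY cosine_similarity(embedding, CAST(? AS array(double))) DESC
    LIMIT 5
    """,
    (query_embedding,),
)
for doc_id, chunk in cur.fetchall():
    print(doc_id, chunk[:80])
```

The retrieved chunks would then be passed to the model as context, which is the pattern the unified vector access is meant to support across Iceberg, PGVector, and Elasticsearch backends.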
Redis acquires Featureform, a framework for managing and orchestrating structured data signals to power real‑time data delivery into AI agents
Redis announced the acquisition of Featureform, a framework for managing, defining, and orchestrating structured data signals. The acquisition helps Redis address one of the most critical challenges developers face with production AI: getting structured data into models quickly, reliably, and with full observability. Featureform will become part of Redis’ feature store solution, complementing what Redis describes as the fastest benchmarked vector database, powered by Redis Query Engine, and its semantic caching service, Redis LangCache. Featureform will allow developers to:
- Define features as reusable, versioned pipelines
- Unify training and inference workflows across batch and streaming
- Maintain point-in-time correctness for offline model training
- Serve low-latency features using Redis in production
- Detect data drift and monitor changes to feature distributions
“By integrating Featureform’s powerful framework into our platform, we’re better enabling developers to deliver context to agents at exactly the right moment, so they reason, act, and interact accurately and intuitively,” said Rowan Trollope, CEO of Redis.
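As a rough illustration of the serving pattern involved, here is a conceptual sketch using the real `redis-py` client. This is not Featureform’s SDK; it only shows the idea the acquisition targets: features precomputed by a pipeline, keyed by entity, and read back at inference time with low latency.

```python
# Conceptual sketch of low-latency feature serving with redis-py. This is
# NOT Featureform's API; it illustrates the pattern of precomputed,
# versioned features keyed by entity and read at inference time.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# An offline/streaming pipeline writes versioned features for user 42.
r.hset("features:v3:user:42", mapping={
    "txn_count_7d": 18,
    "avg_basket_usd": 52.40,
    "days_since_login": 1,
})

# The online inference path reads the same features back in sub-millisecond time.
feature_vector = r.hgetall("features:v3:user:42")
print(feature_vector)  # {'txn_count_7d': '18', 'avg_basket_usd': '52.4', ...}
```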
Oracle launches MCP Server for Oracle Database, allowing users to securely interact with its core database platform and navigate complex data schemas using natural language, with the server translating questions into SQL queries
Oracle Corp unveiled MCP Server for Oracle Database, a new Model Context Protocol offering that brings AI-powered interaction directly into its core database platform, helping developers and analysts query and manage data using natural language. The new MCP server enables LLMs to securely connect to Oracle Database and interact with it contextually while respecting user permissions and roles. Users pose questions in natural language and the server translates them into SQL queries, letting them retrieve insights from data without writing complex code and making tasks such as performance diagnostics, schema summarization, and query generation easier. The integration is designed to simplify working with SQL queries and navigating complex data schemas. With MCP Server for Oracle Database, AI agents can act as copilots for developers and analysts by generating code and analyzing performance. The protocol also supports read and write operations, allowing users to take action through the AI assistant, such as creating indexes, checking performance plans, or optimizing workloads. The AI agent operates strictly within the access boundaries of the authenticated user, using a private, dedicated schema to isolate the agent’s interactions from production data so that it can generate summaries or sample datasets for language models without exposing full records.
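For orientation, here is a hedged sketch of how any MCP client converses with an MCP server over stdio, using the open-source `mcp` Python SDK (whose ClientSession and stdio_client APIs are real). The launch command and the `run-sql` tool name below are assumptions for illustration; Oracle’s server is started via SQLcl and its actual tool names may differ.

```python
# Hedged sketch: discovering and invoking tools on an MCP server over stdio.
# The ClientSession/stdio_client API is the real `mcp` SDK; the command and
# tool name are placeholders standing in for Oracle's SQLcl-hosted server.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(
        command="sql",        # assumed: SQLcl launching its MCP server
        args=["-mcp"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()   # discover what the server exposes
            print([t.name for t in tools.tools])
            # Hypothetical tool name; in practice the LLM chooses this call.
            result = await session.call_tool(
                "run-sql", {"sql": "SELECT table_name FROM user_tables"}
            )
            print(result.content)

asyncio.run(main())
```

The permission model described above means every tool call executes with the authenticated user’s privileges, so the LLM never sees data the user could not query directly.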
Ataccama brings AI to data lineage: business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT
Ataccama has released Ataccama ONE v16.2, the latest version of its unified data trust platform. This release makes it easier for business users to understand how data moves and changes across systems without writing a single line of SQL. With intuitive, compact lineage views and improved performance, teams can make better decisions with greater confidence and speed. Business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT. Ataccama shows how data flows through systems and provides plain-language descriptions of the steps behind every number. For example, in a financial services setting, a data steward can immediately see how a risk score was derived or how a flagged transaction passed through a series of enrichment and quality checks. That kind of visibility shortens reviews, streamlines audits, and gives business teams the confidence to act on the data in front of them. Key features include:
- AI-powered data lineage: Automatically generates readable descriptions of how data was transformed both upstream and downstream, clarifying filters, joins, and calculations, so business users can understand the logic behind each dataset without reading SQL.
- Compact lineage diagrams: Presents a simplified, high-level view of data flows with the option to drill into details on demand. This makes it easier to identify issues, answer audit questions, and align stakeholders on how data flows through the organization.
- Edge processing for secure lineage: Enables metadata extraction from on-prem or restricted environments without moving sensitive data to the cloud. Organizations can maintain compliance, minimize risk, and still get full visibility into their data pipelines, regardless of where the data lives.
- Expanded pushdown support and performance enhancements: Users can now execute profiling and data quality workloads in pushdown mode for BigQuery and Azure Synapse, minimizing data movement and improving performance for large-scale workloads. The release also includes volume support for Databricks Unity Catalog, further optimizing execution within modern cloud platforms.
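To show the underlying idea, here is a conceptual sketch (not Ataccama’s API) of lineage modeled as a directed graph whose edges carry plain-language descriptions of each transformation, so a trace can be rendered without exposing the underlying SQL.

```python
# Conceptual sketch: lineage as a directed graph with human-readable
# transformation descriptions on each edge. Table names are illustrative.
from collections import defaultdict

edges = defaultdict(list)  # upstream -> [(downstream, description)]

def add_step(src, dst, description):
    edges[src].append((dst, description))

add_step("raw.transactions", "staging.transactions",
         "Filtered to settled transactions in the last 90 days")
add_step("staging.transactions", "marts.risk_scores",
         "Joined with customer profiles; risk score computed per account")

def trace(node, depth=0):
    """Print every downstream path from `node` with its description."""
    for dst, desc in edges[node]:
        print("  " * depth + f"{node} -> {dst}: {desc}")
        trace(dst, depth + 1)

trace("raw.transactions")
```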
StarTree to employ Apache Iceberg as the analytic layer on top of its data lakehouse to enable querying Iceberg directly with subsecond latency, without the need for intermediate pipelines, duplicate storage and external databases for real-time applications
StarTree Inc., which sells a real-time analytics platform and cloud service based on the Apache Pinot open-source online analytical processing database, has become the latest data analytics provider to announce full support for Apache Iceberg. The StarTree Cloud managed service will employ Iceberg as the analytic and serving layer on top of its data lakehouse. The move creates new use cases for Iceberg in real-time applications requiring high concurrency across thousands of simultaneous users. In particular, it enables Iceberg to be applied more easily to customer-facing scenarios where organizations want to expose data externally without relying on complex, multi-step pipelines. Iceberg is an open table format that sits atop data files in cloud storage to improve consistency, manageability, and query performance. It has been rapidly gaining acceptance as a de facto table standard, replacing an assortment of proprietary alternatives. StarTree enables direct querying of Iceberg tables without the need to move or transform the underlying data. The integration supports open formats and leverages performance-enhancing features, including Pinot indexing and materialization, local caching, and intelligent prefetching. StarTree lets various indexes and pre-aggregated materializations be defined directly on Iceberg tables; indexes for numerical data, text, JavaScript Object Notation, geospatial data, and other types can be distributed locally on compute nodes or stored in object storage. Chief Marketing Officer Chad Meley said, “By querying Iceberg directly with subsecond latency, we’re eliminating the need for intermediate pipelines, duplicate storage and external databases.”
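For a sense of the developer experience, here is a minimal sketch of querying a Pinot-served table through the real `pinotdb` DB-API client, assuming it is backed by Iceberg data as described above. The broker host and table name are placeholders.

```python
# Minimal sketch: a low-latency aggregation against a Pinot SQL endpoint
# using the real `pinotdb` client. Host and table names are placeholders;
# `ago()` is a real Pinot time function.
from pinotdb import connect

conn = connect(host="broker.startree.example.com", port=8099,
               path="/query/sql", scheme="https")
cur = conn.cursor()
# High-concurrency, subsecond aggregations are where Pinot indexes and
# materializations defined on Iceberg tables are meant to pay off.
cur.execute("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders_iceberg
    WHERE order_ts > ago('PT1H')
    GROUP BY region
    ORDER BY revenue DESC
    LIMIT 10
""")
for row in cur:
    print(row)
```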
Elastic’s vector search enables flexible filter definition at query time, even after documents have been ingested, delivering up to 5X speedups over traditional approaches that apply filters post-search or require pre-indexing
Elastic, the Search AI Company, announced new performance and cost-efficiency breakthroughs with two significant enhancements to its vector search. Users now benefit from ACORN, a smart filtering algorithm, in addition to Better Binary Quantization (BBQ) as the default for high-dimensional dense vectors. These capabilities improve both query performance and ranking quality, providing developers with new tools to build scalable, high-performance AI applications while lowering infrastructure costs. ACORN-1 is a new algorithm for filtered k-Nearest Neighbor (kNN) search in Elasticsearch. It tightly integrates filtering into the traversal of the HNSW graph, the core of Elasticsearch’s approximate nearest neighbor search engine. Unlike traditional approaches that apply filters post-search or require pre-indexing, ACORN enables flexible filter definition at query time, even after documents have been ingested. In real-world filtered vector search benchmarks, ACORN delivers up to 5X speedups, improving latency without compromising result accuracy.
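Here is a short sketch of what query-time filtered kNN looks like with the official Elasticsearch Python client. The knn search API and its `filter` clause are real Elasticsearch features; the index name, field names, and vector values are illustrative.

```python
# Sketch: filtered kNN search with the official elasticsearch-py client.
# The filter is evaluated during graph traversal rather than after the
# search, which is the behavior ACORN-style filtering accelerates.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="...")

resp = es.search(
    index="products",                        # illustrative index
    knn={
        "field": "embedding",
        "query_vector": [0.2, -0.1, 0.7],    # toy vector; real dims are larger
        "k": 10,
        "num_candidates": 100,
        # Defined at query time, even for documents ingested long ago.
        "filter": {"term": {"category": "outdoor"}},
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```

Because the filter is supplied per query rather than baked into the index, the same ingested corpus can serve many differently scoped searches without reindexing.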
Oracle’s distributed database to offer a cloud-native, serverless experience with full support for SQL syntax and data types, embedded AI capabilities, multi-region availability, real-time inference and RAG workflows directly within the data layer
With the launch of its globally distributed Exadata Database on Exascale infrastructure, Oracle is not simply extending its legacy capabilities into new markets; it is staking a claim to leadership in distributed data management for AI-native workloads. Oracle is leaning into its DNA, leveraging deep enterprise roots in full-featured SQL support and engineered systems to assert a differentiated position. The company says its latest move represents a convergence of infrastructure, database technology, and AI readiness that few, if any, other vendors can match.

The underlying thesis is that as AI systems become embedded in mission-critical workflows, customers will need more than speed and scale; they will demand automation, consistency, high availability, and compliance with data sovereignty laws. Oracle believes it can deliver all of the above in a package that promises a cloud-native, serverless experience running across geographies, clouds, and business functions. What is new in this announcement is Oracle’s decision to make these capabilities more accessible and cost-effective through Exascale, a serverless version of its engineered Exadata infrastructure.

Oracle claims its distributed database was designed from the ground up to support full SQL syntax and data types out of the box, making it easier for organizations to lift and shift their applications into a distributed context without rewriting code. This becomes critical in the AI era. One of the most notable aspects of the announcement is Oracle’s direct linkage between distributed databases and the emerging world of agentic AI: unlike traditional software, agentic systems generate large, bursty, machine-driven traffic patterns and require immediate access to accurate, sovereign-compliant data.

Perhaps the most strategically important aspect of Oracle’s offering is its emphasis on co-locating AI with business data. In contrast to many AI architectures that lift data into external stores for vector search and model training, Oracle is bringing AI to the data. By integrating vector search directly into the database engine and accelerating those searches with hardware optimizations via Exadata, Oracle enables real-time inference and retrieval-augmented generation (RAG) workflows directly within the data layer. This convergence simplifies architecture, reduces ETL overhead, and ensures data security and compliance. It also means AI workloads benefit from the same enterprise-grade replication, availability, and observability as transactional applications. By combining full SQL support, data sovereignty compliance, active-active replication, and embedded AI capabilities in a serverless, elastic form factor, Oracle is presenting a compelling vision of what distributed data infrastructure can and should be in the AI-native enterprise.
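To ground the “AI in the data layer” claim, here is a hedged sketch of an in-database similarity search with the real `python-oracledb` driver against an Oracle Database 23ai VECTOR column. `VECTOR_DISTANCE` is real 23ai SQL; the connection details, table, and column names are placeholders.

```python
# Hedged sketch: retrieval for a RAG step executed inside the database,
# using python-oracledb. VECTOR_DISTANCE is real Oracle 23ai SQL; names
# and connection details below are illustrative.
import array
import oracledb

conn = oracledb.connect(user="app", password="...",
                        dsn="db.example.com/freepdb1")
cur = conn.cursor()

query_vec = array.array("f", [0.11, 0.93, -0.25])  # toy embedding
cur.execute(
    """
    SELECT doc_id, title
    FROM documents
    ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
    FETCH FIRST 5 ROWS ONLY
    """,
    qv=query_vec,
)
for doc_id, title in cur:
    print(doc_id, title)
```

Because the search runs where the data lives, it inherits the database’s replication, access control, and observability rather than requiring a separate vector store.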
Beeks Financial Cloud uses edge-based AI and ML to analyze multi-source market/infrastructure data in real time, identifying unseen risks, latency, and arbitrage opportunities instantly
Beeks Financial Cloud has launched Beeks Market Edge Intelligence, an AI and machine learning platform designed to monitor market and infrastructure data in real time within colocation facilities and trading environments. It transforms raw data into instant actionable insights, detecting hidden anomalies, predicting potential disruptions, and identifying trading opportunities that traditional tools may miss. The platform processes live order and infrastructure data directly at the network edge, eliminating delays from conventional systems. It alerts teams to issues like latency spikes, packet loss, and feed quality problems before they affect trading. Using context-aware pattern analysis, it forecasts problems by factoring in trading calendars, market events, and historical infrastructure baselines, enabling predictive alerts. This helps firms anticipate bottlenecks, capacity constraints, and risk scenarios, reducing operational risk while maintaining execution quality. Beyond monitoring, the platform identifies trading signals invisible to conventional feeds, detecting arbitrage opportunities and order flow irregularities directly from network and market data. It integrates live and historical data with market events, trading calendars, and even weather conditions to ensure accurate, timely predictions—all while keeping data on-premises. By detecting infrastructure issues early and extracting hidden trading signals, Beeks’ platform enables firms to respond faster and optimize operations.
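As a conceptual illustration of the edge-side checks described above (not Beeks’ implementation), here is a small sketch that flags latency anomalies against a rolling baseline, the simplest form of baseline-aware alerting.

```python
# Conceptual sketch: flag latency samples that deviate sharply from a
# rolling baseline. Thresholds and window sizes are illustrative.
from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD_SIGMA = 300, 4.0
baseline = deque(maxlen=WINDOW)  # recent latency samples in microseconds

def observe(latency_us: float) -> bool:
    """Return True if this sample is anomalous versus the rolling baseline."""
    anomalous = False
    if len(baseline) >= 5:  # tiny warm-up here for demo purposes
        mu, sigma = mean(baseline), stdev(baseline)
        anomalous = sigma > 0 and (latency_us - mu) / sigma > THRESHOLD_SIGMA
    baseline.append(latency_us)
    return anomalous

for sample in [105, 98, 110, 102, 99, 2400]:  # last sample is a latency spike
    if observe(sample):
        print(f"ALERT: latency spike {sample}us")
```

A production system would, as the announcement notes, condition the baseline on trading calendars and market events rather than treating all periods alike.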
New Striim 5.2 release introduces native real-time AI agents for predictive analytics, data governance, and vector embeddings—modernizing enterprise pipelines across multi-cloud and legacy sources
With an ever-expanding multi-cloud data estate, enterprises are grappling with brittle data pipelines, ETL-based batch lag, a lack of automated agents, and siloed data architectures that are complex to integrate. Striim’s latest product release, Striim 5.2, empowers enterprises to close this gap by adding new endpoint connectors such as Neon serverless Postgres, IBM DB2 z/OS, Microsoft Dynamics, and others. It delivers native, real-time, automated AI agents that augment data pipelines without adding operational complexity. The release also adds real-time support for legacy integration from mainframe sources and data delivery into serverless PostgreSQL and open lakehouse destinations. Striim 5.2 introduces new capabilities across three strategic pillars (Enterprise Modernization and Digital Transformation, Data Interoperability, and Real-Time AI), enabling data and analytics/AI teams to accelerate their next-generation application roadmaps without rewriting applications from scratch. Key highlights include:
- Accelerating Real-Time AI: Striim is taking major strides to bring AI directly into real-time data pipelines and applications. Striim recently released the Sherlock and Sentinel AI agents to enable in-flight sensitive data governance. With 5.2, Striim is introducing two new AI agents, Foreseer for anomaly detection and forecasting and Euclid for real-time vector embedding generation, enabling teams to embed intelligence directly into data streams (a conceptual sketch follows this list). Striim is also expanding support to AI-ready databases such as Crunchy Data and Neon, built to handle AI agent workloads and in-database AI applications.
- Driving Enterprise Modernization: Striim now supports reading data in real time from IBM DB2 on z/OS, making it easier for organizations to modernize their legacy systems. Enterprises can integrate their mainframe data with the cloud and build high-throughput data pipelines that read data in real time from a wide array of enterprise-grade systems, such as IBM DB2, Oracle, Snowflake, SQL Server, and others, powering analytics, applications, and insights across the business.
- Powering Digital Transformation: Enterprises are increasingly using Apache Iceberg for data interoperability to break data silos, build broad ecosystem adoption, and future-proof their data architectures. In addition to Delta, Striim now supports writing data in the Iceberg format to cloud data lakes and to cloud data warehouses such as Snowflake and Google BigQuery. Customers can easily extend their existing data pipelines to take advantage of Iceberg tables without rearchitecting their applications.
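The following is a conceptual sketch of in-flight embedding enrichment, the pattern the Euclid agent is described as automating. It is not Striim’s API; the `embed` function stands in for whatever model endpoint a real pipeline would call.

```python
# Conceptual sketch (not Striim's Euclid agent): enriching a change-data
# stream with vector embeddings in flight, before delivery to a sink such
# as an Iceberg table. `embed` is a placeholder for a real model call.
from typing import Iterator

def embed(text: str) -> list[float]:
    # Placeholder embedding; a real pipeline would call a model endpoint.
    return [float(ord(c) % 7) for c in text[:3]]

def enrich(events: Iterator[dict]) -> Iterator[dict]:
    """Attach an embedding to each CDC event as it streams through."""
    for event in events:
        event["embedding"] = embed(event["description"])
        yield event  # a downstream writer appends to the lakehouse sink

cdc_stream = iter([
    {"op": "INSERT", "id": 1, "description": "wire transfer flagged"},
    {"op": "UPDATE", "id": 2, "description": "address changed"},
])
for enriched in enrich(cdc_stream):
    print(enriched)
```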
Oracle supercharges databases and cloud apps by integrating OpenAI GPT-5’s advanced reasoning, code generation, and agentic AI directly into business-critical workflows
Oracle has deployed OpenAI GPT-5 across its database portfolio and suite of SaaS applications, including Oracle Fusion Cloud Applications, Oracle NetSuite, and Oracle Industry Applications such as Oracle Health. By uniting trusted business data with frontier AI, Oracle is enabling customers to natively leverage sophisticated coding and reasoning capabilities in their business-critical workflows. With GPT-5, Oracle will help customers:
- Enhance multi-step reasoning and orchestration across business processes
- Accelerate code generation, bug resolution, and documentation
- Increase accuracy and depth in business insights and recommendations
“The combination of industry-leading AI for data capabilities of Oracle Database 23ai and GPT-5 will help enterprises achieve breakthrough insights, innovations, and productivity,” said Kris Rice, senior vice president, Database Software Development, Oracle. “Oracle AI Vector and Select AI together with GPT-5 enable easier and more effective data search and analysis. Oracle’s SQLcl MCP Server enables GPT-5 to easily access data in Oracle Database. These capabilities enable users to search across all their data, run secure AI-powered operations, and use generative AI directly from SQL—helping to unlock the full potential of AI on enterprise data.” “GPT-5 will bring our Fusion Applications customers OpenAI’s sophisticated reasoning and deep-thinking capabilities,” said Meeten Bhavsar, senior vice president, Applications Development, Oracle. “The newest model from OpenAI will be able to power more complex AI agent-driven processes with capabilities that enable advanced automation, higher productivity, and faster decision making.”
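To illustrate the “generative AI directly from SQL” point, here is a hedged sketch that calls Select AI’s DBMS_CLOUD_AI.GENERATE (a real Autonomous Database package) through `python-oracledb`. It assumes an AI profile pointing at the chosen model has already been configured; the profile name, prompt, and connection details are placeholders.

```python
# Hedged sketch: asking Select AI to translate natural language into SQL
# via DBMS_CLOUD_AI.GENERATE. Requires a preconfigured AI profile; the
# profile name and connection details below are illustrative.
import oracledb

conn = oracledb.connect(user="app", password="...",
                        dsn="adb.example.com/mydb_high")
cur = conn.cursor()
cur.execute(
    """
    SELECT DBMS_CLOUD_AI.GENERATE(
             prompt       => 'Show total sales by region for last quarter',
             profile_name => 'GPT5_PROFILE',   -- hypothetical profile
             action       => 'showsql')        -- return generated SQL, not results
    FROM dual
    """
)
generated_sql = cur.fetchone()[0].read()  # GENERATE returns a CLOB
print(generated_sql)
```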
