CRM and marketing automation company Klaviyo Inc. announced the general availability of its enhanced Model Context Protocol server, which lets marketers connect AI tools such as Claude Desktop, Cursor, VS Code, Windsurf and other local or web-based tools directly to Klaviyo. The enhanced MCP server includes improved reporting context and a new remote server for broader accessibility, making it easier for marketers to bring AI into their workflows with greater speed and automation. The solution assists marketers who want to scale performance by training AI platforms to deliver better insights, recommendations and content. The remote MCP server offers secure online setup and real-time access to create, read and update data through Klaviyo’s API without adding complexity to the marketing technology stack. The MCP server makes it easy for marketers to accelerate their work in Klaviyo with AI tools, including a conversational chat interface that lets customers interact with Klaviyo using natural-language prompts. Marketers can quickly ask which campaign is driving the most revenue, how clickthrough rates have changed over time, or how performance compares across accounts. Using the platform, marketers can request AI-generated suggestions for new audience segments, subject lines modeled on top performers, or strategies to improve open rates in key flows. The MCP server also supports AI-driven execution, letting marketers move from idea to action: with simple prompts, users can upload event profiles, draft promotional emails, or add images directly into Klaviyo.
Oracle launches MCP Server for Oracle Database to allow users to securely interact with its core database platform and navigate complex data schemas using natural language, with the server translating questions into SQL queries
Oracle Corp. unveiled MCP Server for Oracle Database, a new Model Context Protocol offering that brings AI-powered interaction directly into its core database platform, helping developers and analysts query and manage data using natural language. The new MCP server enables LLMs to securely connect to Oracle Database and interact with it contextually while respecting user permissions and roles. The server translates natural-language questions into SQL queries, letting users retrieve insights from data without writing complex code and making tasks such as performance diagnostics, schema summarization and query generation easier. The integration is designed to simplify working with SQL queries and navigating complex data schemas. With MCP Server for Oracle Database, AI agents can act as copilots for developers and analysts by generating code and analyzing performance. The protocol also supports read and write operations, allowing users to take action through the AI assistant, such as creating indexes, checking performance plans, or optimizing workloads. The AI agent operates strictly within the access boundaries of the authenticated user, using a private, dedicated schema to isolate the agent’s interactions from production data so it can generate summaries or sample datasets for language models without exposing full records.
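The natural-language-to-SQL step described above can be sketched as a toy rule-based mapper. This is only an illustration of the translation idea; the question patterns, schema and function names here are hypothetical, not Oracle's actual implementation:

```python
# Toy sketch of the natural-language-to-SQL step an MCP server performs.
# The patterns and schema below are hypothetical, not Oracle's.
import re

SCHEMA = {"orders": ["id", "customer_id", "total", "created_at"]}

def translate(question: str) -> str:
    """Map a narrow set of question shapes onto SQL templates."""
    q = question.lower()
    m = re.search(r"how many rows in (\w+)", q)
    if m and m.group(1) in SCHEMA:
        return f"SELECT COUNT(*) FROM {m.group(1)}"
    m = re.search(r"total (\w+) in (\w+)", q)
    if m and m.group(2) in SCHEMA and m.group(1) in SCHEMA[m.group(2)]:
        return f"SELECT SUM({m.group(1)}) FROM {m.group(2)}"
    raise ValueError("question not understood")

print(translate("How many rows in orders?"))  # SELECT COUNT(*) FROM orders
```

A production server would of course rely on an LLM plus schema context rather than regexes, and would enforce the caller's database roles before executing anything.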
Ataccama brings AI to data lineage: business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT
Ataccama has released Ataccama ONE v16.2, the latest version of its unified data trust platform. This release makes it easier for business users to understand how data moves and changes across systems without writing a single line of SQL. With intuitive, compact lineage views and improved performance, teams can make better decisions with greater confidence and speed. Business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT. Ataccama shows how data flows through systems and provides plain-language descriptions of the steps behind every number. For example, in a financial services setting, a data steward can immediately see how a risk score was derived or how a flagged transaction passed through a series of enrichment and quality checks. That kind of visibility shortens reviews, streamlines audits, and gives business teams the confidence to act on the data in front of them. Key features include:
- AI-powered data lineage: automatically generates readable descriptions of how data was transformed both upstream and downstream, clarifying filters, joins, and calculations, so business users can understand the logic behind each dataset without reading SQL.
- Compact lineage diagrams: presents a simplified, high-level view of data flows with the option to drill into details on demand. This makes it easier to identify issues, answer audit questions, and align stakeholders on how data flows through the organization.
- Edge processing for secure lineage: enables metadata extraction from on-prem or restricted environments without moving sensitive data to the cloud. Organizations can maintain compliance, minimize risk, and still get full visibility into their data pipelines, regardless of where the data lives.
- Expanded pushdown support and performance enhancements: users can now execute profiling and data quality workloads in pushdown mode for BigQuery and Azure Synapse, minimizing data movement and improving performance for large-scale workloads. The release also includes volume support for Databricks Unity Catalog, further optimizing execution within modern cloud platforms.
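The lineage-tracing idea, walking upstream dependencies and emitting one plain-language line per transformation, can be sketched with a toy graph. The graph, field names and wording below are illustrative, not Ataccama's data model:

```python
# Minimal sketch of upstream lineage tracing with generated descriptions.
# The graph, field names and wording are illustrative, not Ataccama's.
LINEAGE = {
    "risk_score": ("derived", ["txn_amount", "country_risk"]),
    "txn_amount": ("filtered", ["raw_transactions"]),
    "country_risk": ("joined", ["raw_transactions", "country_ref"]),
}

def trace(field, depth=0, out=None):
    """Walk upstream dependencies, emitting one plain-language line per step."""
    out = [] if out is None else out
    if field in LINEAGE:
        op, sources = LINEAGE[field]
        out.append("  " * depth + f"{field}: {op} from {', '.join(sources)}")
        for src in sources:
            trace(src, depth + 1, out)
    else:
        out.append("  " * depth + f"{field}: source system")
    return out

print("\n".join(trace("risk_score")))
```

This is the kind of answer a data steward needs when asked how a risk score was derived: each step named in plain language, down to the source systems, without reading any SQL.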
StarTree to employ Apache Iceberg as the analytic layer on top of its data lakehouse to enable querying Iceberg directly with subsecond latency, without the need for intermediate pipelines, duplicate storage and external databases for real-time applications
StarTree Inc., which sells a real-time analytics platform and cloud service based on the Apache Pinot open-source online analytical processing database, becomes the latest data analytics provider to announce full support for Apache Iceberg. The StarTree Cloud managed service will employ Iceberg as the analytic and serving layer on top of its data lakehouse. The move creates new use cases for Iceberg in real-time applications requiring high concurrency across thousands of simultaneous users. In particular, it enables Iceberg to be more easily applied to customer-facing scenarios where organizations want to expose data externally without relying on complex, multi-step pipelines. Iceberg is a management layer that sits atop data files in cloud storage to improve consistency, manageability and query performance. It has been rapidly gaining acceptance as a de facto table standard, replacing an assortment of proprietary alternatives. StarTree enables direct querying of Iceberg tables without the need to move or transform the underlying data. The integration supports open formats and leverages performance-enhancing features, including Pinot indexing and materialization, local caching and intelligent prefetching. StarTree enables various indexes and pre-aggregated materializations to be defined directly on Iceberg tables. Indexes for numerical data, text, JavaScript Object Notation, geospatial data and other types can be distributed locally on compute nodes or stored in object storage. Chief Marketing Officer Chad Meley said, “By querying Iceberg directly with subsecond latency, we’re eliminating the need for intermediate pipelines, duplicate storage and external databases.”
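The pre-aggregated-materialization pattern mentioned above can be sketched in miniature: compute a group-by once at ingestion or refresh time, then answer queries from the precomputed result instead of scanning the underlying files. The rows, names and refresh model here are illustrative, not StarTree's or Iceberg's APIs:

```python
# Sketch of a pre-aggregated materialization over immutable table rows,
# the serving pattern described for Iceberg tables. Names are illustrative.
from collections import defaultdict

rows = [
    {"region": "EU", "revenue": 120.0},
    {"region": "US", "revenue": 200.0},
    {"region": "EU", "revenue": 80.0},
]

def materialize_sum(rows, key, measure):
    """Precompute a group-by sum once, at ingestion/refresh time."""
    agg = defaultdict(float)
    for r in rows:
        agg[r[key]] += r[measure]
    return dict(agg)

REVENUE_BY_REGION = materialize_sum(rows, "region", "revenue")

# At query time the serving layer answers from the materialization,
# avoiding a full scan of the underlying data files.
print(REVENUE_BY_REGION["EU"])  # 200.0
```

Because Iceberg tables change only through discrete snapshot commits, a serving layer can refresh such materializations incrementally per snapshot, which is what makes subsecond, high-concurrency serving on top of lakehouse storage plausible.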
Elastic’s vector search capability enables flexible filter definition at query time, even after documents have been ingested, delivering up to 5X speedups, unlike traditional approaches that apply filters post-search or require pre-indexing
Elastic, the Search AI Company, announced new performance and cost-efficiency breakthroughs with two significant enhancements to its vector search. Users now benefit from ACORN, a smart filtering algorithm, in addition to Better Binary Quantization (BBQ) as the default for high-dimensional dense vectors. These capabilities improve both query performance and ranking quality, providing developers with new tools to build scalable, high-performance AI applications while lowering infrastructure costs. ACORN-1 is a new algorithm for filtered k-Nearest Neighbor (kNN) search in Elasticsearch. It tightly integrates filtering into the traversal of the HNSW graph, the core of Elasticsearch’s approximate nearest neighbor search engine. Unlike traditional approaches that apply filters post-search or require pre-indexing, ACORN enables flexible filter definition at query time, even after documents have been ingested. In real-world filtered vector search benchmarks, ACORN delivers up to 5X speedups, improving latency without compromising result accuracy.
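The difference between post-filtering and filter-aware search can be shown with a brute-force toy example. This is only an illustration of the idea behind ACORN, not the HNSW graph traversal Elasticsearch actually uses, and the documents and filter are made up:

```python
# Contrast post-filtering with filter-aware kNN on a toy dataset.
# Brute-force illustration of the idea behind ACORN, not the real
# HNSW traversal used by Elasticsearch.
import math

docs = [
    {"id": i, "vec": (float(i), 0.0), "color": "red" if i % 2 else "blue"}
    for i in range(10)
]
query = (0.0, 0.0)

def post_filter_knn(k, pred):
    """Search first, filter after: may return fewer than k hits."""
    top = sorted(docs, key=lambda d: math.dist(d["vec"], query))[:k]
    return [d for d in top if pred(d)]

def filter_aware_knn(k, pred):
    """Apply the filter during the search: returns k hits if they exist."""
    return sorted((d for d in docs if pred(d)),
                  key=lambda d: math.dist(d["vec"], query))[:k]

is_red = lambda d: d["color"] == "red"
print(len(post_filter_knn(4, is_red)), len(filter_aware_knn(4, is_red)))  # 2 4
```

Post-filtering discards matches after the top-k is fixed, so selective filters starve the result set; weaving the predicate into the search itself keeps recall intact, which is why doing it inside the graph traversal (as ACORN does) pays off on both latency and accuracy.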
Oracle’s distributed database to offer a cloud-native, serverless experience with full support for SQL syntax and data types, embedded AI capabilities, multi-region availability, real-time inference and RAG workflows directly within the data layer
With the launch of its globally distributed Exadata Database on Exascale infrastructure, Oracle is not simply extending its legacy capabilities into new markets; it’s making a bold claim to leadership in distributed data management for AI-native workloads. Oracle is leaning into its DNA, leveraging deep enterprise roots — full-featured SQL support and engineered systems — to assert a differentiated position. Oracle claims its new product is more than just another distributed database offering; rather, the company says its latest move represents a convergence of infrastructure, database technology and AI readiness that few, if any, other vendors can match. The underlying thesis is that as AI systems become embedded into mission-critical workflows, customers will need more than speed and scale; they’ll demand automation, consistency, high availability and compliance with data sovereignty laws. Oracle believes it can deliver all of the above in a package that promises a cloud-native, serverless experience that runs across geographies, clouds and business functions. What’s new with this announcement is Oracle’s decision to make these capabilities more accessible and cost-effective through Exascale, a serverless version of its engineered Exadata infrastructure. Oracle claims that its distributed database was designed from the ground up to support full SQL syntax and data types out of the box, making it easier for organizations to lift and shift their applications into a distributed context without rewriting code. This becomes critical in the AI era. One of the most notable aspects of the announcement is Oracle’s direct linkage between distributed databases and the emerging world of agentic AI. Unlike traditional software, agentic systems generate large, bursty, machine-driven traffic patterns and require immediate access to accurate, sovereign-compliant data.
Perhaps the most strategically important aspect of Oracle’s offering is its emphasis on co-locating AI with business data. In contrast to many AI architectures that involve lifting data into external stores for vector search and model training, Oracle is bringing AI to the data. By integrating vector search directly into the database engine and accelerating those searches with hardware optimizations via Exadata, Oracle enables real-time inference and retrieval-augmented generation (RAG) workflows directly within the data layer. This convergence simplifies architecture, reduces ETL overhead and ensures data security and compliance. It also means that AI workloads benefit from the same enterprise-grade replication, availability and observability as transactional applications. By combining full SQL support, data sovereignty compliance, active-active replication and embedded AI capabilities in a serverless, elastic form factor, Oracle is presenting a compelling vision of what distributed data infrastructure can and should be in the AI-native enterprise.
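The co-location pattern described above, keeping embeddings next to the business rows so retrieval for RAG happens inside the data layer, can be sketched in miniature. The rows, vectors and function names are made up for illustration; Oracle's actual mechanism is SQL-level vector search accelerated by Exadata, not this Python:

```python
# Toy sketch of RAG retrieval co-located with the records themselves,
# the pattern Oracle describes. Rows, vectors and names are made up.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Each "row" keeps its embedding next to the business data, so nothing
# is ETL'd into an external vector store before retrieval.
table = [
    {"id": 1, "text": "Invoice paid late", "emb": [0.9, 0.1]},
    {"id": 2, "text": "Shipment delayed",  "emb": [0.1, 0.9]},
]

def retrieve_context(query_emb, k=1):
    """Rank rows by similarity and return their text as RAG context."""
    ranked = sorted(table, key=lambda r: cosine(r["emb"], query_emb),
                    reverse=True)
    return " ".join(r["text"] for r in ranked[:k])

print(retrieve_context([1.0, 0.0]))  # Invoice paid late
```

Because retrieval and the source records live in one system, the retrieved context inherits the database's access controls, replication and observability for free, which is the architectural point being made.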
Beeks Financial Cloud uses edge-based AI and ML to analyze multi-source market/infrastructure data in real time, identifying unseen risks, latency, and arbitrage opportunities instantly
Beeks Financial Cloud has launched Beeks Market Edge Intelligence, an AI and machine learning platform designed to monitor market and infrastructure data in real time within colocation facilities and trading environments. It transforms raw data into instant, actionable insights, detecting hidden anomalies, predicting potential disruptions, and identifying trading opportunities that traditional tools may miss. The platform processes live order and infrastructure data directly at the network edge, eliminating delays from conventional systems. It alerts teams to issues such as latency spikes, packet loss, and feed quality problems before they affect trading. Using context-aware pattern analysis, it forecasts problems by factoring in trading calendars, market events, and historical infrastructure baselines, enabling predictive alerts. This helps firms anticipate bottlenecks, capacity constraints, and risk scenarios, reducing operational risk while maintaining execution quality. Beyond monitoring, the platform identifies trading signals invisible to conventional feeds, detecting arbitrage opportunities and order flow irregularities directly from network and market data. It integrates live and historical data with market events, trading calendars, and even weather conditions to ensure accurate, timely predictions — all while keeping data on-premises. By detecting infrastructure issues early and extracting hidden trading signals, Beeks’ platform enables firms to respond faster and optimize operations.
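Detecting a latency spike against a historical baseline, one of the checks described above, can be sketched with a rolling mean-and-deviation rule. The window, threshold and sample data are illustrative assumptions, not Beeks' models:

```python
# Sketch of baseline-driven anomaly detection on a latency stream, the
# kind of check described above. Window, threshold and data are illustrative.
from statistics import mean, stdev

def spikes(samples, window=5, z=3.0):
    """Flag samples more than z standard deviations above a rolling baseline."""
    alerts = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and samples[i] > mu + z * sigma:
            alerts.append(i)
    return alerts

latency_us = [100, 102, 99, 101, 100, 98, 450, 101, 100]
print(spikes(latency_us))  # [6]
```

A production system would condition the baseline on context (trading calendar, market events, time of day) rather than a fixed window, which is what turns simple thresholding into the predictive alerting the platform claims.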
New Striim 5.2 release introduces native real-time AI agents for predictive analytics, data governance, and vector embeddings—modernizing enterprise pipelines across multi-cloud and legacy sources
With an ever-expanding multi-cloud data estate, enterprises are grappling with brittle data pipelines, ETL-based batch lag, a lack of automated agents, and siloed data architectures that are complex to integrate. Striim’s latest product release, Striim 5.2, empowers enterprises to close this gap by adding new endpoint connectors such as Neon serverless Postgres, IBM DB2 z/OS, Microsoft Dynamics and others. It delivers native, real-time, automated AI agents that augment data pipelines without adding operational complexity. This release also adds real-time support for legacy integration from mainframe sources, and data delivery into serverless PostgreSQL and open lakehouse destinations. Striim 5.2 introduces new capabilities to enable AI across three strategic pillars — Enterprise Modernization and Digital Transformation, Data Interoperability, and Real-Time AI — enabling data and analytics/AI teams to accelerate their next-generation application roadmap without rewriting it from scratch. Key highlights include: Accelerating Real-time AI: Striim is taking major strides to bring AI directly into real-time data pipelines and applications. Striim recently released the Sherlock and Sentinel AI agents to enable in-flight sensitive data governance. With 5.2, Striim is introducing two new AI agents — Foreseer for anomaly detection and forecasting, and Euclid for real-time vector embedding generation — enabling teams to embed intelligence directly into data streams. Striim is also expanding support to AI-ready databases like Crunchy Data and Neon, built to handle AI agent workloads and in-database AI applications. Driving Enterprise Modernization: Striim now supports reading data in real-time from IBM DB2 on z/OS, making it easier for organizations to modernize their legacy systems.
Enterprises can integrate their mainframe data to the cloud and build high-throughput data pipelines that can read data in real-time from a wide array of enterprise-grade systems, such as IBM DB2, Oracle, Snowflake, SQL Server and others, powering analytics, applications, and insights across the business. Powering Digital Transformation: Enterprises are increasingly using Apache Iceberg to provide data interoperability, break data silos, build broad ecosystem adoption, and future-proof their data architectures. In addition to Delta, Striim now supports writing data in the Iceberg format to cloud data lakes and to cloud data warehouses such as Snowflake and Google BigQuery. Customers can easily extend their existing data pipelines to take advantage of Iceberg tables without having to rearchitect their applications.
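The in-flight governance idea behind agents like Sherlock and Sentinel, transforming events while they move from source to sink, can be sketched as a small generator pipeline. The event shape, field names and masking rule are illustrative assumptions, not Striim's APIs:

```python
# Minimal sketch of an in-flight pipeline stage that masks sensitive
# fields before delivery, in the spirit of the governance agents
# described above. Field names and the masking rule are illustrative.
import re

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. a US SSN shape

def source(events):
    """Stand-in for a CDC or connector source."""
    yield from events

def govern(stream):
    """Redact sensitive values while events are in flight."""
    for event in stream:
        event = dict(event)  # never mutate the upstream record
        event["note"] = SENSITIVE.sub("***-**-****", event["note"])
        yield event

def sink(stream):
    """Stand-in for delivery to a warehouse or lakehouse target."""
    return list(stream)

events = [{"id": 1, "note": "customer SSN 123-45-6789 on file"}]
out = sink(govern(source(events)))
print(out[0]["note"])  # customer SSN ***-**-**** on file
```

The point of the pattern is that governance happens between source and target, so sensitive values never land in the destination at all, rather than being scrubbed after delivery.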
Oracle supercharges databases and cloud apps by integrating OpenAI GPT-5’s advanced reasoning, code generation, and agentic AI directly into business-critical workflows
Oracle has deployed OpenAI GPT-5 across its database portfolio and suite of SaaS applications, including Oracle Fusion Cloud Applications, Oracle NetSuite, and Oracle Industry Applications, such as Oracle Health. By uniting trusted business data with frontier AI, Oracle is enabling customers to natively leverage sophisticated coding and reasoning capabilities in their business-critical workflows. With the development of GPT-5, Oracle will help customers: Enhance multi-step reasoning and orchestration across business processes; Accelerate code generation, bug resolution, and documentation; Increase accuracy and depth in business insights and recommendations. “The combination of industry-leading AI for data capabilities of Oracle Database 23ai and GPT-5 will help enterprises achieve breakthrough insights, innovations, and productivity,” said Kris Rice, senior vice president, Database Software Development, Oracle. “Oracle AI Vector and Select AI together with GPT-5 enable easier and more effective data search and analysis. Oracle’s SQLcl MCP Server enables GPT-5 to easily access data in Oracle Database. These capabilities enable users to search across all their data, run secure AI-powered operations, and use generative AI directly from SQL—helping to unlock the full potential of AI on enterprise data.” “GPT-5 will bring our Fusion Applications customers OpenAI’s sophisticated reasoning and deep-thinking capabilities,” said Meeten Bhavsar, senior vice president, Applications Development, Oracle. “The newest model from OpenAI will be able to power more complex AI agent-driven processes with capabilities that enable advanced automation, higher productivity, and faster decision making.”
Precisely and Opendatasoft partner to deliver integrated data marketplace combining robust data integrity and self-service sharing for trusted, AI-ready, compliant data across enterprises
Precisely announced a new strategic technology partnership with Opendatasoft, a data marketplace solution provider. Together, they will deliver an integrated data marketplace designed to simplify access to trusted, AI-ready data across businesses and teams – seamlessly and in compliance with governance requirements. The new data marketplace will integrate with the Precisely Data Integrity Suite, combining the Suite’s robust data management capabilities with the intuitive, self-service experience of Opendatasoft’s data sharing platform. This powerful combination will ensure that accurate, consistent, and contextual data products are not only well-managed behind the scenes – they are also easy to discover, use, and share across the organization, with partners, or even through public channels. The result is improved accessibility, faster adoption, and a frictionless experience that supports enterprise-wide compliance and data-sharing needs. Franck Carassus, CSO and Co-Founder of Opendatasoft, said, “Together with Precisely, we’re enabling them to support greater data sharing and consumption by business users, unlocking new opportunities for AI and analytics, and maximizing ROI on their data investments.” By creating a flexible foundation for AI, analytics, and automation, customers can streamline operations, reduce the cost of ownership, and accelerate time-to-insight. Precisely enables organizations to modernize with intelligence and resilience – empowering them to build the modern data architectures needed to support dynamic data marketplaces and self-service access across the enterprise.