Elastic, the Search AI Company, announced new performance and cost-efficiency gains from two significant enhancements to its vector search capabilities. Users now benefit from ACORN, a smart filtering algorithm, in addition to Better Binary Quantization (BBQ) as the default for high-dimensional dense vectors. These capabilities improve both query performance and ranking quality, giving developers new tools to build scalable, high-performance AI applications while lowering infrastructure costs. ACORN-1 is a new algorithm for filtered k-Nearest Neighbor (kNN) search in Elasticsearch. It integrates filtering tightly into the traversal of the HNSW graph, the core of Elasticsearch’s approximate nearest neighbor search engine. Unlike traditional approaches that apply filters after the search or require them to be fixed at indexing time, ACORN lets filters be defined flexibly at query time, even after documents have been ingested. In real-world filtered vector search benchmarks, ACORN delivers up to 5x speedups, improving latency without compromising result accuracy.
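As a concrete illustration, here is a minimal filtered kNN query via the official Elasticsearch Python client; the index name, field mapping, and query vector are placeholders, and whether ACORN-style traversal is used internally is decided by Elasticsearch based on version and filter selectivity:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumption: a local cluster

query_vector = [0.12, -0.03, 0.88]  # in practice, the output of your embedding model

resp = es.search(
    index="products",  # hypothetical index with a dense_vector field "embedding"
    knn={
        "field": "embedding",
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
        # The filter is evaluated as part of the kNN search rather than as a
        # post-search pass, which is where ACORN-style traversal helps.
        "filter": {"term": {"category": "laptops"}},
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```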
Oracle’s distributed database to offer a cloud-native, serverless experience with full support for SQL syntax and data types, embedded AI capabilities, multi-region availability, real-time inference and RAG workflows directly within the data layer
With the launch of its globally distributed Exadata Database on Exascale infrastructure, Oracle is not simply extending its legacy capabilities into new markets; it is staking a claim to leadership in distributed data management for AI-native workloads. Oracle is leaning into its DNA, leveraging deep enterprise roots (full-featured SQL support and engineered systems) to assert a differentiated position. The company says its new product is more than just another distributed database offering; it represents a convergence of infrastructure, database technology and AI readiness that few, if any, other vendors can match. The underlying thesis is that as AI systems become embedded into mission-critical workflows, customers will need more than speed and scale; they’ll demand automation, consistency, high availability and compliance with data sovereignty laws. Oracle believes it can deliver all of the above in a package that promises a cloud-native, serverless experience spanning geographies, clouds and business functions.

What’s new in this announcement is Oracle’s decision to make these capabilities more accessible and cost-effective through Exascale, a serverless version of its engineered Exadata infrastructure. Oracle says its distributed database was designed from the ground up to support full SQL syntax and data type coverage out of the box, making it easier for organizations to lift and shift applications into a distributed context without rewriting code, a capability that becomes critical in the AI era.

One of the most notable aspects of the announcement is Oracle’s direct linkage between distributed databases and the emerging world of agentic AI. Unlike traditional software, agentic systems generate large, bursty, machine-driven traffic patterns and require immediate access to accurate, sovereignty-compliant data. Perhaps the most strategically important aspect of Oracle’s offering is its emphasis on co-locating AI with business data. In contrast to many AI architectures that lift data into external stores for vector search and model training, Oracle is bringing AI to the data. By integrating vector search directly into the database engine and accelerating those searches with hardware optimizations via Exadata, Oracle enables real-time inference and retrieval-augmented generation (RAG) workflows directly within the data layer. This convergence simplifies architecture, reduces ETL overhead and helps ensure data security and compliance. It also means AI workloads benefit from the same enterprise-grade replication, availability and observability as transactional applications.

By combining full SQL support, data sovereignty compliance, active-active replication and embedded AI capabilities in a serverless, elastic form factor, Oracle is presenting a compelling vision of what distributed data infrastructure can and should be in the AI-native enterprise.
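To make the co-location idea concrete, here is a minimal sketch of an in-database vector similarity query using the python-oracledb driver; the connection details and the doc_chunks table with its VECTOR column are hypothetical placeholders, and the VECTOR_DISTANCE function shown is the Oracle Database 23ai SQL primitive for this kind of search:

```python
import array
import oracledb  # python-oracledb driver

# Hypothetical connection details.
conn = oracledb.connect(user="app", password="***", dsn="localhost/freepdb1")
cur = conn.cursor()

# 32-bit float query vector; in practice this comes from your embedding model.
qvec = array.array("f", [0.11, -0.42, 0.07])

# Nearest chunks by cosine distance, computed inside the database,
# so the data never leaves the data layer.
cur.execute(
    """
    SELECT doc_id, chunk_text
    FROM doc_chunks
    ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
    FETCH FIRST 5 ROWS ONLY
    """,
    qv=qvec,
)
for doc_id, chunk in cur:
    print(doc_id, chunk[:80])
```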
Beeks Financial Cloud uses edge-based AI and ML to analyze multi-source market/infrastructure data in real time, identifying unseen risks, latency, and arbitrage opportunities instantly
Beeks Financial Cloud has launched Beeks Market Edge Intelligence, an AI and machine learning platform designed to monitor market and infrastructure data in real time within colocation facilities and trading environments. It transforms raw data into instant actionable insights, detecting hidden anomalies, predicting potential disruptions, and identifying trading opportunities that traditional tools may miss. The platform processes live order and infrastructure data directly at the network edge, eliminating delays from conventional systems. It alerts teams to issues like latency spikes, packet loss, and feed quality problems before they affect trading. Using context-aware pattern analysis, it forecasts problems by factoring in trading calendars, market events, and historical infrastructure baselines, enabling predictive alerts. This helps firms anticipate bottlenecks, capacity constraints, and risk scenarios, reducing operational risk while maintaining execution quality. Beyond monitoring, the platform identifies trading signals invisible to conventional feeds, detecting arbitrage opportunities and order flow irregularities directly from network and market data. It integrates live and historical data with market events, trading calendars, and even weather conditions to ensure accurate, timely predictions—all while keeping data on-premises. By detecting infrastructure issues early and extracting hidden trading signals, Beeks’ platform enables firms to respond faster and optimize operations.
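Beeks has not published implementation details, so the following is a purely hypothetical sketch of the baseline-deviation idea described above: a rolling window flags latency readings that drift far from recent history.

```python
# Hypothetical sketch only; not Beeks' implementation.
from collections import deque
import statistics

class LatencyBaseline:
    """Rolling baseline that flags latency readings far outside recent history."""

    def __init__(self, window: int = 500, threshold: float = 4.0):
        self.samples = deque(maxlen=window)   # recent latency observations
        self.threshold = threshold            # alert when |z-score| exceeds this

    def observe(self, latency_us: float) -> bool:
        """Record one reading; return True if it is anomalous vs. the baseline."""
        alert = False
        if len(self.samples) >= 30:  # wait for a minimally stable baseline
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            alert = abs(latency_us - mean) / stdev > self.threshold
        self.samples.append(latency_us)
        return alert

# Usage with a live feed (latency_feed and notify_ops are hypothetical):
# baseline = LatencyBaseline()
# for ts, latency in latency_feed():
#     if baseline.observe(latency):
#         notify_ops(ts, latency)
```

A production system would segment baselines by context (trading calendar, session phase, venue), which is what the announcement's "context-aware pattern analysis" suggests.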
New Striim 5.2 release introduces native real-time AI agents for predictive analytics, data governance, and vector embeddings—modernizing enterprise pipelines across multi-cloud and legacy sources
With an ever-expanding multi-cloud data estate, enterprises are grappling with brittle data pipelines, ETL-based batch lag, a lack of automated agents, and siloed data architectures that are complex to integrate. Striim’s latest product release, Striim 5.2, empowers enterprises to close this gap by adding new endpoint connectors such as Neon serverless Postgres, IBM Db2 for z/OS, Microsoft Dynamics and others. It delivers native, real-time, automated AI agents that augment data pipelines without adding operational complexity. This release also adds real-time support for legacy integration from mainframe sources, and data delivery into serverless PostgreSQL and open lakehouse destinations. Striim 5.2 introduces new capabilities to enable AI across three strategic pillars (Enterprise Modernization and Digital Transformation, Data Interoperability, and Real-Time AI), enabling data and analytics/AI teams to accelerate their next-generation application roadmaps without rewriting applications from scratch. Key highlights include:

- Accelerating Real-time AI: Striim is taking major strides to bring AI directly into real-time data pipelines and applications. Striim recently released the Sherlock and Sentinel AI agents to enable in-flight sensitive data governance. With 5.2, Striim is introducing two new AI agents, Foreseer (anomaly detection and forecasting) and Euclid (real-time vector embedding generation), enabling teams to embed intelligence directly into data streams (see the sketch after this list). Striim is also expanding support to AI-ready databases like Crunchy Data and Neon, built to handle AI agent workloads and in-database AI applications.
- Driving Enterprise Modernization: Striim now supports reading data in real time from IBM Db2 on z/OS, making it easier for organizations to modernize their legacy systems. Enterprises can integrate their mainframe data with the cloud and build high-throughput data pipelines that read data in real time from a wide array of enterprise-grade systems, such as IBM Db2, Oracle, Snowflake, SQL Server and others, powering analytics, applications, and insights across the business.
- Powering Digital Transformation: Enterprises are increasingly using Apache Iceberg to provide data interoperability, break down data silos, build broad ecosystem adoption, and future-proof their data architectures. In addition to Delta Lake, Striim now supports writing data in the Iceberg format to cloud data lakes and to cloud data warehouses such as Snowflake and Google BigQuery. Customers can easily extend their existing data pipelines to take advantage of Iceberg tables without having to rearchitect their applications.
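Striim pipelines are authored in Striim's own tooling rather than Python, so the following is only a conceptual sketch of the in-flight enrichment pattern an agent like Euclid performs, with embed() standing in for any embedding model call:

```python
# Conceptual illustration, not Striim code: enrich a CDC stream with embeddings.
from typing import Iterator

def embed(text: str) -> list[float]:
    """Stand-in for an embedding model invocation (local or hosted)."""
    raise NotImplementedError

def with_embeddings(change_events: Iterator[dict]) -> Iterator[dict]:
    """Attach a vector embedding to each change event as it flows through."""
    for event in change_events:
        event["embedding"] = embed(event["payload"]["description"])  # field name is illustrative
        yield event  # downstream: deliver to a vector-capable target (e.g. Postgres)
```

The point of doing this in-flight, as the release describes, is that embeddings land in the target alongside the data change itself, with no separate batch embedding job.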
Oracle supercharges databases and cloud apps by integrating OpenAI GPT-5’s advanced reasoning, code generation, and agentic AI directly into business-critical workflows
Oracle has deployed OpenAI GPT-5 across its database portfolio and suite of SaaS applications, including Oracle Fusion Cloud Applications, Oracle NetSuite, and Oracle Industry Applications such as Oracle Health. By uniting trusted business data with frontier AI, Oracle is enabling customers to natively leverage sophisticated coding and reasoning capabilities in their business-critical workflows. With GPT-5, Oracle will help customers: enhance multi-step reasoning and orchestration across business processes; accelerate code generation, bug resolution, and documentation; and increase accuracy and depth in business insights and recommendations. “The combination of industry-leading AI for data capabilities of Oracle Database 23ai and GPT-5 will help enterprises achieve breakthrough insights, innovations, and productivity,” said Kris Rice, senior vice president, Database Software Development, Oracle. “Oracle AI Vector and Select AI together with GPT-5 enable easier and more effective data search and analysis. Oracle’s SQLcl MCP Server enables GPT-5 to easily access data in Oracle Database. These capabilities enable users to search across all their data, run secure AI-powered operations, and use generative AI directly from SQL—helping to unlock the full potential of AI on enterprise data.” “GPT-5 will bring our Fusion Applications customers OpenAI’s sophisticated reasoning and deep-thinking capabilities,” said Meeten Bhavsar, senior vice president, Applications Development, Oracle. “The newest model from OpenAI will be able to power more complex AI agent-driven processes with capabilities that enable advanced automation, higher productivity, and faster decision making.”
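As a hedged illustration of "generative AI directly from SQL", the sketch below assumes an Oracle Autonomous Database where a Select AI profile (the name GPT5_PROFILE is a placeholder) has already been created and pointed at the desired model; DBMS_CLOUD_AI.GENERATE is the documented Select AI entry point, and which models it can reach depends entirely on the profile configuration:

```python
import oracledb

oracledb.defaults.fetch_lobs = False  # return CLOB results as plain str

# Hypothetical connection to an Autonomous Database instance.
conn = oracledb.connect(user="app", password="***", dsn="adb_high")
cur = conn.cursor()

# Assumes a Select AI profile was created beforehand via
# DBMS_CLOUD_AI.CREATE_PROFILE and granted to this schema.
cur.execute(
    """
    SELECT DBMS_CLOUD_AI.GENERATE(
             prompt       => 'Which regions grew revenue fastest last quarter?',
             profile_name => 'GPT5_PROFILE',
             action       => 'narrate')
    FROM dual
    """
)
print(cur.fetchone()[0])  # natural-language answer generated over the schema
```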
Precisely and Opendatasoft partner to deliver integrated data marketplace combining robust data integrity and self-service sharing for trusted, AI-ready, compliant data across enterprises
Precisely announced a new strategic technology partnership with Opendatasoft, a data marketplace solution provider. Together, they will deliver an integrated data marketplace designed to simplify access to trusted, AI-ready data across businesses and teams, seamlessly and in compliance with governance requirements. The new data marketplace will integrate with the Precisely Data Integrity Suite, combining the Suite’s robust data management capabilities with the intuitive, self-service experience of Opendatasoft’s data-sharing platform to address the challenge of making well-governed data easy to access. This powerful combination will ensure that accurate, consistent, and contextual data products are not only well-managed behind the scenes but also easy to discover, use, and share across the organization, with partners, or even through public channels. The result is improved accessibility, faster adoption, and a frictionless experience that supports enterprise-wide compliance and data-sharing needs. Franck Carassus, CSO and Co-Founder of Opendatasoft, said: “Together with Precisely, we’re enabling [customers] to support greater data sharing and consumption by business users, unlocking new opportunities for AI and analytics, and maximizing ROI on their data investments.” By creating a flexible foundation for AI, analytics, and automation, customers can streamline operations, reduce the cost of ownership, and accelerate time-to-insight. Precisely enables organizations to modernize with intelligence and resilience, empowering them to build the modern data architectures needed to support dynamic data marketplaces and self-service access across the enterprise.
Klaviyo’s enhanced MCP server securely links AI tools to customer data via real-time APIs and remote access, enabling conversational analytics, audience suggestions, and automated campaign execution
CRM and marketing automation company Klaviyo Inc. announced the general availability of its enhanced Model Context Protocol (MCP) server, which lets marketers connect AI tools such as Claude Desktop, Cursor, VS Code, Windsurf and other local or web-based tools directly to Klaviyo. The enhanced MCP server includes improved reporting context and a new remote server for broader accessibility, making it easier for marketers to bring AI into their workflows with more speed and automation. The solution assists marketers who want to scale performance by giving AI platforms the context to deliver better insights, recommendations and content. The remote MCP server offers secure online setup and real-time access to create, read and update data through Klaviyo’s API without adding complexity to the marketing technology stack. The MCP server makes it easy for marketers to accelerate their work in Klaviyo with AI tools, including a conversational chat interface that lets customers interact with Klaviyo using natural-language prompts. Marketers can quickly ask which campaign is driving the most revenue, how clickthrough rates have changed over time, or how performance compares across accounts. Using the platform, marketers can request AI-generated suggestions for new audience segments, subject lines modeled on top performers, or strategies to improve open rates in key flows. The MCP server also supports AI-driven execution, letting marketers move from idea to action: with simple prompts, users can upload event profiles, draft promotional emails, or add images directly into Klaviyo.
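For orientation, here is a minimal sketch of connecting to a remote MCP server with the open-source MCP Python SDK and listing its tools; the endpoint URL is a placeholder (consult Klaviyo's documentation for the real one) and authentication is omitted:

```python
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://mcp.klaviyo.example.com/mcp"  # hypothetical endpoint

async def main() -> None:
    # Auth (e.g. OAuth headers) omitted for brevity.
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server exposes: reporting, segments, etc.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

In practice, marketers would not write this themselves; an MCP-aware client such as Claude Desktop or Cursor performs the equivalent handshake from its configuration.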
ParadeDB’s open-source Postgres extension facilitates full-text search and analytics directly in Postgres without the need to transfer data to a separate source and can support heavy workloads that require frequent updating
ParadeDB is an open-source Postgres extension that facilitates full-text search and analytics directly in Postgres, without users needing to transfer data to a separate system. The platform integrates with other data infrastructure tools, including Google Cloud SQL, Azure Postgres, and Amazon RDS, among others. “Postgres is becoming the default database of the world, and you still can’t do good search over that information, believe it or not,” said Philippe Noël, the co-founder and CEO of ParadeDB. ParadeDB isn’t the first company to try to solve search on Postgres. Noël said that Elasticsearch deployments work by moving data back and forth between Elasticsearch and Postgres, which can work, but the approach isn’t great for heavy workloads or processes that require frequent updating. “That breaks all the time,” Noël said. “The two databases are not meant to work together. There’s a lot of compatibility issues, there’s a lot of latency issues, higher costs, and all of that deteriorates the user experience.” ParadeDB aims to eliminate many of those challenges by building directly on top of Postgres as an extension, with no data transfer required.
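For a concrete sense of what this looks like, here is a small sketch using ParadeDB's pg_search syntax from Python via psycopg; the items table is hypothetical, and the exact index options should be checked against ParadeDB's documentation:

```python
import psycopg  # assumes a Postgres instance with ParadeDB's pg_search extension installed

with psycopg.connect("dbname=app") as conn, conn.cursor() as cur:
    # BM25 index over the searchable columns; key_field uniquely identifies rows.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS items_search_idx ON items
        USING bm25 (id, description) WITH (key_field = 'id')
    """)
    # '@@@' is ParadeDB's full-text match operator; no data leaves Postgres.
    cur.execute(
        "SELECT id, description FROM items WHERE description @@@ %s LIMIT 5",
        ("wireless keyboard",),
    )
    for row in cur.fetchall():
        print(row)
```

Because the index lives inside Postgres, updates to rows are searchable without a sync job, which is the frequent-update scenario Noël contrasts with the Elasticsearch approach.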
MindBridge’s integration of its AI-powered financial decision intelligence with Snowflake to enable finance teams leverage secure data pipelines for automated analysis and real-time risk insights within their existing workflows
MindBridge, the leader in AI-powered financial decision intelligence, announced its integration with Snowflake, enabling finance teams to put their financial data under continuous, AI-powered analysis. With this new integration, organizations can securely connect MindBridge to their data without complicated processes or manual work. By leveraging secure data pipelines within the Snowflake AI Data Cloud, organizations can easily and securely use MindBridge for automated analysis without adding complexity or risk. Risk scores and insights from MindBridge can also be leveraged in existing workflows, shortening the time to identify and act on findings and minimizing additional training or complex implementations. Every time data is updated, MindBridge automatically runs its analysis, so finance teams always have a consistent, up-to-date view of their financial risk. Key benefits of the integration:

- Simple, scalable integration: MindBridge connects directly to the Snowflake AI Data Cloud, leveraging secure data pipelines to automate analysis within existing governance frameworks.
- Real-time financial risk insights.
- Enterprise-grade security and control.
- Frictionless insights delivery: With automated data delivery and analysis execution, business users can access the latest results in the MindBridge UI or within their existing workflow systems, providing more flexibility to surface insights where and when they’re needed most, without disrupting established processes.
- Integrated risk intelligence: Risk scores and analysis results are retrieved via API back into the Snowflake platform (see the sketch after this list), enabling continuous risk monitoring, deeper investigations, and integrated reporting alongside other business KPIs.
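Neither the MindBridge API shape nor the target table is public in this announcement, so the following is a hypothetical sketch of the retrieve-scores-into-Snowflake step using the Snowflake Python connector; the endpoint, field names, credentials, and table are all placeholders:

```python
import requests
import snowflake.connector

# Hypothetical MindBridge REST endpoint and response fields.
scores = requests.get(
    "https://api.mindbridge.example.com/v1/risk-scores",
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
).json()

# Placeholder Snowflake connection details.
conn = snowflake.connector.connect(
    account="my_account", user="svc_mindbridge", password="***",
    warehouse="ANALYTICS_WH", database="FINANCE", schema="RISK",
)
cur = conn.cursor()

# Land the risk scores next to other business KPIs for integrated reporting.
cur.executemany(
    "INSERT INTO MINDBRIDGE_RISK_SCORES (TXN_ID, RISK_SCORE) VALUES (%s, %s)",
    [(s["transactionId"], s["riskScore"]) for s in scores],
)
conn.commit()
```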
CapStorm’s AI solution enables business users to ask complex data questions in plain English and receive real-time dashboards, instantly, across Salesforce, ERPs, CRMs, and data warehouses without writing a single line of code
Secure Salesforce data management solutions provider CapStorm has launched CapStorm:AI, an AI-powered solution that lets users “talk to their data” in plain English. Designed for organizations seeking secure, self-hosted insights across their Salesforce and SQL environments, CapStorm:AI enables business users to ask complex data questions and receive real-time dashboards instantly, without writing a single line of code. CapStorm:AI brings together CapStorm’s proven near-real-time Salesforce data replication with a powerful AI engine that understands how a business’s data connects. It works with leading SQL databases like SQL Server and PostgreSQL, as well as cloud data warehouses like Snowflake and Amazon Redshift, giving users instant insights across systems with no technical expertise required. Best of all, it keeps everything inside the organization’s own environment, so data stays secure and fully under the user’s control. CapStorm:AI is designed with security and compliance in mind, making it ideal for regulated industries and enterprises that demand full control of their data. CapStorm:AI gives users a faster, easier way to get answers (a conceptual sketch of the pattern follows the list):

- Natural Language Dashboards: Ask a business question in plain English and receive a real-time dashboard, instantly.
- Instant Access Across Systems: Understand how data connects across Salesforce, ERPs, CRMs, and data warehouses, without needing custom joins or pipelines.
- Near Real-Time Insights: Built on CapStorm’s trusted replication technology, ensuring answers are always up to date.
- Flexible Deployment Options: CapStorm:AI can be deployed using an organization’s own on-prem database for full control, or hosted in a secure AWS environment managed by CapStorm.
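CapStorm has not disclosed internals, so the following is a generic, hypothetical sketch of the text-to-SQL pattern that "talk to your data" tools typically follow, with ask_llm() as a stand-in for any model call and an illustrative replicated-Salesforce schema:

```python
# Generic, hypothetical text-to-SQL sketch; not CapStorm's implementation.
import psycopg

SCHEMA_HINT = ("Tables: account(id, name, industry), "
               "opportunity(id, account_id, amount, stage)")  # illustrative schema

def ask_llm(prompt: str) -> str:
    """Stand-in for any LLM call that returns a single SQL SELECT statement."""
    raise NotImplementedError

def answer(question: str) -> list[tuple]:
    sql = ask_llm(f"{SCHEMA_HINT}\nWrite one read-only SQL query answering: {question}")
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only read-only queries are allowed")
    # The query runs against the locally replicated copy, so, as with
    # CapStorm's self-hosted design, data never leaves the environment.
    with psycopg.connect("dbname=salesforce_replica") as conn:
        return conn.execute(sql).fetchall()
```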