With an ever-expanding multi-cloud data estate, enterprises are grappling with brittle data pipelines, ETL-based batch lag, a lack of automated agents, and siloed data architectures that are complex to integrate. Striim’s latest product release, Striim 5.2, empowers enterprises to close this gap by adding new endpoint connectors such as Neon serverless Postgres, IBM DB2 on z/OS, Microsoft Dynamics, and others. It delivers native, real-time, automated AI agents that augment data pipelines without adding operational complexity, adds real-time support for legacy integration from mainframe sources, and enables data delivery into serverless PostgreSQL and open lakehouse destinations. Striim 5.2 introduces new capabilities to enable AI across three strategic pillars – Enterprise Modernization and Digital Transformation, Data Interoperability, and Real-Time AI – enabling data and analytics/AI teams to accelerate their next-generation application roadmaps without rewriting applications from scratch. Key highlights include:
Accelerating Real-Time AI: Striim is taking major strides to bring AI directly into real-time data pipelines and applications. Striim recently released the Sherlock and Sentinel AI agents to enable in-flight sensitive-data governance. With 5.2, Striim is introducing two new AI agents – Foreseer for anomaly detection and forecasting, and Euclid for real-time vector embedding generation – enabling teams to embed intelligence directly into data streams. Striim is also expanding support to AI-ready databases such as Crunchy Data and Neon, built to handle AI agent workloads and in-database AI applications.
Driving Enterprise Modernization: Striim now supports reading data in real time from IBM DB2 on z/OS, making it easier for organizations to modernize their legacy systems. Enterprises can integrate their mainframe data with the cloud and build high-throughput data pipelines that read data in real time from a wide array of enterprise-grade systems, such as IBM DB2, Oracle, Snowflake, and SQL Server, powering analytics, applications, and insights across the business.
Powering Digital Transformation: Enterprises are increasingly using Apache Iceberg to provide data interoperability, break down data silos, build broad ecosystem adoption, and future-proof their data architectures. In addition to Delta, Striim now supports writing data in the Iceberg format to cloud data lakes and to cloud data warehouses such as Snowflake and Google BigQuery. Customers can easily extend their existing data pipelines to take advantage of Iceberg tables without having to rearchitect their applications.
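To make the in-flight anomaly-detection idea concrete, here is a minimal sketch of an agent that scores each streaming value against a rolling window. This is a generic illustration of the technique (a rolling z-score), not Striim's Foreseer implementation; the class and parameter names are invented for the example.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flags events whose value deviates sharply from a rolling window's mean.

    Illustrative only -- Striim's Foreseer agent is not documented to work
    this way; this just shows the general shape of in-stream scoring.
    """
    def __init__(self, window=50, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.values) >= 10:  # require minimal history before scoring
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(value - mean) / std > self.threshold
        # Anomalies still enter the window here; a production agent might
        # exclude them to avoid contaminating the baseline.
        self.values.append(value)
        return anomalous
```

Fed a steady stream of values around 10, the detector stays quiet; a sudden spike to 1000 trips the threshold. A pipeline operator would attach such a scorer per-key (per device, per account) rather than globally.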
Oracle supercharges databases and cloud apps by integrating OpenAI GPT-5’s advanced reasoning, code generation, and agentic AI directly into business-critical workflows
Oracle has deployed OpenAI GPT-5 across its database portfolio and suite of SaaS applications, including Oracle Fusion Cloud Applications, Oracle NetSuite, and Oracle Industry Applications, such as Oracle Health. By uniting trusted business data with frontier AI, Oracle is enabling customers to natively leverage sophisticated coding and reasoning capabilities in their business-critical workflows. With the development of GPT-5, Oracle will help customers: Enhance multi-step reasoning and orchestration across business processes; Accelerate code generation, bug resolution, and documentation; Increase accuracy and depth in business insights and recommendations. “The combination of industry-leading AI for data capabilities of Oracle Database 23ai and GPT-5 will help enterprises achieve breakthrough insights, innovations, and productivity,” said Kris Rice, senior vice president, Database Software Development, Oracle. “Oracle AI Vector and Select AI together with GPT-5 enable easier and more effective data search and analysis. Oracle’s SQLcl MCP Server enables GPT-5 to easily access data in Oracle Database. These capabilities enable users to search across all their data, run secure AI-powered operations, and use generative AI directly from SQL—helping to unlock the full potential of AI on enterprise data.” “GPT-5 will bring our Fusion Applications customers OpenAI’s sophisticated reasoning and deep-thinking capabilities,” said Meeten Bhavsar, senior vice president, Applications Development, Oracle. “The newest model from OpenAI will be able to power more complex AI agent-driven processes with capabilities that enable advanced automation, higher productivity, and faster decision making.”
Precisely and Opendatasoft partner to deliver integrated data marketplace combining robust data integrity and self-service sharing for trusted, AI-ready, compliant data across enterprises
Precisely announced a new strategic technology partnership with Opendatasoft, a data marketplace solution provider. Together, they will deliver an integrated data marketplace designed to simplify access to trusted, AI-ready data across businesses and teams – seamlessly and in compliance with governance requirements. The new data marketplace will integrate with the Precisely Data Integrity Suite, combining the Suite’s robust data management capabilities with the intuitive, self-service experience of Opendatasoft’s data sharing platform. This combination will ensure that accurate, consistent, and contextual data products are not only well-managed behind the scenes – they are also easy to discover, use, and share across the organization, with partners, or even through public channels. The result is improved accessibility, faster adoption, and a frictionless experience that supports enterprise-wide compliance and data-sharing needs. Franck Carassus, CSO and Co-Founder of Opendatasoft, said “Together with Precisely, we’re enabling them to support greater data sharing and consumption by business users, unlocking new opportunities for AI and analytics, and maximizing ROI on their data investments.” By creating a flexible foundation for AI, analytics, and automation, customers can streamline operations, reduce the cost of ownership, and accelerate time-to-insight. Precisely enables organizations to modernize with intelligence and resilience – empowering them to build the modern data architectures needed to support dynamic data marketplaces and self-service access across the enterprise.
Klaviyo’s enhanced MCP server securely links AI tools to customer data via real-time APIs and remote access; enabling conversational analytics, audience suggestions, and automated campaign execution.
CRM and marketing automation company Klaviyo Inc. announced the general availability of its enhanced Model Context Protocol server, which gives marketers the ability to connect AI tools such as Claude Desktop, Cursor, VS Code, Windsurf, and other local or web-based tools directly with Klaviyo. The enhanced MCP server includes improved reporting context and a new remote server for broader accessibility, making it easier for marketers to bring AI into their workflows with more opportunity for speed and automation. The solution assists marketers who want to scale performance by training AI platforms to deliver better insights, recommendations, and content. The remote MCP server offers secure online setup and real-time access to create, read, and update data through Klaviyo’s API without adding complexity to the marketing technology stack. The MCP server makes it easy for marketers to accelerate their work in Klaviyo with AI tools, including a conversational chat interface that allows customers to interact with Klaviyo using natural language prompts. Marketers can quickly ask questions such as which campaign is driving the most revenue, how clickthrough rates have changed over time, or how performance compares across accounts. Using the platform, marketers can request AI-generated suggestions for new audience segments, subject lines modeled on top performers, or strategies to improve open rates in key flows. The MCP server also supports AI-driven execution, letting marketers move from idea to action. With simple prompts, users can upload event profiles, draft promotional emails, or add images directly into Klaviyo.
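The flow behind a question like "which campaign is driving the most revenue?" can be sketched as an MCP-style tool call: the assistant picks a named tool, the server runs it against reporting data, and the answer comes back in-conversation. The tool name, dispatch shape, and field names below are invented for illustration and are not Klaviyo's actual MCP tool schema.

```python
def top_campaign_by_revenue(report_rows):
    """Return the campaign with the highest attributed revenue.

    `report_rows` stands in for the JSON a reporting API might return;
    the field names are illustrative, not Klaviyo's actual schema.
    """
    return max(report_rows, key=lambda row: row["revenue"])["campaign"]

# Registry of callable tools the server exposes to the AI client.
TOOLS = {"top_campaign_by_revenue": top_campaign_by_revenue}

def handle_tool_call(name, **kwargs):
    """Dispatch an MCP-style tool call by name, as a server would on behalf
    of a chat assistant."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

In practice the assistant translates the user's natural-language question into the tool name and arguments; the dispatch step is the part that stays deterministic and auditable.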
Vast Data’s SyncEngine helps AI agents tap unstructured data from every source; provides unlimited ingest throughput via scale‑out nodes, deep metadata indexing, and real‑time searchable catalogs spanning cloud, on‑prem, and edge
Data storage company Vast Data Inc., which is in the process of transforming itself into an “operating system” for artificial intelligence, announced a new capability called Vast SyncEngine. The company says it acts as a “universal data router,” combining a high-performance onboarding system for unstructured data with a global catalog for building AI data pipelines. Available at no additional cost to existing customers, Vast SyncEngine is designed to simplify the headaches of discovering and mobilizing distributed, unstructured datasets and software-as-a-service tools, so these data sources can quickly be plugged into AI applications. Vast SyncEngine combines core distributed computing services, including storage, compute, messaging, and reasoning, into a unified data layer that spans cloud, on-premises, and edge environments to power AI applications and agents. It works by collapsing data cataloging, migration, and transformation into a single, no-cost capability within the Vast AI OS platform, enabling simpler integration and faster time to insight at lower cost. It enables teams to catalog and search across trillions of files and objects leveraging the Vast DataBase, with unlimited ingest throughput that is bound only by the performance of the source and target systems. Moreover, its massively parallel architecture supports rapid scaling simply by adding more nodes. Using Vast SyncEngine, companies can quickly build real-time, searchable data catalogs that span everything from traditional file and object systems to enterprise applications such as Google Drive, Salesforce, Microsoft Office, and SharePoint. It supports deep metadata indexing to make data instantly discoverable.
All of this unstructured information, the company says, can be moved without custom scripts or data transformations and fed into Vast’s InsightEngine and DataEngine platforms, which optimize it for AI applications and agentic workloads. As an added benefit, companies should see much lower costs, as Vast SyncEngine essentially replaces existing data transformation and migration tools.
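The core idea of a searchable metadata catalog can be illustrated with a toy inverted index: each metadata key/value pair points at the set of objects carrying it, so multi-attribute queries reduce to set intersections. This is a conceptual sketch only; a real catalog like Vast's indexes trillions of objects inside a distributed database, not an in-memory dict.

```python
from collections import defaultdict

class MetadataCatalog:
    """Toy inverted index over object metadata: (key, value) -> set of paths.

    Illustrative sketch of metadata indexing in general, not Vast
    SyncEngine's actual architecture.
    """
    def __init__(self):
        self.index = defaultdict(set)
        self.records = {}

    def ingest(self, path, metadata):
        """Record an object's metadata and index every attribute."""
        self.records[path] = metadata
        for key, value in metadata.items():
            self.index[(key, value)].add(path)

    def search(self, **criteria):
        """Return paths whose metadata matches every key=value criterion."""
        sets = [self.index[(k, v)] for k, v in criteria.items()]
        if not sets:
            return set()
        return set.intersection(*sets)
```

Intersection over per-attribute postings is what makes "all PDFs owned by a given user across Drive, Salesforce, and file shares" answerable without scanning the data itself.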
FICO’s new platform “households” disparate data, catalogs lineage, and turns time‑based feature combinations into predictive lift for risk, fraud, and marketing decisions
FICO is evolving into a data analytics powerhouse, introducing an AI-driven platform designed to streamline financial operations while safeguarding the very data that fuels it. Managing financial data through one platform layer can be like looking at a “spiderweb,” according to Bill Waid, chief product and technology officer. Clients will be relying on data from different sources, trying to avoid redundancy and create linkage between pieces of the same data. “We have our own proprietary way of actually identifying linkage and householding,” Waid said. “And actually being able to bring together and unify that data so that you have common contexts for each of them, but we also provide an open-source way of doing that where they might be done by the data partner itself.” FICO helps its users create a core data catalog, from which they can derive predictive reasoning through AI features. The goal is collecting data and harnessing AI for a specific, concrete goal, Waid emphasized. “Just collecting data isn’t actually a good business outcome,” he said. “A good business outcome is that I’ve used that data … because very often the predictiveness isn’t in the data itself, it’s in some combination of those data elements over time.”
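The "householding" Waid describes is a record-linkage problem: records that share any linking key (an address, a phone number) get grouped into one household. A standard way to implement the general idea is union-find over shared keys; the sketch below shows that generic technique and is in no way FICO's proprietary matching logic.

```python
class Householder:
    """Union-find over record IDs; records sharing a key are merged.

    A generic sketch of record linkage / householding, not FICO's
    proprietary identification method.
    """
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def household(records):
    """records: list of (record_id, [linking_keys]).
    Returns a dict mapping each record_id to its household's root ID."""
    uf = Householder()
    key_owner = {}  # first record seen for each linking key
    for rid, keys in records:
        uf.find(rid)  # register the record even if it shares no keys
        for key in keys:
            if key in key_owner:
                uf.union(rid, key_owner[key])
            else:
                key_owner[key] = rid
    return {rid: uf.find(rid) for rid, _ in records}
```

Two records sharing an address land in the same household even if neither shares a key with a third record directly; transitivity is exactly what union-find provides. Real systems add fuzzy matching and confidence scoring on top.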
Consumer AI agents and zero‑party data will shift loyalty from discounts to context, as brands operationalize per‑app actions and segment‑specific perks such as Chipotle’s student‑authenticated rewards
If there’s one cohort that understands the value of their data, it is Gen Z. And if the value exchange is strong and they trust your brand (emphasis on brand trust), they will be the first in the pool to share coveted zero-party data with that trusted brand. Zero-party data is the information customers willingly volunteer about their preferences, needs, and context. Gen Z’s hope is that brands will actually listen and enhance their experiences – “think training their algorithms.” Smart programs use onboarding and ongoing touchpoints to ask simple, thoughtful questions – and then act on them immediately. e.l.f. Beauty Squad and My Purina stand out for making the exchange obvious: tell us what you want, and we’ll make the experience better right away. This isn’t about surveys; it’s about creating a dialogue. Brands that do this well build trust faster and generate more relevant offers without relying on guesswork. When programs like Marriott Bonvoy and Starbucks connect memberships and recognize the shared customer while they’re actually on the road, the benefit lands in the moment, not in a quarterly statement. Forrester Principal Analyst John Pedini’s forward look is compelling: consumers will soon bring their own AI agents that assemble itineraries around “non-negotiables” and flex on everything else. Pedini flagged Chipotle Rewards U, a student-authenticated extension calibrated to campus life. It’s segment-specific by design – game days, finals, and perks that feel earned, not generic – and it signals a broader shift: loyalty moving from linear discounts to contextual relevance. In my own coverage of travel and experience brands, the programs that grow faster are the ones that make the next decision feel obvious.
Oracle launches MCP Server for Oracle Database to allow users to securely interact with core database platform and navigate complex data schemas using natural language, with the server translating questions into SQL queries
Oracle Corp unveiled MCP Server for Oracle Database, a new Model Context Protocol offering that brings AI-powered interaction directly into its core database platform to help developers and analysts query and manage data using natural language. The new MCP server enables LLMs to securely connect to Oracle Database and interact with it contextually while respecting user permissions and roles. MCP Server for Oracle Database translates natural-language questions into SQL queries, helping users retrieve insights from data without needing to write complex code and making tasks such as performance diagnostics, schema summarization, and query generation easier. The integration has been designed to simplify the process of working with SQL queries and navigating complex data schemas. With MCP Server for Oracle Database, AI agents can act as copilots for developers and analysts by generating code and analyzing performance. The protocol also supports read and write operations, allowing users to take action through the AI assistant, such as creating indexes, checking performance plans, or optimizing workloads. The AI agent operates strictly within the access boundaries of the authenticated user, using a private, dedicated schema to isolate the agent’s interactions from production data, allowing it to generate summaries or sample datasets for language models without exposing full records.
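The key design point above is that translation and authorization are separate steps: the model proposes SQL, but the server enforces the authenticated user's access boundary before anything runs. A minimal sketch of that gatekeeping shape, with the role map, question templates, and function names all invented for illustration (Oracle's server uses an LLM for translation, not canned templates):

```python
# Hypothetical role -> allowed-tables map; in the real server this comes
# from the database's own grants for the authenticated user.
ALLOWED_TABLES = {"alice": {"orders", "customers"}}

# Canned phrase -> SQL templates standing in for the LLM translation step.
TEMPLATES = {
    "row count": "SELECT COUNT(*) FROM {table}",
    "sample rows": "SELECT * FROM {table} FETCH FIRST 5 ROWS ONLY",
}

def translate(user, question, table):
    """Translate a question into SQL, refusing any table outside the
    authenticated user's access boundary *before* translation runs."""
    if table not in ALLOWED_TABLES.get(user, set()):
        raise PermissionError(f"{user} may not query {table}")
    for phrase, template in TEMPLATES.items():
        if phrase in question.lower():
            return template.format(table=table)
    raise ValueError("no template matches the question")
```

Checking permissions before generating or executing SQL is what keeps a compromised or hallucinating model from ever touching tables the user could not query themselves.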
Ataccama brings AI to data lineage: business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT
Ataccama has released Ataccama ONE v16.2, the latest version of its unified data trust platform. This release makes it easier for business users to understand how data moves and changes across systems without writing a single line of SQL. With intuitive, compact lineage views and improved performance, teams can make better decisions with greater confidence and speed. Business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT. Ataccama shows how data flows through systems and provides plain-language descriptions of the steps behind every number. For example, in a financial services setting, a data steward can immediately see how a risk score was derived or how a flagged transaction passed through a series of enrichment and quality checks. That kind of visibility shortens reviews, streamlines audits, and gives business teams the confidence to act on the data in front of them. Key features include: AI-powered data lineage. Automatically generates readable descriptions of how data was transformed both upstream and downstream, clarifying filters, joins, and calculations, so business users can understand the logic behind each dataset without reading SQL. Compact lineage diagrams. Presents a simplified, high-level view of data flows with the option to drill into details on demand. This makes it easier to identify issues, answer audit questions, and align stakeholders on how data flows through the organization. Edge processing for secure lineage. Enables metadata extraction from on-prem or restricted environments without moving sensitive data to the cloud. Organizations can maintain compliance, minimize risk, and still get full visibility into their data pipelines, regardless of where the data lives. Expanded pushdown support and performance enhancements. 
Users can now execute profiling and data quality workloads in pushdown mode for BigQuery and Azure Synapse, minimizing data movement and improving performance for large-scale workloads. The release also includes volume support for Databricks Unity Catalog, further optimizing execution within modern cloud platforms.
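Pushdown mode means the profiling aggregates are computed inside the warehouse, so only a one-row summary crosses the wire instead of the raw table. A minimal sketch of how such a profiling query might be generated; this is generic SQL assembly for illustration, not Ataccama's actual generated code.

```python
def pushdown_profile_sql(table, columns):
    """Build one aggregate query computing per-column null counts and
    distinct counts inside the warehouse (e.g. BigQuery), so only a single
    summary row leaves the source system.

    Illustrative only; column names are interpolated directly and would
    need quoting/validation in real code.
    """
    parts = ["COUNT(*) AS row_count"]
    for col in columns:
        parts.append(f"COUNT(*) - COUNT({col}) AS {col}_nulls")
        parts.append(f"COUNT(DISTINCT {col}) AS {col}_distinct")
    return f"SELECT {', '.join(parts)} FROM {table}"
```

For a billion-row table, the difference between shipping the table to a profiling engine and shipping one summary row back is the entire performance argument for pushdown.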