Veris AI, a platform that lets companies safely train and test AI agents through novel high-fidelity simulated experiences, has emerged from stealth and raised $8.5M in seed funding. Veris allows developers to train agents through experience rather than prompt engineering and human-generated data. Veris’ dynamic, realistic, simulated environments give enterprises a safe space for reinforcement learning and continuous improvement, ultimately helping them deploy and scale more accurate AI agents. Mehdi Jamei, CEO and co-founder of Veris, said, “We are building Veris to unlock the potential of agentic AI for enterprises – both by solving existing problems and improving the speed and quality with which new agents can come into production.”
Trust3 AI’s customizable guardrails integrate with Snowflake RBAC, mitigating prompt injection, unauthorized access and sensitive data exposure
Trust3 AI has unveiled its AI trust layer, focused on improving Cortex AI adoption and providing enterprises with robust tools to address challenges in scaling and managing AI systems. Available as a native app on Snowflake, Trust3 AI comprises three key components: Trust3 IQ, Trust3 Visibility and Trust3 Guard. Together, these features create a platform that simplifies semantic layer building, enhances contextual understanding, and ensures AI systems operate reliably and securely at scale. Trust3 IQ unifies structured and unstructured datasets, delivering a single semantic layer that bolsters accuracy and enhances retrieval-augmented generation (RAG) and Cortex agents. By significantly increasing the reliability of conversational AI, Trust3 IQ accelerates AI adoption across multiple industries. Through Trust3 Visibility, enterprises gain real-time observability for all AI assets built on Snowflake and other platforms, enabling teams to measure, catalog and govern systems effectively. This governance layer provides critical oversight of system behavior, ownership and performance, mitigating security, legal and operational risks. Additionally, Trust3 Guard powers dynamic, context-aware security controls to safeguard structured and unstructured data. Its customizable guardrails, integrated with Snowflake RBAC, mitigate risks such as prompt injection, unauthorized access and sensitive data exposure, ensuring operational trust and compliance. Trust3 AI accelerates AI adoption and builds trust with end users by: building enterprise context faster with intelligent metadata and semantic models; delivering real-time observability for proactive risk mitigation; and enhancing operational efficiency and reducing AI costs with adaptive controls.
IBM is looking to offer pre-built domain-specific agents and connectors for 80+ enterprise apps to support multi-agent collaboration across hybrid environments using open standards, semantic control planes and interoperability
At IBM Think 2025, IBM doubled down on a bold agentic AI strategy to unify digital labor across hybrid enterprises. IBM’s approach stands out by focusing on open standards, semantic control planes and an architecture that invites a diverse ecosystem of agents. The scale and complexity of enterprise ecosystems demand AI agents that are not just intelligent, but also interoperable. IBM’s announcements at Think 2025 centered around this very idea, championing openness as the foundation of digital labor. With watsonx Orchestrate positioned as the orchestrator of the agentic enterprise, IBM introduced a platform capable of managing, coordinating, and integrating agents built with any framework. “Over the next three years, one billion agents will be built on the basis of generative AI,” Rob Thomas, senior vice president, software and chief commercial officer at IBM, said during IBM Think. “They will need to work with each other seamlessly.” The semantic control plane at the heart of watsonx Orchestrate enables AI agents to interpret goals, decompose them into executable tasks and route those tasks to the right digital workers. This allows agents to operate collaboratively across hybrid environments — on-prem, cloud and software-as-a-service — without being trapped in vendor-specific ecosystems. “We help our clients integrate,” Arvind Krishna, chief executive officer of IBM, said during IBM Think. “We want to meet them where they are.” Through its new Agent Connect partner program, IBM is opening the doors for SaaS vendors, integrators, and developers to contribute agents to the watsonx ecosystem, according to Hebner. These agents can be built using any stack and plugged directly into IBM’s orchestration framework, where they benefit from observability, governance and semantic interoperability. Platforms from competitors such as Microsoft Corp.
and SAS Institute are actively integrating causal AI, advanced reasoning and knowledge graph capabilities to help AI agents make better decisions, not just automate workflows. With multi-agent collaboration, pre-built domain-specific agents, and connectors for over 80 enterprise apps, IBM is building a semantic operating system for AI agents, enabling businesses to plug and play digital labor as easily as integrating software components.
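The “interpret goals, decompose them into tasks, route them to the right digital workers” behavior attributed to the semantic control plane can be illustrated with a minimal sketch. Everything here is hypothetical: a real control plane would use an LLM for decomposition and a live agent registry for routing, not a hard-coded playbook.

```python
# Hypothetical sketch of the interpret/decompose/route pattern a semantic
# control plane implies. Not IBM's actual watsonx Orchestrate API.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    required_capability: str


@dataclass
class Worker:
    name: str
    capabilities: set


def decompose(goal: str) -> list:
    # Stand-in for LLM-driven goal decomposition.
    playbook = {
        "onboard customer": [
            Task("verify identity", "kyc"),
            Task("create account", "crm"),
            Task("send welcome email", "messaging"),
        ],
    }
    return playbook.get(goal, [])


def route(tasks, workers):
    # Assign each task to the first worker advertising the needed capability.
    assignments = {}
    for task in tasks:
        for worker in workers:
            if task.required_capability in worker.capabilities:
                assignments[task.name] = worker.name
                break
    return assignments


workers = [Worker("kyc-bot", {"kyc"}), Worker("crm-agent", {"crm", "messaging"})]
plan = route(decompose("onboard customer"), workers)
```

The routing step is what lets agents built on different stacks collaborate: as long as a worker declares its capabilities to the control plane, it can receive tasks regardless of the framework it was built with.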
Deloitte and Snowflake’s governed data platform for banks offers a single source of truth by removing legacy data silos, generating real-time insights and unlocking use cases across customer onboarding, claims processing and underwriting
Deloitte Consulting LLP’s data leaders say clients want more than strategies — they expect accelerated delivery, real-time insight and a measurable return on investment. Deloitte’s collaboration with Snowflake meets that demand by building a governed data platform designed to speed time to value, according to Bibhu Patnaik, AI and data principal at Deloitte. Legacy data silos remain a major hurdle for financial institutions aiming to adopt AI at scale. Deloitte and Snowflake are working to dismantle these barriers and lay the foundation for a governed data platform that supports both regulatory mandates and innovation goals. The shift enables firms to tap into Snowflake’s scalable architecture while staying compliant with evolving oversight requirements, according to Patnaik. “Bank categorization is changing from a category four to a category three … and that’s where Snowflake shines. That’s where we are partnering with Snowflake to create that governed data platform ecosystem: A single source of truth which people can rely on, mainly for our financials and regulatory purposes as well.” The success of AI depends on harmonized, high-quality data that can be used across multiple functions, according to Sidhu. Deloitte’s work with Snowflake includes building a governed data platform readiness layer, enabling the delivery of real-time insights and unlocking use cases across customer onboarding, claims processing and underwriting. Deloitte sees financial institutions leapfrogging from outdated workflows to agent-based architectures. With Snowflake Cortex AI and a governed data platform supporting multi-agent thinking entering the mainstream, clients are moving beyond robotic process automation in search of more creative ways to reinvent internal operations, according to Patnaik.
OpenAI’s AI coding agent can now connect to the web to install dependencies, run tests that require external resources, upgrade packages and perform more tasks that demand internet connectivity while letting the user control domain access
OpenAI expanded the capabilities of its Codex software engineering agent, which launched last month, and is also making it available to more users. It’s also making more tools available to developers of voice agents through the OpenAI Agents software development kit. Codex is now able to connect to the web to install dependencies, run tests that require external resources, upgrade packages and perform other tasks that demand internet connectivity. However, the company said internet access remains switched off by default and must be enabled for specific environments. Users will be able to control which domains it can access. The capability is coming to ChatGPT Plus, Pro and Team users first, with Enterprise users expected to get it later. There are several other updates too. For instance, Codex now has the ability to update pull requests when performing a follow-up task, and users will also be able to dictate tasks to it instead of typing them out. Elsewhere, it gains support for binary files when applying patches, improved error messages for setup scripts, an increased limit on task diffs, up from 1 megabyte to 5 megabytes, and a higher limit of 10 minutes on setup script durations, up from five minutes previously. OpenAI has also re-enabled live activities for iOS Codex users after fixing a problem pertaining to missed notifications, and removed the two-factor authentication requirement for users with single sign-on or social logins. The OpenAI Agents SDK now supports TypeScript, with features including guardrails, handoffs, tracing and the Model Context Protocol, which provides a standardized way for agents to use third-party software tools such as browsers. The Agents SDK launched in March, providing tools for integrating AI models and agents with internal systems. Other updates in the Agents SDK include support for human-in-the-loop approvals, so developers can pause tool executions, serialize and store agent states, and approve or reject specific calls, the company said.
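The human-in-the-loop approval flow the Agents SDK update describes (pause a tool call, serialize its state for storage, then approve or reject it before execution) can be sketched generically. This is illustrative plumbing, not the SDK’s actual interface, and all names here are hypothetical.

```python
# Generic sketch of a pause/serialize/approve pattern for tool calls.
# Not the OpenAI Agents SDK's real API.
import json


class PendingToolCall:
    def __init__(self, tool, args):
        self.tool = tool
        self.args = args
        self.status = "pending"

    def serialize(self):
        # State can be persisted (e.g., to a database) while awaiting review.
        return json.dumps({"tool": self.tool, "args": self.args, "status": self.status})

    def resolve(self, approved):
        self.status = "approved" if approved else "rejected"


def execute(call):
    # Execution is gated on an explicit human decision.
    if call.status != "approved":
        raise PermissionError(f"tool call {call.tool!r} not approved")
    return f"ran {call.tool}"


call = PendingToolCall("delete_branch", {"name": "stale-feature"})
stored = call.serialize()      # paused and persisted for a reviewer
call.resolve(approved=True)    # human decision arrives later
result = execute(call)
```

The point of serializing is that approval can happen minutes or days later, in a different process, without keeping the agent run alive in memory.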
The Traces dashboard is getting support for Realtime application programming interface sessions, making it easier for developers to visualize voice agent runs. Finally, there’s an updated speech-to-speech model available in the SDK that delivers improved instruction following capabilities, interruption behavior and tool-calling consistency. Developers will also be able to control how fast the voice speaks.
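The per-environment domain control described for Codex’s internet access can be sketched as a simple allowlist check. The policy shape and the domains below are assumptions for illustration, not OpenAI’s actual configuration format.

```python
# Illustrative per-environment domain allowlist for outbound requests.
# Unlisted hosts are blocked, matching an "off by default, opt in" policy.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"pypi.org", "files.pythonhosted.org"}  # hypothetical policy


def is_request_allowed(url):
    host = urlparse(url).hostname or ""
    # Permit each listed domain and its subdomains; block everything else.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
```

Matching on the full hostname (rather than substring checks) avoids classic allowlist bypasses like `pypi.org.evil.example`.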
OpenAI adds 1 million paying business subscribers since February, driven by broad-based adoption of ChatGPT across industries
OpenAI reportedly added 1 million paying business subscribers since February, lifting the number of such subscribers from 2 million in February to 3 million currently. The figure includes subscribers to ChatGPT Enterprise, ChatGPT Team and ChatGPT Edu. CNBC, which reported the same number of paying business subscribers, said that OpenAI Chief Operating Officer Brad Lightcap told it in an interview that the company’s business tools are being adopted across industries, including highly regulated ones like financial services and health care. “There’s this really tight interconnect between the growth of ChatGPT as a consumer tool and its adoption in the enterprise and in business,” Lightcap said. OpenAI reported in September that in the year after it launched the first of its three business products, it gained 1 million paying business users.
Mistral’s vibe coding agent serves as an “in-IDE” assistant enabling developers to perform everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud or on air-gapped, on-prem GPUs
AI startup Mistral is releasing its own “vibe coding” client, Mistral Code, to compete with incumbents like Windsurf, Anysphere’s Cursor, and GitHub Copilot. Mistral Code, a fork of the open source project Continue, is an AI-powered coding assistant that bundles Mistral’s models, an “in-IDE” assistant, local deployment options, and enterprise tools into a single package. A private beta is available for JetBrains development platforms and Microsoft’s VS Code. “Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped, on-prem GPUs,” Mistral said. Mistral Code is said to be powered by a combination of in-house models including Codestral (for code autocomplete), Codestral Embed (for code search and retrieval), Devstral (for “agentic” coding tasks), and Mistral Medium (for chat assistance). The client supports more than 80 programming languages and a number of third-party plug-ins, and can reason over things like files, terminal outputs, and issues, the company said. “Customers can fine-tune or post-train the underlying models on private repositories or distill lightweight variants,” Mistral explained. “For IT managers, a rich admin console exposes granular platform controls, deep observability, seat management, and usage analytics.” Mistral said it plans to continue making improvements to Mistral Code and contribute at least a portion of those upgrades to the Continue open source project.
ChatGPT gains the ability to record and transcribe meetings, while new connectors enable it to search for information across users’ most commonly used cloud services
ChatGPT is getting a host of new capabilities for business users, including connectors for cloud services such as Box, Dropbox, Google Drive, OneDrive and SharePoint. In addition, OpenAI announced that its chatbot is gaining the ability to record business meetings and support for Model Context Protocol connections, enabling ChatGPT to use external tools to aid in deep research. OpenAI said it’s focused on making ChatGPT even better for office workers rather than developers, with the new connectors enabling it to search for information across some of their most commonly used cloud services. OpenAI said the connectors will follow the organization’s policies around access control to ensure that users can only gather insights or get answers from documents and files they’re allowed to open. A second update sees ChatGPT gain the ability to record and transcribe meetings, the company said. It explained that the chatbot will be able to generate notes about the meeting with time-stamped citations, so users can check exactly what was said, and to suggest actions based on whatever was discussed. The feature will work in tandem with the new connectors, too: users could query their meeting notes, and ChatGPT would be able to look up any relevant information held in Box or another service to provide more insights. Users will also be able to convert action items into a Canvas document, which is OpenAI’s tool for coding and writing projects. The final update introduces new connectors meant to enhance ChatGPT’s deep research capabilities. This is an agentic AI capability, because ChatGPT performs the research autonomously, without any input or guidance from the user beyond the initial prompt. The new connectors leverage MCP, which provides a standardized way for AI agents to use third-party tools.
Previously, ChatGPT could only perform research using a web browser, but OpenAI said it can now use select tools from Google LLC and Microsoft Corp., as well as HubSpot and Linear, to aid in its research efforts.
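As a rough illustration of what MCP standardizes: a server advertises each tool with a name, a description and a JSON Schema for its inputs, so any MCP-aware agent can discover and call it. The field names below follow the public MCP specification’s tool listing; the example tool itself is hypothetical.

```python
# Shape of a single tool entry as an MCP server would advertise it in a
# tools/list response. The "search_tickets" tool is made up for illustration.
search_tool = {
    "name": "search_tickets",
    "description": "Search the ticket tracker for matching issues",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
```

Because the schema travels with the tool, the agent can validate arguments before calling, which is what makes third-party tools like the Google, Microsoft, HubSpot and Linear connectors interchangeable from the model’s perspective.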
Perplexity debuts Labs, an AI coworker that differs from its “Deep Research” feature in that it can use more tools to create project deliverables, turning prompts into spreadsheets, dashboards and simple web apps
Perplexity has launched Perplexity Labs, a new AI tool that can turn prompts into spreadsheets, dashboards and simple web apps. The tool functions as a virtual team that performs 10 minutes or more of self-supervised work using tools like deep web browsing, code execution and chart and image creation. According to the AI startup, “Labs can accomplish in 10 minutes what would previously have taken days of work, tedious research, and coordination of many different skills.” Perplexity said Labs is different from its “Deep Research” feature in that it can use more tools to create project deliverables.
Thread AI’s composable AI infrastructure connects AI models, data, and automation into adaptable, end-to-end workflows aligned with enterprise-specific needs to rapidly prototype and deploy event-driven, distributed AI agents
Thread AI, a leader in composable AI infrastructure, has raised $20 million in Series A funding. Despite the rapid adoption of AI, many organizations struggle to integrate AI into complex, evolving environments. They often must choose between rigid, pre-built AI tools that don’t fit their workflows, or costly custom solutions requiring extensive engineering. Thread AI addresses this gap with composable infrastructure that connects AI models, data and automation into adaptable, end-to-end workflows aligned with each organization’s specific needs. Unlike traditional RPA, ETL or workflow engines that mirror human workflows or require large infrastructure investments, Thread AI’s Lemma platform allows enterprises to rapidly prototype and deploy event-driven, distributed AI workflows and agents. Lemma supports unlimited AI models, APIs and applications within a single platform built with enterprise-grade security. This speeds up deployment, reduces operational burden and simplifies infrastructure, while maintaining governance, observability and seamless AI model upgrades. As a result, Thread AI equips enterprises with the flexibility to keep up with a rapidly changing AI ecosystem, and the cross-functionality to unlock the power of AI across their entire organization. Lemma users report a 70% improvement in process response times, along with significant efficiency gains as AI-powered workflows reduce operational bottlenecks. Early customers have expanded their AI implementations by 250% to 500%, demonstrating Thread AI’s scalability and practical impact.
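As a rough sketch of what an event-driven workflow implies: handlers subscribe to event types and fire as events arrive, rather than following a fixed, human-mirroring sequence. The names and structure below are illustrative only, not Thread AI’s Lemma API.

```python
# Minimal event bus: workflow steps subscribe to event types and run when a
# matching event is published. Hypothetical illustration of the event-driven
# style, not a real workflow engine.
from collections import defaultdict


class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Invoke every handler registered for this event type, in order.
        return [handler(payload) for handler in self.handlers[event_type]]


bus = EventBus()
bus.subscribe("invoice.received", lambda e: f"extract fields from {e['id']}")
bus.subscribe("invoice.received", lambda e: f"route {e['id']} for approval")
results = bus.publish("invoice.received", {"id": "INV-42"})
```

In this style, adding a new AI step (say, anomaly scoring) means subscribing another handler rather than rewiring an end-to-end pipeline, which is one reason event-driven designs adapt well to changing model and tool ecosystems.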