Veris AI’s platform allows developers to train and test AI agents using dynamic, realistic, high-fidelity simulated experiences rather than prompt engineering and human-generated data to enable deploying more accurate agents
Veris AI, a platform that lets companies safely train and test AI agents through novel high-fidelity simulated experiences, emerged from stealth and has raised $8.5M in seed funding. Veris allows developers to train agents using experience rather than prompt engineering and human-generated data. Veris’ dynamic, realistic, simulated environments give enterprises a safe space for reinforcement learning and continuous improvement, ultimately helping them deploy and scale more accurate AI agents. Mehdi Jamei, CEO and co-founder of Veris, said, “We are building Veris to unlock the potential of agentic AI for enterprises – both by solving existing problems and improving the speed and quality with which new agents can come into production.”
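Veris’ actual platform is not public; as a rough, hypothetical illustration of the underlying idea — training an agent from simulated experience via reinforcement learning rather than from hand-written prompts — here is a minimal tabular Q-learning loop on a toy simulated environment (all names and the environment itself are invented for illustration):

```python
import random

class ToySimulatedEnv:
    """A 5-cell corridor: start at cell 0, goal at cell 4. Actions: 0=left, 1=right."""
    def __init__(self):
        self.n_states, self.goal = 5, 4
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        # Move left or right, clamped to the corridor bounds
        self.state = max(0, min(self.n_states - 1, self.state + (1 if action == 1 else -1)))
        done = self.state == self.goal
        # Small step penalty, +1 on reaching the goal
        return self.state, (1.0 if done else -0.01), done

def train(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn a Q-table purely from simulated experience (epsilon-greedy Q-learning)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(env.n_states)]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)
            # Standard Q-learning update from the simulated transition
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q
```

After training, the greedy policy walks straight to the goal — the agent’s behavior was shaped entirely by simulated rollouts, which is the general pattern (at toy scale) behind training agents in a safe simulated space.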
OpenAI plans to turn ChatGPT into a “super-assistant” that is personalized to each user and available to them via the chatbot’s website, the company’s native apps, phone, email and third-party platforms
OpenAI reportedly plans to turn ChatGPT into a “super-assistant” that is personalized to each user and available to them via the chatbot’s website, the company’s native apps, phone, email and third-party resources like Apple’s Siri. The plan is described in an OpenAI internal document from late 2024 that came to light in the Department of Justice’s antitrust case against Google. The super-assistant will be able to handle tedious daily tasks like answering questions and managing calendars, and more complicated ones like coding. It will be, the document said: “One that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do.” OpenAI has announced several updates over the last month that suggest the company aims to expand the capabilities of its artificial intelligence tools. Chief Operating Officer Brad Lightcap said that OpenAI wants to build an “ambient computer layer” that doesn’t require users to look at a screen.
Trust3 AI’s customizable guardrails integrate with Snowflake RBAC, mitigating prompt injection, unauthorized access, and sensitive data exposure
Trust3 AI has unveiled its innovative AI trust layer, focused on improving Cortex AI adoption and providing enterprises with robust tools to address challenges in scaling and managing AI systems. Available as a native app on Snowflake, Trust3 AI’s key components include Trust3 IQ, Trust3 Visibility, and Trust3 Guard. Together, these features create a groundbreaking platform that simplifies semantic layer building, enhances contextual understanding, and ensures AI systems’ reliable and secure operation at scale. Trust3 IQ unifies structured and unstructured datasets, delivering a unified semantic layer that bolsters accuracy and enhances retrieval-augmented generation (RAG) and Cortex agents. By significantly increasing the reliability of conversational AI, Trust3 IQ accelerates AI adoption across multiple industries. Through Trust3 Visibility, enterprises gain real-time observability for all AI assets built on Snowflake and other platforms, enabling teams to measure, catalog, and govern systems effectively. This governance layer provides critical oversight for system behavior, ownership, and performance, mitigating security, legal, and operational risks. Additionally, Trust3 Guard powers dynamic, context-aware security controls to safeguard structured and unstructured data. Its customizable guardrails, integrated with Snowflake RBAC, mitigate risks such as prompt injection, unauthorized access, and sensitive data exposure, ensuring operational trust and compliance. Trust3 AI accelerates AI adoption and builds trust with end users by building enterprise context faster with intelligent metadata and semantic models; delivering real-time observability for proactive risk mitigation; and enhancing operational efficiency and reducing AI costs with adaptive controls.
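Trust3 Guard’s internals are not public; as a hedged, hypothetical sketch of the general pattern it describes — screening a prompt for injection attempts and checking role-based access before a query ever reaches the model — a minimal guardrail check might look like this (the role grants, patterns, and function names are all invented stand-ins, not Trust3’s or Snowflake’s API):

```python
import re

# Hypothetical role-to-table permissions, standing in for Snowflake RBAC grants
ROLE_GRANTS = {"analyst": {"sales"}, "admin": {"sales", "salaries"}}

# A real guardrail layer would use far richer, context-aware detection than regexes
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

def guard(prompt: str, role: str, tables: set) -> tuple:
    """Return (allowed, reason) for a prompt that would touch the given tables."""
    if any(p.search(prompt) for p in INJECTION_PATTERNS):
        return False, "possible prompt injection"
    missing = tables - ROLE_GRANTS.get(role, set())
    if missing:
        return False, f"role lacks access to: {sorted(missing)}"
    return True, "ok"
```

The point of the pattern is that both checks run before execution: a prompt that trips an injection pattern is rejected outright, and a clean prompt is still denied if the caller’s role has no grant on the data it needs.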
IBM is looking to offer pre-built domain-specific agents and connectors for 80+ enterprise apps to support multi-agent collaboration across hybrid environments using open standards, semantic control planes and interoperability
At IBM Think 2025, IBM doubled down on a bold agentic AI strategy to unify digital labor across hybrid enterprises. IBM’s approach stands out by focusing on open standards, semantic control planes and an architecture that invites a diverse ecosystem of agents. The scale and complexity of enterprise ecosystems demand AI agents that are not just intelligent, but also interoperable. IBM’s announcements at Think 2025 centered around this very idea, championing openness as the foundation of digital labor. With watsonx Orchestrate positioned as the orchestrator of the agentic enterprise, IBM introduced a platform capable of managing, coordinating, and integrating agents built with any framework. “Over the next three years, one billion agents will be built on the basis of generative AI,” Rob Thomas, senior vice president, software and chief commercial officer at IBM, said during IBM Think. “They will need to work with each other seamlessly.” The semantic control plane at the heart of watsonx Orchestrate enables AI agents to interpret goals, decompose them into executable tasks and route those tasks to the right digital workers. This allows agents to operate collaboratively across hybrid environments (on-prem, cloud and software-as-a-service) without being trapped in vendor-specific ecosystems. “We help our clients integrate,” Arvind Krishna, chief executive officer of IBM, said during IBM Think. “We want to meet them where they are.” Through its new Agent Connect partner program, IBM is opening the doors for SaaS vendors, integrators, and developers to contribute agents to the watsonx ecosystem, according to Hebner. These agents can be built using any stack and plugged directly into IBM’s orchestration framework, where they benefit from observability, governance and semantic interoperability. Platforms from competitors such as Microsoft Corp. and SAS Institute are actively integrating causal AI, advanced reasoning and knowledge graph capabilities to help AI agents make better decisions, not just automate workflows. With multi-agent collaboration, pre-built domain-specific agents, and connectors for over 80 enterprise apps, IBM is building a semantic operating system for AI agents, enabling businesses to plug and play digital labor as easily as integrating software components.
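To make the control-plane idea concrete at toy scale: the pattern described — a goal decomposed into tasks, each routed to whichever registered agent advertises the matching capability, regardless of what framework the agent was built with — can be sketched as follows. This is a hypothetical illustration, not watsonx Orchestrate’s actual API; all names are invented:

```python
class Orchestrator:
    """Toy stand-in for a semantic control plane: route tasks to agents by capability."""
    def __init__(self):
        self.agents = {}  # capability name -> callable agent (any framework behind it)

    def register(self, capability, agent):
        """Agents from any vendor or stack plug in under a declared capability."""
        self.agents[capability] = agent

    def run(self, goal, plan):
        """Execute a goal already decomposed into (capability, task) pairs.

        A real control plane would derive 'plan' from 'goal' semantically;
        here decomposition is assumed to have happened upstream.
        """
        results = []
        for capability, task in plan:
            if capability not in self.agents:
                raise LookupError(f"no agent registered for {capability!r}")
            results.append(self.agents[capability](task))
        return results

# Usage: two trivial "agents" collaborating on one goal
orch = Orchestrator()
orch.register("crm", lambda task: f"crm:{task}")
orch.register("email", lambda task: f"email:{task}")
outputs = orch.run("onboard customer",
                   [("crm", "create account"), ("email", "send welcome")])
```

The design point mirrored here is that agents are addressed by capability, not by vendor, which is what keeps the routing layer free of vendor-specific coupling.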
Deloitte and Snowflake’s governed data platform for banks to offer a single source of truth by removing legacy data silos, generating real-time insights and unlocking use cases across customer onboarding, claims processing and underwriting
Deloitte Consulting LLP’s data leaders say clients want more than strategies: they expect accelerated delivery, real-time insight and a measurable return on investment. Deloitte’s collaboration with Snowflake meets that demand by building a governed data platform designed to speed time to value, according to Bibhu Patnaik, AI and data principal at Deloitte. Legacy data silos remain a major hurdle for financial institutions aiming to adopt AI at scale. Deloitte and Snowflake are working to dismantle these barriers and lay the foundation for a governed data platform that supports both regulatory mandates and innovation goals. The shift enables firms to tap into Snowflake’s scalable architecture while staying compliant with evolving oversight requirements, according to Patnaik. “Bank categorization is changing from a category four to a category three … and that’s where Snowflake shines. That’s where we are partnering with Snowflake to create that governed data platform ecosystem: A single source of truth which people can rely on, mainly for our financials and regulatory purposes as well.” The success of AI depends on harmonized, high-quality data that can be used across multiple functions, according to Sidhu. Deloitte’s work with Snowflake includes building a governed data platform readiness layer, enabling the delivery of real-time insights and unlocking use cases across customer onboarding, claims processing and underwriting. Deloitte sees financial institutions leapfrogging from outdated workflows to agent-based architectures. With multi-agent thinking entering the mainstream, supported by Snowflake Cortex AI and a governed data platform, clients are moving beyond robotic process automation in search of more creative ways to reinvent internal operations, according to Patnaik.
OpenAI’s AI coding agent can now connect to the web to install dependencies, run tests that require external resources, upgrade packages and perform more tasks that demand internet connectivity while letting the user control domain access
OpenAI expanded the capabilities of its Codex software engineering agent, which launched last month, and is making it available to more users. It’s also making more tools available to developers of voice agents through the OpenAI Agents software development kit. Codex is now able to connect to the web to install dependencies, run tests that require external resources, upgrade packages and perform other tasks that demand internet connectivity. However, the company said internet access remains switched off by default and must be enabled for specific environments. Users will be able to control which domains it can access. The capability is coming to ChatGPT Plus, Pro and Team users first, with Enterprise users expected to get it later. There are several other updates too. For instance, Codex can now update pull requests when performing a follow-up task, and users will also be able to dictate tasks to it instead of typing them out. Elsewhere, it gains support for binary files when applying patches, improved error messages for setup scripts, increased limits on task diffs (up from 1 megabyte to 5 megabytes) and a higher 10-minute limit on setup script durations, up from five minutes previously. OpenAI has also re-enabled live activities for iOS Codex users after fixing a problem pertaining to missed notifications, and removed the two-factor authentication requirement for users with single sign-on or social logins. The OpenAI Agents SDK now supports TypeScript, with features including guardrails, handoffs, tracing and the Model Context Protocol, which provides a standardized way for agents to use third-party software tools such as browsers. The Agents SDK launched in March, providing tools for integrating AI models and agents with internal systems. Other updates in the Agents SDK include support for human-in-the-loop approvals, so developers can pause tool executions, serialize and store agent states, and approve or reject specific calls, the company said.
The Traces dashboard is getting support for Realtime application programming interface sessions, making it easier for developers to visualize voice agent runs. Finally, there’s an updated speech-to-speech model available in the SDK that delivers improved instruction following capabilities, interruption behavior and tool-calling consistency. Developers will also be able to control how fast the voice speaks.
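The Agents SDK’s exact human-in-the-loop API is beyond the scope of this summary; as a framework-agnostic sketch of the pattern it describes — pause a tool call, serialize its pending state for storage, then approve or reject it later — something like the following (all class and function names are hypothetical, not the SDK’s):

```python
import json

class PendingToolCall:
    """A tool invocation paused for human approval; state survives a JSON round-trip."""

    def __init__(self, tool, args):
        self.tool, self.args, self.status = tool, args, "pending"

    def serialize(self):
        # Persist the paused state (e.g., to a database) while awaiting review
        return json.dumps({"tool": self.tool, "args": self.args, "status": self.status})

    @classmethod
    def deserialize(cls, blob):
        # Restore a paused call, possibly in a different process, to resume later
        d = json.loads(blob)
        call = cls(d["tool"], d["args"])
        call.status = d["status"]
        return call

    def resolve(self, approved, tools):
        """Record the human decision; execute the tool only if approved."""
        self.status = "approved" if approved else "rejected"
        if approved:
            return tools[self.tool](**self.args)
        return None
```

A usage flow under this sketch: the agent creates a `PendingToolCall` instead of executing a sensitive tool, the serialized blob is stored until a reviewer decides, and `resolve(True, ...)` executes the call while `resolve(False, ...)` drops it.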
OpenAI adds 1 million paying business subscribers since February, driven by broad-based adoption of ChatGPT across industries
OpenAI reportedly added 1 million paying business subscribers since February. Those additions lifted the number of such subscribers from two million in February to three million currently. The figure includes subscribers to ChatGPT Enterprise, ChatGPT Team and ChatGPT Edu. CNBC, which reported the same number of paying business subscribers, said that OpenAI Chief Operating Officer Brad Lightcap told it in an interview that the company’s business tools are being adopted across industries, including highly regulated ones like financial services and health care. “There’s this really tight interconnect between the growth of ChatGPT as a consumer tool and its adoption in the enterprise and in business,” Lightcap said. OpenAI reported in September that in the year after it launched the first of its three business products, it gained 1 million paying business users.
Mistral’s vibe coding agent serves as an “in-IDE” assistant enabling developers to perform everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud or on air-gapped, on-prem GPUs
AI startup Mistral is releasing its own “vibe coding” client, Mistral Code, to compete with incumbents like Windsurf, Anysphere’s Cursor, and GitHub Copilot. Mistral Code, a fork of the open source project Continue, is an AI-powered coding assistant that bundles Mistral’s models, an “in-IDE” assistant, local deployment options, and enterprise tools into a single package. A private beta is available for JetBrains development platforms and Microsoft’s VS Code. “Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped, on-prem GPUs,” Mistral said. Mistral Code is said to be powered by a combination of in-house models including Codestral (for code autocomplete), Codestral Embed (for code search and retrieval), Devstral (for “agentic” coding tasks), and Mistral Medium (for chat assistance). The client supports more than 80 programming languages and a number of third-party plug-ins, and can reason over things like files, terminal outputs, and issues, the company said. “Customers can fine-tune or post-train the underlying models on private repositories or distill lightweight variants,” Mistral explained. “For IT managers, a rich admin console exposes granular platform controls, deep observability, seat management, and usage analytics.” Mistral said it plans to continue making improvements to Mistral Code and contribute at least a portion of those upgrades to the Continue open source project.