Salesforce plans to acquire Convergence.ai to accelerate the development of its next-generation AI agents. The company has signed a definitive agreement and expects Convergence’s team and technology to play a “central role” in advancing its AI agent platform, Agentforce. The acquisition is expected to close in the second quarter of Salesforce’s fiscal year 2026, subject to customary closing conditions. “The next wave of customer interaction and employee productivity will be driven by highly capable AI agents that can navigate the complexities of today’s digital work,” said Adam Evans, executive vice president and general manager of the Salesforce AI Platform. “Convergence’s innovative approach to building adaptive, intelligent agents is incredibly impressive.” Convergence’s technology enables AI agents to navigate dynamic interfaces and adapt in real time, allowing them to manage web-based workflows and multi-step processes. The company’s talent is also expected to contribute to deep research, task automation and industry-specific solutions that will advance Salesforce’s broader AI roadmap.
UiPath automations and agents can now integrate directly into Microsoft Copilot Studio to automate complex end-to-end processes at scale
UiPath announced new capabilities that enable the orchestration of Microsoft Copilot Studio agents alongside UiPath and other third-party agents using UiPath Maestro™, an enterprise orchestration solution that coordinates agents, robots, and people across complex processes. Developers can now orchestrate Microsoft Copilot Studio agents directly from Maestro. This capability builds on the bi-directional integration between the UiPath Platform™ and Microsoft Copilot Studio recently announced by Microsoft, which enables seamless interaction between UiPath and Microsoft agents and automations — allowing customers to automate complex end-to-end processes, enable contextual decision-making, improve scalability, and unlock new levels of productivity. Developers can now embed UiPath automations and AI agents directly into Microsoft Copilot Studio and integrate Copilot agents within UiPath Studio — all while orchestrating across platforms with UiPath Maestro. Maestro leverages this bi-directional integration to give customers built-in capabilities to build, manage, and orchestrate agents from Copilot Studio and other platforms in a controlled, scalable way. Johnson Controls enhanced an existing automation — originally built with UiPath robots and Power Automate — by adding a UiPath agent for confidence-based document extraction. The result: a 500% return on investment and projected savings of 18,000 hours annually that were previously spent on manual document review. The integration extends other new capabilities that elevate business processes and drive smarter outcomes with agentic automation across departments and platforms.
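Neither UiPath nor Microsoft has published the code surface summarized here, so the following Python sketch is entirely hypothetical: it only illustrates the pattern of one orchestrator sequencing agents, robots, and people through a shared process context. None of the names below are real Maestro or Copilot Studio APIs.

```python
# Hypothetical illustration of cross-platform agent orchestration.
# Every function and class name here is invented for this sketch.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]  # an agent, robot, or human task

def copilot_studio_agent(ctx: dict) -> dict:
    ctx["triage"] = "classified by a Copilot Studio agent (stub)"
    return ctx

def uipath_document_agent(ctx: dict) -> dict:
    ctx["extraction"] = "fields extracted by a UiPath agent (stub)"
    return ctx

def human_review(ctx: dict) -> dict:
    ctx["approved"] = True  # a person signs off on low-confidence cases
    return ctx

# The orchestrator's job is sequencing, context passing, and control.
process = [Step("triage", copilot_studio_agent),
           Step("extract", uipath_document_agent),
           Step("review", human_review)]

context: dict = {"invoice_id": "INV-001"}
for step in process:
    context = step.run(context)
print(context)
```

The point of the pattern is that the orchestrator stays platform-neutral: each step can be served by an agent built anywhere, as long as it consumes and returns the shared process context.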
Nvidia’s new AI marketplace to offer developers a unified interface to tap into an expanded list of GPU cloud providers for AI workloads in addition to hyperscalers
Nvidia is launching an AI marketplace that gives developers access to an expanded list of graphics processing unit (GPU) cloud providers in addition to hyperscalers. Called DGX Cloud Lepton, the service acts as a unified interface linking developers to a decentralized network of cloud providers offering Nvidia GPUs for AI workloads. Typically, developers must rely on hyperscalers like Amazon Web Services, Microsoft Azure or Google Cloud to access GPUs. With GPUs in high demand, however, Nvidia wants to open up availability from a broader roster of cloud providers: when one provider has idle GPUs between jobs, those chips become available in the marketplace for other developers to tap. The marketplace will include GPU cloud providers CoreWeave, Crusoe, Lambda, SoftBank and others. The move comes as Nvidia looks to address growing frustration among startups, enterprises and researchers over limited GPU availability. With AI model training requiring vast compute resources — especially for large language models and computer vision systems — developers often face long wait times or capacity shortages. Nvidia CEO Jensen Huang said the computing power needed to train the next stage of AI has “grown tremendously.”
Microsoft’s new tools can build and manage multi-agent workflows and simulate agent behavior locally before deploying to the cloud, while ensuring interoperability through open protocols like MCP and Agent2Agent
Microsoft Corp. is rolling out a suite of new tools and services designed to accelerate the development and deployment of autonomous AI agents across its platforms. The Azure AI Foundry Agent Service is now generally available, allowing developers to build, manage, and scale AI agents that automate business processes. It supports multi-agent workflows, meaning specialized agents can collaborate on complex tasks. The service integrates with various Microsoft services and supports open protocols like Agent2Agent and Model Context Protocol, ensuring interoperability across different agent frameworks. To streamline deployment and testing, Microsoft has introduced a unified runtime that merges the Semantic Kernel SDK and the AutoGen framework, enabling developers to simulate agent behavior locally before deploying to the cloud. The service also includes AgentOps, a set of monitoring and optimization tools, and allows developers to use Azure Cosmos DB for thread storage. Another major announcement is Copilot Tuning, a feature that lets businesses fine-tune Microsoft 365 Copilot on their own organizational data. This means law firms can create AI agents that generate legal documents in their house style, while consultancies can build Q&A agents based on their regulatory expertise. The feature will be available in June through the Copilot Tuning Program, but only for organizations with at least 5,000 Microsoft 365 Copilot licenses. Microsoft is also previewing new developer tools for Microsoft Teams, including secure peer-to-peer communication via the A2A protocol, agent memory for contextual user experiences, and improved development environments for JavaScript and C#.
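The local-simulation workflow is easy to picture with the open-source AutoGen package, one of the two projects being merged into the unified runtime. Below is a minimal sketch assuming the classic pyautogen API (the merged runtime may expose a different interface); the model name and API key are placeholders.

```python
# A minimal local two-agent simulation with open-source AutoGen
# (pip install pyautogen). Illustrates developing and testing agent
# behavior entirely on a local machine before any cloud deployment.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# A specialist agent that drafts answers, and a proxy that relays the
# user's request and collects the result without human input.
assistant = AssistantAgent("researcher", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,  # keep the local run short
    code_execution_config=False,
)

# The whole conversation runs locally; only model calls leave the machine.
user_proxy.initiate_chat(
    assistant, message="Summarize the steps to onboard a new vendor."
)
```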
Nvidia DGX Spark and DGX Station personal AI supercomputers enable developers to prototype, fine-tune, and run inference on models, with networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling
Nvidia announced that Taiwan’s system manufacturers are set to build Nvidia DGX Spark and DGX Station systems. Growing partnerships with Acer, Gigabyte and MSI will extend the availability of DGX Spark and DGX Station personal AI supercomputers. Powered by the Nvidia Grace Blackwell platform, DGX Spark and DGX Station will enable developers to prototype, fine-tune, and run inference on models from the desktop to the data center. DGX Spark is equipped with the Nvidia GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores. It delivers up to 1 petaflop of AI compute and 128GB of unified memory, and enables seamless exporting of models to Nvidia DGX Cloud or any accelerated cloud or data center infrastructure. Built for the most demanding AI workloads, DGX Station features the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip, which offers up to 20 petaflops of AI performance and 784GB of unified system memory. The system also includes the Nvidia ConnectX-8 SuperNIC, supporting networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling. DGX Station can serve as an individual desktop for one user running advanced AI models using local data, or as an on-demand, centralized compute node for multiple users. The system supports Nvidia Multi-Instance GPU technology to partition into as many as seven instances — each with its own high-bandwidth memory, cache and compute cores — serving as a personal cloud for data science and AI development teams. To give developers a familiar user experience, DGX Spark and DGX Station mirror the software architecture that powers industrial-strength AI factories. Both systems use the Nvidia DGX operating system, preconfigured with the latest Nvidia AI software stack, and include access to Nvidia NIM microservices and Nvidia Blueprints. Developers can use common tools, such as PyTorch, Jupyter and Ollama, to prototype, fine-tune and perform inference on DGX Spark and seamlessly deploy to DGX Cloud or any accelerated data center or cloud infrastructure.
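Since Nvidia names Ollama among the supported tools, a local prototyping loop on a DGX Spark might look like the sketch below. It assumes the Ollama Python client (pip install ollama) and a model already pulled locally; the model name is illustrative.

```python
# Sketch of a desktop prototyping loop: query a locally served model
# via the Ollama Python client before promoting the workload to
# DGX Cloud or other accelerated infrastructure.
import ollama

response = ollama.chat(
    model="llama3",  # any model previously fetched with `ollama pull`
    messages=[
        {"role": "user", "content": "Explain unified memory in one sentence."}
    ],
)
print(response["message"]["content"])
```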
NLWeb from Microsoft combines semi-structured data, RSS and LLMs to turn any website into an AI app powered by natural language, letting visitors query its contents by voice
Microsoft has launched NLWeb, an open-source project that aims to transform any existing website into an artificial intelligence-powered application by integrating natural language capabilities. The project, which was announced at Microsoft Build 2025, aims to provide developers with the fastest and easiest way to turn any website into an AI app powered by the large language model of their choice. Once integrated, people can query the contents of any website using their voice, just as they do with AI assistants such as ChatGPT or Microsoft Copilot. NLWeb uses the semi-structured data many websites already publish, such as Schema.org markup and RSS feeds, combining it with large language models (LLMs) to create a natural language interface accessible to both humans and AI agents. The project is technology-agnostic, supporting major operating systems besides Windows, such as Android, iOS, and Linux. Microsoft aims to bring the benefits of generative AI search directly to every website and is building NLWeb with an eye toward future AI agents.
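NLWeb's own implementation is more involved, but the core idea (reuse a site's existing structured markup as grounding for an LLM) can be sketched in a few lines of Python. Everything here, including the URL and the prompt, is illustrative.

```python
# Conceptual sketch of the NLWeb idea: harvest a site's existing
# Schema.org JSON-LD and hand it to an LLM as grounding for a
# natural-language interface. Not NLWeb's actual code.
import json
import re
import urllib.request

def extract_jsonld(url: str) -> list[dict]:
    """Pull Schema.org JSON-LD blocks out of a page's HTML."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
    )
    items = []
    for block in blocks:
        try:
            items.append(json.loads(block))
        except json.JSONDecodeError:
            pass  # skip malformed markup
    return items

# The harvested items become context for whatever LLM the site owner chooses.
items = extract_jsonld("https://example.com/recipes/lasagna")  # placeholder URL
prompt = (
    "Answer the visitor's question using only this site data:\n"
    f"{json.dumps(items, indent=2)}\n\nQuestion: Is this recipe vegetarian?"
)
```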
Accenture, Dell and NVIDIA partner to offer a full-stack solution for rapidly scaling AI in private, on-prem environments through one-click deployment, modular, reusable frameworks, automated workflows, and dynamic cloud-to-edge orchestration
Accenture in collaboration with Dell Technologies and NVIDIA, is providing an AI solution built on Dell Technologies infrastructure with NVIDIA AI Enterprise software. This helps organizations – particularly those within regulated industries or those with substantial investments in on-premises infrastructure – capitalize on the burgeoning opportunities of agentic AI. This collaboration extends the reach of the Accenture AI Refinery™ platform, bringing agentic AI capabilities with a one-click deployment to Dell’s high-performance, NVIDIA-accelerated infrastructure, helping companies realize value more quickly and reduce total cost of ownership. Accenture will further facilitate AI deployment with NVIDIA Enterprise AI Factory validated design, a guide for organizations to build on-premise AI factories leveraging NVIDIA Blackwell and a broad ecosystem of AI partners. The solution helps organizations rapidly scale AI in private, on-prem environments. It provides support for key requirements, including data sovereignty and compliance to help meet regulatory and data residency mandates; resiliency and high availability to meet business continuity requirements; security and privacy controls needed for air-gapped environments or restricted network zones; ultra-low latency for real-time use cases like manufacturing or healthcare imaging; and edge or offline use cases critical for remote, disconnected environments where reliable internet access is limited or unavailable. Preconfigured packages integrate Accenture’s AI Refinery and the Dell AI Factory with NVIDIA, which includes NVIDIA Enterprise AI software, streamlining data transfer and indexing to empower data-driven agentic insights. This unified, full-stack solution helps to accelerate enterprise AI transformation by enabling rapid service prototyping with modular, reusable frameworks, automated workflows, and dynamic cloud-to-edge orchestration.
Mistral unveils Devstral: an open-source AI model optimized for coding tasks, outperforming peers on SWE-Bench and runnable on consumer hardware
AI startup Mistral announced a new AI model focused on coding: Devstral. Devstral, which Mistral says was developed in partnership with AI company All Hands AI, is openly available under an Apache 2.0 license, meaning it can be used commercially without restriction. Mistral claims that Devstral outperforms other open models like Google’s Gemma 3 27B and Chinese AI lab DeepSeek’s V3 on SWE-Bench Verified, a benchmark measuring coding skills. “Devstral excels at using tools to explore codebases, editing multiple files and power[ing] software engineering agents,” writes Mistral. “[I]t runs over code agent scaffolds such as OpenHands or SWE-Agent, which define the interface between the model and the test cases […] Devstral is light enough to run on a single [Nvidia] RTX 4090 or a Mac with 32GB RAM, making it an ideal choice for local deployment and on-device use.” Devstral, which Mistral is calling a “research preview,” can be downloaded from AI development platforms, including Hugging Face, and also tapped through Mistral’s API. It’s priced at $0.1 per million input tokens and $0.3 per million output tokens, tokens being the raw bits of data that AI models work with. Devstral isn’t a small model per se, but it’s on the smaller side at 24 billion parameters.
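For developers who prefer the hosted route, a call through Mistral's API might look like the sketch below, using the official mistralai Python client (pip install mistralai). The model identifier is an assumption and should be checked against Mistral's documentation.

```python
# Sketch of querying Devstral through Mistral's hosted API.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

resp = client.chat.complete(
    model="devstral-small-latest",  # assumed model id; verify in the docs
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that reverses a linked list.",
        }
    ],
)
print(resp.choices[0].message.content)
```

The same weights can instead be pulled from Hugging Face and run locally, which is where the single-RTX-4090 / 32GB-Mac footprint Mistral cites comes into play.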
OpenAI enhances Responses API with GPT-4o image generation, remote MCP integration, and enterprise-grade tools for building advanced AI agents: developers can now perform searches across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content
OpenAI is rolling out a set of significant updates to its recently launched Responses API, aiming to make it easier for developers and enterprises to build intelligent, action-oriented agentic applications. The Responses API provides visibility into model decisions, access to real-time data, and integration capabilities that allow agents to retrieve, reason over, and act on information. A key addition in this update is support for remote MCP servers. Developers can now connect OpenAI’s models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability enables the creation of agents that can take actions and interact with systems users already depend on. To support this evolving ecosystem, OpenAI has joined the MCP steering committee. The update brings new built-in tools to the Responses API that expand what agents can do within a single API call. A variant of OpenAI’s hit GPT-4o native image generation model is now available through the API under the model name “gpt-image-1.” It includes new features like real-time streaming previews and multi-turn refinement, enabling developers to build applications that produce and edit images dynamically in response to user input. Additionally, the Code Interpreter tool is now integrated into the Responses API, allowing models to handle data analysis, complex math, and logic-based tasks within their reasoning processes. The tool improves model performance across various technical benchmarks and allows for more sophisticated agent behavior. The file search functionality has also been upgraded: developers can now perform searches across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content, improving the precision of the information agents use and enhancing their ability to answer complex questions and operate within large knowledge domains. Background mode allows for long-running asynchronous tasks, addressing timeouts and network interruptions during intensive reasoning. Reasoning summaries, a new addition, offer natural-language explanations of the model’s internal thought process, helping with debugging and transparency. Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers: they allow models to reuse previous reasoning steps without storing any data on OpenAI servers, improving both security and efficiency.
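A hedged sketch of how these pieces combine in a single Responses API call, using the official openai Python SDK: the tool shapes follow OpenAI's announcement, while the server URL, vector store ID, and filter values are placeholders.

```python
# Sketch of one Responses API call combining a remote MCP server with
# multi-store file search plus attribute filtering.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {   # connect the model to a remote MCP server
            "type": "mcp",
            "server_label": "shopify",
            "server_url": "https://mcp.example.com/sse",  # placeholder URL
            "require_approval": "never",
        },
        {   # search vector stores, filtered by a document attribute
            "type": "file_search",
            "vector_store_ids": ["vs_abc123"],  # placeholder store id
            "filters": {"type": "eq", "key": "region", "value": "EU"},
        },
    ],
    input="Which EU products are low on stock? Draft a restock order.",
)
print(response.output_text)
```

In one call the model can consult the filtered knowledge base and then act through the MCP-connected service, which is precisely the agentic loop the update is designed to enable.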