Google DeepMind today pulled the curtain back on AlphaEvolve, an artificial-intelligence agent that can invent brand-new computer algorithms, then put them straight to work inside the company's vast computing empire. AlphaEvolve pairs Google's Gemini LLMs with an evolutionary approach that tests, refines, and improves algorithms automatically. The system has already been deployed across Google's data centers, chip designs, and AI training systems, boosting efficiency and solving mathematical problems that have stumped researchers for decades. "AlphaEvolve is a Gemini-powered AI coding agent that is able to make new discoveries in computing and mathematics," explained Matej Balog, a researcher at Google DeepMind. "It can discover algorithms of remarkable complexity — spanning hundreds of lines of code with sophisticated logical structures that go far beyond simple functions."

One algorithm it discovered has been powering Borg, Google's massive cluster management system. This scheduling heuristic continuously recovers an average of 0.7% of Google's worldwide computing resources, a staggering efficiency gain at Google's scale. The discovery directly targets "stranded resources": machines that have run out of one resource type (like memory) while still having others (like CPU) available. AlphaEvolve's solution is especially valuable because it produces simple, human-readable code that engineers can easily interpret, debug, and deploy.

Perhaps most impressively, AlphaEvolve improved the very systems that power itself. It optimized a matrix multiplication kernel used to train Gemini models, achieving a 23% speedup for that operation and cutting overall training time by 1%. For AI systems that train on massive computational grids, this efficiency gain translates to substantial energy and resource savings.
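To make the evolutionary approach concrete, here is a minimal, self-contained sketch of that test-and-refine loop. A placeholder fitness function stands in for Google's automated evaluators, and a trivial mutate function stands in for Gemini's code proposals; everything here is illustrative only, not DeepMind's implementation.

```python
import random

def evaluate(program: str) -> float:
    """Automated scorer; imagine it measures resources recovered
    by a candidate scheduling heuristic. Placeholder fitness only."""
    return (hash(program) % 1000) / 1000.0

def mutate(program: str) -> str:
    """Stand-in for an LLM proposing a modified candidate program."""
    return program + f"  # variant {random.randint(0, 999)}"

# Start from a seed program and evolve it over several generations.
population = ["def schedule(jobs, machines): ..."]
for generation in range(20):
    # Propose children from randomly chosen parents, score every
    # candidate, and keep only the fittest for the next round.
    children = [mutate(random.choice(population)) for _ in range(8)]
    population = sorted(population + children, key=evaluate, reverse=True)[:4]

best = population[0]
print(best)
```

In the real system, the evaluator runs the candidate code against measurable objectives (speed, resources recovered, correctness), which is what lets the loop improve algorithms without human scoring.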
Broadridge Financial Solutions awarded patent for its LLM orchestration of machine learning agents used in its AI bond trading platform; patented features include explainability of generated output, compliance verification, and user profile attributes
Broadridge Financial Solutions has been awarded a U.S. patent for its large language model orchestration of machine learning agents, which is used in BondGPT and BondGPT+. These applications provide timely, secure, and accurate responses to natural language questions using OpenAI GPT models and multiple AI agents. The BondGPT+ enterprise application integrates clients' proprietary data, third-party datasets, and personalization features, improving efficiency and saving time for users. Broadridge continues to work closely with clients to integrate AI into their workflows. Other significant features covered by U.S. Patent No. 11,765,405 include: explainability as to how the output of the patented LLM orchestration of machine learning agents was generated, through a "Show your work" feature that offers step-by-step transparency; a multi-agent adversarial feature for enhanced accuracy; an AI-powered compliance verification feature, based on custom compliance rules configured to an enterprise's unique compliance and risk management processes; and the use of user profile attributes, such as user role, to inform data retrieval and security.
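As a rough illustration of the orchestration pattern described above (every function name, rule, and attribute below is invented for the sketch, not taken from the patent), an orchestrator might gate data retrieval by user role, record each step for a "Show your work" trace, and run a compliance check before answering:

```python
def retrieve_bond_data(question: str, user_role: str) -> dict:
    # Hypothetical: user-profile attributes (e.g. role) gate retrieval.
    universe = "full-depth quotes" if user_role == "trader" else "indicative quotes"
    return {"source": universe, "matches": ["XYZ 4.25% 2031"]}

def verify_compliance(answer: str, rules: list[str]) -> bool:
    # Hypothetical custom enterprise rules, e.g. "no investment advice".
    return not any(banned in answer.lower() for banned in rules)

def orchestrate(question: str, user_role: str) -> dict:
    steps = []  # step-by-step trace, in the spirit of "Show your work"
    data = retrieve_bond_data(question, user_role)
    steps.append(f"retrieved {data['source']}")
    answer = f"Best match: {data['matches'][0]}"
    steps.append("drafted answer from retrieved data")
    if not verify_compliance(answer, rules=["you should buy"]):
        answer = "Response withheld: compliance check failed."
    steps.append("ran compliance verification")
    return {"answer": answer, "show_your_work": steps}

print(orchestrate("Which 2031 bonds are liquid today?", user_role="trader"))
```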
OpenAI's new Codex carries out coding tasks in isolated software containers that don't have web access, lets developers customize those development environments and review the generated code, and achieved a 75% accuracy rate on OpenAI's coding tests
OpenAI debuted a new AI agent, Codex, that can help developers write code and fix bugs. The tool is available through a sidebar in ChatGPT's interface. One button in the sidebar configures Codex to generate new code based on user instructions, while another allows it to answer questions about existing code. Responses take between one and 30 minutes to generate, depending on the complexity of the request.

Codex is powered by a new AI model called codex-1, a version of o3, OpenAI's most capable reasoning model, that has been optimized for programming tasks. The ChatGPT developer fine-tuned codex-1 by training it on a set of real-world coding tasks spanning a range of software environments. A piece of software that runs well in one environment, such as a cloud platform, may not run as efficiently on a Linux server or a developer's desktop, if at all. As a result, an AI model's training dataset must include technical information about every environment it will be expected to use. OpenAI used reinforcement learning to train codex-1, an approach that relies on trial and error to boost output quality: when a neural network completes a task correctly, it's given a virtual reward, while incorrect answers lead to penalties that encourage the algorithm to find a better approach.

In a series of coding tests carried out by OpenAI, Codex achieved an accuracy rate of 75%, five percentage points better than the most capable, hardware-intensive version of o3. OpenAI's first-generation reasoning model, o1, scored 11%.

Codex carries out coding tasks in isolated software containers that don't have web access; according to OpenAI, the agent launches a separate container for each task. Developers can customize those development environments by uploading a text file called AGENTS.md. The file may describe what programs Codex should install, how AI-generated code should be tested for bugs, and related details. Using AGENTS.md, developers can ensure that the container in which Codex generates code is configured the same way as the production system on which the code will run, reducing the need to modify the code before releasing it to production. Developers can monitor Codex while it's generating code. After the tool completes a task, it provides technical data that can be used to review each step of the workflow, and it's possible to request revisions if the code doesn't meet project requirements.
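The article doesn't reproduce a sample file, but based solely on the description above, a hypothetical AGENTS.md might look like the following; the specific commands and conventions are invented for illustration:

```markdown
# AGENTS.md (hypothetical example)

## Setup
- Install dependencies with `pip install -r requirements.txt`.

## Testing
- Run `pytest -q` after every change; all tests must pass before code is proposed.

## Conventions
- Match the production container: Python 3.11, no network access at runtime.
```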
Salesforce to acquire Convergence.ai to accelerate the development of next-gen AI agents that can navigate dynamic interfaces and adapt in real time to manage web-based workflows and multi-step processes
Salesforce plans to acquire Convergence.ai to accelerate the development of its next-generation AI agents. The company signed a definitive agreement for the acquisition and expects Convergence’s team and technology to play a “central role” in advancing its AI agent platform, Agentforce. The acquisition is expected to close in the second quarter of Salesforce’s fiscal year 2026, subject to customary closing conditions. “The next wave of customer interaction and employee productivity will be driven by highly capable AI agents that can navigate the complexities of today’s digital work,” Adam Evans, executive vice president and general manager, Salesforce AI Platform at Salesforce, said. “Convergence’s innovative approach to building adaptive, intelligent agents is incredibly impressive.” Convergence’s technology enables AI agents to navigate dynamic interfaces and adapt in real time so they can manage things like web-based workflows and multi-step processes. The company’s talent is also expected to contribute to deep research, task automation and industry-specific solutions that will advance Salesforce’s broader AI roadmap.
UiPath automations and agents can now integrate directly into Microsoft Copilot Studio to automate complex end-to-end processes at scale
UiPath announced new capabilities that enable the orchestration of Microsoft Copilot Studio agents alongside UiPath and other third-party agents using UiPath Maestro™, an enterprise orchestration solution that coordinates agents, robots, and people across complex processes. Developers can now orchestrate Microsoft Copilot Studio agents directly from Maestro. This capability builds on a bi-directional integration between the UiPath Platform™ and Microsoft Copilot Studio, recently announced by Microsoft, which facilitates interaction between UiPath and Microsoft agents and automations, allowing customers to automate complex end-to-end processes, enable contextual decision-making, improve scalability, and unlock new levels of productivity. Developers can now embed UiPath automations and AI agents directly into Microsoft Copilot Studio and integrate Copilot agents within UiPath Studio, while Maestro orchestrates across platforms, giving customers built-in capabilities to build, manage, and orchestrate agents from Copilot Studio and other platforms in a controlled and scalable way. Johnson Controls enhanced an existing automation, originally built with UiPath robots and Power Automate, by adding a UiPath agent for confidence-based document extraction. The result: a 500% return on investment and projected savings of 18,000 hours annually that were previously spent on manual document review. The integration complements other new capabilities that elevate business processes and drive smarter outcomes with agentic automation across departments and platforms.
Nvidia’s new AI marketplace to offer developers a unified interface to tap into an expanded list of GPU cloud providers for AI workloads in addition to hyperscalers
Nvidia is launching an AI marketplace for developers to tap an expanded list of graphics processing unit (GPU) cloud providers in addition to hyperscalers. Called DGX Cloud Lepton, the service acts as a unified interface linking developers to a decentralized network of cloud providers that offer Nvidia's GPUs for AI workloads. Typically, developers must rely on cloud hyperscalers like Amazon Web Services, Microsoft Azure or Google Cloud to access GPUs. However, with GPUs in high demand, Nvidia seeks to broaden GPU availability through an expanded roster of cloud providers beyond the hyperscalers. When one cloud provider has idle GPUs in between jobs, those chips will be available in the marketplace for another developer to tap. The marketplace will include GPU cloud providers CoreWeave, Crusoe, Lambda, SoftBank and others. The move comes as Nvidia looks to address growing frustration among startups, enterprises and researchers over limited GPU availability. With AI model training requiring vast compute resources, especially for large language models and computer vision systems, developers often face long wait times or capacity shortages. Nvidia CEO Jensen Huang said that the computing power needed to train the next stage of AI has "grown tremendously."
Microsoft's new tools can build and manage multi-agent workflows and simulate agent behavior locally before deploying to the cloud, while ensuring interoperability across agent frameworks through open protocols like MCP and Agent2Agent
Microsoft Corp. is rolling out a suite of new tools and services designed to accelerate the development and deployment of autonomous AI agents across its platforms. The Azure AI Foundry Agent Service is now generally available, allowing developers to build, manage, and scale AI agents that automate business processes. It supports multi-agent workflows, meaning specialized agents can collaborate on complex tasks. The service integrates with various Microsoft services and supports open protocols like Agent2Agent (A2A) and Model Context Protocol (MCP), ensuring interoperability across different agent frameworks. To streamline deployment and testing, Microsoft has introduced a unified runtime that merges the Semantic Kernel SDK and the AutoGen framework, enabling developers to simulate agent behavior locally before deploying to the cloud. The service also includes AgentOps, a set of monitoring and optimization tools, and allows developers to use Azure Cosmos DB for thread storage.

Another major announcement is Copilot Tuning, a feature that lets businesses fine-tune Microsoft 365 Copilot using their own organizational data. This means a law firm can create AI agents that generate legal documents in its house style, while a consultancy can build Q&A agents based on its regulatory expertise. The feature will be available in June through the Copilot Tuning Program, but only for organizations with at least 5,000 Microsoft 365 Copilot licenses. Microsoft is also previewing new developer tools for Microsoft Teams, including secure peer-to-peer communication via the A2A protocol, agent memory for contextual user experiences, and improved development environments for JavaScript and C#.
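As a loose illustration of what "simulating agent behavior locally" can mean in a multi-agent workflow, here is a minimal sketch in plain Python. It does not use the actual Azure AI Foundry, Semantic Kernel, or AutoGen APIs; all names and behaviors are invented for the sketch:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]          # how this agent responds to a task
    log: list = field(default_factory=list)

    def run(self, task: str) -> str:
        result = self.handle(task)
        self.log.append((task, result))   # keep a trace for later review
        return result

def orchestrate(task: str, planner: Agent, executor: Agent) -> str:
    """Specialized agents collaborate on one task: one plans, one executes."""
    plan = planner.run(task)
    return executor.run(plan)

# Stub behaviors let the workflow be exercised entirely on a laptop
# before any cloud deployment.
planner = Agent("planner", lambda t: f"steps for: {t}")
executor = Agent("executor", lambda p: f"done: {p}")
print(orchestrate("refund duplicate invoice", planner, executor))
```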
Nvidia DGX Spark and DGX Station personal AI supercomputers to enable developers to prototype, fine-tune, and run inference on models, with DGX Station networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling
Nvidia announced that Taiwan's system manufacturers are set to build Nvidia DGX Spark and DGX Station systems. Growing partnerships with Acer, Gigabyte and MSI will extend the availability of the DGX Spark and DGX Station personal AI supercomputers. Powered by the Nvidia Grace Blackwell platform, DGX Spark and DGX Station will enable developers to prototype, fine-tune, and run inference on models from the desktop to the data center.

DGX Spark is equipped with the Nvidia GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores. It delivers up to 1 petaflop of AI compute and 128GB of unified memory, and enables seamless exporting of models to Nvidia DGX Cloud or any accelerated cloud or data center infrastructure. Built for the most demanding AI workloads, DGX Station features the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip, which offers up to 20 petaflops of AI performance and 784GB of unified system memory. The system also includes the Nvidia ConnectX-8 SuperNIC, supporting networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling. DGX Station can serve as an individual desktop for one user running advanced AI models on local data, or as an on-demand, centralized compute node for multiple users. The system supports Nvidia Multi-Instance GPU technology, which partitions it into as many as seven instances, each with its own high-bandwidth memory, cache and compute cores, serving as a personal cloud for data science and AI development teams.

To give developers a familiar user experience, DGX Spark and DGX Station mirror the software architecture that powers industrial-strength AI factories. Both systems use the Nvidia DGX operating system, preconfigured with the latest Nvidia AI software stack, and include access to Nvidia NIM microservices and Nvidia Blueprints. Developers can use common tools, such as PyTorch, Jupyter and Ollama, to prototype, fine-tune and perform inference on DGX Spark, then seamlessly deploy to DGX Cloud or any accelerated data center or cloud infrastructure.
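As a minimal sketch of that desktop-to-cloud workflow using one of the common tools named above (PyTorch), the loop below fine-tunes a toy model locally and saves a checkpoint that could then be deployed elsewhere. The model and data are placeholders; nothing here is DGX-specific:

```python
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                        # toy fine-tuning loop
    x = torch.randn(32, 128, device=device)    # stand-in batch of local data
    y = torch.randint(0, 10, (32,), device=device)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Checkpoint can be handed off to cloud or data center infrastructure.
torch.save(model.state_dict(), "checkpoint.pt")
```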
NLWeb from Microsoft combines semi-structured data such as Schema.org markup and RSS with LLMs to turn any website into an AI app powered by natural language that lets visitors query its contents using their voice
Microsoft has launched NLWeb, an open-source project that aims to transform any existing website into an artificial intelligence-powered application by integrating natural language capabilities. The project, announced at Microsoft Build 2025, aims to give developers the fastest and easiest way to turn any website into an AI app powered by the large language model of their choice. Once integrated, people can query the contents of any website using their voice, just as they do with AI assistants such as ChatGPT or Microsoft Copilot. NLWeb uses semi-structured data that websites already publish, such as Schema.org markup and RSS feeds, combining it with large language models (LLMs) to create a natural language interface accessible to both humans and AI agents. The project is technology-agnostic, supporting major operating systems besides Windows, such as Android, iOS, and Linux. Microsoft aims to bring the benefits of generative AI search directly to every website and is building NLWeb with an eye toward future AI agents.
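As a rough sketch of the ingredients described above (not code from the NLWeb project), the snippet below pulls Schema.org JSON-LD markup out of a page and packs it into a prompt for whichever LLM a site might choose; the URL and prompt wiring are hypothetical:

```python
import json
import urllib.request
from html.parser import HTMLParser

class JSONLDParser(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks,
    the form in which many sites publish their Schema.org markup."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.buffer = ""
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_data(self, data):
        if self.in_jsonld:
            self.buffer += data

    def handle_endtag(self, tag):
        if tag == "script" and self.in_jsonld:
            try:
                self.blocks.append(json.loads(self.buffer))
            except json.JSONDecodeError:
                pass  # skip malformed markup
            self.in_jsonld = False
            self.buffer = ""

html = urllib.request.urlopen("https://example.com").read().decode("utf-8")
parser = JSONLDParser()
parser.feed(html)

# The structured records then become grounding context for the site's chosen LLM:
prompt = f"Site data: {json.dumps(parser.blocks)}\nQuestion: What's on this page?"
```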
Accenture, Dell and NVIDIA partner to offer a full-stack solution for rapidly scaling AI in private, on-prem environments through one-click deployment, modular, reusable frameworks, automated workflows, and dynamic cloud-to-edge orchestration
Accenture, in collaboration with Dell Technologies and NVIDIA, is providing an AI solution built on Dell Technologies infrastructure with NVIDIA AI Enterprise software. This helps organizations, particularly those within regulated industries or with substantial investments in on-premises infrastructure, capitalize on the burgeoning opportunities of agentic AI. The collaboration extends the reach of the Accenture AI Refinery™ platform, bringing agentic AI capabilities with one-click deployment to Dell's high-performance, NVIDIA-accelerated infrastructure, helping companies realize value more quickly and reduce total cost of ownership. Accenture will further facilitate AI deployment with the NVIDIA Enterprise AI Factory validated design, a guide for organizations to build on-premises AI factories leveraging NVIDIA Blackwell and a broad ecosystem of AI partners. The solution helps organizations rapidly scale AI in private, on-prem environments. It provides support for key requirements, including data sovereignty and compliance to help meet regulatory and data residency mandates; resiliency and high availability to meet business continuity requirements; security and privacy controls needed for air-gapped environments or restricted network zones; ultra-low latency for real-time use cases like manufacturing or healthcare imaging; and edge or offline use cases critical for remote, disconnected environments where reliable internet access is limited or unavailable. Preconfigured packages integrate Accenture's AI Refinery and the Dell AI Factory with NVIDIA, including NVIDIA Enterprise AI software, streamlining data transfer and indexing to empower data-driven agentic insights. This unified, full-stack solution helps accelerate enterprise AI transformation by enabling rapid service prototyping with modular, reusable frameworks, automated workflows, and dynamic cloud-to-edge orchestration.