Nvidia announced a significant expansion of the Nvidia Omniverse Blueprint for AI factory digital twins, now available in preview. The expanded blueprint will equip engineering teams to design, simulate and optimize entire AI factories in physically accurate virtual environments, enabling early issue detection and the development of smarter, more reliable facilities. Built on reference architectures for Nvidia GB200 NVL72-powered AI factories, the blueprint taps into Universal Scene Description (OpenUSD) asset libraries. This allows developers to aggregate detailed 3D and simulation data representing all aspects of the data center into a single, unified model, enabling them to design and simulate advanced AI infrastructure optimized for efficiency, throughput and resiliency. A key enhancement to the blueprint is the simulation-ready, or SimReady, standardization workflow: the OpenUSD-based models within the blueprint are designed from the ground up to be physics-based, giving users accurate simulations of their facility equipment. Siemens is building 3D models according to the blueprint and engaging with the SimReady standardization effort, Delta Electronics is adding models of its equipment, and Jacobs is helping test and optimize the end-to-end blueprint workflow. Connections to the Cadence Reality Digital Twin Platform and ETAP provide thermal and power simulation, enabling engineering teams to test and optimize power, cooling and networking long before construction begins. The SimReady, physics-based models are especially valuable for developing and testing physical AI and agentic AI within these AI factories, enabling rapid and large-scale industrial AI simulations of power and cooling systems, building automation and overall IT operations. These contributions help Nvidia and its partners reshape how AI infrastructure is built to achieve smarter designs, avoid downtime and get the most out of AI factories.
Nvidia’s new AI marketplace to offer developers a unified interface to tap into an expanded list of GPU cloud providers for AI workloads in addition to hyperscalers
Nvidia is launching an AI marketplace for developers to tap an expanded list of graphics processing unit (GPU) cloud providers in addition to hyperscalers. Called DGX Cloud Lepton, the service acts as a unified interface linking developers to a decentralized network of cloud providers that offer Nvidia’s GPUs for AI workloads. Typically, developers must rely on cloud hyperscalers like Amazon Web Services, Microsoft Azure or Google Cloud to access GPUs. However, with GPUs in high demand, Nvidia seeks to open up GPU availability through an expanded roster of cloud providers beyond the hyperscalers. When one cloud provider has idle GPUs in between jobs, those chips become available in the marketplace for another developer to tap. The marketplace will include GPU cloud providers CoreWeave, Crusoe, Lambda, SoftBank and others. The move comes as Nvidia looks to address growing frustration among startups, enterprises and researchers over limited GPU availability. With AI model training requiring vast compute resources — especially for large language models and computer vision systems — developers often face long wait times or capacity shortages. Nvidia CEO Jensen Huang said that the computing power needed to train the next stage of AI has “grown tremendously.”
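The core marketplace mechanic, pooling idle GPU capacity from many providers behind one interface, can be sketched as a simple allocator. The `GpuListing` structure, the prices and the `allocate` helper below are hypothetical illustrations, not the DGX Cloud Lepton API:

```python
from dataclasses import dataclass

@dataclass
class GpuListing:
    provider: str      # illustrative provider names taken from the article
    gpus_idle: int     # GPUs currently sitting idle between jobs
    price_per_hour: float

def allocate(listings, gpus_needed):
    """Greedy sketch: fill a request from the cheapest idle capacity first."""
    plan = []
    for listing in sorted(listings, key=lambda l: l.price_per_hour):
        if gpus_needed == 0:
            break
        take = min(listing.gpus_idle, gpus_needed)
        if take:
            plan.append((listing.provider, take))
            gpus_needed -= take
    if gpus_needed:
        raise RuntimeError("not enough idle capacity across providers")
    return plan

listings = [
    GpuListing("CoreWeave", 8, 2.4),
    GpuListing("Lambda", 16, 2.1),
    GpuListing("Crusoe", 4, 2.8),
]
print(allocate(listings, 20))  # cheapest idle capacity is consumed first
```

A real marketplace would add authentication, scheduling and pricing dynamics; the point here is only that one request can span several providers' spare capacity.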
Microsoft’s new tools can build and manage multi-agent workflows and simulate agent behavior locally before deploying to the cloud, while ensuring interoperability across different agent frameworks through open protocols like MCP and Agent2Agent
Microsoft Corp. is rolling out a suite of new tools and services designed to accelerate the development and deployment of autonomous assistants known as artificial intelligence agents across its platforms. The Azure AI Foundry Agent Service is now generally available, allowing developers to build, manage, and scale AI agents that automate business processes. It supports multi-agent workflows, meaning specialized agents can collaborate on complex tasks. The service integrates with various Microsoft services and supports open protocols like Agent2Agent and Model Context Protocol, ensuring interoperability across different agent frameworks. To streamline deployment and testing, Microsoft has introduced a unified runtime that merges the Semantic Kernel SDK and AutoGen framework, enabling developers to simulate agent behavior locally before deploying to the cloud. The service also includes AgentOps, a set of monitoring and optimization tools, and allows developers to use Azure Cosmos DB for thread storage. Another major announcement is Copilot Tuning, a feature that lets businesses fine-tune Microsoft 365 Copilot using their own organizational data. For example, law firms can create AI agents that generate legal documents in their house style, while consultancies can build Q&A agents based on their regulatory expertise. The feature will be available in June through the Copilot Tuning Program, but only for organizations with at least 5,000 Microsoft 365 Copilot licenses. Microsoft is also previewing new developer tools for Microsoft Teams, including secure peer-to-peer communication via the A2A protocol, agent memory for contextual user experiences, and improved development environments for JavaScript and C#.
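The multi-agent pattern, in which specialized agents collaborate on a complex task, can be sketched without any particular SDK. The `Agent` class and orchestration loop below are a hypothetical illustration of the pattern, not the Azure AI Foundry Agent Service API:

```python
# Hypothetical sketch of a multi-agent workflow: specialized agents collaborate
# by passing a shared context along a pipeline. This illustrates the pattern
# only; it does not reflect the Azure AI Foundry Agent Service API.

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # callable: (task, context) -> updated context

    def run(self, task, context):
        return self.handler(task, context)

def orchestrate(agents, task):
    """Run each specialized agent in turn, accumulating shared context."""
    context = {}
    for agent in agents:
        context = agent.run(task, context)
    return context

# Toy specialized agents for a document-review workflow.
researcher = Agent("researcher", lambda t, c: {**c, "facts": f"facts about {t}"})
writer = Agent("writer", lambda t, c: {**c, "draft": f"draft using {c['facts']}"})
reviewer = Agent("reviewer", lambda t, c: {**c, "approved": "draft" in c})

result = orchestrate([researcher, writer, reviewer], "quarterly report")
print(result["approved"])  # True: the reviewer saw the writer's draft
```

Production agent frameworks add tool calling, memory and inter-agent protocols on top of this basic hand-off structure.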
Nvidia DGX Spark and DGX Station personal AI supercomputers to enable developers to prototype, fine-tune and run inference on models, with networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling
Nvidia announced that Taiwan’s system manufacturers are set to build Nvidia DGX Spark and DGX Station systems. Growing partnerships with Acer, Gigabyte and MSI will extend the availability of DGX Spark and DGX Station personal AI supercomputers. Powered by the Nvidia Grace Blackwell platform, DGX Spark and DGX Station will enable developers to prototype, fine-tune and run inference on models from the desktop to the data center. DGX Spark is equipped with the Nvidia GB10 Grace Blackwell Superchip and fifth-generation Tensor Cores. It delivers up to 1 petaflop of AI compute and 128GB of unified memory, and enables seamless exporting of models to Nvidia DGX Cloud or any accelerated cloud or data center infrastructure. Built for the most demanding AI workloads, DGX Station features the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip, which offers up to 20 petaflops of AI performance and 784GB of unified system memory. The system also includes the Nvidia ConnectX-8 SuperNIC, supporting networking speeds of up to 800Gb/s for high-speed connectivity and multi-station scaling. DGX Station can serve as an individual desktop for one user running advanced AI models using local data, or as an on-demand, centralized compute node for multiple users. The system supports Nvidia Multi-Instance GPU technology to partition into as many as seven instances — each with its own high-bandwidth memory, cache and compute cores — serving as a personal cloud for data science and AI development teams. To give developers a familiar user experience, DGX Spark and DGX Station mirror the software architecture that powers industrial-strength AI factories. Both systems use the Nvidia DGX operating system, preconfigured with the latest Nvidia AI software stack, and include access to Nvidia NIM microservices and Nvidia Blueprints.
Developers can use common tools, such as PyTorch, Jupyter and Ollama, to prototype, fine-tune and perform inference on DGX Spark and seamlessly deploy to DGX Cloud or any accelerated data center or cloud infrastructure.
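The Multi-Instance GPU partitioning mentioned above, which splits one DGX Station into as many as seven isolated instances, can be sketched as simple resource slicing. The `partition` helper and the memory and core figures below are illustrative assumptions, not Nvidia's actual tooling:

```python
# Sketch of Multi-Instance GPU-style partitioning: a fixed pool of memory and
# compute is split into isolated instances (the article cites up to seven).
# The helper and the resource figures are illustrative, not Nvidia's tooling.

def partition(total_memory_gb, total_cores, n_instances, max_instances=7):
    if not 1 <= n_instances <= max_instances:
        raise ValueError(f"MIG-style split supports 1..{max_instances} instances")
    mem = total_memory_gb // n_instances    # each instance gets its own slice
    cores = total_cores // n_instances
    return [{"instance": i, "memory_gb": mem, "cores": cores}
            for i in range(n_instances)]

# Seven users each get an isolated slice of the same physical machine.
slices = partition(total_memory_gb=140, total_cores=98, n_instances=7)
print(slices[0])  # {'instance': 0, 'memory_gb': 20, 'cores': 14}
```

The key property illustrated is isolation: each slice owns its share of memory and compute, so one user's workload cannot starve another's.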
First successful demonstration of quantum error correction of qudits for quantum computers used a reinforcement learning algorithm to optimize the system
A Yale University study published in Nature has demonstrated the first-ever experimental quantum error correction for qudits, higher-dimensional quantum units that hold quantum information and can exist in more than two states. The researchers used a reinforcement learning algorithm to optimize the systems as ternary and quaternary quantum memories. The experiment pushed past the break-even point for error correction, showcasing a more practical and hardware-efficient method for quantum error correction that harnesses the power of a larger Hilbert space. The increased photon loss and dephasing rates of GKP qudit states can lead to a modest reduction in the lifetime of the quantum information encoded in logical qudits, but in return provide access to more logical quantum states in a single physical system. The findings demonstrate the promise of realizing robust and scalable quantum computers and could lead to breakthroughs in cryptography, materials science and drug discovery.
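The reinforcement-learning optimization loop can be illustrated generically: an agent proposes control settings, observes a reward, and keeps improvements. The sketch below is a toy hill-climbing stand-in with a made-up lifetime function; nothing in it reflects the actual Yale experiment, the algorithm the team used, or GKP physics:

```python
import random

# Toy sketch of reinforcement-learning-style optimization of a single control
# parameter, the general pattern applied to tune quantum memories. The
# "lifetime" function is a made-up stand-in, not real physics.

def simulated_lifetime(pulse_amp):
    """Pretend memory lifetime, peaked at pulse_amp = 0.6 (arbitrary)."""
    return 1.0 - (pulse_amp - 0.6) ** 2

def optimize(episodes=200, step=0.05, seed=0):
    rng = random.Random(seed)
    amp, best = 0.0, simulated_lifetime(0.0)
    for _ in range(episodes):
        candidate = amp + rng.uniform(-step, step)  # explore nearby settings
        reward = simulated_lifetime(candidate)      # "measure" the memory
        if reward > best:                           # keep only improvements
            amp, best = candidate, reward
    return amp, best

amp, lifetime = optimize()
print(round(amp, 2))  # converges near the optimum at 0.6
```

Real experiments replace the toy reward with measured logical-qubit (or qudit) lifetimes and use far more sophisticated policies, but the propose-measure-update loop is the same shape.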
Quantum Machines cuts calibration time from hours to minutes with an open-source framework and modular architecture that let teams build reusable calibration components, combine them into complex workflows and instantly share protocols with the ecosystem
Quantum Machines announced the release of Qualibrate (which the company spells QUAlibrate), an open-source framework for calibrating quantum computers. It cuts quantum computer calibration time from hours to minutes. By addressing one of quantum computing’s most critical scaling bottlenecks, Quantum Machines’ new framework enables fast, modular calibration and fosters a global ecosystem for sharing and advancing calibration protocols. By creating an open ecosystem, Qualibrate enables researchers and companies worldwide to build upon each other’s advances, accelerating the path to practical quantum computers. To properly initialize and maintain a quantum computer’s performance, calibration must be performed not just once, but frequently during operation to compensate for system drift. Qualibrate enables researchers and quantum engineers to create reusable calibration components, combine them into complex workflows, and execute calibrations through an intuitive interface. The platform abstracts away hardware complexities, allowing teams to focus on quantum system logic rather than low-level details. Qualibrate’s open-source nature and modular architecture mean that when researchers develop new calibration protocols, these innovations can be immediately shared, validated, and built upon by the broader quantum computing community. Companies can also develop proprietary solutions on top of Qualibrate that leverage advanced approaches like quantum system simulation and deep learning algorithms. This creates an ecosystem where fundamental calibration advances can be shared openly and enables specialized tools that push the boundaries of performance. Along with the framework, Quantum Machines is releasing its first calibration graph for superconducting quantum computers, providing a complete calibration solution that can be immediately deployed and customized.
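The reusable-component-and-workflow idea can be sketched as a small dependency graph of calibration nodes executed in order. The node names (resonator spectroscopy, qubit spectroscopy, Rabi) are standard superconducting-qubit calibration steps, but the runner and its API below are a hypothetical illustration, not the actual QUAlibrate interface:

```python
# Hypothetical sketch of a calibration graph: reusable calibration nodes with
# dependencies, executed in topological order. Illustrative only; this is not
# the QUAlibrate API.

def run_graph(nodes, deps):
    """nodes: name -> callable; deps: name -> list of prerequisite names."""
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for d in deps.get(name, []):
            visit(d)              # calibrate prerequisites first
        nodes[name]()             # run this calibration step
        done.add(name)
        order.append(name)

    for name in nodes:
        visit(name)
    return order

results = {}
nodes = {
    "resonator_spectroscopy": lambda: results.update(res_freq=7.2e9),
    "qubit_spectroscopy": lambda: results.update(qubit_freq=5.1e9),
    "rabi": lambda: results.update(pi_pulse_amp=0.42),
}
deps = {
    "qubit_spectroscopy": ["resonator_spectroscopy"],  # needs resonator freq
    "rabi": ["qubit_spectroscopy"],                    # needs qubit freq
}
order = run_graph(nodes, deps)
print(order)  # ['resonator_spectroscopy', 'qubit_spectroscopy', 'rabi']
```

Because each node is a self-contained component with declared prerequisites, nodes can be reused across workflows and rerun frequently to track system drift, which is the scaling benefit the framework targets.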
Onboarding metrics (Time To First Value, Onboarding CSAT, Customer Outcome Achievement, and Onboarding Risks) should regularly be reported in leadership meetings and board decks
Onboarding is a critical phase for software companies, often overlooked in discussions about annual recurring revenue (ARR), churn, customer acquisition cost (CAC), and customer lifetime value (LTV). It is the first step in the customer journey and shapes every subsequent experience. A great onboarding experience can reinforce customers’ decisions, while a confusing one can erode trust. Great onboarding also creates downstream value: positive customer metrics such as expansion, retention, and referrals often follow from it. Companies should regularly report onboarding performance in leadership meetings and board decks, as it is an important leading indicator of customer health, revenue growth, and operational efficiency. Some onboarding metrics suggested by the author include Time To First Value, Onboarding Customer Satisfaction (CSAT), Customer Outcome Achievement, and Onboarding Risks. These are not vanity metrics but early warning systems for potential issues. Onboarding should be a cross-functional effort that spans product, sales, marketing, and leadership: product ensures an intuitive experience, sales sets realistic expectations, marketing supports post-sale engagement, and leadership invests in the tools, processes, and people to make it work at scale. Ignoring onboarding can lead to wasted CAC, delayed launches, and dissatisfied customers, while companies that nail it often see faster time to revenue recognition, higher net retention, and new champions and use cases. Treating onboarding like a mirror can help companies identify misalignment, friction, and missed opportunities. By investing in onboarding, companies can turn it into a growth engine and revenue lever, leading to better financial outcomes.
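The four suggested metrics can be computed from per-customer onboarding records. A minimal sketch, assuming hypothetical field names and record structure (the thresholds and data are invented for illustration):

```python
from datetime import date
from statistics import mean

# Hypothetical sketch of the onboarding metrics named above, computed from
# per-customer records. Field names and sample data are illustrative.

customers = [
    {"signed": date(2025, 1, 6), "first_value": date(2025, 1, 20),
     "csat": 4.5, "outcome_achieved": True, "open_risks": 0},
    {"signed": date(2025, 1, 13), "first_value": date(2025, 2, 24),
     "csat": 3.0, "outcome_achieved": False, "open_risks": 2},
]

def onboarding_report(customers):
    return {
        # Time To First Value: days from contract signature to first value
        "avg_ttfv_days": mean((c["first_value"] - c["signed"]).days
                              for c in customers),
        # Onboarding CSAT: average satisfaction score during onboarding
        "avg_csat": mean(c["csat"] for c in customers),
        # Share of customers that hit their stated outcome during onboarding
        "outcome_achievement": mean(c["outcome_achieved"] for c in customers),
        # Onboarding Risks: open flags that may predict churn
        "open_risks": sum(c["open_risks"] for c in customers),
    }

report = onboarding_report(customers)
print(report)  # e.g. avg_ttfv_days: 28, avg_csat: 3.75, ...
```

A report like this, refreshed each quarter, is the kind of leading-indicator summary the author argues belongs in leadership meetings and board decks.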
UBS is using OpenAI and Synthesia’s AI models to create virtual avatars of its analysts, turning an AI-generated script based on the analysts’ reports into realistic, short-form videos; it expects to scale to 5,000 videos per year from 1,000 now
UBS is using AI models to create video avatars of its analysts to share with clients. Tapping models from OpenAI and Synthesia, UBS has already built virtual versions of around 35 of its 720 analysts, with plans for a wider deployment. The analysts visit a studio where Synthesia captures their likeness and voices. Then, a language model reads the analyst’s reports, generates a script, and turns it into a realistic AI-generated video. The avatars are being deployed to help save analysts time and in response to the rising popularity of short-form videos driven by the likes of TikTok. “There are two drivers for it: the client driver and the efficiency driver…It is helping you scale your video capabilities in a way that clients are asking you for, and ultimately saving you time to do your research and meet with clients,” says Scott Solomon, head of global research technology at UBS. Signing up for an avatar is optional for analysts, and Solomon concedes the technology is not yet perfect, still struggling with some accents. However, the bank is looking to ramp up production and put out 5,000 videos a year. “We publish about 50,000 documents a year, [but video production] has been fixed at about 1,000 a year, because that’s basically our studio capacity. But the number of views on those videos has gone up dramatically,” Solomon said.
Cardlytics solution allows any merchant with digital channels and a loyalty program to become a publisher on its platform and roll out card-linked offers targeted on purchase data from a third-party vendor
Cardlytics announced the general availability of Cardlytics Rewards Platform (CRP), a new solution that provides publishers the opportunity to enhance their customer loyalty programs with card-linked offers. With CRP, a merchant with digital channels and a loyalty program can now become a publisher on the Cardlytics network and offer more value to their customers. This opens up Cardlytics’ supply to new verticals beyond financial services – such as retail and restaurants – and provides advertisers increased exposure, reach and engagement with consumers where they are already transacting. Publishers can also boost engagement with their customers by incentivizing them to earn rewards on their purchases and improving the shopping experience, helping to create a flywheel for CRP partners. CRP is an extension of Cardlytics’ core platform for financial institution partners, with the same advertiser offers flowing seamlessly to new publisher channels. Offers on CRP are delivered within a publisher’s loyalty program and targeted based on purchase data from a third-party vendor. After opting in to receive offers and connecting their bank account information, customers can activate offers and earn rewards in the form of the publisher’s loyalty currency, such as points or loyalty cash, which can be used for future purchases. Amit Gupta, CEO of Cardlytics, said, “By enabling our advertisers to become publishers, we are unlocking new opportunities for growth and redefining what it means to be a partner in our ecosystem.”
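The offer flow described above, targeting by purchase data and paying rewards in the publisher's loyalty currency, can be sketched in a few lines. All merchant names, fields, and conversion rates below are hypothetical illustrations, not Cardlytics' actual system:

```python
# Hypothetical sketch of card-linked offer targeting: match advertiser offers
# to a customer's purchase history (from a third-party data feed) and credit
# rewards in the publisher's loyalty currency. All names are illustrative.

offers = [
    {"merchant": "coffee_chain", "category": "dining", "cash_back_pct": 5},
    {"merchant": "shoe_store", "category": "apparel", "cash_back_pct": 10},
]

def target_offers(purchase_history, offers):
    """Only surface offers in categories the customer already shops."""
    shopped = {p["category"] for p in purchase_history}
    return [o for o in offers if o["category"] in shopped]

def credit_points(offer, amount, points_per_dollar=100):
    """Convert an offer's cash back into the publisher's loyalty points."""
    return round(amount * offer["cash_back_pct"] / 100 * points_per_dollar)

history = [{"category": "dining", "amount": 18.40}]
matched = target_offers(history, offers)
print([o["merchant"] for o in matched])  # only the dining offer matches
print(credit_points(matched[0], 20.00))  # 5% of $20 -> 100 points
```

The real platform layers opt-in consent, bank-account linking and advertiser billing on top, but targeting by observed spend and paying out in the publisher's own currency is the core loop.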
ShopRite stores deploy VoCoVo’s all-in-one wireless headsets for associates; with full-duplex audio, team members can talk and listen at the same time through an environment noise-canceling (ENC) microphone that’s purpose-built for noisy retail settings
Thirty ShopRite stores in the metro New York area have rolled out VoCoVo Series 5 Pro Headsets. VoCoVo’s all-in-one wireless headsets aim to help retail teams communicate more easily and naturally with co-workers and customers across the store. According to VoCoVo, its Series 5 Pro Headset is a fully integrated communication solution that seamlessly connects retail team members, enhances customer interactions and syncs with other connected store technology. Among its key features are telephony integration, enabling associates to make, receive and transfer customer calls directly from their headsets, and connectivity with curbside pickup services, self-checkout systems and more. These integrations convert real-time notifications into instant voice alerts, ensuring quick and efficient responses from store teams. The wireless headsets provide retail associates, both in-store and beyond, with seamless, secure communication via DECT, a technology for high-quality, scalable voice connectivity. Thanks to full-duplex audio, team members can talk and listen at the same time through an environment noise-canceling (ENC) microphone that’s purpose-built for noisy retail settings. Designed for durability, the waterproof, dustproof and impact-resistant headsets can withstand extreme temperatures while delivering up to 40 hours of standby time per charge.
