Nvidia is launching an AI marketplace that gives developers access to an expanded roster of graphics processing unit (GPU) cloud providers beyond the hyperscalers. Called DGX Cloud Lepton, the service acts as a unified interface linking developers to a decentralized network of cloud providers that offer Nvidia GPUs for AI workloads.

Developers typically rely on hyperscalers such as Amazon Web Services, Microsoft Azure or Google Cloud for GPU access. With GPUs in high demand, Nvidia aims to broaden availability through a wider set of cloud providers. When one provider's GPUs sit idle between jobs, those chips become available in the marketplace for another developer to use. Participating GPU cloud providers include CoreWeave, Crusoe, Lambda and SoftBank, among others.

The move comes as Nvidia looks to address growing frustration among startups, enterprises and researchers over limited GPU availability. Because AI model training requires vast compute resources, especially for large language models and computer vision systems, developers often face long wait times or capacity shortages. Nvidia CEO Jensen Huang said the computing power needed to train the next stage of AI has “grown tremendously.”