
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Nvidia unveils its collaborative “gigawatt AI factories” to support the next gen of AI models based on its Vera Rubin architecture; enables 150% more power transmission by using 800VDC power delivery and 100% liquid cooling

October 15, 2025 //  by Finnovate

Nvidia is collaborating with more than 70 partners on the design of more efficient “gigawatt AI factories” to support the next generation of artificial intelligence models. These factories will be built around Vera Rubin NVL144, an open rack-server architecture with a 100% liquid-cooled design that supports the company’s next-generation Vera Rubin graphics processing units, expected to launch in 2027. A central printed-circuit-board midplane enables faster assembly, and modular expansion bays allow networking and inference capacity to be added as needed, letting operators scale their data centers over time.

Nvidia said it is donating the Vera Rubin NVL144 architecture to the Open Compute Project as an open standard, so that any company can implement it in its own data centers. The architecture is designed to support the roll-out of 800-volt direct current (800 VDC) data centers for the gigawatt era, and Nvidia hopes it will become the foundation of new “AI factories” — data centers optimized for AI workloads. Because Vera Rubin NVL144 is based on the existing Nvidia MGX modular architecture, it is compatible with third-party components and systems from more than 50 ecosystem partners, and data center operators can mix and match components in a modular fashion to customize their AI factories. Nvidia says the flexible design is intended to scale up over time to support advanced reasoning engines and the demands of autonomous AI agents.

Nvidia also pointed to growing ecosystem support for its Nvidia Kyber rack-server design, which will ultimately connect clusters of 576 Rubin Ultra GPUs when those become available. Like Vera Rubin NVL144, Kyber features several innovations in 800 VDC power delivery, liquid cooling and mechanical design.
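The appeal of 800 VDC delivery follows from basic power-transmission arithmetic: for a fixed conductor current (and therefore a fixed copper cross-section), deliverable power scales linearly with voltage, P = V × I. A minimal sketch of that relationship, using illustrative figures rather than Nvidia’s published specifications:

```python
# Deliverable power at a fixed conductor current: P = V * I.
# Comparing a legacy ~415 V AC feed with 800 VDC delivery shows
# how the same copper carries roughly twice the power at the
# higher voltage. All figures here are illustrative assumptions,
# not Nvidia's rack specifications.

def deliverable_power_kw(voltage_v: float, current_a: float) -> float:
    """Power (kW) a feed delivers at a given voltage and current."""
    return voltage_v * current_a / 1000.0

CURRENT_A = 1000.0  # assumed busbar current budget, fixed by conductor size

p_legacy = deliverable_power_kw(415.0, CURRENT_A)  # legacy AC distribution
p_800vdc = deliverable_power_kw(800.0, CURRENT_A)  # 800 VDC delivery

gain_pct = (p_800vdc / p_legacy - 1.0) * 100.0
print(f"415 V: {p_legacy:.0f} kW, 800 V: {p_800vdc:.0f} kW, "
      f"gain: {gain_pct:.0f}%")
```

The headline “150% more power transmission” figure presumably reflects Nvidia’s own baseline and conversion-loss assumptions; the point of the sketch is only the linear voltage–power relationship at fixed current.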


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
