Supermicro introduced its Data Center Building Block Solutions (DCBBS), designed to simplify the deployment of liquid-cooled AI infrastructure. The offering includes servers, storage, networking, racks, liquid cooling systems, software, services, and support. An expansion of Supermicro's System Building Block Solutions, DCBBS adopts a standardized yet flexible solution architecture, vastly expanded in scope to handle the most demanding AI training and inference workloads, enabling easier data center planning, buildout, and operation, all while reducing cost.

"Supermicro's DCBBS enables clients to easily construct data center infrastructure with the fastest time-to-market and time-to-online advantage, deploying in as little as three months," said Charles Liang, president and CEO of Supermicro. "With our total solution coverage, including the design of data center layouts, network topologies, and power and battery backup units, DCBBS simplifies and accelerates AI data center buildouts, leading to reduced costs and improved quality."

DCBBS offers packages of pre-validated, data center-level scalable units, including a 256-node AI Factory DCBBS scalable unit, designed to alleviate the burden of prolonged data center design by providing a streamlined package of floor plans, rack elevations, bills of materials, and more. Supermicro provides comprehensive first-party services to ensure project success, from initial consultation through on-site deployment and continued on-site support. DCBBS is customizable at the system, rack cluster, and data center levels to meet virtually any project requirement.

Together with Supermicro's DLC-2 direct liquid-cooling technology, DCBBS helps customers save up to 40% on power, reduce data center footprint by 60%, and cut water consumption by 40%, leading to a 20% lower TCO. Solutions from Supermicro include up to 256 liquid-cooled 4U Supermicro NVIDIA HGX system nodes, each equipped with eight NVIDIA Blackwell GPUs (2,048 GPUs in total), interconnected with up to 800Gb/s NVIDIA Quantum-X800 InfiniBand or the NVIDIA Spectrum-X Ethernet networking platform. The compute fabric is supported by elastically scalable tiered storage with high-performance PCIe Gen5 NVMe, TCO-optimized data lake nodes, and resilient management nodes for uninterrupted operation.
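As a rough illustration of the scale figures quoted above, the sketch below models the 256-node scalable unit and checks the total GPU count. It is a minimal sketch based only on the numbers cited in the announcement; the class and field names are hypothetical and not part of any Supermicro or NVIDIA software.

```python
# Hypothetical sketch: model a DCBBS-style scalable unit using the figures
# quoted in the announcement (256 nodes, 8 Blackwell GPUs per node,
# up to 800Gb/s fabric links). Names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ScalableUnit:
    nodes: int             # liquid-cooled 4U HGX system nodes
    gpus_per_node: int     # NVIDIA Blackwell GPUs per node
    link_speed_gbps: int   # per-link fabric speed (Quantum-X800 / Spectrum-X)

    @property
    def total_gpus(self) -> int:
        # Cluster-wide GPU count: nodes x GPUs per node
        return self.nodes * self.gpus_per_node

unit = ScalableUnit(nodes=256, gpus_per_node=8, link_speed_gbps=800)
print(unit.total_gpus)  # 2048, matching the figure cited in the announcement
```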