 
 
GPU technology can bring unprecedented performance to a broad spectrum of workloads, delivering up to 5X, 10X, and even 100X improvements in performance and efficiency. These workloads span from the rapidly growing generative AI market to enterprise inferencing, product design, visualization, and the intelligent edge. Supermicro has built a portfolio of workload-optimized systems for optimal GPU performance and efficiency across this broad spectrum of workloads.
 
 
Use Cases
 
• Large Language Models (LLMs)
• Autonomous Driving Training
• Recommender Systems
 
 
Opportunities and Challenges
 
• Continuous growth of data set size
• High performance everything: GPUs, memory, storage and network fabric
• Pool of GPU memory to fit large AI models and interconnect bandwidth for fast training
 

Key Technologies
 
• NVIDIA HGX H100 SXM 8-GPU/4-GPU
• GPU-to-GPU interconnect (NVLink and NVSwitch), up to 900GB/s, 7x greater than PCIe 5.0 (see the sketch after this list)
• Dedicated high-performance, high-capacity GPU memory
• High-throughput networking and storage per GPU, enabling NVIDIA GPUDirect RDMA and GPUDirect Storage
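
To illustrate how the NVLink/NVSwitch fabric is exercised during multi-GPU training, here is a minimal sketch of an NCCL all-reduce, the collective that rides the GPU-to-GPU interconnect. It assumes a PyTorch build with CUDA and NCCL support; the tensor size, master address/port, and process-launch details are illustrative choices, not part of this solution.

```python
# Minimal sketch: NCCL all-reduce across the GPUs in a single HGX node.
# Assumes PyTorch with CUDA/NCCL; sizes and settings are illustrative.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int):
    # One process per GPU; NCCL routes the collective over NVLink/NVSwitch.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # 1 GiB of FP16 data per GPU, summed across all GPUs.
    x = torch.ones(512 * 1024 * 1024, dtype=torch.float16, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if rank == 0:
        print(f"all-reduce complete across {world_size} GPUs")
    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()  # 4 or 8 on HGX H100 systems
    mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```

The same script runs unchanged on the 4-GPU and 8-GPU configurations, since the GPU count is discovered at runtime.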
 

Solution Stack
 
• DL Frameworks: TensorFlow, PyTorch (a minimal training sketch follows this list)
• Transformers: BERT, GPT, Vision Transformer
• NVIDIA AI Enterprise Frameworks (NVIDIA NeMo, Metropolis, Riva, Morpheus, Merlin)
• NVIDIA Base Command (infrastructure software libraries, workload orchestration, cluster management)
• High performance storage (NVMe) for training cache
• Scale-out storage for raw data (data lake)
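
As a concrete starting point for the DL frameworks listed above, the following is a minimal sketch of data-parallel training with PyTorch DistributedDataParallel, launched with torchrun across the GPUs of a single HGX node. The stand-in Transformer model, synthetic batches, and hyperparameters are illustrative assumptions, not a prescribed configuration.

```python
# Minimal sketch: single-node data-parallel training with PyTorch DDP.
# Launch (illustrative): torchrun --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")             # gradient all-reduce uses NVLink/NVSwitch
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Stand-in for a BERT/GPT-style network; real workloads would load a model
    # from one of the frameworks listed above.
    layer = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True)
    model = DDP(torch.nn.TransformerEncoder(layer, num_layers=8).cuda(),
                device_ids=[local_rank])

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for step in range(10):                      # synthetic batches stand in for a data loader
        batch = torch.randn(8, 512, 1024, device="cuda")
        loss = model(batch).pow(2).mean()
        loss.backward()
        opt.step()
        opt.zero_grad()
        if local_rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In a real training pipeline, the NVMe training cache and scale-out data lake above would feed a DataLoader with a DistributedSampler in place of the synthetic batches.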
 
 

 
 
 
HGX H100 Systems
Multi-Architecture Flexibility with Future-Proof, Open-Standards-Based Design
 
 
 
Benefits & Advantages
 
• High performance GPU interconnect up to 900GB/s, 7x the bandwidth of PCIe 5.0
• Superior thermal design supports maximum power/performance CPUs and GPUs
• Dedicated networking and storage per GPU with up to double the NVIDIA GPUDirect throughput of the previous generation
• Modular architecture for storage and I/O configuration flexibility with front and rear I/O options
 
 
 
Key Features
• 4 or 8 next-generation H100 SXM GPUs with NVLink, NVSwitch interconnect
• Dual 4th Gen Intel® Xeon® Scalable processors or AMD EPYC™ 9004 series processors
• Supports PCIe 5.0, DDR5 and Compute Express Link (CXL) 1.1+
• Innovative modular architecture designed for flexibility and future-proofing in 8U or 4U form factors
• Optimized thermal capacity and airflow to support CPUs up to 350W and GPUs up to 700W with air cooling and optional liquid cooling
• 1:1 GPU-to-NIC ratio via PCIe 5.0 x16 networking slots, up to 400Gbps each, supporting GPUDirect Storage and RDMA, plus up to 16 U.2 NVMe drive bays
 
 

 
Medium
SYS-421GU-TNXR
 
• 4U 4-GPU
• NVIDIA HGX H100 SXM 4-GPU
• 6 U.2 NVMe Drives
• 8 PCIe 5.0 x16 networking slots
 

 
Large
SYS-821GE-TNHR / AS-8125GS-TNHR
 
• 8U 8-GPU
• NVIDIA HGX H100 SXM 8-GPU
• 16 U.2 NVMe Drives
• 8 PCIe 5.0 x16 networking slots
 
 
 
 
 
 
 
 
 
We offer the most appropriate GPU solutions for your business needs. Call now at 1-800-526-8650 or click the "Quote Form" button to get a detailed solution quote.
 
 
SYS-521GE-TNRT

• Chipset: Intel® C741
• CPU: 4th Gen Intel® Xeon® Scalable Processors
• Memory: Up to 8TB DIMM
• HDD: 24x 2.5" hot-swap HDD bays
• Power Supply: 2700W Redundant PSU
• Price: Please Inquire
