Leading AI computing provider fueling innovations across AI scenarios
The most comprehensive AI server portfolio, covering solutions from single node to cluster, full-stack management software, and application optimization services.
Leading performance in model training and inference, fueling R&D and applications across AI scenarios.
Trusted worldwide by CSPs and leading high-tech companies in AI+Science, AI+Graphics, and AIGC.
Cooperating with leading AI chip vendors to develop mature OAI solutions.
| Main Application Scenarios | Model | Height | Processor | Accelerator Card | Memory | Storage | Cooling Mode | Details |
|---|---|---|---|---|---|---|---|---|
| AI inference, deep learning, metaverse, AIGC, AI+Science | KR4268V2 | 4U | 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W | Supports 8x dual-slot FHFL PCIe GPU cards | 32x DDR5 DIMMs, up to 4800 MT/s | 24x 2.5-inch or 12x 3.5-inch SAS/SATA drive bays in the front, supports up to 16x NVMe or E3.S; built-in 2x M.2 NVMe/SATA SSDs | Air cooling | More |
| | | | 2x AMD EPYC™ 9004 Series Processors, max cTDP 400W | Up to 10x full-height full-length double-width PCIe GPU cards | 24x DDR5 DIMMs, up to 4800 MT/s | 24x 2.5-inch or 12x 3.5-inch SAS/SATA drive bays in the front, supports up to 16x NVMe or E3.S; built-in 2x M.2 NVMe SSDs | | |
| AI training, AIGC, metaverse | KR6288V2 | 6U | 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W | 1x NVIDIA HGX Hopper 8-GPU module, TDP up to 700W per GPU | 32x DDR5 DIMMs, up to 4800 MT/s | 24x 2.5-inch SSDs, up to 16x NVMe U.2 | Air cooling | More |
| | | | 2x AMD EPYC™ 9004 Series Processors, max cTDP 400W | 1x NVIDIA HGX Hopper 8-GPU module, TDP up to 700W per GPU | 24x DDR5 DIMMs, up to 4800 MT/s | 24x 2.5-inch SSDs, up to 16x NVMe U.2 | | |
| Pre-training, fine-tuning | KR6298V2 | 6U | 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W | 8x Intel Gaudi2/PVC OAMs, TDP up to 600W per OAM | 32x DDR5 DIMMs, up to 4800 MT/s | 24x 2.5-inch SSDs, up to 16x NVMe U.2 | Air cooling | More |
MotusAI | Fine-grained GPU scheduling, data acceleration strategies, efficient distributed training, fault-tolerance mechanisms | More
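
The "efficient distributed training" these systems target typically corresponds to standard data-parallel workflows spread across the node's GPUs. Below is a minimal sketch of such a job using PyTorch DistributedDataParallel on a single 8-GPU node such as the KR6288V2; it assumes a standard PyTorch/NCCL environment, uses a placeholder model and synthetic data, and does not show MotusAI's own scheduling or fault-tolerance APIs.

```python
# Minimal data-parallel training sketch for a single 8-GPU node.
# Launch with: torchrun --nproc_per_node=8 train.py
# The model and dataset are illustrative placeholders, not part of MotusAI.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Synthetic dataset; DistributedSampler shards it across the 8 ranks.
    data = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)                        # reshuffle per epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()             # gradients all-reduced by DDP
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Scaling the same script from one node to a cluster would only add torchrun's multi-node rendezvous arguments; in practice a platform layer such as MotusAI would handle placing the job on the GPUs and restarting it on failure.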