Supermicro Previews Performance Intel-Based X14 Servers for AI, HPC, and Critical Enterprise Workloads

Architecture upgrade includes performance-optimized CPUs, next-gen GPU support, upgraded MRDIMM memory, 400GbE networking, storage options including E1.S and E3.S drives, and direct-to-chip liquid cooling, based on upcoming Xeon 6900 series processors with P-cores.

Supermicro, Inc. is previewing re-designed X14 server platforms which will leverage next-gen technologies to maximize performance for compute-intensive workloads and applications.


Building on the success of the company’s optimized X14 servers that launched in June 2024, these systems feature upgrades across the board, supporting a never-before-seen 256 performance cores (P-cores) in a single node, memory support for MRDIMMs at up to 8,800MT/s, and compatibility with next-gen SXM, OAM, and PCIe GPUs. This combination can accelerate AI and compute as well as significantly reduce the time and cost of large-scale AI training, high-performance computing, and complex data analytics tasks. Approved customers can secure early access to complete, full-production systems via the firm’s Early Ship Program or test them remotely with Supermicro JumpStart.

“We continue to add to our already comprehensive Data Center Building Block solutions with these new platforms, which will offer unprecedented performance and new advanced features,” said Charles Liang, president and CEO. “Supermicro is ready to deliver these high-performance solutions at rack-scale with the industry’s most comprehensive direct-to-chip liquid cooling and total rack integration services, and a global manufacturing capacity of up to 5,000 racks per month, including 1,350 liquid-cooled racks. With our worldwide manufacturing capabilities, we can deliver fully optimized solutions which accelerate our time-to-delivery like never before, while also reducing TCO.”

These X14 systems feature re-designed architectures, including 10U and multi-node form factors, to enable support for next-gen GPUs and higher CPU densities, updated memory slot configurations with 12 memory channels per CPU, and new MRDIMMs which provide up to 37% better memory performance compared to DDR5-6400 DIMMs. In addition, upgraded storage interfaces will support higher drive densities, and more systems will have liquid cooling integrated directly into the server architecture.
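
The quoted 37% uplift tracks the raw transfer-rate ratio between MRDIMMs at 8,800MT/s and DDR5-6400. As a rough, back-of-the-envelope illustration only (theoretical peaks derived from the 12-channel configuration described above, not measured or vendor-stated bandwidth figures), the Python sketch below reproduces that ratio and the implied per-socket peak:

    # Back-of-the-envelope memory math (illustrative assumptions, not vendor figures):
    # peak per-socket bandwidth = channels x transfer rate x 8 bytes per 64-bit transfer.
    CHANNELS_PER_CPU = 12        # per the X14 configuration described above
    BYTES_PER_TRANSFER = 8       # one 64-bit DDR channel, ECC bits not counted

    def peak_bandwidth_gbs(mt_per_s: int) -> float:
        """Theoretical peak per-socket memory bandwidth in GB/s."""
        return CHANNELS_PER_CPU * mt_per_s * 1e6 * BYTES_PER_TRANSFER / 1e9

    ddr5_6400 = peak_bandwidth_gbs(6400)    # ~614 GB/s theoretical peak
    mrdimm_8800 = peak_bandwidth_gbs(8800)  # ~845 GB/s theoretical peak
    print(f"uplift: {mrdimm_8800 / ddr5_6400 - 1:.1%}")  # ~37.5%, in line with the claim above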

Additions to the Supermicro X14 family comprise more than 10 systems, several of which are completely new architectures, in 3 distinct, workload-specific categories:

  • GPU-optimized platforms designed for pure performance and enhanced thermal capacity to support the highest-wattage GPUs. System architectures have been built from the ground up for large-scale AI training, LLMs, GenAI, 3D media, and virtualization applications.
  • High compute-density multi-nodes, including SuperBlade and the all-new FlexTwin, which leverage direct-to-chip liquid cooling to significantly increase the number of performance cores in a standard rack compared to previous generations of systems.
  • Hyper rackmounts combine single or dual socket architectures with flexible I/O and storage configurations in traditional form factors to help enterprises and data centers scale up and out as their workloads evolve.

The X14 performance-optimized systems will support the soon-to-be-released Xeon 6900 series processors with P-cores and will also offer socket compatibility with Xeon 6900 series processors with E-cores in 1Q25. This designed-in flexibility allows systems to be optimized for either performance-per-core or performance-per-watt.

“The new Intel Xeon 6900 series processors with P-cores are our most powerful ever, with more cores and exceptional memory bandwidth and I/O to achieve new degrees of performance for AI and compute-intensive workloads,” said Ryan Tabrah, VP and GM, Xeon 6, Intel Corp. “Our continued partnership with Supermicro will result in some of the industry’s most powerful systems that are ready to meet the ever-heightening demands of modern AI and high-performance computing.”

When configured with Xeon 6900 series processors with P-cores, Supermicro systems support FP16 instructions on the built-in Intel AMX accelerator to further enhance AI workload performance. These systems include 12 memory channels per CPU with support for both DDR5-6400 and MRDIMMs at up to 8,800MT/s, support CXL 2.0, and offer more extensive support for high-density, industry-standard EDSFF E1.S and E3.S NVMe drives.
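
For readers who want to confirm AMX FP16 availability before enabling such a code path, the minimal sketch below is one way to check. It is an assumption-laden illustration: it presumes a Linux host, where the kernel exposes CPU feature flags in /proc/cpuinfo and recent kernels report AMX support with flags such as amx_tile and amx_fp16.

    # Minimal sketch, assuming a Linux host: the kernel exposes CPU feature flags in
    # /proc/cpuinfo, and recent kernels report AMX support as "amx_tile" / "amx_fp16".
    def cpu_flags() -> set[str]:
        """Return the feature-flag set of the first CPU listed in /proc/cpuinfo."""
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("AMX tiles:", "amx_tile" in flags)   # base AMX tile support
    print("AMX FP16 :", "amx_fp16" in flags)   # FP16 tile instructions referenced above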

Liquid cooling solutions
Complementing this expanded X14 product portfolio are the firm’s rack-scale integration and liquid cooling capabilities. With global manufacturing capacity, extensive rack-scale integration and testing facilities, and a suite of management software solutions, the company designs, builds, tests, validates, and delivers complete solutions at any scale in a matter of weeks.

In addition, the company offers a complete in-house developed liquid cooling solution including cold plates for CPUs, GPUs and memory, cooling distribution units, cooling distribution manifolds, hoses, connectors, and cooling towers. Liquid cooling can be included in rack-level integrations to further increase system efficiency, reduce instances of thermal throttling, and lower both the TCO and TCE of data center deployments.


Upcoming Supermicro X14 performance-optimized systems include:

  • GPU-optimized – The highest performance X14 systems designed for large-scale AI training, large language models (LLMs), GenAI and HPC, and supporting 8 of the latest-gen SXM5 and SXM6 GPUs. These systems are available in air-cooled or liquid-cooled configurations.

  • PCIe GPU – Designed for maximum GPU flexibility, supporting up to 10 double-width PCIe 5.0 accelerator cards in a thermally-optimized 5U chassis. These servers are for media, collaborative design, simulation, cloud gaming, and virtualization workloads.

  • Intel Gaudi 3 AI Accelerators – Supermicro also plans to deliver the industry’s 1st AI server based on the Intel Gaudi 3 accelerator hosted by Xeon 6 processors. The system is expected to increase efficiency and lower the cost of large-scale AI model training and AI inferencing. The system features 8x Gaudi 3 accelerators on an OAM universal baseboard, 6x integrated OSFP ports for cost-effective scale-out networking, and an open platform designed to use a community-based, open-source software stack, requiring no software licensing costs.

  • SuperBlade – The X14 6U high-performance, density-optimized, and energy-efficient SuperBlade maximizes rack density, with up to 100 servers and 200 GPUs per rack. Optimized for AI, HPC, and other compute-intensive workloads, each node features air cooling or direct-to-chip liquid cooling to maximize efficiency and achieve the lowest PUE and best TCO, as well as connectivity through up to 4 integrated Ethernet switches with 100Gb/s uplinks and front I/O supporting a range of flexible networking options up to 400G InfiniBand or 400GbE per node.

  • FlexTwin – The X14 FlexTwin architecture is designed to provide maximum compute power and density in a multi-node configuration with up to 24,576 performance cores in a 48U rack (a rough breakdown of that figure follows this list). Optimized for HPC and other compute-intensive workloads, each node is exclusively direct-to-chip liquid cooled to maximize efficiency and reduce instances of CPU thermal throttling, and offers low-latency HPC front and rear I/O supporting a range of flexible networking options up to 400Gb/s per node.

  • Hyper – X14 Hyper is a rackmount platform designed to deliver the highest performance for demanding AI, HPC, and enterprise applications, with single or dual socket configurations supporting double-width PCIe GPUs for maximum workload acceleration. Both air cooling and direct-to-chip liquid cooling models are available to facilitate the support of top-bin CPUs without thermal limitations and reduce data center cooling costs while also increasing efficiency.
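
As a quick sanity check on the FlexTwin density figure above, the arithmetic below reconstructs the 24,576-core number under assumed parameters that are not stated in the announcement: a 2U chassis housing 4 nodes, with 2 sockets of 128 P-cores per node.

    # Hypothetical rack-density arithmetic for the 24,576-core FlexTwin figure.
    # Assumed (not announcement-stated): 2U chassis with 4 nodes, 2 sockets per node,
    # 128 P-cores per socket.
    cores_per_node = 2 * 128                  # dual-socket, 128 P-cores each = 256
    nodes_per_rack = (48 // 2) * 4            # 24 chassis of 2U x 4 nodes = 96 nodes
    print(nodes_per_rack * cores_per_node)    # 96 * 256 = 24,576 cores in a 48U rack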
