NVIDIA GTC 2025: Asus Unveils AI POD Featuring NVIDIA GB300 NVL72
Also showcasing latest AI servers in Blackwell and HGX family line-up
This is a Press Release edited by StorageNewsletter.com on March 24, 2025, at 2:01 pm.
At NVIDIA GTC 2025, Asus (AsusTeK Computer Inc.) is showcasing its latest Asus AI POD built on the NVIDIA GB300 NVL72 platform.
The company also announced that it has already garnered substantial order placements, marking a significant milestone in the technology industry. The company’s XA NB3I-E12, featuring the HGX B300 NVL16, delivers breakthrough performance to meet the evolving needs of every data center.
At the forefront of AI innovation, the firm also presents the latest AI servers in the Blackwell and HGX family line-up. These include the XA NB3I-E12 powered by the NVIDIA HGX B300 NVL16, the ESC NB8-E11 with the NVIDIA HGX B200 8-GPU, the ESC N8-E11V with the NVIDIA HGX H200, and the ESC8000A-E13P/ESC8000-E12P, which will support the NVIDIA RTX PRO 6000 Blackwell Server Edition with MGX architecture. Asus is positioned to provide comprehensive infrastructure solutions in combination with the NVIDIA AI Enterprise and NVIDIA Omniverse platforms, empowering clients to accelerate their time to market.
AI POD with NVIDIA GB300 NVL72
By integrating the immense power of the NVIDIA GB300 NVL72 server platform, Asus AI POD offers massive processing capabilities, empowering enterprises to tackle large-scale AI challenges with ease. Built with NVIDIA Blackwell Ultra, the GB300 NVL72 leads the new era of AI with optimized compute, increased memory, and high-performance networking, delivering breakthrough performance.
It is equipped with 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs in a rack-scale design that delivers increased AI FLOPS and provides up to 40TB of high-speed memory per rack. It also includes networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet, SXM7 and SOCAMM modules designed for serviceability, a 100% liquid-cooled design, and support for trillion-parameter LLM inference and training.
RS501A-E12-RS12U server front and rear
The company has shown expertise in building NVIDIA GB200 NVL72 infrastructure from the ground up. Also on show is the RS501A-E12-RS12U, a server built on a software-defined storage (SDS) architecture to achieve peak computing efficiency. This SDS server effectively reduces the latency of data training and inferencing, and complements the NVIDIA GB200 NVL72. The firm presents an extensive service scope, from hardware to cloud-based applications, covering architecture design, advanced cooling solutions, rack installation, large-scale validation and deployment, and AI platforms, harnessing its expertise to empower clients to achieve AI infrastructure excellence.
Kaustubh Sanghani, VP, GPU products, NVIDIA, commented: “NVIDIA is working with Asus to drive the next wave of innovation in data centers. Leading Asus servers combined with the Blackwell Ultra platform will accelerate training and inference, enabling enterprises to unlock new possibilities in areas such as AI reasoning and agentic AI.”
GPU servers for heavy GenAI workloads
Asus also showcased a series of NVIDIA-certified servers, supporting applications and workflows built with the NVIDIA AI Enterprise and Omniverse platforms.
ESC N8-E11V server
The company’s 10U ESC NB8-E11 is equipped with the NVIDIA Blackwell HGX B200 8-GPU for unmatched AI performance. The XA NB3I-E12 features the HGX B300 NVL16, with increased AI FLOPS, 2.3TB of HBM3e memory, and networking platform integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet. Blackwell Ultra delivers performance for AI reasoning, agentic AI, and video inference applications to meet the evolving needs of every data center.
Finally, the 7U ESC N8-E11V dual-socket server is powered by 8x NVIDIA H200 GPUs, supports both air-cooled and liquid-cooled options, and is engineered with effective cooling and innovative components.
Scalable servers to master AI inference optimization
ESC8000A-E13P front and rear
Asus also presents server and edge AI options for AI inferencing: the ESC8000 series embedded with the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. The ESC8000-E12P is a high-density 4U server for 8 dual-slot high-end NVIDIA H200 GPUs and supports the software suite of NVIDIA AI Enterprise and Omniverse. It is also fully compatible with the NVIDIA MGX architecture to ensure flexible scalability and fast, large-scale deployment. Additionally, the ESC8000A-E13P, a 4U NVIDIA MGX server, supports 8 dual-slot NVIDIA H200 GPUs and provides integration, optimization, and scalability for modern data centers and dynamic IT environments.
Groundbreaking AI supercomputer, Asus Ascent GX10
Ascent GX10
The company also announces its AI supercomputer, Ascent GX10, in a compact package. Powered by the state-of-the-art NVIDIA GB10 Grace Blackwell Superchip, it delivers 1,000 AI TOPS of performance, making it ideal for demanding workloads. Ascent GX10 is equipped with a Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, supporting AI models with up to 200 billion parameters. This device places the formidable capabilities of a petaflop-scale AI supercomputer directly onto the desks of developers, AI researchers, and data scientists around the globe.
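As a rough, unofficial illustration of how a 200-billion-parameter model can sit within the GX10's 128GB of memory, the minimal sketch below estimates the weight footprint under an assumed 4-bit quantization; the precision actually used by the platform is not specified in this announcement.

```python
# Back-of-the-envelope estimate (assumption: weights quantized to 4 bits per
# parameter; illustrative only, not an Asus/NVIDIA specification).
params = 200e9                  # 200-billion-parameter model
bits_per_param = 4              # assumed 4-bit precision
weight_bytes = params * bits_per_param / 8

print(f"Approx. weight footprint: {weight_bytes / 1e9:.0f} GB")            # ~100 GB
print(f"Fits within 128 GB of unified memory: {weight_bytes / 1e9 <= 128}")  # True
```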
PE8000G
Asus IoT showcases its Edge AI computers at GTC, featuring the PE2100N with NVIDIA Jetson AGX Orin, delivering 275 TOPS for GenAI and robotics. The PE8000G supports dual 450W NVIDIA RTX GPUs, excelling in real-time perception AI. With rugged designs and wide operating temperature ranges, both are ideal for computer vision, autonomous vehicles, and intelligent video analytics.
Availability:
Asus AI infrastructure solutions and servers are available worldwide.