
SC24: Weka AI Storage Cluster Built On Nvidia Grace CPU Superchips

Storage server from Supermicro, powered by Weka Data Platform software and the Arm Neoverse V2 cores of the NVIDIA Grace CPU Superchip, with ConnectX-7 and BlueField-3 networking, to accelerate enterprise AI workloads

At SC24, WekaIO, Inc. previewed the industry's 1st high-performance storage solution for the NVIDIA Grace CPU Superchip.


The solution will run on a storage server from Supermicro, Inc. powered by Weka Data Platform software and Arm Neoverse V2 cores using the NVIDIA Grace CPU Superchip and ConnectX-7 and BlueField-3 networking to accelerate enterprise AI workloads with unmatched performance density and power efficiency.

Fueling the next generation of AI innovation
Today’s AI and HPC workloads demand lightning-fast data access, but most data centers face increasing space and power constraints.

Nvidia Grace integrates the level of performance offered by an x86-64 2-socket workstation or server platform into a single module. Grace CPU Superchips are powered by 144 high-performance Arm Neoverse V2 cores that deliver 2x the energy efficiency of traditional x86 servers. Nvidia ConnectX-7 NICs and BlueField-3 SuperNICs feature purpose-built RDMA/RoCE acceleration, delivering high-throughput, low-latency network connectivity at up to 400Gb/s speeds. The Weka Data Platform's zero-copy software architecture, running on the Supermicro Petascale storage server, minimizes I/O bottlenecks and reduces AI pipeline latency. This enhances GPU utilization and accelerates AI model training and inference, improving time to 1st token, discoveries, and insights while reducing power consumption and associated costs.

Key benefits of the solution include:

  • Extreme speed and scalability for enterprise AI: The Nvidia Grace CPU Superchip, with 144 high-performance Arm Neoverse V2 cores connected by a high-performance custom-designed Nvidia Scalable Coherency Fabric, delivers the performance of a dual-socket x86 CPU server at half the power. The Nvidia ConnectX-7 NICs and Nvidia BlueField-3 SuperNICs provide high-performance networking, essential for enterprise AI workloads. Paired with the Weka Data Platform’s AI-native architecture, which accelerates time to 1st token by up to 10x, the solution ensures optimal performance across AI data pipelines at virtually any scale.

  • Optimal resource utilization: The Weka Data Platform, combined with Grace CPUs’ LPDDR5X memory architecture, ensures up to 1TB/s of memory bandwidth and data flow, eliminating bottlenecks. Integrating Weka’s distributed architecture and kernel-bypass technology, organizations can achieve faster AI model training, reduced epoch times, and higher inference speeds, making it the ideal solution for scaling AI workloads efficiently.

  • Energy and space efficiency: The Weka Data Platform delivers 10-50x increased GPU stack efficiency to handle large-scale AI and HPC workloads. Additionally, through data copy reduction and cloud elasticity, the Weka platform can shrink data infrastructure footprints by 4-7x and reduce carbon output – avoiding up to 260 tons of CO2e/PB stored annually and lowering energy costs by 10x. Paired with the Grace CPU Superchip’s 2x energy efficiency compared to leading x86 servers, customers can do more with less, meeting sustainability goals while boosting AI performance.

“AI is transforming how enterprises around the world innovate, create, and operate, but the sharp increase in its adoption has drastically increased data center energy consumption, which is expected to double by 2026, according to the International Energy Agency,” said Nilesh Patel, CPO, Weka. “Weka is excited to partner with Nvidia, Arm, and Supermicro to develop high-performance, energy-efficient solutions for next-generation data centers that drive enterprise AI and high-performance workloads while accelerating the processing of large amounts of data and reducing time to actionable insights.”

“Weka has developed a powerful storage solution with Supermicro that integrates seamlessly with the Nvidia Grace CPU Superchip to improve the efficiency of at-scale, data-intensive AI workloads. The solution will provide fast data access while reducing energy consumption, enabling data-driven organizations to turbocharge their AI infrastructure,” said Ivan Goldwasser, director, data center CPUs, Nvidia Corp.

“Supermicro’s upcoming ARS-121L-NE316R Petascale storage server is the first storage-optimized server using the Nvidia Grace CPU Superchip,” said Patrick Chiu, senior director, storage product management, Supermicro, Inc. “The system design features 16 high-performance Gen5 E3.S NVMe SSD bays along with 3 PCIe Gen 5 networking slots, which support up to 2 Nvidia ConnectX-7 or BlueField-3 SuperNIC networking adapters and one OCP 3.0 network adapter. The system is ideal for high-performance storage workloads like AI, data analytics, and hyperscale cloud applications. Our collaboration with Nvidia and Weka has resulted in a data platform enabling customers to make their data centers more power efficient while adding new AI processing capabilities.”

“AI innovation requires a new approach to silicon and system design that balances performance with power efficiency. Arm is proud to be working with Nvidia, Weka and Supermicro to deliver a highly performant enterprise AI solution that delivers exceptional value and uncompromising energy efficiency,” said David Lecomber, director, HPC, Arm Ltd.

The storage solution from Weka and Supermicro using Nvidia Grace CPU Superchips will be commercially available in early 2025.
