
From Nvidia, Mellanox 400Gb/s IB for Exascale AI Supercomputing

The 7th generation of Mellanox IB provides low latency, doubles data throughput with NDR 400Gb/s, and adds In-Network Computing engines for additional acceleration.

Nvidia Corp. introduced the next generation of Mellanox 400G IB, giving AI developers and scientific researchers fast networking performance.

Nvidia Mellanox InfiniBand NDR 400G

As computing requirements continue to grow in areas such as drug discovery, climate research and genomics, Mellanox 400G IB is accelerating this work through a leap in performance offered on the world’s only fully offloadable, in-network computing platform.

The 7th generation of Mellanox IB provides low latency, doubles data throughput with NDR 400Gb/s, and adds the company’s In-Network Computing engines to provide additional acceleration.

Manufacturers – including Atos, Dell Technologies, Fujitsu, Gigabyte, Inspur, Lenovo and Supermicro – plan to integrate Mellanox 400G IB into their enterprise solutions and HPC offerings. These commitments are complemented by support from storage infrastructure partners including DDN (DataDirect Networks, Inc.) and IBM Storage.

“The most important work of our customers is based on AI and increasingly complex applications that demand faster, smarter, more scalable networks,” said Gilad Shainer, SVP, networking, Nvidia. “The Nvidia Mellanox 400G IB’s massive throughput and smart acceleration engines let HPC, AI and hyperscale cloud infrastructures achieve unmatched performance with less cost and complexity.”

This announcement builds on Mellanox IB’s lead as the industry’s most robust solution for AI supercomputing. The NDR 400G IB offers 3x the switch port density and boosts AI acceleration power by 32x. In addition, it increases aggregated bi-directional switch system throughput 5x, to 1.64Pb/s, enabling users to run larger workloads with fewer constraints.

Expanding ecosystem for expanding workloads
Early interest in the next gen of Mellanox IB is coming from large scientific research organizations.

“Microsoft Azure’s partnership with NVIDIA Networking stems from our shared passion for helping scientists and researchers drive innovation and creativity through scalable HPC and AI. In HPC, Azure HBv2 VMs are the first to bring HDR IB to the cloud and achieve supercomputing scale and performance for MPI customer applications, with demonstrated scaling to eclipse 80,000 cores for MPI HPC,” said Nidhi Chappell, head, product, Azure HPC and AI, Microsoft Corp. “In AI, to meet the high-ambition needs of AI innovation, the Azure NDv4 VMs also leverage HDR IB with 200Gb/s per GPU, a massive total of 1.6Tb/s of interconnect bandwidth per VM, and scale to thousands of GPUs under the same low-latency IB fabric to bring AI supercomputing to the masses. Microsoft applauds the continued innovation in NVIDIA’s Mellanox IB product line, and we look forward to continuing our strong partnership together.”
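
As a quick consistency check on the bandwidth figures in the quote above, the short Python sketch below multiplies the quoted per-GPU rate by an assumed count of eight GPUs per VM; the GPU count is an inference from the quoted numbers, not something stated in the article.

# Sanity check on the Azure NDv4 bandwidth figures quoted above.
# The 8-GPU count is an assumption inferred from 1.6 Tb/s / 200 Gb/s.
per_gpu_gbps = 200                       # HDR InfiniBand per GPU, as quoted
gpus_per_vm = 8                          # assumed GPU count per VM
vm_total_tbps = per_gpu_gbps * gpus_per_vm / 1_000
print(f"Per-VM interconnect bandwidth: {vm_total_tbps:.1f} Tb/s")  # -> 1.6 Tb/s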

“High-performance interconnects are cornerstone technologies required for exascale and beyond. Los Alamos National Laboratory continues to be at the forefront of HPC networking technologies,” said Steve Poole, chief architect, next-gen platforms, Los Alamos National Laboratory. “The Lab will continue its relationship with Nvidia in evaluating and analyzing their latest 400Gb/s technology aimed at solving the diverse workload requirements at Los Alamos.”

“Amid the new age of exascale computing, researchers and scientists are pushing the limits of applying mathematical modeling to quantum chemistry, molecular dynamics and civil safety,” said Professor Thomas Lippert, head, Jülich Supercomputing Centre. “We are committed to leveraging the next gen of Mellanox IB to further our track record of building Europe’s leading, next-gen HPCs.”

“IB continues to maintain its pace of innovation and performance, underlining the differentiation that has made it the most commonly used high-performance server and storage interconnect for HPC and AI systems,” said Addison Snell, CEO, Intersect360 Research. “As applications continue to demand increased network throughput, high-performance solutions such as Nvidia Mellanox 400G IB have the potential to keep expanding into new use cases and markets.”

Nvidia Mellanox 400G InfiniBand

Product specs and availability
Offloading operations is crucial for AI workloads. The 3rd gen Mellanox SHARP technology allows deep learning training operations to be offloaded and accelerated by the IB network, resulting in 32x higher AI acceleration power. When combined with the company’s Magnum IO software stack, it provides out-of-the-box accelerated scientific computing.
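
To make the offload idea concrete, here is a minimal sketch of the kind of collective operation SHARP is designed to accelerate: an allreduce across training nodes. The use of mpi4py and NumPy is an assumption for illustration; the article does not name an API. On a SHARP-enabled fabric the reduction can be executed inside the switch ASICs, while the application-level call stays the same.

# Minimal allreduce sketch (illustrative only, assuming mpi4py + NumPy).
# SHARP-capable InfiniBand fabrics can perform this reduction in-network;
# the host-side code below is unchanged either way.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank holds a local gradient buffer (dummy data here).
local_grad = np.full(4, float(rank), dtype=np.float64)
summed = np.empty_like(local_grad)

# Allreduce: every rank receives the element-wise sum across all ranks.
comm.Allreduce(local_grad, summed, op=MPI.SUM)

if rank == 0:
    print("reduced gradients:", summed)

Run, for example, with: mpirun -np 4 python allreduce_sketch.py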

Edge switches, based on the Mellanox IB architecture, carry an aggregated bi-directional throughput of 51.2Tb/s, with a landmark capacity of more than 66.5 billion packets per second. The modular switches, based on Mellanox IB, will carry up to an aggregated bi-directional throughput of 1.64Pb/s, 5x higher than the last gen.
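
As a rough back-of-the-envelope check of these figures, the sketch below assumes 64 NDR 400Gb/s ports per edge switch and about 2,048 ports per modular chassis; the port counts are assumptions for illustration, not stated in the article. Bi-directional throughput counts both directions of each port.

# Back-of-the-envelope check of the quoted switch throughput figures.
# Port counts are illustrative assumptions, not taken from the article.
NDR_PORT_GBPS = 400

edge_ports = 64        # assumed edge-switch port count
modular_ports = 2048   # assumed modular-chassis port count

edge_bidir_tbps = edge_ports * NDR_PORT_GBPS * 2 / 1_000
modular_bidir_pbps = modular_ports * NDR_PORT_GBPS * 2 / 1_000_000

print(f"Edge switch: {edge_bidir_tbps:.1f} Tb/s bi-directional")        # ~51.2 Tb/s
print(f"Modular switch: {modular_bidir_pbps:.2f} Pb/s bi-directional")  # ~1.64 Pb/s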

The Mellanox IB architecture is based on industry standards to ensure backward and forward compatibility and protect data center investments. Solutions based on the architecture are expected to sample in 2Q21.

Read also:
Nvidia Finally Completes Acquisition of Mellanox for $7 Billion
Combining compute and networking for HPC, no layoffs according to Nvidia CEO [with our comments]
April 29, 2020 | Press Release
