Saratoga Speed and Mellanox Help Lawrence Livermore National Laboratory
On modest-sized computing platforms integrating 50TB of flash storage
This is a Press Release edited by StorageNewsletter.com on November 24, 2014 at 2:51 pm

Computer scientists at Lawrence Livermore National Laboratory (LLNL) have combined research in graph algorithms and data-intensive runtime systems to achieve record-breaking results on the Graph500 benchmark.
The scientists’ work has been enabled by technology supplied by Saratoga Speed, Inc. and Mellanox Technologies, Ltd.
The results achieved by the LLNL team demonstrate the ability to solve large graph problems on modest-sized computing platforms by integrating flash storage into the memory hierarchy of these systems. LLNL’s external-memory graph framework, HavoqGT, and the Data Intensive Memory Mapped Runtime (DI-MMAP) were the basis for the Graph500 calculation.
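The core idea behind a memory-mapped runtime like DI-MMAP is that a dataset living on flash is mapped into the process's address space and accessed with ordinary loads and stores, as if it were DRAM. The sketch below illustrates only that general concept using Python's standard `mmap` module; it is not the DI-MMAP runtime, and a tiny temporary file stands in for a multi-terabyte flash volume.

```python
import mmap
import os
import struct
import tempfile

# A small stand-in "graph data" file; in the LLNL setup this would be
# a multi-terabyte dataset on network-attached flash.
path = os.path.join(tempfile.mkdtemp(), "graph.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<4q", 10, 20, 30, 40))  # four 64-bit integers

with open(path, "r+b") as f:
    # Map the whole file into the address space; reads and writes now
    # go through the OS page cache rather than explicit file I/O.
    mm = mmap.mmap(f.fileno(), 0)
    first, = struct.unpack_from("<q", mm, 0)  # read as if it were memory
    print(first)
    mm.close()
```

A runtime such as DI-MMAP refines this basic mechanism with paging policies tuned for data-intensive access patterns, which the stock OS page cache is not optimized for.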
DI-MMAP and HavoqGT were used to solve a scale-37 graph problem on a single quad-socket Xeon E7-4870 v2 server with 50TB of network-attached flash storage. The server was connected to two Altamont XP all-flash arrays from Saratoga Speed over a Mellanox FDR 56Gb/s InfiniBand (IB) interconnect. Other scale-37 entries on the Graph500 list required clusters of 4,096 nodes or larger to process the 2.2 trillion edges.
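For context on the problem size: in Graph500 terms, a scale-S graph has 2^S vertices, and the benchmark's default edgefactor of 16 gives 16 edges per vertex. A quick back-of-the-envelope check (assuming that default edgefactor) recovers the 2.2 trillion edge count quoted above:

```python
# Graph500 problem size for a scale-37 run, assuming the benchmark's
# default edgefactor of 16 (edges per vertex).
scale = 37
edgefactor = 16
vertices = 2 ** scale          # about 137 billion vertices
edges = edgefactor * vertices  # about 2.2 trillion edges
print(f"vertices: {vertices:,}")
print(f"edges:    {edges:,}")
```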
Sharad Mehrotra, Saratoga Speed’s CEO, remarked: “Tier-0 and tier-1 all-flash storage is one of the biggest inflections in the IT industry. As the world’s leading technology centers focus on big and fast data problems, we are very excited to be partnering with LLNL and Mellanox in delivering these outstanding results via our leapfrog storage platforms.”
“We are delighted that our collaboration with LLNL and Saratoga Speed has achieved record breaking results on the Graph500 benchmark,” said Gilad Shainer, VP marketing, Mellanox. “This is a great example of how new innovative algorithms can take advantage of high performance IB interconnect solutions and flash storage to optimize graph computations for analyzing big data.”
Robin Goldstone, a member of LLNL’s HPC advanced technologies office, said: “This is a really exciting result that highlights our approach of leveraging HPC to solve challenging large-scale data science problems. Having a single server replacing a multi-thousand node cluster demonstrates how flash storage can be used as a cost-effective replacement for DRAM.”