Argonne Leadership Computing Facility to Deploy Cray ClusterStor E1000

In collaboration with HPE to expand HPC storage capacity to 200PB

Hewlett Packard Enterprise Development LP and the Argonne Leadership Computing Facility (ALCF), a US Department of Energy Office of Science user facility, announced that ALCF will deploy the Cray ClusterStor E1000, a parallel storage solution, as its newest storage system.

The collaboration supports ALCF’s scientific research in areas such as earthquake seismic activity, aerospace turbulence and shock waves, physical genomics and more. The latest deployment expands storage capacity for ALCF workloads that require converged modeling, simulation, AI and analytics, in preparation for Aurora, ALCF’s forthcoming exascale HPC system powered by HPE and Intel, the first of its kind expected to be delivered in the US in 2021.

The ClusterStor E1000 system utilizes purpose-built software and hardware features to meet storage requirements of any size with fewer drives. Designed to support the exascale era, which is characterized by the explosion of data and converged workloads, it will power ALCF’s future Aurora HPC to target a multitude of data-intensive workloads required to make fast discoveries.

“ALCF is leveraging exascale-era technologies by deploying infrastructure required for converged workloads in modeling, simulation, AI and analytics,” said Peter Ungaro, SVP and GM, HPC and AI, HPE. “Our recent introduction of the Cray ClusterStor E1000 is delivering ALCF unmatched scalability and performance to meet next-gen HPC storage needs to support emerging, data-intensive workloads. We look forward to continuing our collaboration with ALCF and empowering its research community to unlock new value.”

ALCF’s two storage systems, named Grand and Eagle, will use the ClusterStor E1000 to gain an HPC storage solution capable of managing growing converged workloads that today’s offerings cannot support.

“When Grand launches, it will benefit ALCF’s legacy petascale machines, providing increased capacity for the Theta compute system and enabling new levels of performance for not just traditional checkpoint-restart workloads, but also for complex workflows and metadata-intensive work,” said Mark Fahey, director of operations, ALCF.

“Eagle will help support the ever-increasing importance of data in the day-to-day activities of science,” said Michael E. Papka, director, ALCF. “By leveraging our experience with our current data-sharing system, Petrel, this new storage will help eliminate barriers to productivity and improve collaborations throughout the research community.”

The two systems will provide a total of 200PB of storage and, through the ClusterStor E1000’s software and hardware designs, will more accurately align data flows with target workloads.

ALCF’s Grand and Eagle systems will help researchers accelerate a range of scientific discoveries across disciplines; each is designed to address the following:

  • Computational capacity: ALCF’s Grand provides 150PB of center-wide storage and new levels of I/O performance to support massive computational needs for its users.
  • Simplified data-sharing: ALCF’s Eagle provides a 50PB community file system to make data-sharing easier than ever among ALCF users, their collaborators and with third parties.

ALCF plans to deliver its Grand and Eagle storage systems in early 2020. The systems will initially connect to existing ALCF HPC systems powered by HPE: Theta, based on the Cray XC40-AC, and Cooley, based on the Cray CS-300. Grand, which is capable of 1TB/s bandwidth, will be optimized to support converged simulation science and data-intensive workloads once the Aurora exascale HPC system is operational.
