Storage Most Strategic Part of HPC – DDN
Mixed I/O the most difficult performance problem; burst buffers expected to take storage to exascale levels
This is a press release edited by StorageNewsletter.com on November 17, 2015, at 3:17 pm.
DataDirect Networks, Inc. (DDN) announced the results of its annual HPC Trends survey, which showed that HPC end-users view storage as the most strategic part of the HPC data center, and that managing mixed I/O performance and rapid data growth remain the biggest challenges for HPC organizations going into 2016.
Conducted by DDN for the third consecutive year, the survey polled a cross-section of end-users managing data-intensive infrastructures worldwide, representing hundreds of petabytes of storage investment. Respondents included individuals responsible for high-performance compute, networking and storage systems at financial services, government, higher education, life sciences, manufacturing, national lab and oil and gas organizations.
The data under management in each of these organizations is staggering.
Of organizations surveyed:
- 68% manage or use more than one petabyte of storage; and
- More than 25% manage or use more than 10PB of storage, which represents a 69% increase versus 2014 survey results.
Survey chart: How much storage does your team manage or use?
Storage is the fastest-growing segment of HPC IT spending (1), and according to an overwhelming majority of survey respondents (77%), data and storage have now become the most strategic part of the HPC data center as end-users seek to solve data access, workflow and analytics challenges to accelerate time to results.
The diverse set of applications driving this ever-growing data volume in large-scale environments and analytical workflows places rigorous demands on storage infrastructures and creates unique challenges for HPC users.
As the graph below highlights:
- Performance was ranked the number-one storage and big data challenge by approximately two-thirds (66%) of those polled; and
- Mixed I/O performance was cited as the biggest concern by more than half (53%) of the respondents, which represented an eight-percentage-point increase compared with last year’s survey results.
Survey chart: What is your biggest storage / big data challenge?
As shown below, more than half of respondents (56%) identify storage I/O as the main bottleneck in analytics workflows.
Survey chart: Storage I/O is the biggest bottleneck in analytics workflows.
These responses illustrate how performance issues have surged as big data environments grapple with a proliferation of diverse applications that create mixed I/O patterns and stifle the performance of their storage infrastructure.
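To make the mixed I/O problem concrete, the sketch below (illustrative only, not part of the survey) interleaves large sequential writes with small random reads against a single scratch file; the chunk sizes and file are hypothetical stand-ins for checkpoint-style streams and analytics-style lookups that, multiplied across many concurrent applications, produce the access pattern respondents find hardest for shared storage to absorb.

```python
# Toy generator of a "mixed I/O" pattern: big sequential appends interleaved
# with small random reads on the same file. Sizes and paths are assumptions
# chosen for illustration; real HPC workloads would go through MPI-IO, HDF5, etc.
import os
import random
import tempfile

CHUNK = 4 * 1024 * 1024   # 4 MiB sequential write, checkpoint-style stream
SMALL = 4 * 1024          # 4 KiB random read, metadata/analytics-style access

def mixed_io(path: str, chunks: int = 16) -> None:
    """Interleave large sequential writes with small random reads on one file."""
    with open(path, "wb+", buffering=0) as f:
        for i in range(chunks):
            f.seek(0, os.SEEK_END)        # append: sequential, bandwidth-bound
            f.write(os.urandom(CHUNK))
            if i:                         # revisit a random spot: IOPS/latency-bound
                offset = random.randrange(i) * CHUNK + random.randrange(CHUNK - SMALL)
                f.seek(offset)
                f.read(SMALL)

if __name__ == "__main__":
    scratch = tempfile.NamedTemporaryFile(delete=False)
    scratch.close()
    mixed_io(scratch.name)
    os.unlink(scratch.name)
```

A storage system tuned for either streaming bandwidth or small random I/O alone tends to degrade when both arrive at once, which is why mixed I/O tops the respondents' list of concerns.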
Only a small percentage of respondents believe today’s file systems and data management technologies will be able to scale to exascale levels, while almost two-thirds of respondents believe new innovation will be required.
This belief is exemplified in respondents’ views on addressing performance issues:
- 58% view burst buffers as the most likely technology to push storage to the next level, as users seek faster and more efficient ways to offload I/O from compute resources, to separate storage bandwidth acquisition from capacity acquisition and to support parallel file systems in meeting exascale requirements (a minimal sketch of the idea follows below).
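As a rough illustration of how a burst buffer decouples compute from back-end storage bandwidth, the sketch below (an assumption-laden toy, not a description of any DDN product) absorbs a checkpoint on a fast local tier and drains it to the parallel file system in the background; the directory names are temporary stand-ins for what would really be a node-local NVMe mount and a Lustre/GPFS mount.

```python
# Minimal burst-buffer sketch: the application dumps its checkpoint to a fast
# tier and resumes computing, while a background thread drains the data to the
# parallel file system. Tiers here are temp directories purely for illustration.
import queue
import shutil
import tempfile
import threading
from pathlib import Path

BURST_BUFFER = Path(tempfile.mkdtemp(prefix="bb_"))   # stand-in for node-local NVMe
PARALLEL_FS = Path(tempfile.mkdtemp(prefix="pfs_"))   # stand-in for Lustre/GPFS

_drain_q: queue.Queue = queue.Queue()

def _drainer() -> None:
    """Move staged checkpoints to the parallel file system off the critical path."""
    while True:
        staged = _drain_q.get()
        shutil.move(str(staged), str(PARALLEL_FS / staged.name))
        _drain_q.task_done()

threading.Thread(target=_drainer, daemon=True).start()

def write_checkpoint(name: str, payload: bytes) -> None:
    """Absorb the bursty write on the fast tier, then let compute resume."""
    staged = BURST_BUFFER / name
    staged.write_bytes(payload)   # fast local write; the application moves on
    _drain_q.put(staged)          # drained asynchronously to PARALLEL_FS

if __name__ == "__main__":
    write_checkpoint("step_0001.ckpt", b"\0" * (8 * 1024 * 1024))
    _drain_q.join()               # wait for the drain before the demo exits
```

The point of the pattern is that the burst tier is sized for bandwidth and the parallel file system for capacity, so the two can be purchased and scaled independently, which is the decoupling respondents expect burst buffers to deliver.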
Given the intense performance requirements of HPC environments, as well as security, data access, workflow management and cost considerations, it is not surprising that, according to 75% of those surveyed, security and data-sharing complexity remain the largest impediments to increased collaboration across sites.
This concern is underscored by respondents’ overwhelming preference for private clouds:
- By a ratio of 3:1, private clouds were preferred over public clouds for HPC users’ existing cloud deployments.
With storage performance a critical requirement for today’s large-scale and petascale-level data centers, site-wide file systems like those at Oak Ridge National Laboratory (ORNL), the National Energy Research Scientific Computing Center (NERSC) and the Texas Advanced Computing Center (TACC) continue to be a significant infrastructure trend in HPC environments, according to more than two-thirds (67%) of HPC customers polled. Site-wide file systems allow architects to consolidate multiple compute systems on the same storage system and/or to upgrade storage and servers independently as needed.
“The results of DDN’s annual HPC Trends Survey reaffirm the key issues and requisite requirements we hear from HPC users regularly. Handling mixed I/O workloads and resolving I/O bottlenecks in data intensive workloads are critical issues DDN has worked fervently to address within our suite of high performance end-to-end solutions,” said Molly Rector, CMO, EVP product management and worldwide marketing, DDN. “Survey respondents were clear that storage has become the most strategic part of the data center, and users rely on DDN to deliver continued technology innovation that enables research and business organizations to accelerate strategic insights and to speed time to results. DDN remains committed to the relentless innovation that raises the bar to meet the surging I/O performance, scale, content distribution and collaboration demands of HPC – in ways that are faster, smarter, more efficient and more cost effective than anyone else in the market.”
(1) Intersect360 Research: Worldwide HPC 2014 Total Market Model and 2015-2019 Forecast, June 2015.