R&D: Read Latency Variation Aware Performance Optimization on High-Density NAND Flash-Based Storage Systems
Experimental results show that the proposed method can improve read performance by 45.7% on average compared with state-of-the-art works and significantly reduce tail latency at the 95th to 99.99th percentiles.
This is a Press Release edited by StorageNewsletter.com on August 23, 2022, at 2:00 pm.

CCF Transactions on High Performance Computing has published an article written by Liang Shi, School of Computer Science and Technology, East China Normal University, Shanghai, China, and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China; Yina Lv, Longfei Luo, and Changlong Li, School of Computer Science and Technology, East China Normal University, Shanghai, China; Chun Jason Xue, College of Computer Science, City University of Hong Kong, Hong Kong, China; and Edwin H.-M. Sha, School of Computer Science and Technology, East China Normal University, Shanghai, China.
Abstract: “High-density NAND flash memory has been recommended as a storage medium in edge computing and intelligent storage systems. However, recent studies show that the read latency of this kind of NAND flash is increasing. The reason comes from at least two aspects: First, high-density flash memory generally adopts a multiple-bits-per-cell technique, where the access latency of the most significant bit is largely increased. Second, due to the reliability variation among these bits, the access latency of the most significant bit is further increased, which seriously affects read performance and can even cause tail latency. This paper proposes a read latency variation aware performance optimization scheme, RLV, to accelerate both data and metadata access to maximize read performance and reduce tail latency. RLV includes three parts: First, a read latency variation aware data placement scheme is proposed to accelerate hot data accesses, including a data identification method and a fine-grained data migration method. Second, a new caching method is proposed to cache data from slow pages and minimize migration cost, which includes an assisted caching method and a migration tagged caching method. Third, a life-stage aware metadata placement scheme is further proposed to speed up metadata access. Experimental results show that the proposed method can improve read performance by 45.7% on average compared with state-of-the-art works and significantly reduce tail latency at the 95th to 99.99th percentiles.”
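To make the first component of the abstract concrete, here is a minimal sketch of latency-variation-aware data placement. This is not the authors' implementation: the page-type latencies in `READ_LATENCY_US`, the hotness threshold, and the names `LatencyAwarePlacer`, `record_read`, and `choose_page_type` are all hypothetical, standing in for the paper's more elaborate identification and migration methods.

```python
from collections import Counter

# Hypothetical per-page-type read latencies (microseconds) for a
# TLC-style flash device; real values vary by vendor, wear, and mode.
READ_LATENCY_US = {"fast": 50, "medium": 80, "slow": 120}

class LatencyAwarePlacer:
    """Toy placer: route frequently read (hot) logical pages to fast
    physical page types, leaving slow page types for cold data."""

    def __init__(self, hot_threshold: int = 4):
        # Count reads per logical page number (LPN) to identify hot data.
        self.read_counts: Counter = Counter()
        self.hot_threshold = hot_threshold

    def record_read(self, lpn: int) -> None:
        self.read_counts[lpn] += 1

    def choose_page_type(self, lpn: int) -> str:
        # Hot data is placed (or migrated) into the fastest page type;
        # everything else fills the slower page types.
        if self.read_counts[lpn] >= self.hot_threshold:
            return "fast"
        return "slow"

if __name__ == "__main__":
    placer = LatencyAwarePlacer()
    for _ in range(5):
        placer.record_read(lpn=7)   # LPN 7 becomes hot
    placer.record_read(lpn=9)       # LPN 9 stays cold
    for lpn in (7, 9):
        page = placer.choose_page_type(lpn)
        print(f"LPN {lpn} -> {page} page ({READ_LATENCY_US[page]} us read)")
```

In a real flash translation layer, a policy like this would be folded into garbage collection and migration, and the abstract's second component would complement it by caching data read from slow pages so that migrations can be deferred or avoided.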