R&D: Enhancing Optimal Read Voltage Prediction for 3D NAND Flash Memory Through Data Augmentation Techniques
Evaluations on multiple real chips show that the proposal significantly reduces the number of error bits by 4% to 15% and read retry counts by 3% to 13.2%.
This is a Press Release edited by StorageNewsletter.com on February 18, 2025 at 2:33 pm.

SSRN has published an article written by Xiangyu Yao, Guanyu Wu, and Yina Lv, Xiamen University, China; Jie Zhang, Peking University, China; Xinbiao Gan, National University of Defense Technology, China; and Qiao Li, Xiamen University, China.
Abstract: “3D NAND flash memory is currently facing read-retry-induced performance degradation, which necessitates frequent adjustments to the read voltage to counteract voltage shifting. The lowest raw bit error rates occur at the optimal read voltage (ORV). Because of the complex disturbance on flash cells’ voltage, finding the ORV requires extensive chip testing under various conditions, including program/erase (P/E) cycles and retention time, which can be both time-consuming and costly. Machine learning offers a more efficient alternative by predicting the ORV based on a subset of test data or by modeling the ORV’s trend over time. However, these machine-learning techniques often demand significant computational resources and rely on large datasets for prediction accuracy. In this paper, we first make an in-depth analysis of various machine-learning models and fitting models for their performance in ORV prediction, revealing that their performance deteriorates when constrained by limited test data. To address the challenge of limited test data, we propose a novel method to accurately and robustly predict the ORV for high-density 3D NAND flash memory through data augmentation. Specifically, with limited test data, we utilize fitting models to estimate the ORV under P/E cycles and retention times that are not tested for data collection. Then, we integrate both the available test data and the augmented data to train and predict the ORV using multiple machine-learning models. Evaluations on multiple real chips show that our proposal significantly reduces the number of error bits by 4% to 15% and read retry counts by 3% to 13.2%.”
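The augmentation pipeline the abstract describes — fit a model to limited (P/E cycle, retention time) → ORV measurements, synthesize ORV estimates at untested conditions, then train a predictor on the combined data — can be sketched as follows. This is a minimal illustrative assumption, not the authors' actual implementation: the synthetic ORV formula, the linear fitting model, and the least-squares predictor are all stand-ins for the paper's fitting and machine-learning models.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_orv(pe, ret):
    # Toy ground truth (assumption): ORV drifts downward with
    # P/E cycling and retention time.
    return 2.5 - 0.002 * pe - 0.01 * ret

# Step 1: limited test data — ORV measured at only a few conditions.
pe_meas = np.array([100.0, 500.0, 1000.0])
ret_meas = np.array([0.0, 12.0, 24.0])
g_pe, g_ret = np.meshgrid(pe_meas, ret_meas)
X_meas = np.column_stack([g_pe.ravel(), g_ret.ravel()])
y_meas = synthetic_orv(X_meas[:, 0], X_meas[:, 1]) \
         + rng.normal(0.0, 0.005, len(X_meas))  # measurement noise

# Step 2: fit a simple plane to the measurements (the "fitting model"),
# then use it to augment ORV estimates at untested conditions.
A = np.column_stack([X_meas, np.ones(len(X_meas))])
coef, *_ = np.linalg.lstsq(A, y_meas, rcond=None)

pe_aug = np.linspace(100.0, 1000.0, 10)
ret_aug = np.linspace(0.0, 24.0, 10)
a_pe, a_ret = np.meshgrid(pe_aug, ret_aug)
X_aug = np.column_stack([a_pe.ravel(), a_ret.ravel()])
y_aug = np.column_stack([X_aug, np.ones(len(X_aug))]) @ coef

# Step 3: train a predictor on measured + augmented data
# (a least-squares regressor stands in for the paper's ML models).
X_all = np.vstack([X_meas, X_aug])
y_all = np.concatenate([y_meas, y_aug])
A_all = np.column_stack([X_all, np.ones(len(X_all))])
w, *_ = np.linalg.lstsq(A_all, y_all, rcond=None)

def predict_orv(pe, ret):
    """Predict the ORV at a possibly untested (P/E, retention) point."""
    return w[0] * pe + w[1] * ret + w[2]

# Query a condition that was never directly measured.
print(predict_orv(750.0, 18.0))
```

In this toy setup the augmented points let the final regressor cover the whole (P/E, retention) grid even though only nine conditions were measured; the paper's contribution is showing that this combination remains accurate for real 3D NAND chips where exhaustive testing is too costly.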