Journal Article Details
Optimizing for In-Memory Deep Learning With Emerging Memory Technology
Article; Early Access
Keywords: RANDOM TELEGRAPH NOISE; SYSTEM; MODEL
DOI: 10.1109/TNNLS.2023.3285488
Source: SCIE
【Abstract】

In-memory deep learning executes neural network models where they are stored, avoiding long-distance communication between memory and computation units and thereby saving considerable energy and time. In-memory deep learning has already demonstrated orders-of-magnitude gains in performance density and energy efficiency. The use of emerging memory technology (EMT) promises to push density, energy efficiency, and performance even further. However, EMT is intrinsically unstable, resulting in random fluctuations of data reads. This can translate to nonnegligible accuracy loss, potentially nullifying the gains. In this article, we propose three optimization techniques that can mathematically overcome the instability problem of EMT. They can improve the accuracy of the in-memory deep learning model while maximizing its energy efficiency. Experiments show that our solution can fully recover most models' state-of-the-art (SOTA) accuracy and achieves at least an order of magnitude higher energy efficiency than the SOTA.
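The instability mechanism the abstract describes can be illustrated with a minimal sketch: weights stored in EMT cells are perturbed at read time, so the same input can yield different pre-activations from one inference to the next. The multiplicative lognormal noise model and the `sigma` value below are illustrative assumptions, not the paper's actual noise characterization or optimization techniques.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_read(weights, sigma=0.2):
    """Simulate EMT read instability as a multiplicative lognormal
    fluctuation on the stored values (an illustrative assumption;
    real devices exhibit e.g. random telegraph noise)."""
    return weights * rng.lognormal(mean=0.0, sigma=sigma, size=weights.shape)

# Toy linear layer whose weights are "stored in memory".
W = rng.standard_normal((16, 4))
x = rng.standard_normal(16)

clean = x @ W               # ideal in-memory computation
noisy = x @ noisy_read(W)   # computation with read fluctuations

# The perturbed pre-activations can flip argmax decisions downstream,
# which is how device-level instability becomes model accuracy loss.
print(np.abs(clean - noisy).max())
```

Each call to `noisy_read` draws fresh fluctuations, mimicking the read-to-read variation that the proposed optimizations are designed to tolerate.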

【License】

Free
