Journal Article Details
Advanced Intelligent Systems
Flash Memory Array for Efficient Implementation of Deep Neural Networks
Jinfeng Kang [1], Peng Huang [1], Yachen Xiang [1], Runze Han [1], Xiaoyan Liu [1], Yihao Shan [1]
[1] Institute of Microelectronics, Peking University, Beijing 100871, China
Keywords: analog neural networks; flash memory; in-memory computing; multiplication-and-accumulation units; neural networks; spiking neural networks
DOI: 10.1002/aisy.202000161
Source: DOAJ
Abstract

The advancement of artificial intelligence applications is driven by deep neural networks (DNNs) of ever-increasing size, which place ever-higher computing-power demands on processing devices. However, as the process scaling of complementary metal–oxide–semiconductor technology approaches its end and data transmission remains a bottleneck in the von Neumann architecture, traditional processing devices find it increasingly difficult to meet the requirements of deeper and deeper neural networks. In-memory computing based on nonvolatile memories has emerged as one of the most promising solutions to the data-transmission bottleneck of the von Neumann architecture. Herein, a systematic implementation of the flash memory array-based in-memory computing paradigm for DNNs, from the device level to the architecture level, is presented. It covers the methodology for constructing multiplication-and-accumulation (MAC) units with different structures, hardware implementation schemes for various neural networks, and a discussion of reliability. The results show that hardware implementations of the flash memory array-based in-memory computing paradigm for DNNs offer excellent characteristics such as low cost, high computing flexibility, and high robustness. With these advantages, in-memory computing paradigms based on flash memory arrays show significant promise for achieving high scalability and energy efficiency in DNNs.
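The MAC operation mentioned in the abstract can be illustrated with a minimal numerical sketch. In a flash-array in-memory computing scheme, synaptic weights are typically stored as cell conductances G, the input vector is applied as word-line voltages V, and Kirchhoff's current law sums the products on each bit line, so every output current is one analog MAC. All sizes and values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative sketch of flash-array in-memory MAC (values are assumptions):
# bit-line current I_j = sum_i V_i * G_ij, computed in one analog step.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))  # cell conductances (a.u.)
V = rng.uniform(0.0, 0.5, size=n_inputs)               # input voltages (a.u.)

# All bit-line currents at once: a vector-matrix multiply.
I = V @ G

# Reference: the same MAC written out as explicit multiply-and-accumulate loops.
I_ref = np.zeros(n_outputs)
for j in range(n_outputs):
    for i in range(n_inputs):
        I_ref[j] += V[i] * G[i, j]

assert np.allclose(I, I_ref)
```

The point of the sketch is that the entire inner-product loop collapses into a single read of the array, which is why such paradigms avoid the von Neumann data-transmission bottleneck.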

License

Unknown   
