Journal Article Details
Electronics
In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM
Yao-Tung Tsou 1, Jing-Lin Syu 1, Sy-Yen Kuo 2, Jun-Ying Huang 2, Ching-Ray Chang 3
[1] Department of Communications Engineering, Feng Chia University, Taichung 407, Taiwan
[2] Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan
[3] Quantum Information Center, Chung Yuan Christian University, Taoyuan 320, Taiwan
Keywords: convolution neural network; computing in memory; processing in memory; distributed arithmetic; MRAM; SOT-MRAM
DOI: 10.3390/electronics11081245
Source: DOAJ
【 Abstract 】

Recently, numerous studies have investigated computing-in-memory (CIM) architectures for neural networks to overcome the memory bottleneck. Spin-orbit torque magnetic random access memory (SOT-MRAM) has received substantial attention because of its low latency, high energy efficiency, and non-volatility. However, previous CIM designs relied on dedicated calculation circuits to support complex operations, leading to substantial energy consumption. We therefore propose a new CIM architecture with small peripheral circuits that achieves higher performance than other CIM architectures when processing convolutional neural networks (CNNs). We incorporate a distributed arithmetic (DA) algorithm to improve the efficiency of the CIM calculation method by reducing the excessive read/write operations and execution steps of CIM-based CNN calculation circuits. Our method also uses SOT-MRAM to increase calculation speed and reduce power consumption. Compared with CIM-based CNN arithmetic circuits in previous studies, our method achieves shorter clock periods and reduces read operations by up to 43.3% without requiring additional circuits.
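To make the distributed-arithmetic step concrete, the following Python sketch (an illustrative assumption on our part, not the paper's SOT-MRAM circuit; the function names and the 8-bit unsigned input format are hypothetical) shows how DA replaces the per-input multiplications of a dot product with bit-serial table lookups: the 2^K partial sums of the K weights are precomputed once, and each input bit plane then costs only one lookup and one shift-add.

def build_da_lut(weights):
    """Precompute the 2^K partial sums of the weights.

    Address i is read as a bit vector over the K inputs:
    lut[i] = sum of weights[k] for every bit k set in i.
    """
    K = len(weights)
    lut = [0] * (1 << K)
    for i in range(1 << K):
        lut[i] = sum(w for k, w in enumerate(weights) if (i >> k) & 1)
    return lut

def da_dot_product(lut, inputs, bits=8):
    """Bit-serial dot product: one LUT read and one shift-add per bit plane."""
    acc = 0
    for b in range(bits):
        # Gather bit b of every (unsigned) input into a single LUT address.
        addr = 0
        for k, x in enumerate(inputs):
            addr |= ((x >> b) & 1) << k
        acc += lut[addr] << b  # shift-add of the precomputed partial sum
    return acc

# Hypothetical example: 4 weights, 8-bit unsigned inputs
weights = [3, -1, 2, 5]
inputs = [17, 200, 64, 9]
lut = build_da_lut(weights)
assert da_dot_product(lut, inputs) == sum(w * x for w, x in zip(weights, inputs))

In a CIM setting, the precomputed partial sums would correspond to the values stored in the memory array, so the number of reads scales with the input bit width rather than with the number of multiplications, which is consistent with the reduced read counts claimed in the abstract.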

【 License 】

Unknown   
