Journal Article Details
JOURNAL OF MAGNETISM AND MAGNETIC MATERIALS, Volume 489
On-chip learning for domain wall synapse based Fully Connected Neural Network
Article
Bhowmik, Debanjan [1]; Saxena, Utkarsh [1]; Dankar, Apoorv [1]; Verma, Anand [1]; Kaushik, Divya [1]; Chatterjee, Shouri [1]; Singh, Utkarsh [2]
[1] Indian Inst Technol Delhi, Dept Elect Engn, New Delhi 110016, India
[2] Delhi Technol Univ, Dept Elect & Commun Engn, New Delhi 110042, India
Keywords: Spin orbit torque; Domain wall device; Hardware neural network; Neuromorphic computing
DOI  :  10.1016/j.jmmm.2019.165434
Source: Elsevier
【 Abstract 】

Spintronic devices are considered promising candidates for implementing neuromorphic systems or hardware neural networks, which are expected to outperform existing computing systems on certain data classification and regression tasks. In this paper, we simulate, within a micromagnetic framework, a spin orbit torque driven, domain wall based synaptic device, building on existing theoretical and experimental studies of current driven domain wall motion in heavy metal/ferromagnet heterostructures. Next, we design a feedforward Fully Connected Neural Network (FCNN) with no hidden layer, using several such domain wall devices as synapses and transistor based analog circuits, which we also simulate with an analog circuit simulator, as neurons. An analog peripheral feedback circuit, also designed with transistors, computes at every iteration the changes in synaptic weights needed to train the network by the Stochastic Gradient Descent (SGD) method. It then sends write current pulses to the domain wall based synaptic devices, which move the domain walls and update the synaptic weights. We then demonstrate, by simulating on-chip learning of the designed FCNN on the MNIST database of handwritten digits, that our FCNN trains itself in hardware through continuous update of the weights in the synapses. Previous simulation reports of spintronic FCNNs do not include such peripheral circuits needed for on-chip learning and hence show only off-chip learning, where the final weights of the network are first calculated on a separate computer and then directly stored in the synapses. We obtain fairly high training and test accuracy for on-chip learning of our network. We also report the energy dissipated in the synaptic devices during training.
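
The abstract describes SGD-based on-chip training of a no-hidden-layer FCNN in which each synaptic weight is stored as a domain wall position and updated by write current pulses from a peripheral circuit. The Python sketch below illustrates only the algorithmic side of that training loop, not the authors' micromagnetic or circuit simulations: weights are restricted to a small set of discrete levels as a stand-in for discrete domain wall positions. The number of levels, weight range, and learning rate are illustrative assumptions, not values from the paper.

```python
# Minimal algorithmic sketch of SGD training of a no-hidden-layer FCNN with
# discretized synaptic weights (assumed stand-in for domain wall positions).
# Device parameters below are illustrative assumptions, not from the paper.
import numpy as np

N_LEVELS = 32                 # assumed number of distinct conductance levels per synapse
W_MIN, W_MAX = -1.0, 1.0      # assumed normalized weight range
LEVELS = np.linspace(W_MIN, W_MAX, N_LEVELS)

def quantize(w):
    """Snap each weight to the nearest allowed level (discrete domain wall position)."""
    idx = np.argmin(np.abs(w[..., None] - LEVELS), axis=-1)
    return LEVELS[idx]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(x, y, n_classes=10, lr=0.1, epochs=5):
    """x: (n_samples, 784) pixel intensities in [0, 1]; y: (n_samples,) integer labels."""
    n, d = x.shape
    w = quantize(np.zeros((d, n_classes)))        # synaptic weight matrix
    for _ in range(epochs):
        for i in np.random.permutation(n):        # one example per update (stochastic GD)
            xi = x[i:i + 1]
            p = softmax(xi @ w)                   # output-neuron activations
            t = np.zeros((1, n_classes))
            t[0, y[i]] = 1.0
            grad = xi.T @ (p - t)                 # cross-entropy gradient w.r.t. weights
            # In hardware, the peripheral feedback circuit would compute this weight
            # change and apply it as write current pulses; here we simply quantize
            # the updated weight back onto the allowed levels.
            w = quantize(np.clip(w - lr * grad, W_MIN, W_MAX))
    return w
```

In this sketch the quantization step is what distinguishes the device-constrained update from ordinary software SGD: the computed weight change is realized only up to the resolution of the available conductance levels.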

【 License 】

Free

【 Preview 】
Attachment list:
10_1016_j_jmmm_2019_165434.pdf (4954 KB, PDF)