Journal Article Details
Frontiers in Psychology
Doing the Impossible: Why Neural Networks Can Be Trained at All
Nathan O. Hodas
Keywords: deep learning; training; curse of dimensionality; mutual information; correlation; neural networks; information theory
DOI: 10.3389/fpsyg.2018.01185
Subject classification: Psychology (general)
Source: Frontiers
【 Abstract 】

As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. On the contrary, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network leads to higher mutual information between layers. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights, providing insight into why neural networks with far more weights than training points can be reliably trained.
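The central quantity in the abstract is the mutual information between the activations of successive layers. As an illustration only (not the authors' code), the sketch below estimates that quantity with a simple plug-in (binning) estimator on synthetic activations; the layer sizes, bin count, and the helper names `discretize` and `mutual_information` are assumptions made for this example.

```python
import numpy as np

def discretize(acts, n_bins=8):
    """Bin each unit's activation into n_bins equal-width bins and map
    each binned activation vector to a single discrete symbol."""
    edges = np.linspace(acts.min(), acts.max(), n_bins + 1)
    binned = np.digitize(acts, edges[1:-1])                # same shape as acts
    _, symbols = np.unique(binned, axis=0, return_inverse=True)
    return symbols

def mutual_information(x_syms, y_syms):
    """Plug-in estimate (in bits) of I(X;Y) for two discrete symbol arrays."""
    n = len(x_syms)
    px = np.bincount(x_syms) / n
    py = np.bincount(y_syms) / n
    joint = {}
    for xs, ys in zip(x_syms, y_syms):
        joint[(xs, ys)] = joint.get((xs, ys), 0) + 1
    return sum((c / n) * np.log2((c / n) / (px[xs] * py[ys]))
               for (xs, ys), c in joint.items())

# Illustrative "successive layers": layer k+1 is a fixed nonlinear map of
# layer k, so the two activation patterns share substantial mutual information.
rng = np.random.default_rng(0)
layer_k = rng.standard_normal((5000, 4))                   # activations of layer k
layer_k1 = np.tanh(layer_k @ rng.standard_normal((4, 4)))  # activations of layer k+1
print(mutual_information(discretize(layer_k), discretize(layer_k1)))
```

A binning estimator like this is crude and degrades quickly as layer width grows; it is shown only to make the layer-to-layer mutual information concrete, not as the estimator used in the paper.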

【 License 】

CC BY

【 Preview 】
Attachment list
File                        Size    Format
RO201901226846755ZK.pdf     938KB   PDF
Document metrics
Downloads: 10    Views: 8