Journal Article Details
Frontiers in Robotics and AI
Grammars for Games: A Gradient-Based Framework for Optimization in Deep Learning
David Balduzzi [1]
[1] Victoria University of Wellington;
Keywords: Game theory; optimization; neural networks; representation learning; deep learning
DOI: 10.3389/frobt.2015.00039
Source: DOAJ
【 Abstract 】

Deep learning is currently the subject of intensive study. However, fundamental concepts such as representations are not formally defined -- researchers know them when they see them -- and there is no common language for describing and analyzing algorithms. This essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments.

The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network. The main ingredients of the framework are thus, unsurprisingly: (i) game theory, to formalize distributed optimization; and (ii) communication protocols, to track the flow of zeroth- and first-order information. The framework allows natural definitions of semantics (as the meaning encoded in functions), representations (as functions whose semantics is chosen to optimize a criterion), and grammars (as communication protocols equipped with first-order convergence guarantees).

Much of the essay is spent discussing examples taken from the literature. The ultimate aim is to develop a graphical language for describing the structure of deep learning algorithms that backgrounds the details of the optimization procedure and foregrounds how the components interact. Inspiration is taken from probabilistic graphical models and factor graphs, which capture the essential structural features of multivariate distributions.
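To make the abstract's central claim concrete -- that backpropagation is a gradient computation distributed over a network, with layers exchanging only zeroth-order messages (activations) forward and first-order messages (gradients) backward -- here is a minimal sketch. It is not taken from the paper; the layer sizes, loss, and learning rate are illustrative assumptions.

```python
# Minimal sketch (not from the paper): backpropagation viewed as a gradient
# computation distributed over a network of "players" (layers). Each layer
# exchanges only zeroth-order information (activations, forward pass) and
# first-order information (gradients, backward pass) with its neighbours.
import numpy as np

rng = np.random.default_rng(0)

class LinearPlayer:
    """A layer acting as a local player: it sees only its own input and the
    gradient signal arriving from the downstream player."""
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.lr = lr

    def forward(self, x):          # zeroth-order message sent downstream
        self.x = x
        return x @ self.W

    def backward(self, grad_out):  # first-order message received from downstream
        grad_in = grad_out @ self.W.T             # message passed upstream
        self.W -= self.lr * self.x.T @ grad_out   # local gradient step
        return grad_in

# Two players composed into a tiny network, trained on a toy regression task.
layers = [LinearPlayer(3, 4), LinearPlayer(4, 1)]
x = rng.standard_normal((8, 3))
y = rng.standard_normal((8, 1))

for _ in range(100):
    h = x
    for layer in layers:            # forward: activations flow downstream
        h = layer.forward(h)
    grad = 2 * (h - y) / len(y)     # gradient of mean squared error
    for layer in reversed(layers):  # backward: gradients flow upstream
        grad = layer.backward(grad)
```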

【 License 】

Unknown   
