Journal Article Details
Entropy
Learning Functions and Approximate Bayesian Computation Design: ABCD
Markus Hainy [1], Werner G. Müller [1]
[1] Department of Applied Statistics, Johannes Kepler University, 4040 Linz, Austria
Keywords: learning; Shannon information; majorization; optimum experimental design; approximate Bayesian computation
DOI: 10.3390/e16084353
Source: MDPI
【 Abstract 】

A general approach to Bayesian learning revisits some classical results that study which functionals of a prior distribution are expected to increase in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others. For the Shannon type, there is a connection to majorization theory for distributions. A computational method, based on a version of the well-known approximate Bayesian computation (ABC) algorithm, is described for solving generalized optimal experimental design problems arising from the learning framework; the Bayesian analysis itself is carried out by Monte Carlo simulation. Some simple examples are given.
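The abstract's central computational ingredient, approximate Bayesian computation, can be illustrated with a minimal rejection-ABC sketch. This is a generic textbook version, not the authors' ABCD procedure: the normal model, the prior, the sample mean as summary statistic, and the tolerance `eps` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, n_draws=20000, eps=0.1):
    """Rejection ABC: draw theta from the prior, simulate data from the
    model at each theta, and keep the draws whose summary statistic
    (here, the sample mean) lies within eps of the observed one."""
    obs_summary = observed.mean()
    thetas = rng.normal(0.0, 2.0, size=n_draws)            # prior: N(0, 2^2)
    # simulate one dataset per prior draw; model: N(theta, 1)
    sims = rng.normal(thetas[:, None], 1.0, size=(n_draws, observed.size))
    keep = np.abs(sims.mean(axis=1) - obs_summary) < eps   # accept/reject step
    return thetas[keep]

# synthetic "observed" data with true mean 1.5 (illustrative)
observed = rng.normal(1.5, 1.0, size=50)
posterior = abc_rejection(observed)
print(posterior.size, posterior.mean())
```

The accepted draws approximate the posterior of the mean; shrinking `eps` tightens the approximation at the cost of a lower acceptance rate, which is the usual ABC trade-off.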

【 License 】

CC BY   
© 2014 by the authors; licensee MDPI, Basel, Switzerland

【 Preview 】
Attachment list
Files                    Size   Format
RO202003190023078ZK.pdf  215KB  PDF
Document metrics
Downloads: 5    Views: 15