Conference Paper Details
30th International Conference on Machine Learning
Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models
Mohammad Emtiyaz Khan (emtiyaz.khan@epfl.ch) ; Numerical Analysis and Optimization ; IBM T.J. Watson Research Center, Yorktown Heights, NY, USA ; Department of Computer Science, University of British Columbia, Vancouver, Canada
PID: 118099
Source: CEUR
【 Abstract 】

Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with these approximations remains poorly understood, and standard solvers take too long to converge. We derive a novel dual variational inference approach that exploits the convexity property of the VG approximations. We obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
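To make the setting concrete, the following is a minimal sketch of the kind of standard VG approximation the abstract contrasts against: a toy non-conjugate LGM (Gaussian process classification with a Bernoulli/logit likelihood), fit by plain Monte Carlo gradient ascent on the VG lower bound with a factorized covariance. This is not the paper's dual algorithm; the kernel, data, step sizes, and diagonal-covariance restriction are all assumptions chosen for illustration.

```python
import numpy as np

# Hedged sketch: a generic variational Gaussian (VG) approximation for a toy
# non-conjugate LGM -- GP classification with a Bernoulli/logit likelihood.
# NOT the paper's dual algorithm; kernel, data, and step sizes are assumptions.

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * (np.subtract.outer(x, x) / 0.2) ** 2) + 1e-6 * np.eye(n)
z_true = rng.multivariate_normal(np.zeros(n), K)        # latent GP draw
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-z_true))).astype(float)
K_inv = np.linalg.inv(K)

def elbo_grads(m, log_s, n_samples=64):
    """Monte Carlo gradients of the VG lower bound for q(z) = N(m, diag(s^2))."""
    s = np.exp(log_s)
    eps = rng.standard_normal((n_samples, n))
    z = m + eps * s                                     # reparameterization trick
    dlik = y - 1.0 / (1.0 + np.exp(-z))                 # d/dz log p(y|z)
    g_m = dlik.mean(axis=0) - K_inv @ m                 # Gaussian KL term is exact
    g_log_s = (dlik * eps).mean(axis=0) * s - np.diag(K_inv) * s**2 + 1.0
    return g_m, g_log_s

m, log_s = np.zeros(n), np.zeros(n)
for _ in range(2000):                                   # plain gradient ascent
    g_m, g_log_s = elbo_grads(m, log_s)
    m += 0.05 * g_m
    log_s += 0.05 * g_log_s

print("VG posterior mean (first 5):", np.round(m[:5], 3))
```

Note the cost the abstract alludes to: even this small factorized example needs 2n variational parameters and thousands of noisy gradient steps; a full-covariance VG approximation scales to O(n^2) parameters, which is the bottleneck the paper's convex dual formulation is designed to reduce.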

【 Preview 】
Attachment List
Files                                                                      Size   Format  View
Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models  445KB  PDF     download
  Document Metrics
  Downloads: 53   Views: 33