Dissertation Details
Generative modeling of sequential data
Subakan, Y. Cem
Keywords: Generative Modeling, Sequential Modeling, Generative Adversarial Networks, Probabilistic Modeling, Method of Moments
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/100972/SUBAKAN-DISSERTATION-2018.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

In this thesis, we investigate various approaches to generative modeling, with a special emphasis on sequential data. Namely, we develop methodologies that address representation (modeling choices), the learning paradigm (e.g. maximum likelihood, method of moments, adversarial training), and optimization.

For the representation aspect, we make the following contributions:
- We argue that using a multi-modal latent representation (unlike popular methods such as variational autoencoders or generative adversarial networks) significantly enhances generative model learning performance, as evidenced by experiments we conduct on the handwritten-digit dataset (MNIST) and the celebrity-faces dataset (CelebA).
- We prove that the standard factorial Hidden Markov model defined in the literature is not statistically identifiable. We propose two alternative identifiable models and show their validity on unsupervised source-separation examples.
- We experimentally show that a convolutional neural network architecture provides a performance boost over time-agnostic methods such as non-negative matrix factorization and autoencoders.
- We experimentally show that a recurrent neural network with a diagonal recurrent matrix increases the convergence speed and final accuracy of the model in most cases in a symbolic music modeling task.

For the learning paradigm aspect, we make the following contributions:
- We propose a method-of-moments parameter learning framework for Hidden Markov Models (HMMs) with special transition structures, such as mixtures of HMMs, switching HMMs, and HMMs with mixture emissions.
- We propose a new generative model learning method that performs approximate maximum likelihood parameter estimation for implicit generative models.
- We argue that using an implicit generative model for audio source separation improves performance over models that specify a cost function, such as NMF or autoencoders trained via maximum likelihood. We show performance improvements on speech mixtures created from the TIMIT dataset.

For the optimization aspect, we make the following contributions:
- We show that the method-of-moments framework we propose in this thesis boosts model performance when used as an initialization scheme for the expectation-maximization algorithm.
- We propose new optimization algorithms for identifiable alternatives to the factorial HMM.
- We propose a two-step optimization algorithm for learning implicit generative models which efficiently learns multi-modal latent representations.
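The diagonal recurrent matrix mentioned among the representation contributions can be illustrated in a few lines. The sketch below is not the thesis's exact architecture; the function name, weight names, and the `tanh` nonlinearity are illustrative choices. The point is that constraining the recurrence matrix to be diagonal turns the recurrent update from a full matrix-vector product into an elementwise product:

```python
import numpy as np

def diagonal_rnn_step(h_prev, x, w_rec, W_in, b):
    """One step of a diagonal-recurrence RNN (illustrative sketch).

    w_rec is a vector: the recurrent weight matrix is implicitly
    diag(w_rec), so the recurrence costs O(n) per step instead of
    the O(n^2) of a dense recurrent matrix.
    """
    return np.tanh(w_rec * h_prev + W_in @ x + b)

# The elementwise update matches using an explicit diagonal matrix:
rng = np.random.default_rng(0)
n, d = 4, 3
h, x = rng.standard_normal(n), rng.standard_normal(d)
w_rec, W_in, b = rng.standard_normal(n), rng.standard_normal((n, d)), np.zeros(n)
assert np.allclose(
    diagonal_rnn_step(h, x, w_rec, W_in, b),
    np.tanh(np.diag(w_rec) @ h + W_in @ x + b),
)
```

Because each hidden unit only feeds back into itself, the per-unit recurrent dynamics are also easier to interpret and to train, which is consistent with the convergence-speed claim above.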
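The method-of-moments learning paradigm referred to above rests on the fact that low-order observable moments of an HMM factorize in terms of its parameters. The snippet below numerically verifies the standard pairwise-moment identity for a discrete HMM with toy parameters; it is a generic illustration of the idea, not the thesis's specific estimator for the structured transition models:

```python
import numpy as np

# Toy discrete HMM: K hidden states, D observation symbols (illustrative).
# pi: initial distribution; T[i, j] = P(h2 = i | h1 = j);
# O[d, k] = P(x = d | h = k). Columns of T and O sum to 1.
rng = np.random.default_rng(0)
K, D = 3, 5
pi = rng.dirichlet(np.ones(K))
T = rng.dirichlet(np.ones(K), size=K).T
O = rng.dirichlet(np.ones(D), size=K).T

# Pairwise moment P21[i, j] = P(x2 = i, x1 = j), by brute-force
# summation over the two hidden states:
P21_brute = np.zeros((D, D))
for h1 in range(K):
    for h2 in range(K):
        P21_brute += pi[h1] * T[h2, h1] * np.outer(O[:, h2], O[:, h1])

# The same moment factorizes directly in the HMM parameters:
P21_factored = O @ T @ np.diag(pi) @ O.T
assert np.allclose(P21_brute, P21_factored)
```

Method-of-moments algorithms invert such factorizations: they estimate the left-hand side from data and solve for the parameters, which is also why the abstract proposes them as an initialization for expectation maximization.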

【 Preview 】
Attachments
File | Size | Format | View
Generative modeling of sequential data | 6869KB | PDF | download
Document metrics
Downloads: 31   Views: 22