Journal Article Details
Entropy
Self-Supervised Variational Auto-Encoders
Jakub M. Tomczak [1]   Ioannis Gatopoulos [2]
[1] Department of Computer Science, Vrije Universiteit Amsterdam, De Boelelaan 1111, 1081 HV Amsterdam, The Netherlands
[2] Institute of Informatics, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
Keywords: deep generative modeling; probabilistic modeling; deep learning; non-learnable transformations
DOI  :  10.3390/e23060747
Source: DOAJ
Abstract

Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAEs), which utilize deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of the selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
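To give a concrete sense of the "non-learnable transformation" the abstract refers to, the sketch below implements one of the two transformations named there, downscaling, as simple average pooling in NumPy. This is only an illustrative stand-in (the pooling factor and array layout are assumptions), not the authors' implementation: in the selfVAE, the output y = d(x) of such a deterministic transform serves as an additional, self-supervised representation of the image x that the model can condition on.

```python
import numpy as np

def downscale(x, factor=2):
    """Non-learnable downscaling transform d(x): average pooling.

    x is assumed to be an (H, W, C) image array with H and W
    divisible by `factor`. In the selfVAE, y = d(x) plays the role
    of a deterministic, self-supervised latent representation
    (edge detection is the other transform mentioned in the paper).
    """
    h, w, c = x.shape
    blocks = x.reshape(h // factor, factor, w // factor, factor, c)
    # Average each factor x factor block to produce the smaller image.
    return blocks.mean(axis=(1, 3))

# Example: a CIFAR-10-sized image (32x32x3) downscaled to 16x16x3.
x = np.random.rand(32, 32, 3)
y = downscale(x)
```

Because the transform is deterministic and fixed, it adds no parameters to learn; the model only has to learn the stochastic parts, e.g. p(x | y, z), which is what lets the selfVAE trade compressed size against reconstruction quality.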

License

Unknown
