Journal of Imaging | Volume 7
Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data
Colm Loughnane [1], Federica Cilia [2], Jean-Luc Guérin [3], Romuald Carette [3], Gilles Dequen [3], Mahmoud Elbattah [3]
[1] Faculty of Science and Engineering, University of Limerick, V94 T9PX Limerick, Ireland
[2] Laboratoire CRP-CPO, Université de Picardie Jules Verne, 80000 Amiens, France
[3] Laboratoire Modélisation, Information, Systèmes (MIS), Université de Picardie Jules Verne, 80080 Amiens, France
Keywords: deep learning; variational autoencoder; data augmentation; eye-tracking
DOI: 10.3390/jimaging7050083
Source: DOAJ
【 Abstract 】
Over the past decade, deep learning has achieved unprecedented success across a diversity of application domains, given large-scale datasets. However, particular domains, such as healthcare, inherently suffer from data paucity and imbalance. Moreover, datasets can be largely inaccessible due to privacy concerns or a lack of data-sharing incentives. Such challenges have made generative modeling and data augmentation particularly significant in that domain. In this context, this study explores a machine learning-based approach for generating synthetic eye-tracking data. We explore a novel application of variational autoencoders (VAEs) in this regard. More specifically, a VAE model is trained to generate an image-based representation of the eye-tracking output, the so-called scanpaths. Overall, our results validate that the VAE model could generate plausible output from a limited dataset. Finally, it is empirically demonstrated that such an approach can be employed as a data augmentation mechanism to improve performance in classification tasks.
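The record does not include any implementation details, so the following is only a minimal sketch of the general technique the abstract describes: a convolutional VAE trained on image-based scanpath representations, whose decoder is then sampled from the prior to produce synthetic images for augmentation. The framework (PyTorch), the 1x64x64 grayscale input size, the latent dimensionality, and names such as `ScanpathVAE` are assumptions for illustration, not the authors' actual architecture.

```python
# Illustrative convolutional VAE for scanpath-like images (hypothetical
# shapes and names; the paper's actual model and framework may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScanpathVAE(nn.Module):  # hypothetical class name
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: 1x64x64 image -> latent mean and log-variance
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # -> 32x32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 64x16x16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        # Decoder: latent vector -> reconstructed 1x64x64 image
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # -> 64x64
            nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu/logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        recon = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term (binary cross-entropy) + KL divergence to N(0, I).
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Augmentation step: decode samples drawn from the prior to obtain
# synthetic scanpath images (here from an untrained model, for illustration).
model = ScanpathVAE()
with torch.no_grad():
    z = torch.randn(16, 32)  # 16 latent samples from N(0, I)
    synthetic = model.dec(model.fc_dec(z).view(-1, 64, 16, 16))
```

In such a setup, the VAE would be trained by minimizing `vae_loss` over the real scanpath images, and the images decoded from prior samples could then be added to the training set (e.g., to an under-represented class) before fitting a classifier, which is the augmentation use case the abstract reports.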
【 License 】
Unknown