Frontiers in Neuroscience
MEG-Based Detection of Voluntary Eye Fixations Used to Control a Computer
Bogdan L. Kozyrskiy¹, Ivan P. Zubarev², Anastasia O. Ovchinnikova³, Sergei L. Shishkin⁴, Anatoly N. Vasilyev⁵
Affiliations: Department of Data Science, EURECOM, Biot, France; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Department of Physics of Extreme States of Matter, National Research Nuclear University MEPhI, Moscow, Russia; Laboratory for Neurocognitive Technologies, NRC Kurchatov Institute, Moscow, Russia; Laboratory for Neurophysiology and Neuro-Computer Interfaces, M. V. Lomonosov Moscow State University, Moscow, Russia; MEG Center, Moscow State University of Psychology and Education, Moscow, Russia
Keywords: MEG; brain-computer interface; hybrid brain-computer interface; gaze-based interaction; convolutional neural network; classification
DOI: 10.3389/fnins.2021.619591
Source: DOAJ
Abstract
Gaze-based input is an efficient way of hands-free human-computer interaction. However, it suffers from the inability of gaze-based interfaces to discriminate between voluntary and spontaneous gaze behaviors, which are overtly similar. Here, we demonstrate that voluntary eye fixations can be discriminated from spontaneous ones using short segments of magnetoencephalography (MEG) data measured immediately after fixation onset. Two recently proposed convolutional neural networks (CNNs), the linear finite impulse response CNN (LF-CNN) and the vector autoregressive CNN (VAR-CNN), were applied to binary classification of MEG signals related to spontaneous and voluntary eye fixations collected from healthy participants (n = 25) who performed a game-like task by fixating on targets voluntarily for 500 ms or longer. Voluntary fixations were identified as those followed by a fixation in a special confirmatory area. Single-trial 700 ms MEG segments related to spontaneous vs. voluntary fixations were classified above chance level in the majority of participants, with a group average cross-validated ROC AUC of 0.66 ± 0.07 for LF-CNN and 0.67 ± 0.07 for VAR-CNN (M ± SD). When the time interval from which the MEG data were taken was extended beyond the onset of the visual feedback, the group average classification performance increased to 0.91. Analysis of the spatial patterns contributing to classification did not reveal signs of a significant eye movement impact on the classification results. We conclude that classification of MEG signals has a certain potential to support gaze-based interfaces by preventing false responses to spontaneous eye fixations on a single-trial basis. The current results for intention detection prior to the gaze-based interface's feedback, however, are not sufficient for online single-trial eye fixation classification using MEG data alone, and further work is needed to determine whether the approach could be used in practical applications.
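The ROC AUC values reported above (e.g., 0.66 for single-trial LF-CNN classification) can be read as the probability that a randomly chosen voluntary-fixation trial receives a higher classifier score than a randomly chosen spontaneous-fixation trial (the Mann-Whitney U formulation). The sketch below, which is purely illustrative and not part of the study's pipeline, computes AUC this way on hypothetical classifier scores whose separation is tuned to yield roughly the paper's single-trial level; all names and distribution parameters are assumptions.

```python
import random


def roc_auc(pos_scores, neg_scores):
    """ROC AUC as P(pos > neg) over all positive/negative pairs,
    counting ties as 0.5 (Mann-Whitney U formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))


# Hypothetical single-trial classifier scores: higher = "voluntary".
# A mean separation of 0.3 with SD 0.5 gives an AUC near the paper's
# group-average single-trial level (~0.66); these numbers are made up.
random.seed(0)
voluntary = [random.gauss(0.3, 0.5) for _ in range(200)]
spontaneous = [random.gauss(0.0, 0.5) for _ in range(200)]
print(round(roc_auc(voluntary, spontaneous), 2))
```

An AUC of 0.5 corresponds to chance-level scores, and 1.0 to perfect separation, which is why "classified above chance level" in the abstract corresponds to per-participant AUC values reliably exceeding 0.5.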
License
Unknown