Frontiers in Neuroscience
Learning Cortical Parcellations Using Graph Neural Networks
Thomas J. Grabowski, David R. Haynor, Kristian M. Eschenburg
Department of Bioengineering, University of Washington, Seattle, WA, United States
Department of Neurology, University of Washington Medical Center, Seattle, WA, United States
Department of Radiology, University of Washington Medical Center, Seattle, WA, United States
Integrated Brain Imaging Center, University of Washington Medical Center, Seattle, WA, United States
Keywords: graph neural network; parcellation; functional connectivity; representation learning; segmentation; brain
DOI: 10.3389/fnins.2021.797500
Source: DOAJ
Abstract
Deep learning has been applied to magnetic resonance imaging (MRI) for a variety of purposes, ranging from the acceleration of image acquisition and image denoising to tissue segmentation and disease diagnosis. Convolutional neural networks have been particularly useful for analyzing MRI data due to the regularly sampled spatial and temporal nature of the data. However, advances in the field of brain imaging have led to network- and surface-based analyses that are often better represented in the graph domain. In this analysis, we propose a general-purpose cortical segmentation method that, given resting-state connectivity features readily computed during conventional MRI pre-processing and a set of corresponding training labels, can generate cortical parcellations for new MRI data. We applied recent advances in the field of graph neural networks to the problem of cortical surface segmentation, using resting-state connectivity to learn discrete maps of the human neocortex. We found that graph neural networks accurately learn low-dimensional representations of functional brain connectivity that can be naturally extended to map the cortices of new datasets. After optimizing over algorithm type, network architecture, and training features, our approach yielded mean classification accuracies of 79.91% relative to a previously published parcellation. We describe how hyperparameter choices, including training and testing data duration, network architecture, and algorithm choice, affect model performance.
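To make the framing concrete, the sketch below shows how cortical parcellation can be posed as vertex-wise node classification with a graph neural network: each surface vertex carries resting-state connectivity features, the mesh supplies the graph edges, and the network predicts a parcel label per vertex. This is a minimal illustration only, not the authors' implementation; PyTorch Geometric, the GCN operator, the layer sizes, and the feature and label dimensions are all assumed here, whereas the paper compares several algorithm types and architectures.

```python
# Minimal sketch of GNN-based cortical parcellation as node classification.
# Assumes PyTorch Geometric; all dimensions and the GCN choice are illustrative.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class ParcellationGNN(torch.nn.Module):
    def __init__(self, num_features, num_parcels, hidden=64):
        super().__init__()
        # Two graph-convolution layers aggregate connectivity features
        # from neighboring vertices on the cortical surface mesh.
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_parcels)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)  # per-vertex parcel logits


# Hypothetical usage: x is a [num_vertices, num_features] matrix of
# resting-state connectivity features, edge_index encodes the surface mesh,
# y holds training parcel labels, and train_mask marks labeled vertices.
model = ParcellationGNN(num_features=100, num_parcels=180)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


def train_step(x, edge_index, y, train_mask):
    model.train()
    optimizer.zero_grad()
    logits = model(x, edge_index)
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this formulation, mapping the cortex of a new subject amounts to running a forward pass on that subject's graph and taking the arg-max parcel label at every vertex.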
License
Unknown