| Frontiers in Computational Neuroscience | |
| Segmenting Brain Tumor Using Cascaded V-Nets in Multimodal MR Images | |
| Zhanhao Mo1  He Sui1  Bing Zhang2  Yu Sun3  Rui Hua4  Yaozong Gao4  Feng Shi4  Quan Huo4  | |
| [1] China-Japan Union Hospital of Jilin University, Changchun, China; [2] Department of Radiology, Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China; [3] School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; [4] Shanghai United Imaging Intelligence, Co., Ltd., Shanghai, China | |
| Keywords: deep learning; brain tumor; segmentation; V-Net; multimodal; magnetic resonance imaging | |
| DOI: 10.3389/fncom.2020.00009 | |
| Source: DOAJ | |
【 Abstract 】
In this work, we propose a novel cascaded V-Nets method to segment brain tumor substructures in multimodal brain magnetic resonance imaging. Although V-Net has been successfully used in many segmentation tasks, we demonstrate that its performance can be further enhanced by using a cascaded structure and an ensemble strategy. Briefly, our baseline V-Net consists of four levels with encoding and decoding paths and intra- and inter-path skip connections. Focal loss is chosen to improve performance on hard samples and to balance the positive and negative samples. We further propose three preprocessing pipelines for multimodal magnetic resonance images to train different models. By ensembling the segmentation probability maps obtained from these models, the segmentation result is further improved. In addition, we propose to segment the whole tumor first, and then divide it into tumor necrosis, edema, and enhancing tumor. Experimental results on the BraTS 2018 online validation set achieve average Dice scores of 0.9048, 0.8364, and 0.7748 for whole tumor, tumor core, and enhancing tumor, respectively. The corresponding values for the BraTS 2018 online testing set are 0.8761, 0.7953, and 0.7364, respectively. We also evaluate the proposed method on two additional data sets from local hospitals comprising 28 and 28 subjects, where the best results are 0.8635, 0.8036, and 0.7217, respectively. We further predict patient overall survival by ensembling multiple classifiers for long-, mid-, and short-survival groups, achieving an accuracy of 0.519, a mean squared error of 367,240, and a Spearman correlation coefficient of 0.168 on the BraTS 2018 online testing set.
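A minimal sketch (assuming PyTorch) of two ingredients mentioned in the abstract: a voxel-wise binary focal loss for hard and imbalanced samples, and averaging of probability maps from several trained models. Function names and hyperparameters (`alpha=0.25`, `gamma=2.0`, the 0.5 threshold) are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch only; not the authors' released code.
import torch

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss on voxel-wise foreground probabilities.

    probs   : predicted foreground probabilities in [0, 1]
    targets : binary ground-truth mask of the same shape
    """
    probs = probs.clamp(eps, 1.0 - eps)
    # p_t is the probability the model assigns to each voxel's true class
    p_t = torch.where(targets > 0.5, probs, 1.0 - probs)
    alpha_t = torch.where(targets > 0.5,
                          torch.full_like(probs, alpha),
                          torch.full_like(probs, 1.0 - alpha))
    # Down-weight easy voxels by the modulating factor (1 - p_t)^gamma
    loss = -alpha_t * (1.0 - p_t).pow(gamma) * torch.log(p_t)
    return loss.mean()

def ensemble_probability_maps(prob_maps, threshold=0.5):
    """Average probability maps from several models, then binarize."""
    mean_prob = torch.stack(prob_maps, dim=0).mean(dim=0)
    return (mean_prob > threshold).float()

if __name__ == "__main__":
    # Toy 3D volumes (D x H x W) standing in for one MR case
    target = (torch.rand(8, 16, 16) > 0.7).float()
    preds = [torch.rand(8, 16, 16) for _ in range(3)]  # three "models"
    print("focal loss:", focal_loss(preds[0], target).item())
    print("ensembled foreground voxels:",
          ensemble_probability_maps(preds).sum().item())
```

In a cascaded setting of the kind described above, the first-stage model would produce the whole-tumor mask and second-stage models would further label necrosis, edema, and enhancing tumor within it; the ensembling step above would be applied to the probability maps of the models trained with the different preprocessing pipelines.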
【 License 】
Unknown