Human-centric Computing and Information Sciences
Medical image processing with contextual style transfer
Yin Xu [1], Yan Li [1], Byeong-Seok Shin [1]
[1] Department of Electrical and Computer Engineering, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Korea
Keywords: Medical Image; Contextual transfer; Deep learning; Segmentation
DOI: 10.1186/s13673-020-00251-9
Source: Springer
【Abstract】
With recent advances in deep learning research, generative models have made great progress and play an increasingly important role in industrial applications. At the same time, techniques derived from generative methods, such as style transfer and image synthesis, are widely discussed among researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding the generated images to the training set, we greatly increase the robustness of the segmentation model. In addition, we improve pixel-level spine segmentation accuracy by 2–4% over the original U-Net. Lastly, we compare the images generated by networks using different feature extractors (VGG, ResNet, and DenseNet) and provide a detailed analysis of their style-transfer performance.
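The abstract does not spell out the paper's exact loss, but style transfer with a pretrained feature extractor (VGG, ResNet, or DenseNet, as compared here) conventionally matches Gram matrices of intermediate feature maps between the generated and target images. The sketch below illustrates that generic Gram-matrix style loss on toy NumPy arrays; the feature shapes and the single-layer loss are illustrative assumptions, not the framework from the paper.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one feature map of shape (channels, height, width).

    Channel-to-channel correlations of extractor features capture the
    'style' (e.g. gray-scale distribution) of an image while discarding
    spatial layout, which is left to a separate content/context term.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)      # flatten the spatial dimensions
    return f @ f.T / (c * h * w)        # normalized channel correlations

def style_loss(gen_feats, target_feats):
    """Mean squared difference between the Gram matrices of generated
    and target feature maps, for a single extractor layer."""
    diff = gram_matrix(gen_feats) - gram_matrix(target_feats)
    return float(np.mean(diff ** 2))

# Toy example: random stand-ins for one extractor layer's activations.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))
b = rng.standard_normal((8, 16, 16))
print(style_loss(a, a))  # 0.0: identical feature maps share a style
print(style_loss(a, b))  # positive for differing "styles"
```

In a full pipeline this loss is summed over several extractor layers and combined with a content loss, then minimized with respect to the generated image (or a generator network's parameters).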
【License】
CC BY
【Preview】
File | Size | Format | View
---|---|---|---
RO202104284818805ZK.pdf | 1932 KB | PDF | download