Journal Article Details
Frontiers in Bioengineering and Biotechnology
A New Method for CTC Images Recognition Based on Machine Learning
Qiliang Zhou1  Pingping Bing1  Binsheng He1  Shijun Li2  Chao Peng3  Hai Yu3  Qingqing Lu4  Yuebin Liang4  Geng Tian4  Jidong Lang4 
[1] Academician Workstation, Changsha Medical University, Changsha, China; [2] Department of Pathology, Chifeng Municipal Hospital, Chifeng, China; [3] Geneis (Beijing) Co., Ltd., Beijing, China; [4] Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
Keywords: circulating tumor cells (CTCs); imFISH; machine learning; image segmentation; CNN network
DOI: 10.3389/fbioe.2020.00897
Source: DOAJ
【 Abstract 】

Circulating tumor cells (CTCs), shed from primary and/or metastatic tumors, are markers of tumor prognosis and can also be used to monitor therapeutic efficacy and tumor recurrence. CTC enrichment and screening can be automated, but the final counting of CTCs currently requires manual intervention. This not only demands the participation of experienced pathologists but is also prone to human misjudgment. Medical image recognition based on machine learning can effectively reduce the workload and improve the level of automation, so we applied machine learning to CTC identification. First, we collected CTC test results from 600 patients. After immunofluorescence staining, each image showed a positive CTC nucleus alongside several negative controls. The CTC images were then segmented using image denoising, image filtering, edge detection, and morphological dilation and erosion, implemented with Python's OpenCV library. Subsequently, both traditional image recognition methods and machine learning were used to identify CTCs. The machine learning approach was implemented as a convolutional neural network (CNN) trained with deep learning. From the 600 patients, we took 2,300 cells for training and testing: about 1,300 cells were used for training and the remainder for testing. The sensitivity and specificity of recognition reached 90.3 and 91.3%, respectively. We will further refine our models in the hope of achieving higher sensitivity and specificity.
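The segment-then-count stage described in the abstract can be illustrated with a minimal, dependency-free sketch. The paper itself uses OpenCV routines on real immunofluorescence images; the thresholding, 3×3 dilation, and connected-component counting below are simplified stand-ins applied to a tiny synthetic grayscale grid, and all function names and values here are illustrative, not from the paper:

```python
# Toy sketch of a segmentation/counting pipeline: binarize bright stained
# nuclei, close small gaps with a 3x3 dilation, then count 4-connected
# foreground blobs as candidate cells.
from collections import deque

def threshold(img, t):
    """Binarize: pixels brighter than t become foreground (1)."""
    return [[1 if v > t else 0 for v in row] for row in img]

def dilate(mask):
    """3x3 morphological dilation: a pixel is 1 if any neighbor is 1."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out

def count_components(mask):
    """Count 4-connected foreground blobs (candidate cells) via BFS."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                n += 1
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return n

# Two bright "nuclei" (values near 200) on a dark background (value 10).
image = [
    [10, 200, 210, 10, 10, 10, 10, 10],
    [10, 220, 10, 10, 10, 10, 190, 10],
    [10, 10, 10, 10, 10, 10, 200, 10],
]
mask = dilate(threshold(image, 128))
print(count_components(mask))  # → 2
```

In the pipeline the paper describes, OpenCV's denoising, filtering, and edge-detection steps would precede the binarization, and the resulting segmented cells would then be passed to the traditional classifier or the CNN rather than simply counted.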

【 License 】

Unknown   
