| Volume: 5 | |
| Labelling instructions matter in biomedical image analysis | |
| Article | |
| Keywords: QUALITY | |
| DOI: 10.1038/s42256-023-00625-5 | |
| Source: SCIE | |
【 Abstract 】
Biomedical image analysis algorithm validation depends on high-quality annotation of reference datasets, for which labelling instructions are key. Despite their importance, their optimization remains largely unexplored. Here we present a systematic study of labelling instructions and their impact on annotation quality in the field. Through a comprehensive examination of professional practice and international competitions registered with the Medical Image Computing and Computer Assisted Intervention Society, the largest international society in the biomedical imaging field, we uncovered a discrepancy between annotators' needs for labelling instructions and their current quality and availability. On the basis of an analysis of 14,040 images annotated by 156 annotators from four professional annotation companies and 708 Amazon Mechanical Turk crowdworkers using instructions with different information density levels, we further found that including exemplary images substantially boosts annotation performance compared with text-only descriptions, while solely extending text descriptions does not. Finally, professional annotators consistently outperform Amazon Mechanical Turk crowdworkers. Our study raises awareness of the need for quality standards in biomedical image analysis labelling instructions.

High-quality annotation of datasets is critical for machine-learning-based biomedical image analysis. However, a detailed examination of recent image competitions reveals a gap between annotators' needs and the quality of labelling instructions. It is also found that annotator performance can be substantially improved by providing exemplary images.
【 License 】
Free