
Comparison of the efficiency of three artificial intelligence-based models in reading pneumoconiosis chest radiographs


  • Abstract:
    Background Diagnosis of pneumoconiosis by radiologists reading chest X-ray images is affected by many factors and is prone to missed diagnosis and misdiagnosis. Given the rapid development of artificial intelligence in medical imaging, it is worth considering whether artificial intelligence can be used to read pneumoconiosis images.

    Objective Three deep learning models for identifying the presence of pneumoconiosis were constructed based on deep convolutional neural networks, and the optimal model was selected by comparing the diagnostic efficiency of the three models.

    Methods Digital radiography (DR) chest radiographs taken between June 2017 and December 2020 were collected from 7 hospitals, and a standard radiograph quality-control protocol was followed. DR chest radiographs with a positive pneumoconiosis diagnosis were classified into the positive group, while those without pneumoconiosis were classified into the negative group. The collected chest radiographs were labeled by experts who had passed a radiograph-reading assessment, and labeling consistency was repeatedly assessed during the labeling process using an expectation-maximization algorithm. The labeled data were cleaned, archived, and preprocessed, and then divided into a training set and a validation set. Three deep convolutional neural network models, TMNet, ResNet-50, and ResNeXt-50, were constructed and trained with ten-fold cross-validation to obtain the optimal model (see the first sketch following the abstract). Five hundred DR chest radiographs not included in the training or validation sets were collected as a test set, with diagnoses jointly determined by five senior experts serving as the gold standard. After testing, the accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and other indexes of the three models were obtained, and the performance of the three models was evaluated and compared.

    Results A total of 24867 DR chest radiographs were collected for the training and validation sets, including 6978 radiographs in the positive group and 17889 in the negative group. Among the collected radiographs, there were 312 cases of pulmonary abnormalities such as pneumothorax and pulmonary tuberculosis. Nine experts labeled the chest radiographs; the labeling consistency rates for pneumoconiosis abnormality (without staging) were all above 88%, and those for pneumoconiosis staging ranged from 84.68% to 93.66%. The diagnostic accuracy, sensitivity, specificity, and AUC of TMNet were 95.20%, 99.66%, 88.61%, and 0.987, respectively; the corresponding values were 87.00%, 89.93%, 82.67%, and 0.911 for ResNeXt-50, and 84.00%, 85.91%, 81.19%, and 0.912 for ResNet-50 (see the second sketch following the abstract for how such metrics are computed). All these indexes of TMNet were higher than those of the ResNeXt-50 and ResNet-50 models, and the AUC differences between TMNet and each of the other two models were statistically significant (P<0.001).

    Conclusion All three convolutional neural network models can effectively diagnose the presence of pneumoconiosis, among which TMNet provides the best efficiency.
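
The Methods above describe building ResNet-50 and ResNeXt-50 classifiers and training them with ten-fold cross-validation. The following minimal sketch, assuming PyTorch/torchvision and scikit-learn, shows how such backbones can be given a two-class head (pneumoconiosis vs. no pneumoconiosis) and how the labeled radiographs can be split into ten stratified folds. TMNet is the authors' own network and is not reproduced here; build_model and labels.npy are illustrative names, not taken from the paper.

import numpy as np
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import StratifiedKFold

def build_model(name: str) -> nn.Module:
    """Return a published backbone with a 2-class classification head."""
    if name == "resnet50":
        net = models.resnet50()
    elif name == "resnext50":
        net = models.resnext50_32x4d()
    else:
        raise ValueError(f"unknown backbone: {name}")
    net.fc = nn.Linear(net.fc.in_features, 2)  # replace the ImageNet head
    return net

# Hypothetical per-radiograph labels: 1 = pneumoconiosis-positive, 0 = negative.
labels = np.load("labels.npy")

# Ten-fold stratified cross-validation over the labeled radiographs.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros((len(labels), 1)), labels)):
    model = build_model("resnext50")
    # ... train on train_idx, validate on val_idx, keep the best checkpoint ...
    print(f"fold {fold}: {len(train_idx)} training / {len(val_idx)} validation radiographs")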
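
The accuracy, sensitivity, specificity, and AUC values reported in the Results can be computed from test-set outputs as in the sketch below (illustrative only, using scikit-learn; the 0.5 decision threshold and the names y_true and y_score are assumptions, not taken from the paper).

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    """Compute accuracy, sensitivity, specificity, and AUC for one model."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    auc = roc_auc_score(y_true, y_score)  # area under the ROC curve
    return accuracy, sensitivity, specificity, auc

The abstract reports that the AUC differences between TMNet and the other two models are significant at P<0.001 but does not name the statistical test; comparisons of correlated ROC curves of this kind are commonly made with DeLong's test.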

     
