Development of a deep learning-based image quality control system to detect and filter out ineligible slit-lamp images: A multicenter study



Bibliographic Details
Published in: Computer Methods and Programs in Biomedicine, 2021-05, Vol. 203, Article 106048
Authors: Li, Zhongwen, Jiang, Jiewei, Chen, Kuan, Zheng, Qinxiang, Liu, Xiaotian, Weng, Hongfei, Wu, Shanjun, Chen, Wei
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Highlights:
• DLIQCS can accurately detect and filter out ineligible slit-lamp images.
• DLIQCS can identify the causes that lead to the generation of ineligible images.
• DLIQCS has the potential to decrease the negative influence of ineligible images.
• DLIQCS may improve the performance of AI-based image analysis in the real world.
• DLIQCS can be used to train new photographers of slit-lamp imaging.

Previous studies developed artificial intelligence (AI) diagnostic systems for detecting corneal diseases using only eligible slit-lamp images. However, images of ineligible quality (including poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause loss of diagnostic information and thus degrade downstream AI-based image analysis. Manual evaluation of slit-lamp image eligibility typically requires an ophthalmologist, and this procedure is time-consuming and labor-intensive when applied on a large scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images). We developed and externally evaluated the DLIQCS on 48,530 slit-lamp images (19,890 individuals) derived from 4 independent institutions using different types of digital slit-lamp cameras. To find the best deep learning model for the DLIQCS, we trained models with 3 algorithms (AlexNet, DenseNet121, and InceptionV3). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were used to assess each algorithm's performance in classifying poor-field, defocused, poor-location, and eligible images. In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 for detecting poor-field, defocused, poor-location, and eligible images, respectively.
In external test datasets, the AUCs of DenseNet121 for identifying poor-field, defocused, poor-location, and eligible images ranged from 0.997 to 0.997, 0.983 to 0.995, 0.995 to 0.998, and 0.999 to 0.999, respectively. Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images and ensure that only eligible images are transferred to subsequent AI diagnostic systems.
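The reported metrics (AUC, sensitivity, specificity, accuracy) are computed per class in a one-vs-rest fashion. A minimal self-contained sketch, on hypothetical data rather than the study's results, showing how each metric is defined:

```python
def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) identity: the fraction of
    positive/negative pairs where the positive outscores the negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def confusion_metrics(labels, preds):
    """Sensitivity, specificity, and accuracy from binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

# Hypothetical one-vs-rest example: 1 = "defocused", 0 = any other class.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2, 0.1]  # model score for the class
preds = [1 if s >= 0.5 else 0 for s in scores]
```

For the four-class problem, these are evaluated once per class by treating that class as positive and the remaining three as negative, which is how a single model yields the four AUCs quoted above.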
ISSN: 0169-2607, 1872-7565
DOI: 10.1016/j.cmpb.2021.106048