MTPret: Improving X-Ray Image Analytics With Multitask Pretraining

Bibliographic Details
Published in: IEEE transactions on artificial intelligence 2024-09, Vol.5 (9), p.4799-4812
Main authors: Liao, Weibin; Wang, Qingzhong; Li, Xuhong; Liu, Yi; Chen, Zeyu; Huang, Siyu; Dou, Dejing; Xu, Yanwu; Xiong, Haoyi
Format: Article
Language: English
Description
Abstract: While deep neural networks (DNNs) have been widely used in various X-ray image analytics tasks such as classification, segmentation, and detection, a huge amount of training data frequently needs to be collected and annotated to train a model for every single task. In this work, we propose a multitask self-supervised pretraining strategy, MTPret, to improve the performance of DNNs in various X-ray analytics tasks. MTPret first trains the backbone to learn visual representations from multiple datasets of different tasks through contrastive learning, and then leverages multitask continual learning to learn discriminative features from various downstream tasks. To evaluate the performance of MTPret, we collected eleven X-ray image datasets of different body parts, such as the head, chest, lungs, and bones, for various tasks to pretrain backbones, and fine-tuned the networks on seven of the tasks. The evaluation results on the seven tasks showed that MTPret outperformed a large number of baseline methods, including other initialization strategies, pretrained models, and task-specific algorithms from recent studies. In addition, we performed experiments on two external tasks whose datasets had not been used in pretraining. The excellent performance of MTPret on these tasks further confirmed the generalizability and superiority of the proposed multitask self-supervised pretraining.
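
Illustrative note: the abstract only sketches MTPret's two-stage recipe (contrastive pretraining of a shared backbone across pooled X-ray datasets, then multitask continual learning on downstream tasks). The minimal PyTorch sketch below shows what a stage-1 contrastive pretraining step and a stage-2 task head could look like under common assumptions; the ResNet-18 backbone, the SimCLR-style NT-Xent loss, the random stand-in batches, and all hyperparameters are illustrative choices, not the authors' implementation.

# Minimal two-stage sketch (assumed, not the authors' code): contrastive pretraining
# of a shared backbone, then reuse of that backbone for a downstream task head.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class ContrastiveBackbone(nn.Module):
    """ResNet-18 encoder plus a small projection head for contrastive pretraining."""
    def __init__(self, proj_dim: int = 128):
        super().__init__()
        base = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(base.children())[:-1])   # drop the final fc layer
        self.projector = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, proj_dim)
        )

    def forward(self, x):
        h = self.encoder(x).flatten(1)                 # 512-d backbone features
        z = F.normalize(self.projector(h), dim=1)      # unit-norm projected embeddings
        return h, z


def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style loss: each embedding's positive is the other view of the same image."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = (z @ z.t()) / temperature                    # cosine similarities (z is normalized)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))         # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    model = ContrastiveBackbone()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stage 1 (pretraining): iterate over batches pooled from the X-ray datasets.
    # Random tensors stand in for two augmented views of the same image batch.
    for _ in range(2):
        view1 = torch.randn(8, 3, 224, 224)
        view2 = torch.randn(8, 3, 224, 224)
        _, z1 = model(view1)
        _, z2 = model(view2)
        loss = nt_xent_loss(z1, z2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Stage 2 (downstream): reuse the pretrained encoder with a task-specific head,
    # e.g. a binary classifier fine-tuned on one of the downstream tasks.
    classifier = nn.Sequential(model.encoder, nn.Flatten(1), nn.Linear(512, 2))
    logits = classifier(torch.randn(4, 3, 224, 224))
    print(logits.shape)                                # torch.Size([4, 2])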
ISSN: 2691-4581
DOI: 10.1109/TAI.2024.3400750