VQAMix: Conditional Triplet Mixup for Medical Visual Question Answering

Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, Nov. 2022, Vol. 41 (11), pp. 3332-3343
Authors: Gong, Haifan; Chen, Guanqi; Mao, Mingzhi; Li, Zhen; Li, Guanbin
Format: Article
Language: English
Abstract: Medical visual question answering (VQA) aims to correctly answer a clinical question about a given medical image. However, because manual annotation of medical data is expensive, the scarcity of labeled data limits the development of medical VQA. In this paper, we propose a simple yet effective data augmentation method, VQAMix, to mitigate this data limitation. Specifically, VQAMix generates additional labeled training samples by linearly combining pairs of VQA samples, and it can be easily embedded into any visual-language model to boost performance. However, mixing two VQA samples constructs new connections between images and questions from different samples, so the answers for these fabricated image-question pairs may be missing or meaningless. To solve the missing-answer problem, we first develop the Learning with Missing Labels (LML) strategy, which simply excludes the missing answers. To alleviate the meaningless-answer issue, we design the Learning with Conditional-mixed Labels (LCL) strategy, which further exploits the language-type prior to force the mixed pairs to have reasonable answers belonging to the same category. Experimental results on the VQA-RAD and PathVQA benchmarks show that our proposed method significantly improves the baseline by about 7% and 5%, respectively, averaged over two backbones. More importantly, VQAMix improves confidence calibration and model interpretability, which is important for medical VQA models in practical applications. All code and models are available at https://github.com/haifangong/VQAMix.
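To make the mixing scheme in the abstract concrete, here is a minimal PyTorch sketch of mixup over (image, question, answer) triplets with missing cross-pair answers handled in the spirit of LML. The function name vqamix_lml, the Beta-sampled coefficient, and the lambda^2 / (1 - lambda)^2 label weighting with renormalization are illustrative assumptions reconstructed from the abstract alone, not the authors' released implementation (see the linked GitHub repository for that).

```python
import torch

def vqamix_lml(img_a, img_b, q_a, q_b, y_a, y_b, alpha=1.0):
    """Mix two VQA training samples (image, question, answer), as sketched above.

    img_*: image feature tensors; q_*: question embedding tensors;
    y_*: one-hot answer vectors over the answer vocabulary.
    """
    # Draw the mixing coefficient from Beta(alpha, alpha), as in standard mixup
    # (assumed here; the abstract only says "linearly combining").
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Linearly combine the visual and textual inputs of the two samples.
    img_mix = lam * img_a + (1.0 - lam) * img_b
    q_mix = lam * q_a + (1.0 - lam) * q_b

    # Learning with Missing Labels (LML), sketched: the mixed input contains
    # four image-question connections, but only (img_a, q_a) -> y_a and
    # (img_b, q_b) -> y_b have known answers, so the cross pairs whose answers
    # are missing are dropped from the label. The lam^2 / (1 - lam)^2 weights
    # and the renormalization are assumptions, not the paper's exact formula.
    y_mix = lam ** 2 * y_a + (1.0 - lam) ** 2 * y_b
    y_mix = y_mix / y_mix.sum(dim=-1, keepdim=True)

    return img_mix, q_mix, y_mix
```

The LCL strategy described in the abstract could then be layered on top by drawing the partner sample from the same answer category as the anchor (for example, only mixing pairs whose answers are both organ names), so the conditional-mixed label remains meaningful; that conditioning is omitted from this sketch.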
ISSN: 0278-0062
EISSN: 1558-254X
DOI: 10.1109/TMI.2022.3185008