B-mode ultrasound-based CAD by learning using privileged information with dual-level missing modality completion

Bibliographic Details
Published in: Computers in Biology and Medicine, 2024-11, Vol. 182, p. 109106, Article 109106
Main authors: Wang, Xiao; Ren, Xinping; Jin, Ge; Ying, Shihui; Wang, Jun; Li, Juncheng; Shi, Jun
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Learning using privileged information (LUPI) has proven effective in improving B-mode ultrasound (BUS)-based computer-aided diagnosis (CAD) by transferring knowledge from elasticity ultrasound (EUS). However, LUPI performs transfer learning only between paired data with shared labels and cannot handle modality imbalance. To conduct supervised transfer learning on paired ultrasound data together with additional single-modal BUS images, a novel multi-view LUPI algorithm with Dual-Level Modality Completion, named DLMC-LUPI, is proposed to improve the performance of BUS-based CAD. DLMC-LUPI performs both image-level and feature-level (dual-level) completion of the missing EUS modality, and then applies multi-view LUPI for knowledge transfer. Specifically, in the dual-level modality completion stage, a variational autoencoder (VAE) for feature generation and a novel generative adversarial network (VAE-based GAN) for image generation are trained sequentially. The proposed VAE-based GAN improves the synthesis quality of EUS images by adopting the features the VAE generates from the BUS images as a model constraint, so that the features extracted from the synthesized EUS images become more similar to them. In the multi-view LUPI stage, two feature vectors are generated from the real or pseudo images as two source domains and then fed to the multi-view support vector machine plus (SVM+) classifier for model training. Experiments on two ultrasound datasets indicate that DLMC-LUPI outperforms all compared algorithms and can effectively improve the performance of single-modal BUS-based CAD.
•A DLMC-LUPI framework is proposed to complete image- and feature-level EUS data, enhancing single-modal BUS-based CAD performance.
•A VAE-based GAN model uses VAE-generated features as constraints to enhance the synthesis quality of EUS images.
•Two sources of privileged information (PI), based on the dual-level modality completion, are applied for knowledge transfer.
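The feature-consistency constraint at the heart of the VAE-based GAN can be illustrated with a toy sketch. This is not the paper's implementation: linear `tanh` encoders and a mean-squared-error penalty stand in for the trained VAE and the GAN-side feature extractor, and all shapes, weights, and function names here are hypothetical. The point is only to make concrete the idea that features extracted from synthesized EUS images are pulled toward the features the VAE generates from the paired BUS image.

```python
import numpy as np

rng = np.random.default_rng(0)

d_bus, d_eus = 64, 16  # hypothetical feature dimensions

def encode(feat, W):
    """Toy deterministic encoder: maps a feature vector through a linear
    layer and tanh. In DLMC-LUPI a trained network plays this role."""
    return np.tanh(feat @ W)

W_vae = rng.normal(scale=0.1, size=(d_bus, d_eus))  # frozen "VAE" weights
W_gan = rng.normal(scale=0.1, size=(d_bus, d_eus))  # generator-side weights

bus_batch = rng.normal(size=(8, d_bus))  # a batch of BUS feature vectors

f_vae = encode(bus_batch, W_vae)  # pseudo-EUS features the VAE predicts
f_syn = encode(bus_batch, W_gan)  # features taken from synthesized EUS

def feature_consistency_loss(f_a, f_b):
    """MSE gap between the two feature sets; minimizing it keeps the
    synthesized-EUS features close to the VAE-generated ones."""
    return float(np.mean((f_a - f_b) ** 2))

loss = feature_consistency_loss(f_syn, f_vae)
```

In training, this loss would be added to the usual adversarial objective of the GAN, so the generator is penalized both for unrealistic EUS images and for images whose features drift from the VAE's prediction.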
ISSN: 0010-4825
1879-0534
DOI: 10.1016/j.compbiomed.2024.109106