A unified multimodal classification framework based on deep metric learning


Bibliographic details
Published in: Neural Networks, 2025-01, Vol. 181, p. 106747, Article 106747
Main authors: Peng, Liwen; Jian, Songlei; Li, Minne; Kan, Zhigang; Qiao, Linbo; Li, Dongsheng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Multimodal classification algorithms play an essential role in multimodal machine learning, aiming to categorize distinct data points by analyzing data characteristics from multiple modalities. Extensive research has been conducted on distilling multimodal attributes and devising specialized fusion strategies for targeted classification tasks. Nevertheless, current algorithms mainly concentrate on a single classification task and process only the data of the corresponding modalities. To address these limitations, we propose a unified multimodal classification framework (UMCF) capable of handling diverse multimodal classification tasks and processing data from disparate modalities. UMCF is task-independent, and its unimodal feature extraction module can be adaptively substituted to accommodate data from diverse modalities. Moreover, we construct a multimodal learning scheme based on deep metric learning to mine latent characteristics within multimodal data. Specifically, we design metric-based triplet learning to extract the intra-modal relationships within each modality, and contrastive pairwise learning to capture the inter-modal relationships across modalities. Extensive experiments on two multimodal classification tasks, fake news detection and sentiment analysis, demonstrate that UMCF can extract multimodal data features and achieve better classification performance than task-specific benchmarks. UMCF outperforms the best fake news detection baselines by 2.3% on average in F1 score.

Highlights:
• A unified multimodal classification framework that can handle various multimodal classification tasks.
• Flexibly processes data from multiple modalities, including images, texts, audio, and videos.
• Metric-based triplet learning to extract intra-modal relationships in every modality.
• Contrastive pairwise learning to capture inter-modal relationships across multiple modalities.
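The two deep-metric-learning objectives the abstract names can be sketched with their standard formulations: a triplet loss for intra-modal structure (same-class embeddings within one modality are pulled together, different-class embeddings pushed apart) and a contrastive pairwise loss for inter-modal alignment (embeddings of the same sample from different modalities are pulled together). The function names, margin values, and exact loss forms below are assumptions for illustration, not the paper's precise formulation.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors (plain lists)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss (assumed form for intra-modal learning):
    anchor/positive share a class within one modality, negative does not.
    Loss is zero once the negative is at least `margin` farther away."""
    d_pos = euclidean(anchor, positive)
    d_neg = euclidean(anchor, negative)
    return max(d_pos - d_neg + margin, 0.0)

def contrastive_pairwise_loss(emb_a, emb_b, same_sample, margin=1.0):
    """Standard contrastive pairwise loss (assumed form for inter-modal
    learning): emb_a and emb_b come from different modalities, e.g. the
    text and image embeddings of one news post."""
    d = euclidean(emb_a, emb_b)
    if same_sample:          # pull cross-modal views of one sample together
        return d ** 2
    return max(margin - d, 0.0) ** 2  # push unrelated pairs beyond the margin
```

In a full training loop the two losses would typically be summed (possibly with a weighting factor) over mined triplets and cross-modal pairs; that weighting is not specified in the abstract.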
ISSN: 0893-6080
1879-2782
DOI: 10.1016/j.neunet.2024.106747