Multi-modal fusion representation method and system based on semantic similarity matching


Bibliographic Details
Main Authors: LIU QING, DAI QINGYUN, LAI PEIYUAN
Format: Patent
Language: Chinese; English
Description
Summary: The invention discloses a multi-modal fusion representation method and system based on semantic similarity matching. The method comprises the steps of: obtaining a target text, preprocessing it, and extracting feature words from the target text; expanding the feature words based on dictionaries, pictures and texts to obtain a plurality of expanded dictionary vectors, expanded picture vectors and expanded text vectors, and generating the corresponding feature vectors; obtaining a reference word according to the current retrieval scene, comparing the reference word against the feature vectors by traversal, obtaining a matching degree through similarity calculation, and filtering to obtain the feature vector with the highest matching degree; and performing multi-modal weighted fusion on the dictionary feature vector, the picture feature vector and the text feature vector to form a multi-modal feature vector of the feature word in the current retrieval scene. According to the method, throu...
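
The abstract outlines a matching-then-fusion pipeline. The following is a minimal sketch of the similarity matching and weighted fusion steps only, assuming cosine similarity as the matching degree and fixed fusion weights; all function names, weights, and the toy vectors are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the matching and fusion steps described in the abstract.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score used here as the 'matching degree' between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def best_match(reference: np.ndarray, candidates: list[np.ndarray]) -> np.ndarray:
    """Traverse the candidate feature vectors and keep the one whose
    matching degree with the reference word vector is highest."""
    return max(candidates, key=lambda vec: cosine_similarity(reference, vec))


def weighted_fusion(dictionary_vec: np.ndarray,
                    picture_vec: np.ndarray,
                    text_vec: np.ndarray,
                    weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> np.ndarray:
    """Multi-modal weighted fusion of the three per-modality feature vectors
    into a single multi-modal feature vector for the current retrieval scene.
    The weights are placeholders, not values from the patent."""
    w_d, w_p, w_t = weights
    return w_d * dictionary_vec + w_p * picture_vec + w_t * text_vec


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for the expanded dictionary / picture / text vectors of one
    # feature word; real vectors would come from the expansion step.
    dictionary_candidates = [rng.normal(size=8) for _ in range(3)]
    picture_candidates = [rng.normal(size=8) for _ in range(3)]
    text_candidates = [rng.normal(size=8) for _ in range(3)]
    reference = rng.normal(size=8)  # reference word vector for the retrieval scene

    fused = weighted_fusion(
        best_match(reference, dictionary_candidates),
        best_match(reference, picture_candidates),
        best_match(reference, text_candidates),
    )
    print(fused)
```

The sketch keeps one best-matching vector per modality before fusing, which mirrors the abstract's "filter to obtain the feature vector with the highest matching degree" followed by weighted fusion; how the expansion vectors are produced and how the weights are chosen is not specified in the record.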