Sentence modeling via multiple word embeddings and multi-level comparison for semantic textual similarity
Published in: Information processing & management, 2019-11, Vol. 56 (6), p. 102090, Article 102090
Format: Article
Language: English
Online access: Full text
Abstract:
• Encoding sentences via multiple pre-trained word embeddings.
• Evaluating sentence pairs via multi-level comparison.
• The approach achieves strong performance on semantic textual similarity tasks.
• The approach does not rely on linguistic resources.

Recently, using pre-trained word embeddings to represent words has achieved success in many natural language processing tasks. Depending on their objective functions, different word embedding models capture different linguistic properties. The Semantic Textual Similarity task, which evaluates the similarity/relation between two sentences, requires these linguistic aspects to be taken into account. This research therefore aims to encode the characteristics of multiple sets of word embeddings into one embedding and then learn the similarity/relation between sentences via this novel embedding. Representing each word by multiple word embeddings, the proposed MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTM-CNN consistently shows strong performance on several tasks (i.e., measuring textual similarity, identifying paraphrases, recognizing textual entailment). Our model does not use hand-crafted features (e.g., alignment features, n-gram overlaps, dependency features) and does not require the pre-trained word embeddings to have the same dimension.
ISSN: 0306-4573, 1873-5371
DOI: 10.1016/j.ipm.2019.102090
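
The abstract describes representing each word with several pre-trained embeddings, encoding a sentence with LSTM and CNN branches followed by max pooling, and comparing the resulting sentence embeddings through several operations. The following is a minimal, hypothetical PyTorch sketch of that idea only; the layer sizes, pooling scheme, comparison features, and the names MultiEmbeddingSentenceEncoder and multi_level_comparison are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the idea in the abstract: words are represented by
# several pre-trained embeddings (possibly of different dimensions), a sentence
# embedding is built via LSTM and CNN branches with max pooling over time, and
# two sentence embeddings are compared with simple multi-level features.
# All hyper-parameters and module names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiEmbeddingSentenceEncoder(nn.Module):
    def __init__(self, embedding_dims, hidden_dim=150, num_filters=100, kernel_sizes=(1, 2, 3)):
        super().__init__()
        in_dim = sum(embedding_dims)  # each word = concatenation of all its embeddings
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, num_filters, k, padding=k - 1) for k in kernel_sizes
        )

    def forward(self, embedded_words):
        # embedded_words: list of tensors [batch, seq_len, dim_i], one per embedding set
        x = torch.cat(embedded_words, dim=-1)             # [batch, seq_len, sum(dims)]
        lstm_out, _ = self.lstm(x)                        # [batch, seq_len, 2*hidden_dim]
        lstm_vec = lstm_out.max(dim=1).values             # max pooling over time
        conv_in = x.transpose(1, 2)                       # [batch, sum(dims), seq_len]
        conv_vecs = [F.relu(conv(conv_in)).max(dim=2).values for conv in self.convs]
        return torch.cat([lstm_vec] + conv_vecs, dim=-1)  # sentence embedding


def multi_level_comparison(s1, s2):
    # Compare two sentence embeddings via cosine similarity, element-wise
    # product, and absolute difference (an assumed set of comparison features).
    cos = F.cosine_similarity(s1, s2, dim=-1, eps=1e-8).unsqueeze(-1)
    return torch.cat([cos, s1 * s2, torch.abs(s1 - s2)], dim=-1)


if __name__ == "__main__":
    batch, seq_len = 2, 7
    dims = (300, 300, 100)  # e.g., three embedding sets with different dimensions (illustrative)
    encoder = MultiEmbeddingSentenceEncoder(dims)
    sent_a = [torch.randn(batch, seq_len, d) for d in dims]
    sent_b = [torch.randn(batch, seq_len, d) for d in dims]
    feats = multi_level_comparison(encoder(sent_a), encoder(sent_b))
    print(feats.shape)  # comparison features that would feed a similarity regressor/classifier
```

In this sketch the comparison vector would be passed to a small feed-forward layer to predict a similarity score, paraphrase label, or entailment label, consistent with the tasks listed in the abstract; the precise prediction head used by the paper is not specified here.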