Long short-term memory network for learning sentences similarity using deep contextual embeddings


Bibliographic Details
Published in: International journal of information technology (Singapore. Online) 2021-08, Vol. 13 (4), p. 1633-1641
Main Authors: Meshram, Suraj; Anand Kumar, M.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Semantic textual similarity (STS) is a challenging problem in natural language processing because of the variability and ambiguity of linguistic expression. Sentence similarity measures the degree of likeness between two sentences, and it plays a prominent role in many applications such as information retrieval (IR), plagiarism detection (PD), question answering, and text paraphrasing. Deep contextualised word representations have recently become an effective way to extract features from sentences, and recent studies report strong experimental results with them. In this paper, we propose a deep contextual LSTM network for semantic textual similarity, in which deep contextual mechanisms supply high-level semantic knowledge to the LSTM network. By applying the architecture to several semantic similarity datasets, covering both regression and classification tasks, we demonstrate our model's effectiveness. Detailed experiments show that the proposed deep contextual model performs better than the human annotation.
ISSN: 2511-2104; 2511-2112
DOI: 10.1007/s41870-021-00686-y
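The abstract describes feeding deep contextual word representations into an LSTM and scoring how similar two sentence encodings are. The paper's actual implementation is not shown here; the following is only a minimal numpy sketch of that general idea, with random vectors standing in for real contextual embeddings (e.g. from ELMo or BERT) and cosine similarity as an illustrative, untrained scoring choice.

```python
import numpy as np

# Hedged sketch (not the authors' code): a shared LSTM encodes each sentence,
# given per-token contextual embeddings, and similarity is the cosine of the
# two final hidden states. All weights and inputs are random stand-ins.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(x, W, U, b):
    """Run a single-layer LSTM over x (seq_len, d_in); return the final hidden state."""
    d_hid = U.shape[1]
    h = np.zeros(d_hid)
    c = np.zeros(d_hid)
    for x_t in x:
        z = W @ x_t + U @ h + b           # stacked gate pre-activations, shape (4*d_hid,)
        i, f, o, g = np.split(z, 4)       # input, forget, output gates and candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

d_in, d_hid = 8, 16
W = rng.normal(scale=0.1, size=(4 * d_hid, d_in))
U = rng.normal(scale=0.1, size=(4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)

# Two "sentences" as sequences of contextual token embeddings (random stand-ins).
s1 = rng.normal(size=(5, d_in))
s2 = rng.normal(size=(7, d_in))

h1 = lstm_encode(s1, W, U, b)
h2 = lstm_encode(s2, W, U, b)
sim = cosine(h1, h2)                      # similarity score in [-1, 1]
```

In a trained system the stand-in embeddings would come from a pretrained contextual encoder and the LSTM weights would be learned on a similarity dataset; the cosine score (or a small regression/classification head on top of the two encodings) would then be fit to the gold similarity labels.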