Employing Siamese MaLSTM Model and ELMO Word Embedding for Quora Duplicate Questions Detection
Published in: IEEE Access, 2024-01, Vol. 12, pp. 1-1
Format: Article
Language: English
Online access: Full text
Abstract: Quora is an expanding online platform that hosts a growing collection of user-generated questions and answers. Content on the platform is managed by its users, who create, edit, and organize it. Because of the vast number of users, it is not uncommon to find multiple questions with similar intent, leading to the problem of duplicate or identical questions. Detecting these duplicates enables a more efficient search for high-quality answers, ultimately improving the experience for both readers and writers on Quora. This study uses the Quora Question Pairs dataset obtained from Kaggle to identify duplicate or identical questions. To vectorize the questions and train the models, six types of word embeddings are implemented: GoogleNewsVector, FastText crawl, FastText crawl sub-words, bidirectional encoder representations from transformers (BERT), robustly optimized BERT pretraining approach (RoBERTa), and embeddings from language models (ELMO) with 100 dimensions. The Siamese Manhattan long short-term memory (MaLSTM) neural network model, where Ma denotes the Manhattan distance, is applied with ELMO word embeddings to predict duplicate questions in the dataset. Experimental results demonstrate that the proposed model attains an accuracy of 95.68%, surpassing state-of-the-art models.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3367978
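
As a rough illustration of the Siamese MaLSTM architecture summarized in the abstract, the sketch below wires a weight-shared LSTM pair with an exp(-L1) similarity head in Keras. The sequence length, hidden size, and the use of precomputed 100-dimensional ELMo-style embedding sequences are assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch of a Siamese MaLSTM duplicate-question classifier (illustrative, not the authors' code).
# Inputs are assumed to be precomputed ELMo-style embedding sequences of shape (MAX_LEN, EMB_DIM).
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, EMB_DIM, HIDDEN = 30, 100, 50  # assumed sizes for illustration


def manhattan_similarity(tensors):
    """Similarity = exp(-||h_left - h_right||_1), mapping the L1 distance to (0, 1]."""
    left, right = tensors
    return tf.exp(-tf.reduce_sum(tf.abs(left - right), axis=1, keepdims=True))


# A single LSTM encodes both questions, so the two branches share weights (the "Siamese" part).
shared_lstm = layers.LSTM(HIDDEN)

q1 = layers.Input(shape=(MAX_LEN, EMB_DIM), name="question1_embeddings")
q2 = layers.Input(shape=(MAX_LEN, EMB_DIM), name="question2_embeddings")

h1 = shared_lstm(q1)
h2 = shared_lstm(q2)
similarity = layers.Lambda(manhattan_similarity)([h1, h2])

model = Model(inputs=[q1, q2], outputs=similarity)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The exp(-L1) head keeps the output in (0, 1], so it can be trained directly against the binary duplicate/non-duplicate labels of the Quora Question Pairs dataset; the choice of hidden size and sequence length above is arbitrary and would need tuning.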