So2al-wa-Gwab: A New Arabic Question-Answering Dataset Trained on Answer Extraction Models
| Published in: | ACM Transactions on Asian and Low-Resource Language Information Processing, 2023-08, Vol. 22 (8), p. 1-21, Article 205 |
| --- | --- |
| Main authors: | , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Question answering (QA) is the task of automatically responding to questions posed by users. A question-answering system is divided into three main components: question analysis, information retrieval, and answer extraction. This paper focuses only on the answer-extraction part. In the past couple of years, many QA systems have been developed and become mature and ready for use in different languages. Nevertheless, the advancement of Arabic QA systems still faces various obstacles, including a lack of relevant resources and tools for researchers. This paper presents the So2al-wa-Gwab dataset, since the publicly available datasets suffer from various faults, such as the use of machine translation to build the data, short context sizes, and a small number of question-answer pairs; the new dataset avoids these drawbacks. Furthermore, in this paper we train three deep learning models, namely Bi-Directional Attention Flow (BiDAF), QANet, and BERT, and test them on seven different datasets, thus providing a comprehensive comparison of existing Arabic QA datasets. The obtained results emphasize that machine-translated datasets fall behind human-annotated data. Also, the QA task becomes harder as the context from which to extract the answer becomes larger. |
| ISSN: | 2375-4699, 2375-4702 |
| DOI: | 10.1145/3605550 |
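
To make the answer-extraction component named in the abstract concrete, the sketch below shows how a span-based extractive QA model (BERT-style, producing the same kind of answer span as BiDAF and QANet) selects an answer from a context passage. This is a generic illustration built on the Hugging Face transformers library, not the authors' code or released models; the checkpoint name `some-org/arabic-extractive-qa` is a placeholder assumption, and any extractive-QA checkpoint can be substituted.

```python
# Minimal sketch of span-based answer extraction: the model scores every token
# as a possible answer start and end, and the highest-scoring span is returned.
# The checkpoint name is a placeholder, not the paper's released model.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL_NAME = "some-org/arabic-extractive-qa"  # placeholder: any extractive-QA checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

def extract_answer(question: str, context: str) -> str:
    # Encode question and context together as one sequence pair.
    inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Pick the most likely start and end token positions.
    start = int(torch.argmax(outputs.start_logits))
    end = int(torch.argmax(outputs.end_logits))
    if end < start:
        return ""  # invalid span: model found no answer
    # Decode the predicted span back to text.
    answer_ids = inputs["input_ids"][0][start : end + 1]
    return tokenizer.decode(answer_ids, skip_special_tokens=True)

# Example call with an Arabic question and passage.
print(extract_answer("من كتب الرسالة؟", "كتب أحمد الرسالة صباح اليوم."))
```

Because the span search runs over every token of the encoded passage, longer contexts enlarge the search space, which is consistent with the abstract's observation that the QA task becomes harder as the context grows.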