A Benchmark Dataset with Larger Context for Non-Factoid Question Answering over Islamic Text
Format: Article
Language: English
Online access: Order full text
Abstract: Accessing and comprehending religious texts, particularly the Quran (the sacred scripture of Islam) and Ahadith (the corpus of the sayings or traditions of the Prophet Muhammad), in today's digital era necessitates efficient and accurate Question-Answering (QA) systems. Yet the scarcity of QA systems tailored specifically to the detailed nature of inquiries about Quranic Tafsir (the explanation, interpretation, and context of the Quran) and Ahadith poses significant challenges. To address this gap, we introduce a comprehensive dataset crafted for QA within the domain of Quranic Tafsir and Ahadith. The dataset comprises over 73,000 question-answer pairs, the largest reported in this specialized domain. Importantly, both the questions and the answers are enriched with contextual information, making the dataset a valuable resource for training and evaluating tailored QA systems. While this paper highlights the dataset's contributions and establishes a benchmark for evaluating QA performance in the Quran and Ahadith domains, our subsequent human evaluation uncovered critical limitations of existing automatic evaluation techniques: a clear discrepancy emerged between automatic metrics, such as ROUGE scores, and human assessments. The human evaluation indicated significant disparities: the model's verdict consistency with expert scholars ranged from 11% to 20%, while its contextual understanding spanned a broader range of 50% to 90%. These findings underscore the need for evaluation techniques that capture the nuances and complexities inherent in understanding religious texts, surpassing the limitations of traditional automatic metrics.
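The gap the abstract reports between ROUGE scores and human assessment is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not code or data from the paper: it uses the open-source rouge-score package, and the reference and candidate answers are invented. It shows how an answer that inverts the scholarly verdict can still score far higher than a faithful paraphrase:

```python
# Minimal sketch (assumptions, not the paper's code): why n-gram overlap
# metrics such as ROUGE can disagree with expert human judgment on QA.
# Requires the open-source `rouge-score` package: pip install rouge-score
# All texts below are invented purely for illustration.
from rouge_score import rouge_scorer

reference = ("Fasting in Ramadan is obligatory for every adult Muslim "
             "who is healthy and not travelling.")

# Nearly identical wording, but the verdict is inverted (a wrong answer).
candidate_wrong = ("Fasting in Ramadan is not obligatory for every adult "
                   "Muslim who is healthy and not travelling.")

# Faithful paraphrase with little word overlap (an acceptable answer).
candidate_paraphrase = ("Every grown, healthy Muslim who is not on a journey "
                        "must fast during Ramadan.")

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

for label, candidate in [("inverted verdict", candidate_wrong),
                         ("faithful paraphrase", candidate_paraphrase)]:
    scores = scorer.score(reference, candidate)  # score(target, prediction)
    print(f"{label:20s} ROUGE-1 F1 = {scores['rouge1'].fmeasure:.2f}, "
          f"ROUGE-L F1 = {scores['rougeL'].fmeasure:.2f}")
```

Because ROUGE rewards lexical overlap, the verdict-flipping answer scores near-perfect while the semantically correct paraphrase scores low, consistent with the abstract's finding that verdict consistency with expert scholars is poorly reflected by automatic metrics.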
DOI: 10.48550/arxiv.2409.09844