The BART-based Model for Scientific Articles Summarization
Published in: J.UCS (Annual print and CD-ROM archive ed.), 2024-12, Vol. 30 (13), p. 1807-1828
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: With the development of deep learning techniques, many models have been proposed for abstractive text summarization. However, the problem of summarizing source documents while preserving their integrity persists because of token restrictions and the inability to adequately capture semantic relations between words in different sentences. To overcome this problem, a fine-tuned BART-based model is proposed that generates a scientific summary from important words selected from the input document; the input text consists of terminology and keywords extracted from the source document. The proposed model is based on the working principle of graph-based methods, so it can summarize the source document with as few words as possible that are still relevant to the content. The proposed model was compared with baseline models and against the results of a human evaluation. The experimental results demonstrate that the proposed model outperforms the baseline methods with a ROUGE-L score of 37.60.
ISSN: 0948-695X; 0948-6968
DOI: 10.3897/jucs.115121
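
The abstract describes a pipeline in which graph-based keyword selection produces a compact input that a fine-tuned BART model then turns into a summary. Below is a minimal sketch of that idea, not the authors' code: it assumes Hugging Face `transformers` and `networkx`, uses the public `facebook/bart-large-cnn` checkpoint as a stand-in for the paper's fine-tuned model, and scores words with a simple TextRank-style PageRank over a co-occurrence graph rather than whatever exact graph method the paper uses.

```python
# Hedged sketch of a keyword-to-BART summarization pipeline.
# Assumptions (not from the paper): facebook/bart-large-cnn checkpoint,
# TextRank-style PageRank keyword scoring, window size 4, top 40 keywords.

import re

import networkx as nx
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"  # placeholder, not the paper's fine-tuned model


def graph_keywords(text: str, window: int = 4, top_k: int = 40) -> list[str]:
    """Rank words by PageRank over a sliding-window co-occurrence graph."""
    words = [w.lower() for w in re.findall(r"[A-Za-z][A-Za-z-]+", text) if len(w) > 3]
    graph = nx.Graph()
    for i, w in enumerate(words):
        # Connect each word to its neighbours within the window.
        for u in words[i + 1 : i + window]:
            if u != w:
                graph.add_edge(w, u)
    scores = nx.pagerank(graph)
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]


def summarize(document: str) -> str:
    """Build a keyword-only input and let BART generate the summary."""
    tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
    model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

    # Terminology/keywords from the source document become the model input.
    keyword_input = " ".join(graph_keywords(document))
    inputs = tokenizer(keyword_input, return_tensors="pt",
                       truncation=True, max_length=1024)
    summary_ids = model.generate(
        inputs["input_ids"],
        max_length=160,
        num_beams=4,
        early_stopping=True,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical input file with the full text of a scientific article.
    print(summarize(open("paper.txt", encoding="utf-8").read()))
```

Feeding only the ranked keywords keeps the encoder input well under BART's 1024-token limit, which is the kind of token restriction the abstract refers to; reproducing the reported 37.60 ROUGE-L would additionally require the paper's fine-tuned checkpoint and its exact keyword-selection procedure.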