Label informed hierarchical transformers for sequential sentence classification in scientific abstracts



Bibliographic Details
Published in: Expert Systems, 2023-07, Vol. 40 (6), p. n/a
Main authors: Tokala, Yaswanth Sri Sai Santosh; Aluru, Sai Saketh; Vallabhajosyula, Anoop; Sanyal, Debarshi Kumar; Das, Partha Pratim
Format: Article
Language: English
Online access: Full text
Description
Abstract: Segmenting scientific abstracts into discourse categories like background, objective, method, result, and conclusion is useful in many downstream tasks like search, recommendation and summarization. This task of classifying each sentence in the abstract into one of a given set of discourse categories is called sequential sentence classification. Existing machine learning-based approaches to this problem consider only the content of the abstract to obtain the neural representation of each sentence, which is then labelled with a discourse category. But this ignores the semantic information offered by the discourse labels themselves. In this paper, we propose LIHT (Label Informed Hierarchical Transformers), a method for sequential sentence classification that explicitly and hierarchically exploits the semantic information in the labels to learn label-aware neural sentence representations. The hierarchical model helps to capture not only the fine-grained interactions between the discourse labels and the words in the abstract at the sentence level but also the potential dependencies that may exist in the label sequence. Thus, LIHT generates label-aware contextual sentence representations that are then labelled with a conditional random field. We evaluate LIHT on three publicly available datasets, namely, PUBMED-RCT, NICTA-PIBOSO and CS. The incremental gain in F1-score in all three cases over the respective state-of-the-art approaches is around 1%. Though the gains are modest, LIHT establishes a new performance benchmark for this task and is a novel technique of independent interest. We also perform an ablation study to identify the contribution of each component of LIHT to the observed performance, and a case study to visualize the roles of the different components of our model.
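The abstract describes the architecture only at a high level. The following is a minimal PyTorch sketch of the core idea (label-aware sentence encoding via attention between words and label embeddings, followed by cross-sentence contextualization), not the authors' released code. All names and dimensions (LabelInformedEncoder, d_model, the two-layer sentence encoder, mean pooling) are illustrative assumptions, and the CRF decoding layer used in the paper is noted but omitted for brevity.

```python
# Minimal sketch of a label-informed hierarchical encoder (assumed design,
# not the authors' implementation).
import torch
import torch.nn as nn

class LabelInformedEncoder(nn.Module):
    def __init__(self, num_labels: int, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Learnable embeddings for the discourse labels
        # (background, objective, method, result, conclusion).
        self.label_emb = nn.Parameter(torch.randn(num_labels, d_model))
        # Word-level attention: words query the label embeddings, capturing
        # fine-grained label-word interactions at the sentence level.
        self.word_label_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Sentence-level transformer: models dependencies across the
        # sentence sequence of one abstract.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Per-sentence emission scores over the discourse labels.
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, word_states: torch.Tensor) -> torch.Tensor:
        # word_states: (num_sents, max_words, d_model) contextual word vectors
        # for the sentences of a single abstract.
        labels = self.label_emb.unsqueeze(0).expand(word_states.size(0), -1, -1)
        # Each word attends over the label embeddings -> label-aware words.
        label_aware, _ = self.word_label_attn(word_states, labels, labels)
        # Mean-pool words into one label-aware vector per sentence.
        sent_vecs = label_aware.mean(dim=1).unsqueeze(0)  # (1, num_sents, d_model)
        sent_vecs = self.sent_encoder(sent_vecs)          # cross-sentence context
        return self.classifier(sent_vecs.squeeze(0))      # (num_sents, num_labels)
```

In a full implementation the word_states would come from a pretrained transformer encoder, and the per-sentence scores would be trained and decoded with a conditional random field rather than classified independently, as the abstract specifies.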
ISSN: 0266-4720, 1468-0394
DOI: 10.1111/exsy.13238