ESIE-BERT: Enriching sub-words information explicitly with BERT for intent classification and slot filling


Bibliographic Details
Published in: Neurocomputing (Amsterdam) 2024-07, Vol. 591, p. 127725, Article 127725
Main Authors: Guo, Yu; Xie, Zhilong; Chen, Xingyan; Chen, Huangen; Wang, Leilei; Du, Huaming; Wei, Shaopeng; Zhao, Yu; Li, Qing; Wu, Gang
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Natural language understanding (NLU) has two core tasks: intent classification and slot filling. The success of pre-trained language models has produced significant breakthroughs in both tasks. Architectures based on autoencoding (BERT-based models) can optimize the two tasks jointly. However, we note that BERT-based models convert each complex token into multiple sub-tokens via the WordPiece algorithm, creating a misalignment between the length of the token sequence and the label sequence. This prevents BERT-based models from predicting labels well, limiting improvements in model performance. Many existing models can address this issue, but some hidden semantic information is discarded during fine-tuning. We address the problem by introducing a novel joint method on top of BERT. This method explicitly models the features of multiple sub-tokens after WordPiece tokenization, benefiting both tasks. Our proposed method effectively extracts contextual features from complex tokens using a Sub-words Attention Adapter (SAA), preserving overall utterance information. Additionally, we propose an Intent Attention Adapter (IAA) to acquire comprehensive sentence-level features for intent prediction. Experimental results confirm that our proposed model achieves significant improvements on two public benchmark datasets. Specifically, the slot-filling F1 score improves from 96.5 to 98.2 (an absolute gain of 1.7 points) on the Airline Travel Information Systems (ATIS) dataset.
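The misalignment the abstract describes can be made concrete with a small sketch. This is not the paper's code (ESIE-BERT instead models the sub-token features explicitly through its SAA); it only illustrates, with a toy WordPiece-style tokenizer and a hypothetical vocabulary and label set, how one word can become several sub-tokens and how a common workaround keeps the slot label on the first sub-token while padding continuations with a placeholder label.

```python
# Illustrative sketch only: toy greedy longest-match-first splitting
# (WordPiece-style) and first-sub-token label alignment. The vocabulary,
# tokens, and slot labels below are hypothetical examples.

def toy_wordpiece(token, vocab):
    """Split a token greedily into the longest vocabulary pieces."""
    pieces, start = [], 0
    while start < len(token):
        end = len(token)
        while end > start:
            piece = token[start:end] if start == 0 else "##" + token[start:end]
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1
        else:
            return ["[UNK]"]  # no piece matched
    return pieces

def align_labels(tokens, labels, vocab):
    """Expand per-token slot labels to sub-tokens; continuations get 'X'."""
    sub_tokens, sub_labels = [], []
    for token, label in zip(tokens, labels):
        pieces = toy_wordpiece(token, vocab)
        sub_tokens.extend(pieces)
        # Only the first sub-token keeps the real label; the rest are
        # placeholders that are typically ignored in the loss.
        sub_labels.extend([label] + ["X"] * (len(pieces) - 1))
    return sub_tokens, sub_labels

vocab = {"flights", "to", "bal", "##tim", "##ore"}
tokens = ["flights", "to", "baltimore"]
labels = ["O", "O", "B-toloc"]  # hypothetical BIO slot labels
sub_tokens, sub_labels = align_labels(tokens, labels, vocab)
print(sub_tokens)  # ['flights', 'to', 'bal', '##tim', '##ore']
print(sub_labels)  # ['O', 'O', 'B-toloc', 'X', 'X']
```

Note how three input tokens become five sub-tokens: without realignment, three slot labels would face five model outputs. The first-sub-token heuristic restores alignment, but the sub-token features carrying the placeholder labels contribute nothing to prediction, which is the semantic information loss the proposed SAA is designed to avoid.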
ISSN: 0925-2312
EISSN: 1872-8286
DOI: 10.1016/j.neucom.2024.127725