Unsupervised Chunking with Hierarchical RNN
Saved in:

| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
Abstract:

In Natural Language Processing (NLP), predicting linguistic structures, such as parsing and chunking, has mostly relied on manual annotations of syntactic structures. This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner. We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions. Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks. Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, raising the phrase F1 score by up to 6 percentage points. Further, finetuning on downstream tasks yields an additional performance improvement. Interestingly, we observe that the emergence of the chunking structure is transient during the neural model's downstream-task training. This study contributes to the advancement of unsupervised syntactic structure discovery and opens avenues for further research in linguistic theory.
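The abstract describes the architecture only at a high level. As a concrete illustration, the following is a minimal PyTorch sketch of one plausible two-layer HRNN, in which a word-level GRU composes words into chunk states and a chunk-level GRU composes chunk states into a sentence representation. The soft boundary gate, module names, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a two-layer hierarchical RNN (HRNN) chunker.
# Assumptions: soft sigmoid boundary gates, GRU cells, single-sentence input.
import torch
import torch.nn as nn

class HRNNChunker(nn.Module):
    """Word-level GRU composes words into chunk states (word-to-chunk);
    chunk-level GRU composes finished chunks into a sentence state
    (chunk-to-sentence). Chunk boundaries are predicted as soft gates."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRUCell(emb_dim, hid_dim)   # word-to-chunk composition
        self.chunk_rnn = nn.GRUCell(hid_dim, hid_dim)  # chunk-to-sentence composition
        self.boundary = nn.Linear(hid_dim, 1)          # soft chunk-boundary gate

    def forward(self, token_ids):
        # token_ids: LongTensor of shape (seq_len,) for a single sentence.
        emb = self.embed(token_ids)
        h_word = emb.new_zeros(1, self.word_rnn.hidden_size)
        h_chunk = emb.new_zeros(1, self.chunk_rnn.hidden_size)
        boundary_probs = []
        for x in emb:
            h_word = self.word_rnn(x.unsqueeze(0), h_word)
            p = torch.sigmoid(self.boundary(h_word))  # P(chunk ends here)
            boundary_probs.append(p)
            # When a boundary fires, the finished chunk state feeds the
            # chunk-level RNN and the word-level state is (softly) reset.
            h_chunk = p * self.chunk_rnn(h_word, h_chunk) + (1 - p) * h_chunk
            h_word = (1 - p) * h_word
        # Boundary probabilities per token, plus the sentence representation.
        return torch.cat(boundary_probs, dim=0).squeeze(1), h_chunk

# Hypothetical usage: thresholding the boundary probabilities yields a flat,
# non-hierarchical segmentation of the sentence into chunks.
model = HRNNChunker(vocab_size=10000)
probs, sent_vec = model(torch.tensor([4, 17, 8, 92, 3]))
print(probs.shape, sent_vec.shape)  # torch.Size([5]) torch.Size([1, 256])
```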
DOI: 10.48550/arxiv.2309.04919