Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models
Format: Article
Language: English
Abstract: Constituency Parse Extraction from Pre-trained Language Models (CPE-PLM) is a recent paradigm that attempts to induce constituency parse trees relying only on the internal knowledge of pre-trained language models. While attractive in that, similar to in-context learning, it requires no task-specific fine-tuning, the practical effectiveness of such an approach remains unclear, except that it can serve as a probe for investigating language models' inner workings. In this work, we mathematically reformulate CPE-PLM and propose two advanced ensemble methods tailored for it, demonstrating that the new parsing paradigm can be competitive with common unsupervised parsers when a set of heterogeneous PLMs is combined using our techniques. Furthermore, we explore scenarios in which the trees generated by CPE-PLM are practically useful. Specifically, we show that CPE-PLM is more effective than typical supervised parsers in few-shot settings.
DOI: 10.48550/arxiv.2211.00479
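
To make the abstract's central idea concrete, the snippet below is a minimal sketch of inducing a constituency tree purely from a PLM's internal representations. It is a generic, distance-based instantiation rather than the paper's actual algorithm: the model choice (bert-base-cased), the layer index, and the cosine-distance splitting criterion are all illustrative assumptions.

```python
# Illustrative sketch of CPE-PLM-style tree induction: NOT the paper's exact
# method. Model, layer, and distance function are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-cased"  # hypothetical PLM choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def word_vectors(words, layer=9):
    # Encode the sentence and mean-pool subword states back to word level.
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        states = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    vecs = []
    for i in range(len(words)):
        idxs = [t for t, w in enumerate(enc.word_ids()) if w == i]
        vecs.append(states[idxs].mean(dim=0))
    return vecs

def syntactic_distances(vecs):
    # Distance between adjacent words; a larger value is taken as a more
    # likely constituent boundary (an assumed, commonly used heuristic).
    return [1 - torch.cosine_similarity(vecs[i], vecs[i + 1], dim=0).item()
            for i in range(len(vecs) - 1)]

def build_tree(words, dists):
    # Greedy top-down binary splitting at the largest adjacent distance.
    if len(words) <= 1:
        return words[0] if words else None
    k = max(range(len(dists)), key=dists.__getitem__)
    return (build_tree(words[:k + 1], dists[:k]),
            build_tree(words[k + 1:], dists[k + 1:]))

words = "the cat sat on the mat".split()
print(build_tree(words, syntactic_distances(word_vectors(words))))
```

The greedy top-down splitting strategy shown here is one common way prior probing work turns pairwise representation distances into an unlabeled binary tree; the paper's contribution of ensembling heterogeneous PLMs would sit on top of such a base extractor, e.g. by combining the trees or distance scores from several models.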