The Return of Lexical Dependencies: Neural Lexicalized PCFGs

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, 2020-01, Vol. 8, p. 647-661
Authors: Zhu, Hao; Bisk, Yonatan; Neubig, Graham
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this paper we demonstrate that context-free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies. This contrasts with the most popular current methods for grammar induction, which focus on discovering either constituents or dependencies. Previous approaches to marrying these two disparate syntactic formalisms (e.g., lexicalized PCFGs) have been plagued by sparsity, making them unsuitable for unsupervised grammar induction. However, in this work, we present novel neural models of lexicalized PCFGs that allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model. Experiments demonstrate that this unified framework yields stronger results on both representations than are achieved when modeling either formalism alone.
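
The abstract's central technical idea is that parameterizing lexicalized PCFG rule probabilities with a neural network lets parameters be shared across head words through embeddings, instead of relying on the sparse per-word count tables that made earlier lexicalized PCFGs unusable for unsupervised induction. As a rough illustration only, the following is a minimal sketch of how such a neural rule scorer could be written (PyTorch; the class name, architecture, and variables are illustrative assumptions, not the authors' model):

import torch
import torch.nn as nn

class LexicalizedRuleScorer(nn.Module):
    """Hypothetical scorer for lexicalized rules A[h] -> B C.

    Scores are computed from shared nonterminal and head-word embeddings,
    so evidence is shared across head words rather than stored in a sparse
    count table -- the property the abstract credits with overcoming sparsity.
    """

    def __init__(self, num_nonterminals: int, vocab_size: int, dim: int = 64):
        super().__init__()
        self.nt_emb = nn.Embedding(num_nonterminals, dim)   # grammar symbols
        self.word_emb = nn.Embedding(vocab_size, dim)        # head words
        self.scorer = nn.Sequential(
            nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, parent, head, left_child, right_child):
        # Each argument is a LongTensor of symbol / word ids.
        feats = torch.cat(
            [
                self.nt_emb(parent),
                self.word_emb(head),
                self.nt_emb(left_child),
                self.nt_emb(right_child),
            ],
            dim=-1,
        )
        return self.scorer(feats).squeeze(-1)  # unnormalized rule scores


# Toy usage: score two candidate expansions of the same lexicalized parent.
# A softmax over all expansions of A[h] would yield rule probabilities for
# use in a CKY-style inside algorithm during unsupervised induction.
model = LexicalizedRuleScorer(num_nonterminals=10, vocab_size=100)
parent = torch.tensor([2, 2])
head = torch.tensor([17, 17])
left = torch.tensor([3, 5])
right = torch.tensor([4, 6])
logits = model(parent, head, left, right)
probs = torch.softmax(logits, dim=0)
print(probs)
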
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00337