DeepStruct: Pretraining of Language Models for Structure Prediction
Saved in:

Main authors:
Format: Article
Language: English (eng)
Subjects:
Online access: Order full text
Summary: We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models on a collection of task-agnostic corpora to generate structures from text. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate.
DOI: 10.48550/arxiv.2205.10475
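The abstract frames structure prediction as having a pretrained language model generate structures directly from text. Below is a minimal sketch of that text-to-structure framing using the Hugging Face transformers API; the checkpoint name, prompt wording, and triple format are illustrative assumptions and are not taken from this record or from the paper itself (which reports results with a 10B parameter model).

```python
# Minimal sketch of the text-to-structure framing described in the abstract:
# a sequence-to-sequence language model is prompted with a sentence and asked
# to generate a linearized structure (here, relation triples).
# NOTE: the checkpoint ("t5-small"), prompt wording, and triple format are
# illustrative assumptions, not the setup used by DeepStruct.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # placeholder checkpoint for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

sentence = "Barack Obama was born in Honolulu."
# A structure-pretrained model would be trained to emit output such as
# "(Barack Obama; born in; Honolulu)" for inputs like this.
prompt = f"extract triples: {sentence}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Under this framing, the same generation interface covers all of the listed tasks (entity, relation, and event structures alike); only the target linearization changes, which is what makes zero-shot transfer across the 10 structure prediction tasks possible.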