NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

Proceedings of the 39th International Conference on Machine Learning, PMLR 162:25438-25451, 2022

Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM,...

Bibliographic details
Main authors: Yao, Xingcheng; Zheng, Yanan; Yang, Xiaocong; Yang, Zhilin
Format: Article
Language: English