Masked ELMo: An evolution of ELMo towards fully contextual RNN language models
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: This paper presents Masked ELMo, a new RNN-based model for language model pre-training, evolved from the ELMo language model. Unlike ELMo, which uses only independent left-to-right and right-to-left contexts, Masked ELMo learns fully bidirectional word representations. To achieve this, we use the same masked language model objective as BERT. Additionally, thanks to optimizations of the LSTM neuron, the integration of mask accumulation, and bidirectional truncated backpropagation through time, we have substantially increased the training speed of the model. All these improvements make it possible to pre-train a better language model than ELMo while maintaining a low computational cost. We evaluate Masked ELMo by comparing it to ELMo under the same protocol on the GLUE benchmark, where our model significantly outperforms ELMo and is competitive with transformer approaches.
DOI: 10.48550/arxiv.2010.04302
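
The abstract only names the ingredients of the approach: a BERT-style masked language model objective on top of a bidirectional recurrent encoder. The sketch below is a minimal PyTorch illustration of that kind of objective, not the paper's implementation; the vocabulary size, dimensions, MASK_ID, and masking rate are assumptions, and it omits the LSTM-neuron optimizations, mask accumulation, and bidirectional truncated backpropagation through time that the paper describes.

```python
# Minimal illustrative sketch (not the paper's code): one BERT-style masked-LM
# training step over a bidirectional LSTM encoder. VOCAB_SIZE, EMBED_DIM,
# HIDDEN_DIM, MASK_ID and MASK_PROB are assumptions chosen for the example.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 1000, 64, 128
MASK_ID, MASK_PROB = 1, 0.15  # hypothetical [MASK] token id and masking rate

embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
# A bidirectional LSTM lets every position condition on both left and right context.
encoder = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True, bidirectional=True)
head = nn.Linear(2 * HIDDEN_DIM, VOCAB_SIZE)  # predicts the original token id
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(2, VOCAB_SIZE, (8, 20))  # dummy batch: 8 sequences of 20 ids
mask = torch.rand(tokens.shape) < MASK_PROB     # positions to hide and predict
inputs = tokens.masked_fill(mask, MASK_ID)      # replace them with the [MASK] id

hidden, _ = encoder(embed(inputs))              # (8, 20, 2 * HIDDEN_DIM)
logits = head(hidden)                           # (8, 20, VOCAB_SIZE)
# As in BERT, the loss is computed only on the masked positions.
loss = loss_fn(logits[mask], tokens[mask])
loss.backward()
```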