PMI-Masking: Principled masking of correlated spans
Saved in:

Format: Article
Language: English
Online access: Order full text
Abstract: Masking tokens uniformly at random constitutes a common flaw in the pretraining of Masked Language Models (MLMs) such as BERT. We show that such uniform masking allows an MLM to minimize its training objective by latching onto shallow local signals, leading to pretraining inefficiency and suboptimal downstream performance. To address this flaw, we propose PMI-Masking, a principled masking strategy based on the concept of Pointwise Mutual Information (PMI), which jointly masks a token n-gram if it exhibits high collocation over the corpus. PMI-Masking motivates, unifies, and improves upon prior, more heuristic approaches that attempt to address the drawback of random uniform token masking, such as whole-word masking, entity/phrase masking, and random-span masking. Specifically, we show experimentally that PMI-Masking reaches the performance of prior masking approaches in half the training time, and consistently improves performance at the end of training.
DOI: 10.48550/arxiv.2010.01825
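
To make the PMI criterion described in the abstract concrete, here is a minimal Python sketch of how high-collocation spans could be selected for joint masking. It covers only the bigram case, scoring each bigram by PMI(x, y) = log p(x, y) / (p(x) p(y)); the paper generalizes the measure to longer n-grams. The function names (`build_pmi_vocab`, `choose_mask_spans`) and parameters (`top_k`, `min_count`, `mask_prob`) are illustrative assumptions, not taken from the paper or its code.

```python
# Illustrative sketch of PMI-based span selection for masking (bigram case only).
# Names and thresholds are hypothetical, not from the paper's implementation.
import math
import random
from collections import Counter

def build_pmi_vocab(corpus, top_k=1000, min_count=5):
    """Score every bigram in `corpus` (a list of token lists) by pointwise
    mutual information, PMI(x, y) = log p(x, y) / (p(x) p(y)),
    and return the top_k highest-scoring bigrams as a masking vocabulary."""
    unigrams, bigrams = Counter(), Counter()
    total = 0
    for tokens in corpus:
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
        total += len(tokens)
    scores = {}
    for (x, y), count in bigrams.items():
        if count < min_count:
            continue
        p_xy = count / total
        p_x, p_y = unigrams[x] / total, unigrams[y] / total
        scores[(x, y)] = math.log(p_xy / (p_x * p_y))
    return set(sorted(scores, key=scores.get, reverse=True)[:top_k])

def choose_mask_spans(tokens, pmi_vocab, mask_prob=0.15):
    """Pick token positions to mask: when a position starts a high-PMI bigram,
    mask both of its tokens jointly; otherwise fall back to single-token masking."""
    masked = set()
    i = 0
    while i < len(tokens):
        if (i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in pmi_vocab
                and random.random() < mask_prob):
            masked.update((i, i + 1))
            i += 2
        elif random.random() < mask_prob:
            masked.add(i)
            i += 1
        else:
            i += 1
    return masked
```

In such a setup, the masking vocabulary would be built once over the pretraining corpus, and `choose_mask_spans` could then replace BERT's uniform random selection of tokens during batch construction, so that strongly collocated pairs are always masked together rather than leaving one token as a shallow local cue for the other.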