Connecting degree and polarity: An artificial language learning study
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: We investigate a new linguistic generalization in pre-trained language models (taking BERT (Devlin et al., 2019) as a case study). We focus on degree modifiers (expressions like slightly, very, rather, extremely) and test the hypothesis that the degree expressed by a modifier (low, medium or high degree) is related to the modifier's sensitivity to sentence polarity (whether it shows a preference for affirmative or negative sentences, or neither). To probe this connection, we apply the Artificial Language Learning experimental paradigm from psycholinguistics to a neural language model. Our experimental results suggest that BERT generalizes in line with existing linguistic observations that relate degree semantics to polarity sensitivity, including the main one: low degree semantics is associated with a preference for positive polarity.
DOI: 10.48550/arxiv.2109.06333
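
The record does not include code, but as a rough illustration of the kind of polarity-sensitivity measurement the abstract describes, here is a minimal sketch of a masked-LM probe using Hugging Face transformers. The model name, sentence frames, and modifier pair are illustrative assumptions; this is not the authors' actual setup, which applies the Artificial Language Learning paradigm by exposing BERT to novel nonce modifiers rather than scoring existing ones.

```python
# Minimal sketch (NOT the paper's method): compare how strongly BERT prefers
# a degree modifier in an affirmative vs. a negative sentence frame.
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Assumption: bert-base-uncased stands in for the BERT model used in the paper.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def modifier_logprob(template: str, modifier: str) -> float:
    """Log-probability BERT assigns to `modifier` in the [MASK] slot."""
    modifier_id = tokenizer.convert_tokens_to_ids(modifier)
    if modifier_id == tokenizer.unk_token_id:
        raise ValueError(f"{modifier!r} is not a single token in this vocab")
    text = template.format(mask=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the masked position in the input sequence.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    return log_probs[0, modifier_id].item()

# Hypothetical sentence frames: affirmative vs. negated context.
affirmative = "The soup is {mask} spicy."
negative = "The soup is not {mask} spicy."

# A positive difference means the modifier is preferred in affirmative contexts.
for mod in ["slightly", "very"]:
    diff = modifier_logprob(affirmative, mod) - modifier_logprob(negative, mod)
    print(f"{mod}: affirmative - negative log-prob = {diff:.3f}")
```

Under the paper's generalization, a low-degree modifier like slightly would be expected to show a stronger affirmative preference (larger positive difference) than a high-degree modifier like very, though this scoring-based probe is only a loose analogue of the artificial-language-learning experiments reported in the abstract.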