Naturalistic Causal Probing for Morpho-Syntax
Main authors: , , ,
Format: Article
Language: English
Subjects:
Abstract: Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentences. Using our approach, we intervene on the morpho-syntactic features of a sentence, while keeping the rest of the sentence unchanged. Such an intervention allows us to causally probe pre-trained models. We apply our naturalistic causal probing framework to analyze the effects of grammatical gender and number on contextualized representations extracted from three pre-trained models in Spanish: the multilingual versions of BERT, RoBERTa, and GPT-2. Our experiments suggest that naturalistic interventions lead to stable estimates of the causal effects of various linguistic properties. Moreover, our experiments demonstrate the importance of naturalistic causal probing when analyzing pre-trained models.
DOI: 10.48550/arxiv.2205.07043
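
As a rough illustration of the intervention idea described in the abstract, the sketch below contrasts the contextualized representations of a Spanish sentence and a minimally edited counterpart that differs only in grammatical gender. The model checkpoint (bert-base-multilingual-cased), the example sentence pair, the mean-pooling of hidden states, and the use of a simple representation difference as an effect estimate are all illustrative assumptions; the paper's actual data and estimation procedure are not reproduced here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Multilingual BERT is one of the model families named in the abstract;
# the specific checkpoint here is an assumption for illustration.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# A naturalistic intervention: swap the grammatical gender of the subject
# noun phrase while leaving the rest of the sentence unchanged.
# (Hypothetical example pair, not taken from the paper's data.)
original = "El niño pequeño juega en el parque."
intervened = "La niña pequeña juega en el parque."

def sentence_representation(text: str) -> torch.Tensor:
    """Mean-pool the final hidden layer over all tokens of the sentence."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

# Per-pair shift in representation space; averaging such shifts over many
# naturalistic sentence pairs would give one estimate of the causal effect
# of grammatical gender on the model's contextualized representations.
effect = sentence_representation(intervened) - sentence_representation(original)
print(f"||intervened - original|| = {effect.norm().item():.4f}")
```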